Telescope Fun

Very recently I got a new telescope that will let me explore both astrophotography and, potentially, robotic control.

They already make ‘robot control’ telescopes, but they cost a lot. I’m looking more at a budget-friendly Arduino-based concept. I certainly have plenty of experience with Arduino thanks to the COMP444 course I wrote and teach at AU, so putting a stepper motor on a telescope mount should be quite feasible.
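As a back-of-the-envelope feasibility check, the step rate needed for sidereal tracking is easy to compute. This is a hedged sketch: the motor, microstepping and gear numbers are hypothetical placeholders, not measurements from any actual mount.

```c
/* Sketch: how fast must a stepper pulse to track the sky?
 * The mount's RA axis must turn once per sidereal day (~86164.1 s).
 * All motor/gear numbers passed in are hypothetical placeholders. */
#define SIDEREAL_DAY_S 86164.1

/* steps_per_rev: full steps per motor revolution (e.g. 200)
 * microsteps:    driver microstepping factor (e.g. 16)
 * gear_ratio:    worm-gear reduction, motor revs per mount rev (e.g. 100) */
double step_interval_s(int steps_per_rev, int microsteps, double gear_ratio)
{
    double steps_per_mount_rev = (double)steps_per_rev * microsteps * gear_ratio;
    return SIDEREAL_DAY_S / steps_per_mount_rev; /* seconds between pulses */
}
```

With those placeholder numbers, one microstep roughly every 0.27 seconds would keep the RA axis tracking the sky, which is well within an Arduino's abilities.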

To that end I bought the Celestron PowerSeeker 127EQ telescope. It’s relatively inexpensive and still has a proper (if basic) equatorial mount, which lets you track an object as it moves through the night sky with a single rotational control. That control is the one I want to automate first.

The other reason for the Celestron 127EQ is that its mirror is large enough to provide between 50x and 250x magnification for astrophotography. It comes with an eyepiece assembly that accepts 1.25in eyepieces but is also threaded for 42mm camera adapters. This provides the flexibility to put a T-mount on the camera and then use T-adapter eyepieces, direct connection, or a Barlow lens to connect the camera (a DSLR) to the telescope.
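The 50x–250x range follows directly from the usual magnification formula, telescope focal length divided by eyepiece focal length. A quick check, assuming the 127EQ's published 1000 mm focal length and typical bundled eyepieces:

```c
/* Magnification = telescope focal length / eyepiece focal length.
 * 1000 mm is the 127EQ's published focal length; 20 mm and 4 mm are
 * typical eyepiece values, used here purely as illustrations. */
double magnification(double scope_fl_mm, double eyepiece_fl_mm)
{
    return scope_fl_mm / eyepiece_fl_mm;
}
```

A 20 mm eyepiece gives 1000/20 = 50x, and a 4 mm eyepiece gives 250x, matching the quoted range.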

I had to purchase the T-mount and other bits (adapter & Barlow) separately, as it took some time to determine exactly which ones would work best with my DSLR.

One thing I instantly loved about this telescope is that I was able to assemble it easily and quickly, and then use it immediately. It was easy to aim the telescope across the road at trees, and using the low-power (50x) eyepiece I could quickly focus on the leaves. Better still, when the camera T-mount and adapters arrived a couple of days later, I was able to easily swap out the eyepiece for the camera and take photos of the leaves. Nothing encourages further exploration like immediate success!

I’ve had telescopes before, but they always frustrated me. The worst was an automated telescope I got on points many years ago. It was ‘computer controlled’, but rather than an equatorial mount it used simple x-y motors that needed computer interpretation to actually work. Otherwise you had to push ‘up-down-left-right’ buttons to move the finder, which is fine for land but terrible for astronomy. Set-up required finding several stars under computer guidance before it would work, and some of the stars it needed were simply not visible in our skies. To this day it’s never really worked.

So, needless to say, having a telescope that worked ‘first time’ was a joy. Night set-up is even easier. One tripod leg is set to point true north. You set the mount’s latitude (altitude) adjustment to match your latitude (49deg N in my case) and the telescope should point at the north star ‘out of the box’, so to speak. If not, you tweak the adjustment until you are pointing at Polaris, and the telescope is then aligned. From then on, simply go outside, point that one leg due north, and the telescope is ready. Simple and elegant!

Right now it’s too cloudy and too cold (-5C overnight) to be doing much outside after dark, but even now I’ll probably go out on a clear night just to find Polaris and then photograph the moon. I am looking forward to taking some better pictures of Mars, Saturn and Jupiter as well.

Updates to Ubuntu & WordPress

I’ve been running Ubuntu 18.04 LTS since it came out. Ubuntu 20.04 LTS has been out for quite a while now, and I did upgrade one server (the JupyterHub server) but not my other two. Tonight I decided to go for it. One server (this one) runs both Apache2 and Tomcat, but I figured ‘what the heck’. After applying the latest updates to 18.04 I ran ‘do-release-upgrade’. After quite a long time, the upgrade completed without errors and the system restarted.

After the system came back up, I checked both Apache (my web pages) and Tomcat, and everything was working fine. Then I checked WordPress, and it was NOT running fine: instead of my WP site, I saw raw PHP code.

After some googling, it was clear that php.ini needed a setting changed. I made the change, but that didn’t fix the problem. Further reading directed me to Apache’s ‘mods-enabled’ directory, where the PHP configuration file indicated the needed change. A quick edit and a restart of Apache later, this site was working again.

I decided to try updating WordPress. Last time I tried, it failed because too many things in Ubuntu 18.04 LTS were ‘too old’ for WP 5.3.2. Version 5.2.5 worked, and that was the version running here when I started. The current version is 6.1.1, so I downloaded it, ran my usual update process, and was delighted to see it work. I then updated all the plugins, and now the site is up to date again.

I also upgraded the other server, which runs only Tomcat, and it too moved to Ubuntu 20.04 LTS without incident.

Someone DoS’d my BSD 2.11 Server

This morning I noticed a lot of activity on the front panel of my PiDP-11 (a PDP-11 replica). This is not normal, so I had a quick peek.

The machine is a Raspberry Pi (3) running Raspbian and hosting SIMH, which simulates a PDP-11. The PiDP-11 consists of circuitry, switches and LEDs that replicate the front panel of a PDP-11. The LEDs show the activity of the simulation just as the front panel of a real PDP-11 would, so it’s a decent snapshot of actual system activity.

There is a specific LED pattern to system ‘idle’, and other patterns when the system is active. In this case, the ‘active’ pattern was continuous for several minutes.

This is unusual because the only program running on the PiDP-11 besides the BSD 2.11 operating system is a small C program, ‘httpd.c’, which runs a simple HTTP web server. The actual page served is a simple HTML page of text and one photo. Normal access shows activity for several seconds (less than 10) and then the idle pattern returns.
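For context, a single-file web server like ‘httpd.c’ can be remarkably small. The sketch below is not the author’s actual code, just a minimal illustration of the idea using POSIX sockets (error handling omitted for brevity):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

/* Build a complete HTTP/1.0 response for a fixed page into buf.
 * Returns the number of bytes written. */
int build_response(const char *body, char *buf, size_t buflen)
{
    return snprintf(buf, buflen,
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: text/html\r\n"
        "Content-Length: %zu\r\n\r\n%s",
        strlen(body), body);
}

/* Accept one connection on `port`, send the page, close. */
void serve_once(int port, const char *body)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(s, (struct sockaddr *)&addr, sizeof addr);
    listen(s, 4);
    int c = accept(s, NULL, NULL);
    char buf[4096];
    int n = build_response(body, buf, sizeof buf);
    write(c, buf, n);
    close(c);
    close(s);
}
```

A real server would loop around serve_once() and check every return value; the point is simply how little code a one-page site needs.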

In this case the active pattern continued for several minutes. There is no need to ‘hit’ the web page repeatedly unless mischief is afoot.

I logged on to the R-Pi and then to the SIMH-PDP system. Using ‘ps’ I could see unexpected programs running, so I exited to SIMH and ended the simulation. I then rebooted the R-Pi.

While the R-Pi was rebooting, I checked my firewall rules to confirm the machine/port was open to the world. I edited the config file to remove this connection and reset the firewall. After reset I confirmed the port was no longer open.

Later I checked the firewall logs and confirmed that the attack was a simple DoS (denial-of-service) attack from a foreign country (in the ‘far east’). Fortunately I caught it very early and killed it immediately.

However, the BSD 2.11 web server is no longer accessible from outside my home. Such is the price exacted by ‘bad actors’ seeking to cause mischief.

Modula-2 Didn’t Make the Cut

As the title says, I’m pretty much done with Modula-2. It had promise, and was even fun to convert a few programs from PL/I and FORTRAN. But in the end, it just didn’t cut it for me.

Why? In a word, I/O. More specifically, the input/output for Modula-2, at least the CP/M Z80 version I had, was just too primitive to use.

What I have discovered while running and modifying some of my old Engineering programs in FORTRAN (and into PL/I or C) is that in many cases the actual calculation code pales in comparison to the input/output code. Many of the engineering programs produce a LOT of output, and it really needs to be readable to be of any use.

FORTRAN was my first computer language, and it was also the primary scientific / engineering language of my first professional jobs. Some of those programs produced fan-fold paper outputs that were several INCHES thick. Having a readable output format was critical.

What I’ve discovered lately is that PL/I is actually a better language for formatting output, at least when the FORTRAN compiler does not support the ‘T’ (tab) format specifier. PL/I has a tab equivalent, Column(x), so it’s quicker and nicer to use for those programs.

Also, getting input from keyboard (console) or file in either FORTRAN or PL/I is not only quite easy, it’s easy to format, thus allowing for very concise readable input files.

Not so Modula-2. The input libraries are quite rudimentary. For example, reading real numbers is done by a routine that reads a full line and then parses ONE number from it. To do anything else requires rewriting the library.

As a result, a simple input line such as ‘-23.2 10.0 -44.8’ must become three lines, one number per line. Now imagine a data set that would naturally have 5-10 real numbers per line, and you have an unwieldy, unreadable input file.
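By contrast, pulling several reals off one line takes only a few lines of code in a language with a richer library. A hedged C sketch of the convenience the stock Modula-2 library lacked:

```c
#include <stdlib.h>

/* Parse up to `max` real numbers from one line of text.
 * Returns the count parsed. strtod's end pointer tells us where
 * each number stops, so we simply walk along the line. */
int parse_reals(const char *line, double *out, int max)
{
    int n = 0;
    char *end;
    while (n < max) {
        double v = strtod(line, &end);
        if (end == line)
            break;          /* no more numbers on the line */
        out[n++] = v;
        line = end;
    }
    return n;
}
```

With this, the readable file format ‘-23.2 10.0 -44.8’ works as-is, no one-number-per-line restriction.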

Likewise the output was pretty much ‘unformatted’. You could apply some formatting, but not much, so programs produced rather messy output. That’s fine for a simple program, but not for anything of substance.

It’s funny, because I remember the original criticism of Pascal, the precursor to Modula-2 by the same dude. Formal Pascal had essentially no practical I/O in the original language spec, because it was supposed to be a ‘pure academic’ language. It wasn’t until Borland created Turbo Pascal and added a complete set of I/O routines that Pascal became a force in the 1980s.

Funny that I’m facing a similar situation in 2020 with Modula-2 on my Z80.

Windows XP Professional – unsung hero O/S

I’ve been writing about my adventures with a Z80 single-board computer (a kit from CPUVille) for some time now. I’ve also mentioned how I had to ditch the flaky USB-serial connections and return to ‘real’ RS232 connections using an old Toshiba laptop.

What I may not have mentioned is the O/S on that Toshiba. It came with Windows XP Professional, and is still running that operating system all these decades later. It simply works.

I originally kept the Toshiba because it was the last laptop I owned that had a dedicated DB-9 Serial port connection on the back. Everything I’ve owned since removed the printer (DB-25) and serial (DB-9) connectors in favor of USB ports. While USB is great for almost everything, it is not always great for serial communications. Frankly, the USB-serial chips in the cables rely on flaky drivers that don’t always work.

I kept the Toshiba with its serial port to run Fuji controller software, which talked via serial connection (RS232-RS485) to my two Fuji glassblowing controllers (PXR3 series). The software from Fuji ran on Windows 95 thru XP, but not on newer versions. As a result, the Toshiba has remained the ‘Fuji controller laptop’, complete with Win XP, ever since.

The thing is, it still ‘just works’. After many years on a shelf, I installed the battery (it was stored separately), found the charger and plugged it in. After pressing ‘power on’, it simply worked. I found an RS232 cable (DB-9 ends) and plugged it into the Z80 single-board. I downloaded a decent terminal program (Teraterm) and again, things just worked. Not only that, but they have continued to work ever since.

The laptop has a network adapter (not wifi), so I plugged it in and was immediately able to access my shared data folders. I can edit files on my main development PC (Windows 7), then access them on the Toshiba and send them via Teraterm to the Z80. All quite easy and slick.

Using modern windows ‘Remote Desktop Connection’ (RDP), I am able to fully operate the Toshiba via a remote RDP window on my devel PC. Windows XP supported RDP, and it hasn’t changed in all this time – or at least, it’s backward compatible to XP.

The reason I was thinking about this recently is that a couple of days ago we caught a show on Knowledge Network about the 1990s. This episode was all about tech in the 90s, and was a real trip down memory lane for me. In the 90s I was teaching C programming at SAIT in Calgary in the evenings (Continuing Education courses) as well as consulting full-time in the daytime. I worked on some of the first computers to get Windows 95 when it came out. I remember programming for Windows 95. I was ‘there’ when Win 95 led to Win 98, then Win ME (Millennium Edition), or “meh” (or “mill-enema edition”) as we often said. Win ME was a horrible, rushed OS that could not die fast enough.

But Windows XP – that was beautiful, at the time. It simply worked. Most of the driver glitches had been tamed, and the PC Card slot and USB ports worked. The PC Card slot was quite cool for the time – I had a TV card (turned the laptop into a real TV) and a Wifi card, and several others. They were expensive but cool. And now they are long-gone history.

Still, it’s nice to be using something for a specific purpose and have it still work perfectly after all these years.

And the New Thing is… Modula-2

I know. Modula-2 is not new. In fact, it’s pretty much another ‘dead language’ in computing.

Modula-2 was Niklaus Wirth’s second foray into computer programming languages, the first being the much more successful Pascal.

So why learn (and write) Modula-2 instead of Pascal? Well, it’s simple. I hate Pascal.

I’ve hated Pascal ever since Borland’s Turbo Pascal came out and everyone in the programming community was shoving it down my throat in the mid-1980s. I was, at the time, working at a consulting firm writing and maintaining FORTRAN programs on IBM, VAX, CRAY and several other machines. I was hearing rumblings from the Unix community about this ‘thing’ called C, but was still several years from venturing into C myself.

But this Turbo-Pascal was everywhere in the micro-computing community. You simply could not escape it. Worse, to me it seemed a dumb language; full of ‘:=’ and other arcane structural things that made little sense to a FORTRAN programmer.

I didn’t want to learn it, but the constant barrage of Pascal stuff was deafening.

Eventually I moved on to C programming, and Borland’s Turbo-C was wonderful for micro-computers. I went on to start my own consulting practice where C programming became my bread and butter for many years. After that I branched out to C++ and then Java, but I managed the entire time to ‘omit’ Pascal.

Now I’m back playing with Z80 computers and interested in learning ‘stuff’ for the fun of it. While looking around for the next ‘programming thing’ after FORTRAN (F77), PL/I and C for my Z80, I discovered a really great working Modula-2 compiler. It was complete. There was full documentation. There were (a very few) example programs. I was set.

The cool thing about the Modula-2 docs is there’s even a big section comparing differences between Modula-2 and Pascal. Funny, I still don’t miss Pascal.

But now Modula-2 is the ‘this week and/or month’ language, so on I go. I’ve already managed to convert one fairly simple PL/I program, and just today I got it to read from, and write to, files. Next I’ll try some more sophisticated programs just to see how it compares to the other languages I’ve been enjoying.

I do love PL/I

After some weeks playing with old FORTRAN programs and converting them into PL/I on my Z80 single-board (SB) computer (a kit from CPUVille), I can say without reservation that I quite love PL/I.

The language is now dead, and that’s a shame. It certainly is one of the true ‘structured programming’ languages of the 20th century, and definitely fit in with the horrid old ‘waterfall project development’ life cycle. However, the language itself is quite lovely.

I took many of my old engineering FORTRAN programs from the ’80s and got them running in F77 on the Z80 SB. They worked almost exactly as they had in the 80s, though quite a bit more slowly (a Z80 vs. a CDC Cyber or Honeywell Multics). The speed difference between the Z80 and a modern Intel i7 PC using the same F77 compiler was astounding.

But it wasn’t until I converted them to PL/I that I found the PL/I programs, while about the same in compile/link/execute speed, were just… nicer. From an aesthetic standpoint, the structure of the PL/I programs was clean and… nice. But the things that set PL/I apart from F77 were file I/O and output formatting. File I/O in FORTRAN works, but is messy. File I/O in PL/I is simple and clean. The F77 documentation for the Z80 wasn’t any help either; I ended up reaching out to online forums for assistance getting F77 file I/O to read CP/M named files. Once I got the ‘trick’, it was easy. But the PL/I documentation was clear, easy to follow, and correct.

Even with both IO systems working, the output was where PL/I really shone. Fortran has several format specifiers for real number output, but PL/I outshines Fortran on every level.

The output from my FORTRAN programs was… nice. The same output from my PL/I programs was beautiful. Now, in full disclosure, the F77 compiler would not accept the ‘T’ (Tab) format specifier, so I had to resort to counting spaces. PL/I’s formatted output accepted ‘Column(x)’, which made it so much easier.
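For comparison, C reaches the same column alignment through printf field widths rather than a Column(x) item. A small sketch (the label and widths are arbitrary illustrations), writing into a buffer so the layout is easy to inspect:

```c
#include <stdio.h>

/* Emulate fixed-column report output: a left-justified label in a
 * 12-character field, then the value right-justified in a 10-character
 * field, so values line up down the page like Column(x) output. */
void print_row(char *buf, size_t buflen, const char *label, double value)
{
    snprintf(buf, buflen, "%-12s%10.3f", label, value);
}
```

Each row then puts the number’s last digit in the same column down the page, which is the readability PL/I’s Column(x) gives for free.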

I found no fundamental difference in double precision implementations in the two compilers, and both were easy to use once you learned how.

However, I still really enjoyed writing the PL/I programs. It was just a lot of fun and very satisfying.

Now on to new things…

More PL/I fun (with my Z80)

I received a couple of excellent comments on my previous blog posts about PL/I and its compiler bug with double precision and the ‘**’ operator. I had replaced ‘**’ with a dead-simple ‘doexp()’ function that did serial multiplication in a loop. It was not very sophisticated, nor very efficient.

Ed suggested I search for “exp_by_squaring_iterative”, which I did. It’s a simple and elegant method for calculating y to the power x, and it was easy to implement in PL/I, which supports recursion. After a quick test to prove ‘I got it right’, I replaced ‘doexp()’ with ‘powsq()’ (my name for it) in both the fracture programs and the concentration programs.
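For anyone curious, exponentiation by squaring needs only O(log n) multiplications instead of n-1. A C rendering of the iterative variant (the PL/I ‘powsq()’ itself isn’t shown in the post, so this is an illustrative reconstruction):

```c
/* Iterative exponentiation-by-squaring for whole-number exponents:
 * examine the exponent bit by bit, squaring the base each step and
 * multiplying it into the result when the current bit is set.
 * Negative bases work fine, since only multiplication is involved. */
double powsq(double base, unsigned n)
{
    double result = 1.0;
    while (n > 0) {
        if (n & 1)
            result *= base;
        base *= base;
        n >>= 1;
    }
    return result;
}
```

For example, powsq(-2.0, 3) returns -8 after just two squarings, and large exponents cost only a handful of multiplies.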

Tests with the concentration programs proved the new method is almost twice as fast as the brute-force function, which is very rewarding.

The next problem I tackled (and am still working on) turns out to be caused by the ‘weirdness’ of 2D arrays. Different programming languages store arrays in different orders: C (and PL/I) store 2D arrays row by row (row-major), while FORTRAN stores them column by column (column-major). This matters when an array is populated by BLOCK DATA (FORTRAN), {} initializers (C), or static INITIAL lists (PL/I), because the same flat list of constants fills the array in a different order.

Figuring out that the original FORTRAN program loaded the arrays in a completely different order than PL/I caused me no little headache. I ended up having to debug both the FORTRAN code (to see how the array was stored) and the PL/I program.
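The underlying issue can be captured in two index formulas. A small sketch of row-major (C, PL/I) versus column-major (FORTRAN) addressing of element (i, j):

```c
/* Flattened offset of element (i, j) in an nrows x ncols array.
 * Row-major (C, PL/I): rows are contiguous, so step `ncols` per row.
 * Column-major (FORTRAN): columns are contiguous, so step `nrows`
 * per column. An initializer list fills memory in storage order,
 * which is why the same constants build a different matrix. */
int row_major_index(int i, int j, int ncols) { return i * ncols + j; }
int col_major_index(int i, int j, int nrows) { return j * nrows + i; }
```

For a 3x4 array, element (1, 2) lands at offset 6 in C but offset 7 in FORTRAN, so a flat list of constants lands in different cells in each language.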

Still, I’m having fun and still enjoying my foray into PL/I.

Once again I broke a compiler…

Back when I was writing FORTRAN ‘for real’ (a.k.a. in a production environment), I managed to find a bug in the FORTRAN compiler that was confirmed and verified independently by the compiler creator and supplier.

Not that I was too happy about it as the bug was totally reproducible and quite severe. It was also just obscure enough that it was never fixed. We worked around it and then went on to other things.

Well, I’ve found a bug in another compiler. This one dates from the same time frame, but is the Digital Research PL/I compiler I’ve been playing with on my Z80 single board.

Now, disclaimer: this ‘bug’ may well be a known and documented condition, but it’s not in the DR written PL/I compiler manual, and any other documentation has probably been lost over the ages.

So with that said: I mentioned in my last post that I had to build my own ‘doexp()’ subroutine for the concentration program because squaring a negative number was failing with ‘Error(3)’.

Well, it happened again. This time I was converting an old fracture program from FORTRAN, and it was failing with OVERFLOW(1) errors. After some serious tracing with debug prints, I found it was again the ‘**’ exponentiation operator that was the cause. Looking at the numbers coming from the program, there was NO WAY it could be a real calculation problem (-.0152534 ** 2 is NOT an overflow!!!), so it had to be the ‘**’ again.

To verify, I grabbed the ‘doexp()’ subroutine from the concentration program and popped it into the fracture program, and sure enough it runs.

As there is no way I can examine the source of the built-in function, I’ll just have to use my ‘doexp()’ from now on.

At least the program runs now. 🙂

It matters to me (or the lost art of program tuning)

In my last post I talked about PL/I programming, which I’ve again fallen in love with, and the speed of my small engineering programs on my Z80 single-board computer compared to a modern PC.

I was actually incorrect in my last post – the program is not 4000 times slower than a PC. In fact, the PC version was so quick one could not really time it by hand. There are tools on Unix boxes to time program execution, but I didn’t use them. A more realistic estimate for the larger data set would be about half a second, not 1 second. That means my little Z80 program was 4000 x 2, or 8000 times, slower (at least).

But something else about that program has been bothering me for the past few days. I mentioned in my last post how the ‘exp’ (or ‘**’) operation had a bug that prevented the double-precision version from working with negative numbers. Normally there’s no problem – square -2 and you get +4; cube -2 and you get -8. It’s all just serial multiplication, so it bothered me that it was not working. My fix was to add ‘abs’ before the term to remove any negative value; that worked, but I wasn’t happy with it, as it wasn’t ‘totally correct’ from an aesthetic standpoint.

The solution was to return to my programming roots. Back in the early 1980’s I worked writing Reservoir Simulation programs. These huge FORTRAN programs could consume every calculation cycle of very, very large computers. Optimization beyond what a compiler was capable of became essential. There were many tricks we used to squeeze every bit of performance out of such programs.

One technique removes subroutine calls where possible by ‘inlining’ code. It’s ugly, and counter to all ‘structured programming’ rules, but it works. Another technique analyzes operations: division is much more ‘expensive’ (in CPU cycles) than multiplication, with addition being about the cheapest. Loops have set-up time, so ‘unrolling’ loops might also be done in simple cases.
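Two of those tricks, strength-reducing a division and unrolling a loop, can be sketched in C (the function and its parameters are illustrative, not from the programs discussed):

```c
/* Divide every element of x[] by `divisor`, hand-tuned:
 * 1) strength reduction: hoist the division out of the loop and
 *    multiply by the reciprocal (one divide instead of n);
 * 2) unrolling: process four elements per loop iteration to cut
 *    loop set-up overhead, with a scalar loop for the remainder. */
void scale_unrolled(double *x, int n, double divisor)
{
    double r = 1.0 / divisor;
    int i = 0;
    for (; i + 4 <= n; i += 4) {   /* unrolled by 4 */
        x[i]   *= r;
        x[i+1] *= r;
        x[i+2] *= r;
        x[i+3] *= r;
    }
    for (; i < n; i++)             /* remainder */
        x[i] *= r;
}
```

Note the reciprocal trick can change the last bits of the result for divisors that aren’t powers of two, which is part of why such tuning sits outside what compilers of that era would do automatically.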

My analysis of this particular PL/I program showed that the offending line had multiplication, division, that exponentiation, and now my added call to the ‘abs’ function. I knew there was a way to fix the bug and make it run faster: convert ‘exp’ (or ‘**’) into serial multiplication and remove the ‘abs’ call. That would not only fix the negative-value error (multiplication always works) but also remove two function calls and their related overhead (push stack, pop stack).

I decided to write a new function, ‘doexp()’, which uses a loop to perform the serial multiplication. I knew I was adding the overhead of one new function call, but removing two function calls and a (presumably) complex general-purpose exponentiation routine. It won’t work for fractional exponents, but this program was always limited to whole-number exponents anyway.
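A C sketch of the ‘doexp()’ idea (the original was PL/I, so this is a reconstruction of the approach, not the author’s code):

```c
/* Whole-number exponents only, computed by serial multiplication
 * in a loop. Negative bases work, because only multiplication is
 * involved, and the general-purpose '**' routine is avoided. */
double doexp(double base, unsigned n)
{
    double result = 1.0;
    while (n-- > 0)
        result *= base;
    return result;
}
```

It costs n-1 more multiplies than exponentiation-by-squaring for large n, but for the small exponents in these programs the simplicity is the point.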

With the new function coded, I tried the smaller data sets. All reported a roughly 1.5x speed improvement. I then ran the big data set, and early timing of the time steps shows it running 2 times faster than the original program. The entire job, which would have taken 2.8 days, will now run in 1.4 days. For program tuning, this is a huge improvement.

As the post title says, it matters to me. It was also great fun to see it work.