Modula-2 Didn’t Make the Cut

As the title says, I’m pretty much done with Modula-2. It had promise, and it was even fun converting a few programs from PL/I and FORTRAN. But in the end, it just didn’t cut it for me.

Why? In a word, I/O. More specifically, the input/output for Modula-2, at least the CP/M Z80 version I had, was just too primitive to use.

What I have discovered while running and modifying some of my old Engineering programs in FORTRAN (and into PL/I or C) is that in many cases the actual calculation code pales in comparison to the input/output code. Many of the engineering programs produce a LOT of output, and it really needs to be readable to be of any use.

FORTRAN was my first computer language, and it was also the primary scientific / engineering language of my first professional jobs. Some of those programs produced fan-fold paper outputs that were several INCHES thick. Having a readable output format was critical.

What I’ve discovered lately is that PL/I is actually a better language for formatting output, at least when the FORTRAN compiler does not support the ‘T’ format (tab). PL/I has a tab (Column(x)) so it’s quicker and nicer to use for those programs.

Also, getting input from keyboard (console) or file in either FORTRAN or PL/I is not only quite easy, it’s easy to format, thus allowing for very concise readable input files.

Not so Modula-2. The input libraries are quite rudimentary. For example, reading real numbers is done by a routine that reads a full line, then parses ONE number from that line. To do anything else requires rewriting the library.

As a result, a simple input data line such as ‘-23.2 10.0 -44.8’ has to become one number per line. Now imagine a file with 5-10 real numbers per line and you have an unreadable input file.

Likewise the output was pretty much ‘unformatted’. You could apply some formatting, but not much. As a result, programs produced rather messy output. It’s fine for a simple program, but not for anything of substance.

It’s funny, because I remember the original criticism of Pascal, which is the precursor language to Modula-2 by the same dude. Formal Pascal had no input or output as part of the original language spec, because it was supposed to be a ‘pure academic’ language. It wasn’t until Borland created Turbo Pascal and added a complete set of I/O routines that Pascal became a force in the 1980s.

Funny that I’m facing a similar situation in 2020 with Modula-2 on my Z80.

And the New Thing is… Modula-2

I know. Modula-2 is not new. In fact, it’s pretty much another ‘dead language’ in computing.

Modula-2 was Niklaus Wirth’s second foray into computer programming languages, the first being the much more successful Pascal.

So why learn (and write) Modula-2 instead of Pascal? Well, it’s simple. I hate Pascal.

I’ve hated Pascal ever since the Borland Turbo-Pascal came out and everyone in the programming community was shoving it down my throat in the mid-1980s. I was, at the time, working at a consulting firm writing and maintaining FORTRAN programs on IBM, VAX, CRAY and several other machines. I was hearing rumblings from the Unix community about this ‘thing’ called C, but was at that time still several years from venturing into C myself.

But this Turbo-Pascal was everywhere in the micro-computing community. You simply could not escape it. Worse, to me it seemed a dumb language, full of ‘:=’ and other arcane structural things that made little sense to a FORTRAN programmer.

I didn’t want to learn it, but the constant barrage of Pascal stuff was deafening.

Eventually I moved on to C programming, and Borland’s Turbo-C was wonderful for micro-computers. I went on to start my own consulting practice where C programming became my bread and butter for many years. After that I branched out to C++ and then Java, but I managed the entire time to ‘omit’ Pascal.

Now I’m back playing with Z80 computers and interested in learning ‘stuff’ for the fun of it. While looking around for the next ‘programming thing’ after FORTRAN (F77), PL/I and C for my Z80, I discovered a really great working Modula-2 compiler. It was complete. There was full documentation. There were (a very few) example programs. I was set.

The cool thing about the Modula-2 docs is there’s even a big section comparing differences between Modula-2 and Pascal. Funny, I still don’t miss Pascal.

But now Modula-2 is the ‘this week and/or month’ language, so on I go. I’ve already managed to convert one fairly simple PL/I program, and just today I managed to get it to read from, and write to, files. Next I’ll try some more sophisticated programs just to see how it compares to the other languages I’ve been enjoying.

I do love PL/I

After some weeks playing with old FORTRAN programs and converting them into PL/I on my Z80 singleboard (SB) computer (kit from CPUVille), I can say without reservation that I quite love PL/I.

The language is now dead, and that’s a shame. It certainly is one of the true ‘structured programming’ languages of the 20th century, and definitely fit in with the horrid old ‘waterfall project development’ life cycle. However, the language itself is quite lovely.

I took many of my old Engineering FORTRAN programs from the ’80s and had them running in F77 on the Z80 SB. They worked almost exactly as they had in the ’80s, though quite a bit slower (Z80 vs. CDC Cyber or Honeywell Multics). The speed difference between the Z80 and a modern Intel i7 PC using the same F77 compiler was astounding.

But it wasn’t until I converted them to PL/I that I found the PL/I programs, while about the same in compile/link/execute speed, were just… nicer. From an aesthetic standpoint, the structure of the PL/I programs was clean and… nice. But the things that really set PL/I apart from F77 were file IO and output. File IO in FORTRAN works, but it was messy, and the F77 documentation for the Z80 wasn’t any help either. I ended up reaching out to online forums for assistance getting F77 file IO to read CP/M named files. Once I got the ‘trick’, it was easy. In PL/I, by contrast, file IO was simple and clean, and the documentation was clear, easy to follow, and correct.

Even with both IO systems working, the output was where PL/I really shone. Fortran has several format specifiers for real number output, but PL/I outshines Fortran on every level.

The output from my FORTRAN programs was … nice. The same output from my PL/I programs was beautiful. Now in full disclosure, the F77 compiler would not accept the ‘T’ (or Tab) format specifier, so I had to resort to counting spaces. The PL/I formatted output accepted ‘Column(x)’ and made it so much easier.

I found no fundamental difference in double precision implementations in the two compilers, and both were easy to use once you learned how.

However, I still really enjoyed writing the PL/I programs. It was just a lot of fun and very satisfying.

Now on to new things…

More PL/I fun (with my Z80)

I received a couple of excellent comments regarding my previous blog posts on PL/I and its compiler bug with double precision and the ‘**’ operator. I replaced ‘**’ with a dead simple ‘doexp()’ function that simply did serial multiplications using a loop. It was not very sophisticated, nor very efficient.

Ed suggested I search for “exp_by_squaring_iterative”, which I did. It’s a very simple and elegant recursive method for calculating y to the power x, and was easy to implement in PL/I, which supports recursion. After a quick test to prove ‘I got it right’, I replaced ‘doexp()’ with ‘powsq()’ (my name for it) in both fracture programs as well as the concentration programs.

Tests with the concentration programs proved the new method is almost twice as fast as the brute-force function, which is very rewarding.

The next problem I ended up tackling (and am still working on) turns out to be caused by the ‘weirdness’ of 2D arrays. Basically, different programming languages order arrays differently: C (and PL/I) store 2D arrays row by row (row-major), while FORTRAN stores them column by column (column-major). That matters when the array is populated from a flat list of constants, whether via BLOCK DATA (FORTRAN), {} initializers (C), or STATIC INITIAL() (PL/I).

Figuring out that the original FORTRAN program loaded the arrays in a completely different manner than PL/I has caused me no little headache. I ended up having to debug both the FORTRAN code to see how the array was stored as well as the PL/I program.

Still, I’m having fun and still enjoying my foray into PL/I.

Once again I broke a compiler…

Back when I was writing FORTRAN ‘for real’ (a.k.a. in a production environment), I managed to find a bug in the FORTRAN compiler that was confirmed and verified independently by the compiler creator and supplier.

Not that I was too happy about it as the bug was totally reproducible and quite severe. It was also just obscure enough that it was never fixed. We worked around it and then went on to other things.

Well, I’ve found a bug in another compiler. This one dates from the same time frame, but is the Digital Research PL/I compiler I’ve been playing with on my Z80 single board.

Now, disclaimer: this ‘bug’ may well be a known and documented condition, but it’s not in the DR written PL/I compiler manual, and any other documentation has probably been lost over the ages.

So with that, I mentioned in my last post that I had to build my own ‘doexp()’ subroutine for the concentration program because squaring a negative number was failing with ‘Error(3)’.

Well, it happened again. This time I was converting an old fracture program from FORTRAN and it was failing with OVERFLOW(1) errors. After some serious tracing with debug prints, I found it was again the ‘**’ exponent function that was the cause. Looking at the numbers coming from the program, there was NO WAY it could be a real calculation problem (-.0152534 ** 2 is NOT an overflow!!!) so it had to be the ‘**’ again.

To verify, I grabbed the ‘doexp()’ subroutine from the concentration program and popped it into the fracture program, and sure enough it runs.

As there is no way I can examine the source of the built-in function, I’ll just have to use my ‘doexp()’ from now on.

At least the program runs now. 🙂

It matters to me (or the lost art of program tuning)

In my last post I talked about Pl/I programming, which I’ve again fallen in love with, and the speed of my small engineering programs on my Z80 single board computer compared to a modern PC.

I was actually a bit off in my last post on the speed numbers. The PC version was so quick one could not really time it by hand; there are tools on Unix boxes to time program execution, but I didn’t use them. A more realistic estimate for the larger data set would be about 1/2 second, not 1 second, which makes the Z80-to-PC ratio twice as bad again (at least).

But something else about that program has been bothering me for the past few days. I mentioned in my last post how the ‘exp’ (or ‘**’) operation had a bug that prevented the double precision version working with negative numbers. Normally, there’s no problem – for example square -2 and you get +4. Cube -2 and you get -8. It’s all just serial multiplication, so it bothered me that it was not working. My fix was to add ‘abs’ before the term to remove any negative value; that worked, but I wasn’t happy with the fix as it wasn’t ‘totally correct’ from an aesthetic standpoint.

The solution was to return to my programming roots. Back in the early 1980s I worked writing Reservoir Simulation programs. These huge FORTRAN programs could consume every calculation cycle of very, very large computers. Optimization beyond what a compiler was capable of became essential. There were many tricks we used to squeeze every bit of performance out of such programs.

One technique involved removing subroutine calls when possible by ‘in lining’ code. It’s ugly, and counter to all ‘structured programming’ rules, but it does work. Another technique analyzed operations; division is much more ‘expensive’ (in CPU cycles) than multiplication, with addition being about the cheapest. Loops have set-up time, so ‘unrolling’ loops might also be done in simple cases.

My analysis of this particular PL/I program showed the line with the error in it had multiplication, division, and that exponentiation, plus the call to the ‘abs’ function I had added. I knew there was a way to fix both problems and make it run faster: convert ‘exp’ (or ‘**’) into serial multiplication and remove the ‘abs’ call. That would not only fix the negative-value error (multiplication always works) but also remove two function calls and their related overhead (push stack, pop stack).

I decided to write a new function ‘doexp()’ which used a loop to perform the serial multiplication. I knew I was adding the overhead of a new function, but removing two function calls and a (supposed) complex general-purpose exponentiation routine. It won’t work for fractional exponents, but this program was always limited to whole number exponents anyway.

The new function coded, I tried the smaller data sets. All reported a roughly 1.5x speed improvement. I then ran the big data set, and early timing of the time steps shows it’s running 2 times faster than the original program. The entire job, which would have taken 2.8 days, will now run in 1.4 days. For program tuning, this is a huge improvement.

As the post title says, it matters to me. It was also great fun to see it work.

Just How Slow is my Z80 single board computer?

Officially, the Z80 single board computer from CPUville has a 1.8432 MHz crystal clock. Compare that to a modern PC running multi-gigahertz clocks and it’s pretty slow indeed. But sometimes the real test is in running ‘real programs’.

As stated in prior posts, I’ve been playing with PL/I on my Z80 single board computer under CP/M 2.2, and I love it. PL/I is turning out to be the amazing and fun language I learned back in 1980, and it’s a blast. One thing I really love about PL/I is the total control over output, done easily with output formatting capabilities, plus the really great error handling.

I had an obscure bug in my latest program; a multi-phase concentration-with-time program from my Engineering graduate days. It would not run on one of the numerous test data sets, instead crashing immediately with ‘Error 3’. Looking up the error, it says: ‘A transcendental function argument is out of range.’ Some help. However, transcendental functions include exponentiation, and the program had just such a line. Typical debugging means putting in a bunch of print statements and then wading through reams of output to try and trace the problem.

In PL/I, you can use the ‘on’ construct to trap errors. So by adding a ‘on error(3) begin… end;’ I was able to immediately isolate the subroutine where the error occurred and yes, it was the exponentiation line. Now adding some print statements made sense, and I quickly found that in double-precision, squaring a negative number was causing the problem. Now squaring a negative number is legal (the result is positive), so I had to find a fix. Fortunately using the ‘abs’ function solved the problem for this case as no real data (i.e. fluid concentration) should be negative in the runs. With the problem fixed, time to run all the example cases.

All but one ran in decent time. But problem 10, which has 600 time steps, was still going after two days. With no console output, I wasn’t sure if the program had crashed or was just taking a long time.

A few more print statements (with a debug flag to turn them off later), and a stopwatch, and I found this particular run was taking 6.44 minutes per time step. With 600 time steps in the run, that’s about 3,860 minutes, or just over 64 hours (2.7 days!). No wonder I thought it was taking a while.

So about slowness… when I was wondering if the run was ‘broken’ or not, I ran the test case on my Win7 PC (in FORTRAN, but close enough). It was so fast you could not really time it. Let’s say for argument’s sake it took 1 second. That makes the Z80 over 200,000 TIMES SLOWER!

Oh well, this is all about fun, so waiting for the full run on the Z80 will be something to wait for. Until next time…

The Continuing Saga of the Z80 Singleboard Computer

I’ve already posted about the fun I’m having with the Z80 singleboard computer (kit from CPUville) recently.

In addition to a FORTRAN compiler (Microsoft F80), I added the High Tech C compiler. I’ve written programs in FORTRAN, C and 8080 Assembler. I’ve used both the CP/M 2.2 ASM assembler and the M80 assembler that came with the F80 compiler. Except for one instance where my port reading assembly program won’t actually read the port, it’s been fun and games.

I’ve even created assembler programs that can be called from FORTRAN (the aforementioned port reading routine).

Last week, while exploring the various archives of CP/M software, especially compilers, I spied the Digital Research (DR) PL/I compiler. That looked really promising.

Back at my first job after my B.Sc., I worked at an IBM shop that sent me on a PL/I course. Afterward I spent the next year writing software in PL/I for a pair of IBM 3033 mainframes. It was all great fun.

Finding a working PL/I compiler was too good to pass up, so I grabbed the archive and beamed the files to the Z80. After a bit of digging, I found my 1980 PL/I reference book, “PL/I Structured Programming” by Joan K. Hughes (2nd edition, Wiley, 1979). After reading through it to refresh my memory, I started building a few PL/I programs, following the examples in the book and then the chapter problems.

Some features of the IBM compiler were not available in the DR (CP/M) version, but I had the DR PL/I documentation to help me with the transition. Eventually I had written several working PL/I programs.

The past few days I’ve been playing with a Taylor series program for calculating Sin(x) (x in radians). I have the program working, but the answers diverge from ‘actual’ values in the range PI to 2PI. I had full debugging in the code, but could not really see the reason.

I decided to try converting the PL/I program to C, and then running it on a modern C compiler on one of my Ubuntu 18.04 servers.

SURPRISE!!! The C program has the exact same divergence! Even switching from ‘float’ to ‘double’ didn’t remove the divergence in the C program on a modern machine. I’ll definitely have to investigate further.

Just for fun I then took the working C program and beamed the code over to the Z80. The High Tech C compiler is sound, so it compiled the program easily. The run on the Z80 gave the exact same answers (with a small nod to precision on various platforms) as both the C/Linux version and the PL/I CP/M version. It’s either a really difficult-to-find coding mistake in my work, or a real phenomenon. As I said, I’ll have to investigate.

Where it all gets cute is timings. On the Linux box (big AMD 6-core server with loads of memory) the C program runs so fast it would be timed in milliseconds were I to try. Certainly faster than one could manually time it. The PL/I program on the Z80 takes 3min, 40.15 seconds to run. What was a surprise is the C program on the Z80 took 5min, 34.40 seconds! I never expected a C program to be that much slower than the PL/I program.

Now that I have FORTRAN, PL/I, C and Assembler all working, time to continue playing.

One last thing: I found a printing “bug” in the PL/I textbook. The formula for the Sin(x) Taylor series has two major errors. First, the terms have denominators of (2n+1)! (factorial), i.e. 3!, 5!, 7!, 9! in the expanded formula found in the book. But some typesetter must have thought that an error, as the book replaced the ‘!’ with a ‘1’, giving denominators of 31, 51, 71 and 91. Not a small error when you are coding! The other error is that the terms alternate in sign ((-1)**n), so x - a + b - c + d and so on. The book had all ‘+’ signs.

When debugging the massively incorrect results, I simply did a google search on ‘series solution of sin(x)’ and found the correct formula, then coded that. It is that corrected formula that still diverges from actual results for values of ‘x’ greater than PI.

More Fun with the CPUville Z80 Single board

During the Christmas break I built the CPUville Z80 single board, plus the ‘slow board’ which is really a really nice ‘blinkenlights’ display board for the Z80 single board. That was a fun build.

I also built the CPUville 8-bit computer (3 boards) plus register display board and added that to a separately built Z80 single board (with Z80 replaced by the 8-bit boards). That was also a fun build.

But the real fun began with the Z80 single board once I added an IDE 40-pin to CF (compact flash) controller board with a 4+ GB CF card. Following the CPUville instructions, I was able to modify/compile/install CP/M 2.2 on the CF card giving me 4 large “hard disk” CP/M partitions.

THEN… I started playing. After reading a few CP/M manuals, I began to learn my way around the system. ED was perhaps the hardest to learn, only because the first manual neglected to mention that the display buffer was NOT filled ‘on entry’. One has to type ‘#A’ to load it with the file contents before you can see/edit anything.

I started with a few Z80 (or 8080) assembler programs, then found and loaded FORTRAN (F80). I then spent days playing with my FORTRAN programs that I wrote in the 1980’s during my Engineering degree and post-grad courses. Interestingly, they compiled and linked easier than when I tried them on the PiDP8 replica I built several years ago. The version of FORTRAN in F80 was just a bit more modern than the FORTRAN IV on the PiDP8, making things much easier and more fun.

Last week I found and loaded the High Tech C compiler on the Z80. I compiled a few C programs from my earlier C programming days, as well as a few versions of the “calculate PI to N digits” programs. Again, tons of fun.

The interesting bits came trying to install the C compiler. It’s a lot of files, and when I tried loading them individually via “PCGET”, they crashed the terminal program. Seeking a better solution, I tried LZH unpack programs (didn’t work on modern LZH files), and eventually found that using modern WINZIP and an old CP/M UNZIP18.COM program, I was able to load whole groups of files to the Z80 and then unzip them in place. The only condition is that the CP/M unzip does not understand ‘modern’ zip methods, so you must zip them on the PC (Windows 7 in my case) with NO COMPRESSION.

The other ‘gotcha’ I discovered tonight is that you must be sure the ZIP files are named in CAPITAL letters. If you unzip lowercase named files on the Z80, they remain lowercase and kind of ‘disappear’ to CP/M. I could not even delete them until I asked on the ‘comp.os.cpm’ google group and was told about NSWEEP (or NSWP.COM). That program was able to delete them easily. I then rebuilt the zip with uppercase file names and it was fine.

So onward and upward with this wonderful true Z80 computer running CP/M 2.2, with FORTRAN, C and 8080/Z80 assembler.

Glassblowing update – April & May 2018

On April 4 I started to blow glass, but the furnace was acting up. Ramping from 1900 to 2100, it went to 2000 and then really didn’t get any hotter. The temp readings were acting up and not settling down, so I set it back to 1900. April 5 I tried again. This time I was watching and saw the temperature reading go past 2000 no problem, but at about 2050 it started to “go unstable”. Eventually it read UUUU which means “no reading, upper limit”.

The only reasonable causes were: a broken thermocouple, faulty wiring or connections, or a controller failure. The sane response was to shut the furnace off and do a complete check of all components.

After turning the furnace off, I noted the crucible was welded to the maintenance lid by spilled glass. It would not budge, even hot. In an effort to free the crucible from the lid, I blocked the maintenance lid up a bit (about 1/2 inch) and left it.

Sure enough, when I returned 12 hours later the crucible was free.

Once the furnace was completely cold I removed the crucible, lids and alumina board to have a look. The crucible is in excellent shape, though full of glass. The kanthal heater wires also look in perfect shape, which is amazing for the age of the furnace. The lids were all good, but there was glass on the top lip of the crucible and on the bottom of the rammable gathering ring. This was what welded the crucible to the ring and thus the lid.

I’ll have to remove the glass carefully so the crucible doesn’t weld next time I run hot.

The next steps are to inspect all the wiring and connections, and then to make sure all connections are tight. The most likely cause of the temperature readout issue is a loose thermocouple connection as these are usually pretty robust if not touched. Only if the connections are all tight will I start further tear down.

My action plan for the late spring (May-June) is to first check the wiring connections. Second is to remove the grog in the base of the hot box and sieve it so it’s clean, then reinstall the two lids and take the furnace up to 2100F to check the wiring and controller. With the crucible out of the furnace, this can be done much quicker as the crucible is the limiting factor on temperature rise.

While this is going on, I’ll also clean up the gathering port ring and crucible lip. If the heating test is good, I’ll install the ring and crucible into the once-again cold furnace and start it up.