Someone DOS’d my BSD 2.11 Server

This morning I noticed a lot of activity on the front panel of my PiDP11 (PDP/11 replica). This is not normal, so I had a quick peek.

The machine is a Raspberry Pi (3) running Raspbian and hosting SIMH, which simulates a PDP/11. The PiDP11 consists of circuitry, switches and LEDs that replicate the front panel of a PDP11. The LEDs show activity of the simulation just as the front panel of a real PDP11 would, so it’s a decent snapshot of actual system activity.

There is a specific LED pattern to system ‘idle’, and other patterns when the system is active. In this case, the ‘active’ pattern was continuous for several minutes.

This is unusual because the only program running on the PiDP11 besides the BSD 2.11 operating system is a small C Program ‘httpd.c’ which runs a simple HTTP web server. The actual web page served is a simple HTML page of text and one photo. Normal access shows activity for several seconds (less than 10) and then the idle pattern returns.

In this case the active pattern continued for several minutes. There is no need to ‘hit’ the web page repeatedly unless mischief is afoot.

I logged on to the R-Pi and then to the SIMH-PDP system. Using ‘ps’ I could see unexpected programs running, so I exited to SIMH and ended the simulation. I then rebooted the R-Pi.

While the R-Pi was rebooting, I checked my firewall rules to confirm the machine/port was open to the world. I edited the config file to remove this connection and reset the firewall. After reset I confirmed the port was no longer open.

Later I checked the firewall logs, and confirmed that the attack was a simple DOS (denial of service) attack from a foreign country (in the ‘far east’). Fortunately I caught it very early and killed it immediately.

However, the BSD 2.11 web server is no longer accessible from outside my home. Such is the price exacted by ‘bad actors’ seeking to cause mischief.

Modula-2 Didn’t Make the Cut

As the title says, I’m pretty much done with Modula-2. It had promise, and was even fun to convert a few programs from PL/I and FORTRAN. But in the end, it just didn’t cut it for me.

Why? In a word, I/O. More specifically, the input/output for Modula-2, at least the CP/M Z80 version I had, was just too primitive to use.

What I have discovered while running and modifying some of my old Engineering programs in FORTRAN (and into PL/I or C) is that in many cases the actual calculation code pales in comparison to the input/output code. Many of the engineering programs produce a LOT of output, and it really needs to be readable to be of any use.

FORTRAN was my first computer language, and it was also the primary scientific / engineering language of my first professional jobs. Some of those programs produced fan-fold paper outputs that were several INCHES thick. Having a readable output format was critical.

What I’ve discovered lately is that PL/I is actually a better language for formatting output, at least when the FORTRAN compiler does not support the ‘T’ format (tab). PL/I has a tab (Column(x)) so it’s quicker and nicer to use for those programs.

Also, getting input from keyboard (console) or file in either FORTRAN or PL/I is not only quite easy, it’s easy to format, thus allowing for very concise readable input files.

Not so Modula-2. The input libraries are quite rudimentary. For example, reading real numbers is done by a routine that reads a full line, then parses ONE number from that line. To do anything else requires rewriting the library.

As a result, a simple input data file such as ‘-23.2 10.0 -44.8’ becomes one number per line. Now imagine a file with 5-10 real numbers per line and you have an unreadable input file.

Likewise the output was pretty much ‘unformatted’. You could apply some formatting, but not much. As a result, programs produced rather messy output. It’s fine for a simple program, but not for anything of substance.

It’s funny, because I remember the original criticism of Pascal, the precursor language to Modula-2 by the same dude. Formal Pascal had no input or output as part of the original language spec, because it was supposed to be a ‘pure academic’ language. It wasn’t until Borland created Turbo Pascal and added a complete set of I/O routines that Pascal became a force in the 1980s.

Funny that I’m facing a similar situation in 2020 with Modula-2 on my Z80.

Windows XP Professional – unsung hero O/S

I’ve been writing of my adventures with a Z80 singleboard computer (kit supplied by CPUville) for some time now. I’ve also mentioned how I had to ditch the flaky USB-serial connections and return to ‘real’ RS232 connections using an old Toshiba laptop.

What I may not have mentioned is the O/S on that Toshiba. It came with Windows XP Professional, and is still running that operating system all these decades later. It simply works.

I originally kept the Toshiba because it was the last laptop I owned that had a dedicated DB-9 Serial port connection on the back. Everything I’ve owned since has dropped the printer (DB-25) and serial (DB-9) connectors in favor of USB ports. While USB is great for almost everything, it is not always great for serial communications. Frankly, the USB-serial chips in the cables rely on flaky drivers that don’t always work.

I kept the Toshiba with its serial port to run Fuji controller software, which talked via serial connection (RS232-RS485) to my two Fuji glassblowing controllers (PXR3 series). The software from Fuji ran on Windows 95 thru XP, but not on newer versions. As a result, the Toshiba remained the ‘Fuji controller laptop’ complete with Win XP ever since.

The thing is, it still ‘just works’. After many years on a shelf, I installed the battery (it was stored separately), found the charger and plugged it in. After pressing ‘power on’, it simply worked. I found an RS232 cable (DB-9 ends) and plugged it into the Z80 singleboard. I downloaded a decent terminal program (Teraterm) and again, things just worked. Not only that, but they have continued to work ever since.

The laptop has a network adapter (not wifi) so I plugged it in, and was immediately able to access my shared data folders. I can edit on my main development PC (Windows 7) and access the files from the Toshiba to send via Teraterm to the Z80. All quite easy and slick.

Using modern windows ‘Remote Desktop Connection’ (RDP), I am able to fully operate the Toshiba via a remote RDP window on my devel PC. Windows XP supported RDP, and it hasn’t changed in all this time – or at least, it’s backward compatible to XP.

The reason I was thinking about this recently is that a couple of days ago we caught a show on Knowledge Network about the 1990s. This episode was all about tech in the 90s, and was a real trip down memory lane for me. In the 90s I was teaching C programming at SAIT in Calgary in the evening (Continuing Education courses) as well as full-time consulting in the daytime.

I worked on some of the first computers to get Windows 95 when it came out. I remember programming for Windows 95. I was ‘there’ when Win 95 led to Win 98, then Win ME (millennium edition), or “meh” (or mill-enema edition) as we often said. Win ME was a horrible, rushed OS that could not die fast enough. But Windows XP – that was beautiful, at the time. It simply worked. Most of the driver glitches had been tamed, and the PC Card slot and USB ports worked. The PC Card slot was quite cool for the time – I had a TV card (turned the laptop into a real TV) and a Wifi card, and several others. They were expensive but cool. And now they are long-gone history.

Still, it’s nice to be using something for a specific purpose and have it still work perfectly after all these years.

And the New Thing is… Modula-2

I know. Modula-2 is not new. In fact, it’s pretty much another ‘dead language’ in computing.

Modula-2 was Niklaus Wirth’s later foray into computer programming languages, following his much more successful Pascal.

So why learn (and write) Modula-2 instead of Pascal? Well, it’s simple. I hate Pascal.

I’ve hated Pascal ever since the Borland Turbo-Pascal came out and everyone in the programming community was shoving it down my throat in the mid-1980s. I was, at the time, working at a consulting firm writing and maintaining FORTRAN programs on IBM, VAX, CRAY and several other machines. I was hearing rumblings from the Unix community about this ‘thing’ called C, but was at that time still several years from venturing into C myself.

But this Turbo-Pascal was everywhere in the micro-computing community. You simply could not escape it. Worse, to me it seemed a dumb language; full of ‘:=’ and other arcane structural things that made little sense to a FORTRAN programmer.

I didn’t want to learn it, but the constant barrage of Pascal stuff was deafening.

Eventually I moved on to C programming, and Borland’s Turbo-C was wonderful for micro-computers. I went on to start my own consulting practice where C programming became my bread and butter for many years. After that I branched out to C++ and then Java, but I managed the entire time to ‘omit’ Pascal.

Now I’m back playing with Z80 computers and interested in learning ‘stuff’ for the fun of it. While looking around for the next ‘programming thing’ after FORTRAN (F77), PL/I and C for my Z80, I discovered a really great working Modula-2 compiler. It was complete. There was full documentation. There were (a very few) example programs. I was set.

The cool thing about the Modula-2 docs is there’s even a big section comparing differences between Modula-2 and Pascal. Funny, I still don’t miss Pascal.

But now Modula-2 is the ‘this week and/or month’ language, so on I go. I’ve already managed to convert one fairly simple PL/I program, and just today I managed to get it to read from, and write to, files. Next I’ll try some more sophisticated programs just to see how it compares to the other languages I’ve been enjoying.

I do love PL/I

After some weeks playing with old FORTRAN programs and converting them into PL/I on my Z80 singleboard (SB) computer (kit from CPUville), I can say without reservation that I quite love PL/I.

The language is now dead, and that’s a shame. It certainly is one of the true ‘structured programming’ languages of the 20th century, and definitely fit in with the horrid old ‘waterfall project development’ life cycle. However, the language itself is quite lovely.

I took many of my old Engineering FORTRAN programs from the ’80s and had them running in F77 on the Z80 SB. They worked almost exactly as they had in the 80s, though quite a bit slower (Z80 vs. CDC Cyber or Honeywell Multics). The speed difference between the Z80 and a modern Intel i7 PC using the same F77 compiler was astounding.

But it wasn’t until I converted them to PL/I that I found the PL/I programs, while about the same speed to compile, link and execute, were just… nicer. From an aesthetic standpoint, the structure of the PL/I programs was clean and… nice. But the things that set PL/I apart from F77 were the file IO and the outputs. File IO in FORTRAN works, but was messy. File IO in PL/I was simple and clean. The F77 documentation for Z80 wasn’t any help either; I ended up reaching out to online forums for assistance getting F77 file IO to read CP/M named files. Once I got the ‘trick’, it was easy. But in PL/I, the documentation was clear, easy to follow, and correct.

Even with both IO systems working, the output was where PL/I really shone. Fortran has several format specifiers for real number output, but PL/I outshines Fortran on every level.

The output from my FORTRAN programs was … nice. The same output from my PL/I programs was beautiful. Now in full disclosure, the F77 compiler would not accept the ‘T’ (or Tab) format specifier, so I had to resort to counting spaces. The PL/I formatted output accepted ‘Column(x)’ and made it so much easier.

I found no fundamental difference in double precision implementations in the two compilers, and both were easy to use once you learned how.

However, I still really enjoyed writing the PL/I programs. It was just a lot of fun and very satisfying.

Now on to new things…

More PL/I fun (with my Z80)

I received a couple of excellent comments regarding my previous blog posts on PL/I and its compiler bug with double precision and the ‘**’ operator. I replaced ‘**’ with a dead simple ‘doexp()’ function that did serial multiplications using a loop. It was not very sophisticated, nor very efficient.

Ed suggested I search for “exp_by_squaring_iterative”, which I did. It’s a very simple and elegant recursive method for calculating y to the power x, and was easy to implement in PL/I, which supports recursion. After a quick test to prove ‘I got it right’, I replaced ‘doexp()’ with ‘powsq()’ (my name for it) in both fracture programs as well as the concentration programs.

Tests with the concentration programs proved the new method is almost twice as fast as the brute-force function, which is very rewarding.

The next problem I ended up tackling (and am still working on) turns out to be caused by the ‘weirdness’ of 2D arrays. Basically, different programming languages order arrays differently: C (and PL/I) store 2D arrays in row-major (row-column) order, while FORTRAN stores them in column-major (column-row) order. The difference shows up when an array is populated by BLOCK DATA (FORTRAN), an initializer list {} (C), or STATIC INITIAL (PL/I).

Figuring out that the original FORTRAN program loaded the arrays in a completely different manner than PL/I has caused me no little headache. I ended up having to debug both the FORTRAN code to see how the array was stored as well as the PL/I program.

Still, I’m having fun and still enjoying my foray into PL/I.

Once again I broke a compiler…

Back when I was writing FORTRAN ‘for real’ (a.k.a. in a production environment), I managed to find a bug in the FORTRAN compiler that was confirmed and verified independently by the compiler creator and supplier.

Not that I was too happy about it as the bug was totally reproducible and quite severe. It was also just obscure enough that it was never fixed. We worked around it and then went on to other things.

Well, I’ve found a bug in another compiler. This one dates from the same time frame, but is the Digital Research PL/I compiler I’ve been playing with on my Z80 single board.

Now, disclaimer: this ‘bug’ may well be a known and documented condition, but it’s not in the DR written PL/I compiler manual, and any other documentation has probably been lost over the ages.

So with that disclaimer: I mentioned in my last post that I had to build my own ‘doexp()’ subroutine for the concentration program because squaring a negative number was failing with ‘Error(3)’.

Well, it happened again. This time I was converting an old fracture program from FORTRAN and it was failing with OVERFLOW(1) errors. After some serious tracing with debug prints, I found it was again the ‘**’ exponent function that was the cause. Looking at the numbers coming from the program, there was NO WAY it could be a real calculation problem (-.0152534 ** 2 is NOT an overflow!!!) so it had to be the ‘**’ again.

To verify, I grabbed the ‘doexp()’ subroutine from the concentration program and popped it into the fracture program, and sure enough it runs.

As there is no way I can examine the source of the built-in function, I’ll just have to use my ‘doexp()’ from now on.

At least the program runs now. 🙂

It matters to me (or the lost art of program tuning)

In my last post I talked about PL/I programming, which I’ve again fallen in love with, and the speed of my small engineering programs on my Z80 single board computer compared to a modern PC.

I was actually incorrect in my last post – the program is not 4000 times slower than a PC. In fact, the PC version was so quick one could not really time it by hand. There are tools on Unix boxes to time program execution, but I didn’t use them. However, a more realistic estimate for the larger data set would be about 1/2 second, not 1 second. That means my little Z80 program was 4000 x 2 or 8000 times slower (at least).

But something else about that program has been bothering me for the past few days. I mentioned in my last post how the ‘exp’ (or ‘**’) operation had a bug that prevented the double precision version working with negative numbers. Normally, there’s no problem – for example square -2 and you get +4. Cube -2 and you get -8. It’s all just serial multiplication, so it bothered me that it was not working. My fix was to add ‘abs’ before the term to remove any negative value; that worked, but I wasn’t happy with the fix as it wasn’t ‘totally correct’ from an aesthetic standpoint.

The solution was to return to my programming roots. Back in the early 1980’s I worked writing Reservoir Simulation programs. These huge FORTRAN programs could consume every calculation cycle of very, very large computers. Optimization beyond what a compiler was capable of became essential. There were many tricks we used to squeeze every bit of performance out of such programs.

One technique involved removing subroutine calls when possible by ‘inlining’ code. It’s ugly, and counter to all ‘structured programming’ rules, but it does work. Another technique analyzed operations: division is much more ‘expensive’ (in CPU cycles) than multiplication, with addition being about the cheapest. Loops have set-up time, so ‘unrolling’ loops might also be done in simple cases.

My analysis of this particular PL/I program showed the line with the error in it had multiplication, division, and that exponentiation. On top of that, I had added a call to the ‘abs’ function. I knew there was a way to fix both and make it run faster: convert ‘exp’ (or ‘**’) into serial multiplication and remove the ‘abs’ call. It would not only fix the negative value error (multiplication always works) but remove two function calls and their related overhead (push stack, pop stack).

I decided to write a new function ‘doexp()’ which used a loop to perform the serial multiplication. I knew I was adding the overhead of a new function, but removing two function calls and a (supposed) complex general-purpose exponentiation routine. It won’t work for fractional exponents, but this program was always limited to whole number exponents anyway.

The new function coded, I tried the smaller data sets. All reported a roughly 1.5 times speed improvement. I then ran the big data set, and early timing of the time steps shows it’s running 2 times faster than the original program. The entire job, which would have taken 2.8 days, will now run in 1.4 days. For program tuning, this is a huge improvement.

As the post title says, it matters to me. It was also great fun to see it work.

Just How Slow is my Z80 single board computer?

Officially, the Z80 single board computer from CPUville has a 1.8432 MHz crystal clock. Compare that to a modern PC running multi-gigahertz clocks and it’s pretty slow indeed. But sometimes the real test is in running ‘real programs’.

As stated in prior posts, I’ve been playing with PL/I on my Z80 single board computer under CP/M 2.2, and I love it. PL/I is turning out to be the amazing and fun language I learned back in 1980, and it’s a blast. One thing I really love about PL/I is the total control over output, done easily with output formatting capabilities, plus the really great error handling.

I had an obscure bug in my latest program: a multi-phase concentration-with-time program from my Engineering graduate days. It would not run on one of the numerous test data sets, instead crashing immediately with ‘Error 3’. Looking up the error, it says: ‘A transcendental function argument is out of range.’ Some help. However, transcendental functions include exponentiation, and the program had just such a line. Typical debugging means putting in a bunch of print statements and then wading through reams of output to try and trace the problem.

In PL/I, you can use the ‘on’ construct to trap errors. So by adding a ‘on error(3) begin… end;’ I was able to immediately isolate the subroutine where the error occurred and yes, it was the exponentiation line. Now adding some print statements made sense, and I quickly found that in double-precision, squaring a negative number was causing the problem. Now squaring a negative number is legal (the result is positive), so I had to find a fix. Fortunately using the ‘abs’ function solved the problem for this case as no real data (i.e. fluid concentration) should be negative in the runs. With the problem fixed, time to run all the example cases.

All but one ran in decent time. But problem 10, which has 600 time steps, was still going after two days. With no console output, I wasn’t sure if the program had crashed or was just taking a long time.

A few more print statements (with a debug flag to turn them off later), and a stopwatch, and I found this particular run was taking 6.44 seconds per time step. With 600 time steps in the run, that’s 4040 seconds or 67.3 hours (2.8 days!). No wonder I thought it was taking a while.

So about slowness… when I was wondering if the run was ‘broken’ or not, I ran the test case on my Win7 PC (in FORTRAN, but close enough). It was so fast you could not really time it. Let’s say for argument sake it took 1 second. That makes the Z80 4040 TIMES SLOWER!

Oh well, this is all about fun, so waiting for the full run on the Z80 will be something to wait for. Until next time…

Sin(x) Taylor Series, finally

In my last post I mentioned that I would turn on debugging to see where the program failed before resorting to writing a double precision version.

As it turns out, the program wasn’t really failing, it was the precision I set on the output format that caused the error (CONVERSION). The OVERFLOW error was simply a division that produced a number so small it was beyond single precision. The first was fixed with a simple format change, f(11.7) from f(7.3) and the second… well that needed double precision.

It is very easy to convert the program to double precision. A global replace on ‘float binary’ to ‘float binary(53)’ and it’s done. Since PL/I requires ALL variables to be declared, the above will get them all. Recompile & link, and it’s done.

Except… Digital Research (DR) PL/I for Z80 up to V1.3 explicitly does NOT support double precision; it’s stated quite clearly in the manual. I needed V1.4 for double precision support.

After searching the ‘net and coming up blank, I posted on ‘retrocomputingforum.com’, and immediately EdS came to the rescue with a link to DR PL/I 1.4. I downloaded it, installed it, and the program compiled and linked.

Running the program with 11 terms is now not a problem. Using my higher precision formats I could run the program from 0 to 4Pi with 11 terms. Below I show the special cases (exact multiples of Pi) and you can see how the result differs from actual at 4Pi:

Special cases - sin( Pi):
sin x( 3.14) = 0.0000002 (calculated) 0.0000002 (actual) -0.0000000 (diff)
Special cases - sin( 2Pi):
sin x( 6.28) = -0.0000058 (calculated) -0.0000003 (actual) -0.0000055 (diff)
Special cases - sin( 3Pi):
sin x( 9.42) = -0.1299004 (calculated) -0.0000000 (actual) -0.1299003 (diff)
Special cases - sin( 4Pi):
sin x( 12.57) = -158.2498658 (calculated) -0.0000006 (actual) -158.2498652 (diff)

I now know that running more terms would push the accuracy further out, but I think I’ve effectively run the course on this program.

Next up: More conversions from FORTRAN to PL/I. Last time it was an Analytical Well Model program, which gave me a scare in PL/I V1.4 (compared to V1.3) and a couple of programs that “solve the continuity equation for composition variation with time and distance for a one dimensional, two phase, three component system”. Lots of fun. 🙂