Sin(x) Taylor Series – Revisited

I received a message on an excellent discussion group that I follow (retrocomputingforum.com) regarding my last post on the divergence of my calculated Sin(x) results from the ‘actual’ values in my PL/I sin(x) program.

The gist of the post was that the Taylor series *should* converge to the correct answer for all values of x, so long as there were sufficient terms in the Taylor series. The poster (EdS) went on to describe some tests he had done showing practical limits for x based on the number of terms in the series. Sure enough, 5 terms started to diverge above Pi, while 9 terms was good to over 2Pi.

In response, I rewrote my Sin(x) program to ask for user input: first the upper range (calculating from 0 to nPi, where n is input), then the number of Taylor series terms (from 5 to 13). Using the new program, I calculated Sin(x) from 0 to 2Pi with 9 terms, and the results were accurate until close to 2Pi, instead of diverging almost immediately above Pi.

I tried 11 terms to 4Pi, but the program terminated with an OVERFLOW error. I then tried going to 4Pi using 9 terms, but received a CONVERSION error. Both indicate the program is at the limit of the PL/I compiler's floating-point precision, which is 24 bits for versions 1.0 and 1.3 (float binary(24)). I read in the docs that version 1.4 allows double precision (float binary(53)), so I found a copy and have now installed that.

Before I create a double-precision version of the program, I’ve turned on all debugging just to see where in the Taylor series calculations the program is failing.

More to come…

Z80, Taylor Series for sin(x) and why limits matter

I have been thinking about why the results for my calculations of sin(x) using the well published Taylor series have been inaccurate for values of ‘x’ between PI and 2PI.

I think, reading between the lines of many, many posts on the subject, I have an answer.

The series appears to be accurate between -PI and PI. I cannot find a definitive statement to this effect, but it certainly appears to be the case based on a lot of discussion, questions and answers, and ‘chatter’.

By now I’ve tested the algorithm using single precision (on my Ubuntu linux box), double precision, and even swapping out my coded ‘power’ method for a library method. In all cases, the results were the same. Between PI and 2PI, results diverged from ‘actual’.

Note: PL/I on the Z80 does not have a ‘pow’ method, which is why I had to write one. Having written it for the PL/I program, I kept it in my C programs (single and double precision) so the C and PL/I versions stayed comparable.

I recoded the program to calculate between -PI and PI, and also added a difference output (calc – actual) to show the actual divergence. Using -PI to PI, the results are much closer across the entire range.

My conclusion, barring new information, is that the published Taylor series for calculating values of sin(x) for ‘x’ in radians is accurate in the range of ‘x’ between -PI and PI.

The Continuing Saga of the Z80 Singleboard Computer

I’ve recently posted about the fun I’m having with the Z80 singleboard computer (kit from CPUville).

In addition to a FORTRAN compiler (Microsoft F80), I added the High Tech C compiler. I’ve written programs in FORTRAN, C and 8080 Assembler. I’ve used both the CP/M 2.2 ASM assembler and the M80 assembler that came with the F80 compiler. Except for one instance where my port-reading assembly program won’t actually read the port, it’s been fun and games.

I’ve even created assembler programs that can be called from FORTRAN (the aforementioned port reading routine).

Last week, while exploring the various archives of CP/M software, especially compilers, I spied the Digital Research PL/I compiler. That looked really promising.

Back at my first job after my B.Sc., I worked at an IBM shop that sent me on a PL/I course. Afterward I spent the next year writing software in PL/I for a pair of IBM 3033 mainframes. It was all great fun.

Finding a working PL/I compiler was too good to pass up, so I grabbed the archive and beamed it to the Z80. After a bit of digging, I found my 1980 PL/I reference book, “PL/I Structured Programming” by Joan K. Hughes (2nd edition, Wiley, 1973). After reading through it to refresh my memory, I started building a few PL/I programs, following the examples in the book and then the chapter problems.

Some features of the IBM compiler were not available in the Digital Research (CP/M) version, but I had the Digital Research PL/I documentation to help me with the transition. Eventually I had written several working PL/I programs.

The past few days I’ve been playing with a Taylor series program for calculating Sin(x) (x in radians). I have the program working, but the answers diverge from ‘actual’ values in the range PI to 2PI. I had full debugging in the code, but could not really see the reason.

I decided to try converting the PL/I program to C, and then running it on a modern C compiler on one of my Ubuntu 18.04 servers.

SURPRISE!!! The C program has the exact same divergence! Even switching from ‘float’ to ‘double’ didn’t remove the divergence in the C program on a modern machine. I’ll definitely have to investigate further.

Just for fun I then took the working C program and beamed the code over to the Z80. The High Tech C compiler is sound, so it compiled the program easily. The run on the Z80 gave the exact same answers (with a small nod to precision differences between platforms) as both the C/Linux version and the PL/I CP/M version. It’s either a really difficult-to-find coding mistake in my work, or a real phenomenon. As I said, I’ll have to investigate.

Where it all gets cute is the timings. On the Linux box (a big AMD 6-core server with loads of memory) the C program runs so fast it would have to be timed in milliseconds; certainly faster than one could manually time it. The PL/I program on the Z80 takes 3min, 40.15 seconds to run. What was a surprise is that the C program on the Z80 took 5min, 34.40 seconds! I never expected a C program to be that much slower than the PL/I program.

Now that I have FORTRAN, PL/I, C and Assembler all working, time to continue playing.

One last thing: I found a printing “bug” in the PL/I textbook. The formula for the Sin(x) Taylor series has two major errors. First, each term has a denominator of (2n+1)! (factorial): 3!, 5!, 7!, 9! in the expanded formula found in the book. But some typesetter must have thought the ‘!’ was an error, because the book replaced it with a 1, giving denominators of 31, 51, 71 and 91. Not a small error when you are coding! The other error is that the terms should alternate in sign ((-1)^n), so x - a + b - c + d and so on, but the book had all + signs.
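For the record, the corrected series (the standard Taylor expansion of sin(x) about zero) is:

```latex
\sin(x) = \sum_{n=0}^{\infty} \frac{(-1)^n \, x^{2n+1}}{(2n+1)!}
        = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \cdots
```

Both of the book's typos are visible at once here: the factorial denominators and the alternating signs.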

When debugging the massively incorrect results, I simply did a Google search on ‘series solution of sin(x)’, found the correct formula, and coded that. It is that corrected formula that still diverges from the actual results for values of ‘x’ greater than PI.

When old beats modern (a computer story)

If you asked me to comment on whether an old computer technology could beat modern technology, I’d give the obvious answer: no.

Except my recent explorations with the two have proven that in some cases, the exact opposite is true. In my case, I’ve been playing with a Z80 single board computer based on a design from the 1970s. It’s a solid design, and the implementation by “CPUville” is awesome.

The only bit of “new” in the system is a wonderful little device known as an IDE to SD card interface. The CPUville Z80 singleboard has an IDE interface and connector which accepts this $10 board, which in turn accepts an SD card to act as a hard disk for the system. It all works, and works exceptionally well. I end up with 4 large hard drives for the Z80, which is running CP/M 2.2.

And that is where the magic, and indeed the beating, takes place. No, the Z80 is not faster than a modern computer. It’s much, much slower. But CP/M was an almost fully realized operating system with a rich user community and a lot of available software, and that is where the difference lies for what I do.

I love writing programs, and especially in the older languages – FORTRAN, C and now PL/I. I used PL/I in my very first job in 1980 and 1981 in an IBM mainframe shop, and quite enjoyed it.

On other systems, getting compilers has been difficult, but for CP/M 2.2 there are extremely good compilers for FORTRAN (Microsoft’s F80), C (High Tech C) and now PL/I (Digital Research’s version). What is even more wonderful, they all work very well and compile my old programs nicely.

Of course there are quirks and things one must learn (or re-learn) but it’s all fun.

Which brings me to the “beats modern” part of this post. You see, at this moment I can’t compile C programs on my Windows 7 PC. I try, but the compiler I’m using (MinGW) is 32 bit and my version of Win7 is pure 64 bit, and the two currently hate each other. I know that I will eventually fix the problem, but I’m not in a hurry because I have several Linux boxes plus a Macbook, so I have C compilers available.

Getting FORTRAN was a bit tougher, but eventually I found a nice set of FORTRAN compilers for the Windows machine that work well. But with PL/I I’ve currently hit a wall.

I found a Linux PL/I compiler, but the archive is broken (unreadable). I cannot find a PL/I compiler for Win7.

But equally interesting is the fact that getting the compilers running on the Z80 CP/M system was just … easier. Essentially, the three compilers not only “just work”, they all tend to work in very similar ways. I have a good manual for FORTRAN, plus one for C, so I was able to get going quickly. For PL/I I have a programming manual, but nothing on ‘how to compile & link’. What is really cool is that, knowing how to compile & link with FORTRAN and C, I just typed the same commands into PL/I and it worked. I’m sure there are options I don’t know about, but basic operations are working fine.

And I’m loving it. Now if only I could get all these tools working as well on my other platforms.

More Fun with the CPUville Z80 Single board

During the Christmas break I built the CPUville Z80 single board, plus the ‘slow board’, which is really a very nice ‘blinkenlights’ display board for the Z80 single board. That was a fun build.

I also built the CPUville 8-bit computer (3 boards) plus register display board and added that to a separately built Z80 single board (with Z80 replaced by the 8-bit boards). That was also a fun build.

But the real fun began with the Z80 single board once I added an IDE 40-pin to CF (compact flash) controller board with a 4+ GB CF card. Following the CPUville instructions, I was able to modify/compile/install CP/M 2.2 on the CF card, giving me 4 large “hard disk” CP/M partitions.

THEN… I started playing. After reading a few CP/M manuals, I began to learn my way around the system. ED was perhaps the hardest to learn, only because the first manual neglected to mention that the edit buffer is NOT filled ‘on entry’. One has to type ‘#A’ to load it with the file contents before seeing or editing anything.

I started with a few Z80 (or 8080) assembler programs, then found and loaded FORTRAN (F80). I then spent days playing with the FORTRAN programs that I wrote in the 1980s during my Engineering degree and post-grad courses. Interestingly, they compiled and linked more easily than when I tried them on the PiDP-8 replica I built several years ago. The version of FORTRAN in F80 is just a bit more modern than the FORTRAN IV on the PiDP-8, making things much easier and more fun.

Last week I found and loaded the High Tech C compiler on the Z80. I compiled a few C programs from my earlier C programming days, as well as a few versions of the “calculate PI to N digits” programs. Again, tons of fun.

The interesting bits came in trying to install the C compiler. It’s a lot of files, and when I tried loading them individually via “PCGET”, they crashed the terminal program. Seeking a better solution, I tried LZH unpack programs (they didn’t work on modern LZH files), and eventually found that using modern WINZIP and an old CP/M UNZIP18.COM program, I was able to load whole groups of files to the Z80 and then unzip them in place. The only catch is that the CP/M unzip does not understand ‘modern’ zip compression methods, so you must zip the files on the PC (Windows 7 in my case) with NO COMPRESSION.

The other ‘gotcha’ I discovered tonight is that you must be sure the files in the ZIP are named in CAPITAL letters. If you unzip lowercase-named files on the Z80, they remain lowercase and kind of ‘disappear’ to CP/M. I could not even delete them until I asked on the ‘comp.os.cpm’ Google group and was told about NSWEEP (or NSWP.COM). That program was able to delete them easily. I then rebuilt the zip with uppercase file names and it was fine.

So onward and upward with this wonderful true Z80 computer running CP/M 2.2, with FORTRAN, C and 8080/Z80 assembler.

The Wonderful World of Old

I’ve written before about the fun I’ve been having building and running old hardware systems, such as PDP-8/I and PDP-11 replicas. These both use faithful scale recreations of the front panel of the machine, complete with ‘blinkenlights’ and switches. Both use a Raspberry Pi (3B+) running a program called SIMH to faithfully recreate the hardware. The PiDP-8 runs the DEC OS, while my PiDP-11 runs 2.11BSD on top of SIMH.

Both have been much fun. The PiDP-8 I used to run some of my ’80s FORTRAN programs, while the PiDP-11 ran C and a simple web server.

I also built an Altair 8800 replica called the Altairduino that gets its power from an Arduino board, again running simulation software to mimic the Altair. I confess I haven’t done much with this system, even though it came with an SD card full of software, including CP/M.

But this winter I spent some time building two kits that really brought back my enthusiasm for the Z80, and actually have me learning and running CP/M for the first time. (I was a TRSDOS, then NewDOS, TRS-80 user in the ’80s.)

Both kits come from ‘CPUville’, a fellow who designed, built and now sells kits for a Z80 single board computer of Byte fame. The first kit was a single board Z80 system, complete with IDE interface and true RS-232 serial port. It runs a true Zilog Z80 CPU at 1.8x MHz, which was fast ‘in the day’. It supports a second ‘display’ board that offers all the LEDs and switches to see and interact with the Z80 in real time.

The second kit starts with the Z80 single board, but then replaces the Z80 with a set of 3 8-bit CPU boards that use discrete logic chips instead of a single processor. I topped it off with its own display board (LEDs and switches), and it’s a fully functional 8-bit computer.

But the real fun came when I started playing with the Z80 single board and installed CP/M on an IDE-to-SD-card ‘hard disk’. With 2+ GB to play with, it’s like a world of CP/M disks all in one. I started by installing CP/M 2.2, then Microsoft FORTRAN 80, and today High Tech’s C compiler.

There have been frustrations, such as learning to use the ED editor and other OS programs (PIP, anyone?). But the ‘proof of the pudding’, as they say, has been in how well it runs my ’80s-vintage FORTRAN programs. Even though it’s only an 8-bit computer, and thus suffers from a severe lack of precision (the default integer is only 2 bytes long), it has successfully run most of my programs. There are a couple that are simply too big for the 64K RAM, but otherwise it’s been a blast.

Today I played with compression/archive programs (to get the C compiler installed) and now it’s happily calculating the first 1000 digits of PI.

It’s slow, but what I love the most about it is that it is NOT a replica, nor a simulation – it’s a real, honest-to-goodness Zilog Z80 on a single board talking to an older laptop (with a real RS-232 port in the back) via a non-simulated SERIAL interface (actual 9600 baud), and I couldn’t be happier.

Jupyterhub Maintenance Nightmare

I just finally finished what was a week-long nightmare involving Jupyterhub.

It started simply enough. I have always kept my own documentation on “how to do” various things. In this case I had recorded the steps to keep all the packages in Jupyterhub updated on an ongoing basis. Jupyterhub lives inside something called Anaconda, which is managed by a tool called conda. Upgrading is supposed to be as simple as “conda upgrade xxx”, where xxx is the package to upgrade.

I have been using this process since first installing Jupyterhub on a server back in January 2019.

Last week I started working through the 8 or so packages that I normally update, and one of them failed the update with a blizzard of messages and warnings. But that wasn’t the worst part. The worst part was that it also killed conda dead. Even typing “conda -h”, which should print out a simple help file, instead resulted in a page of error messages warning of missing libraries before failing. Nothing I tried would work – conda was dead.

I grabbed my notes from January, and started to reinstall anaconda/conda. It’s really quite simple: delete the anaconda3 directory and reinstall.

It all worked until I came to the last package: xeus-cling, which provides the C++ support I require. That installation failed with (again) a set of rather bizarre “missing stuff” messages.

I left it at that… after all, everything else worked, so I left a message on the xeus-cling GitHub page and waited.

This week I got a reply: conda 4.7.9 was broken; conda 4.7.10 worked with xeus-cling. I checked, and sure enough I was on version 4.7.9. I updated (conda upgrade --all) and tried xeus-cling. It did not work. Instead, it just hung trying to resolve the ‘environment’.

Eventually, after several trial-and-error sessions, I resolved the situation and once again have a perfectly good, working Jupyterhub. I’m very glad that reinstalling anaconda3 is so simple, as I did it several times before I got the order of things correct.

In a nutshell, you must install anaconda3 (from the 2019.03 script, which gives conda 4.6.11), then install xeus-cling before anything else. Only in this way can you avoid the timeout problem with xeus-cling and conda 4.7.10, because every conda install also updates conda itself to the latest version.

Once xeus-cling is installed, everything else installs just fine, and the system is back up and running. Still, it was a nightmare figuring out what was wrong, and I can’t say I’m very impressed with conda for breaking things that were working.

Ubuntu LAMP & WordPress

As discussed in a prior post (https://jrcrofter.huntrods.com/updating-wordpress-just-got-messy-really-messy/), I was able to install LAMP (Linux, Apache, MySQL & PHP) on my home server, which had been converted from Solaris 10 to Ubuntu 18 about a month ago.

It was converted to run the latest version of Tomcat with my application. The conversion wasn’t strictly necessary, but Solaris 10 is long in the tooth, and updating the Java virtual machine (JVM) and MySQL had become nigh-on impossible. On the other hand, installing the latest versions on Linux was almost easy. I chose Ubuntu 18.04 because I’ve liked Ubuntu from the very first; it’s easy to install and maintain, and seems a very robust Linux distro.

With Apache2 installed and running, my web sites were in good shape… except some things like WordPress started to complain because they weren’t ‘secure’.

Securing Apache means moving from HTTP to HTTPS, which in turn requires getting a security certificate. Fortunately, I’d already switched to LetsEncrypt for my security certs, so I was comfortable with the process. Likewise, my application was already using HTTPS, so I’m also comfortable with that process.

What was new was a) moving my application to another port so that the default HTTPS port (443) was free for my Apache web pages, and b) getting HTTPS running on Apache2.

As it turns out, it was quite easy. Following a good guide, I installed and used Certbot to obtain new LetsEncrypt certs for my web domains, which also handled renewals and much of the background installation work. With a few config changes and adding the secure mods, all the web pages were switched over to HTTPS without incident.

The more I use Ubuntu (and thus Linux) to do actual tasks, the more I’m liking the ease of installation and maintenance. The fact that there is now a critical mass of GOOD help online is also a great bonus.

PiDP11 Issues

The PiDP11 is a scale model of the PDP11 front panel, complete with working lights and switches, all driven by the SIMH software running on a Raspberry Pi.

It was designed by Oscar Vermeulen, who sells it as a kit for the hobbyist to build and enjoy. I bought my kit last summer, but due to the home renovations did not build it until Christmas.

Oscar and others have developed software for it that customizes SIMH and drives the lights and reads the switches. There is also a marvelous manual describing how to build and operate the kit in great detail.

There’s been one problem: the front panel occasionally locks up. There was a lot of discussion on the PiDP11 discussion forum, and several causes and solutions were offered and discussed.

Eventually the source of the lock-up was traced to a race condition in one of the files controlling the front panel. A corrected version of the file was provided in February 2019, and I finally got around to installing it (and recompiling the software) on May 22. It ran until May 28, when it locked up again.

I reported this to the group, and one of the developers (Bill) modified the source file to try to fix the problem permanently. The fix was only offered as a set of lines to edit and recompile rather than a new file, to keep versions clean. The fix was posted yesterday (May 29).

Today (May 30), I edited the source file and recompiled, then rebooted the PiDP11 at 9am. The main purpose of this post is to document the date & time of this latest fix in the event it does not lock up again. I can then refer back to this post for the date/time the fix was made.

Updating WordPress Just got Messy. Really Messy.

As the title says, really, really messy.

It’s not WordPress’s fault. Rather, WordPress is keeping up with the times, and the times say PHP needs to be kept current.

Up until the last version of WordPress (5.2), my OpenBSD server, created many years ago, was OK. Its version of PHP was old, but it worked. When I updated to 5.1, WordPress warned me that my PHP was obsolete, but there wasn’t much I could do at the time. My old version of OpenBSD did not have a simple path to update PHP; to update such things, one is expected to update OpenBSD itself.

Then came WordPress 5.2. My existing WordPress stated “cannot upgrade due to older PHP” or something similar.

Time to update PHP, which meant a new OpenBSD.

That’s when Messy happened. The latest OpenBSD (6.4) is wonderful. It’s shiny and new and fast, but … they replaced Apache with a new program, ‘httpd’, written to be ultra-secure. Too secure, in fact.

I spent two weeks fighting OpenBSD 6.4 and httpd, but could not get it to do what I needed. Worse, there’s almost zero helpful documents written about it. The manual is OK but dense, and the only “how to” site covered setting up the ultra-secure version, and nothing else.

Yesterday I finally gave up. Instead of updating OpenBSD, what if I just modified the firewall to scoot all the Apache pages to a new server? Something like Ubuntu 18.04, which I’d recently put on all my Tomcat/Jupyterhub servers? Would it be as difficult as OpenBSD?

I found a couple of how-tos, and they seemed utterly simple. “apt-get install apache2”. Done, and it was running! “apt-get install php”. Done, and also running. “apt-get install mysql-server” (this was for a virtual test server). Done and running. This was scary easy.

Even configuring Apache has gotten much easier due to the plug-and-play structure that’s been adopted (probably for some time).

The only difficult part was installing WordPress. It really wants to be in one place, and doesn’t like port forwarding. For example, if the test server was “http://10.1.1.214”, then that’s where WordPress wanted to be. Port forward “http://huntrods.com:8008” and it just reverted to either 10.1.1.214 or “huntrods.com” and didn’t work. Eventually I realized that port forwarding would sort itself out when I threw the “big switch” and turned off the current (old) server.

With that in mind, I did some more testing on WordPress, duplicating this blog from a database backup and the latest WordPress. It hated the older version and refused to run, but copying over the latest files did the trick.

Finally the big moments: install apache2 and php on the physical server, create the various accounts necessary for users, then copy all the files from the old server to the new server. Most were static web sites, but there were 3 wordpress blogs that had to come over, complete with new databases (from today’s backups).

At last it was all working locally. Time to throw the switch. On the OpenBSD server, I stopped the old Apache and disabled it, then forwarded port 80 to the new server. SUCCESS, or at least partial success. I still needed to create all the Virtual Hosts from before, but with the plug-and-play Apache2 that turned out to be easy, if time-consuming.

Lastly, I fired up the WordPress accounts, and they failed. It turned out I had to copy “latest.tar.gz” over the older WordPress files, and then everything worked.

So after two weeks of fighting httpd, I was able to get Apache2 working on an Ubuntu server, complete with full testing, in just under two days.

Success, indeed.