OK, this does seem like a bit of a back-pedal, doesn’t it?

Well, that’s the thing about the “Linux Model” – the very things that are so irritating can also be the reason it works.

Let me explain by returning to one example I mentioned in my prior post, using C++ in Jupyter Hub.

To recap: C++ used to work in Jupyter Hub, then suddenly stopped after an update of some packages in Conda. Conda is the package and environment manager that Jupyter Hub runs under, and it works a lot like apt-get on Linux. After one update, all things C++ failed, to the point that the kernel would not load at all. An examination of the logs revealed that <features.h> was missing, along with many other library errors.

A simple Google search revealed many people with similar (but NOT the same) problems, and many complicated workarounds.

This is one of the problems with the Linux Model. The many “solutions” can often make the problem infinitely worse. Worse to the point where you throw up your hands and just rebuild from scratch, which I most certainly did NOT want to do. Part of the problem is that “solutions” can come from anyone in the community: seasoned pros, or first-time amateurs. Most don’t document what they are doing very well, and so you make assumptions… and get into worse trouble.

The Linux Model solution is to try to find an authoritative source. Usually this means contacting the team that developed the “thing” that’s broken. Often (and again a failing of the open Linux Model) the team has moved on to other things and really doesn’t care about or maintain the broken thing. In such cases, you are pretty much hooped unless you can get the code and love delving into ancient artifacts.

It also requires a LOT of digging in many cases to find the team, or else… EXPERIENCE knowing where to look.

Fortunately, I was beginning to obtain that experience (and NO, the place to look is most definitely NOT the Stack Overflow family of websites, but that opinion is for another day). After starting with Jupyter Hub, I began noticing that a lot of the projects were hosted on github.com. I’d used GitHub before, but only to download/install things. With Jupyter, I began noticing a lot of activity happening on the “Issues” tab. Here I discovered the magic: if the project was active, the developers READ the issues and would comment/reply.

Knowing this, I returned to my C++ problem. I found the package on GitHub, and used the Issues tab to contact the team with my problem… “it’s busted”, but stated in a more “unix-like” way 😀

Within an hour, one of the developers contacted me to say they’d changed the way they distributed the package for the very reason I mentioned (C++ library problems). They rewrote the distribution and moved the code from a custom source to the standard Conda source, “conda-forge”. However, the old code was still “out there”. I was told to grab the new code and it should work.

I did this, and it didn’t work. However, having chatted with a developer, I simply updated my “Issue”. The next day I received a reply: remove EVERYTHING from the old distro source. Using “conda list” I could clearly see MANY packages (not just the base C++ package) came from the “now bad source”. After removing all of them and reinstalling the main package from the proper source (conda-forge), I tried my C++ example and it worked perfectly.
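
For anyone hitting the same wall, the cleanup boils down to a few Conda commands. This is a sketch rather than the exact commands I ran; the package names are examples only (xeus-cling is the usual conda-forge C++ kernel), and you would substitute whatever “conda list” shows as coming from the old channel.

# list everything installed; the channel each package came from appears in the last column
conda list
# remove every package that still points at the old channel (these names are placeholders)
conda remove xeus-cling cling xtl
# reinstall the C++ kernel from the proper channel, conda-forge
conda install -c conda-forge xeus-cling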

So the Linux Model does work, but you have to do a lot more homework and find the place where the developers hang out with the current code.

For Tomcat, that’s the Tomcat-users or Tomcat-dev mailing list. For my 8-bit computer replicas, that place is a few specific Google Groups. For most things involving Jupyter Hub, that place is the appropriate github.com repo (and its Issues tab).

My final thought for now on the Linux Model is that it does work for almost anything current. The big bonus is there is often a HUGE community of active developers who really want their work to be appreciated and used. Find them, and ask properly worded respectful questions, and you can see the Model work beautifully.

This is about LetsEncrypt, JupyterHub and Tomcat.

I built my JupyterHub server on a quad-core Xeon 1U ‘pizza box’ server I had spare. It’s short on memory because this generation of HP ProLiant server maxed out at 8 GB, so that’s all I can put in it. Still, it works and is a good demo platform for JupyterHub and my Java course revision project.

JupyterHub really wants to be running as secure HTTP (HTTPS) with a proper certificate. I put the server on a different port (not 443) but can still reach it from my domain, using packet-filter redirection in my firewall.
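
I won’t go into my particular firewall here, but purely as an illustration, this is what that kind of redirection looks like with Linux iptables; the address and ports below are made up.

# hypothetical example: forward incoming HTTPS (port 443) at the firewall to the Hub host on port 8443
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.1.50:8443
# allow the forwarded traffic through the firewall
iptables -A FORWARD -p tcp -d 192.168.1.50 --dport 8443 -j ACCEPT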

But – it wants that proper certificate. Typically one would just create a ‘self-signed’ cert using Java’s keytool and use that for Tomcat, but Jupyter wanted something else.
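
For reference, the typical keytool route is a single command along these lines (a sketch; the alias, filename and validity period are arbitrary):

# generate a self-signed certificate in a new Java keystore (browsers will still warn about it)
keytool -genkeypair -alias tomcat -keyalg RSA -keysize 2048 -validity 365 -keystore keystore.jks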

Fortunately I found enough documentation and tutorials to enable me to install and generate a LetsEncrypt (free) certificate that worked perfectly with JupyterHub. There were issues, mostly involving the need to create the certificate manually, but once these were resolved it worked perfectly.
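
From memory, the manual certificate creation follows the usual certbot workflow, roughly as below; the domain is a placeholder and the exact options may differ from what I used.

# request a certificate interactively, proving ownership of the domain by hand
sudo certbot certonly --manual -d hub.example.com
# the files land under /etc/letsencrypt/live/hub.example.com/
# JupyterHub is then pointed at fullchain.pem and privkey.pem via
# c.JupyterHub.ssl_cert and c.JupyterHub.ssl_key in jupyterhub_config.py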

This past week I wondered “could I use the LetsEncrypt certificate with my Tomcat application?”. I searched the web, and found several rather conflicting accounts of how to do it. I tried a few, and all failed.

Eventually I found one that started with “forget all the difficult stuff you’ve read. Installing a LetsEncrypt ‘pem’ file into a Tomcat keyfile is easy. Here’s how…”. I followed that two-command process, and was immediately rewarded with full certificate security for my Tomcat application, WITHOUT having to create a browser exception for the certificate.
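
For the curious, the two commands amount to converting the Let’s Encrypt PEM files into a Java keystore that Tomcat’s HTTPS connector can load. A sketch, with the paths, alias and passwords as placeholders:

# bundle the Let's Encrypt certificate chain and private key into a PKCS12 file
openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out tomcat.p12 -name tomcat -password pass:changeit
# import that bundle into a JKS keystore that Tomcat's connector can reference
keytool -importkeystore -srckeystore tomcat.p12 -srcstoretype PKCS12 -srcstorepass changeit -destkeystore tomcat.jks -deststorepass changeit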

It is so very nice when something “just works” the way it’s supposed to work. It’s even nicer when you find simple, unambiguous instructions as a guide. Thanks to Maximilian Böhm and his guide here: https://maximilian-boehm.com/en-gb/blog/create-a-java-keystore-jks-from-lets-encrypt-certificates-1884000/

There’s a thing I’m going to call the ‘Linux model’. Not because it pertains ONLY to Linux, but because most of what’s wrong with this model often starts with Linux and stuff that runs (best) on Linux.

In a way, this is really a story about all the stuff that’s broken in JupyterHub, but it goes beyond that… it’s the general model that’s broken, and the model really owes its roots to Linux.

Basically, when you install something on a Linux box (and even the OS for the Linux box itself), it’s probably broken. That is, *something* won’t work after installing it, and there is no way, short of digging into some code somewhere, of ever fixing it.

Worse, the breakage is often super complex and intricate – buried in a log somewhere is a message regarding “package X failed due to expecting library Y to be x.x.x but was z.z.z”, or some similarly obscure “thing” that takes days to figure out, if ever.

You can paste the error into Google and what you get most of the time is a dozen hits – all questions on Stack Overflow asking the same thing and getting precious little of value in response.

Worse, you are expected to manually update packages on an almost continuous basis, and (of course) such updates often break things that were working fine before the update. Yet if you don’t update, something ELSE will break.

The entire model is broken.

What triggered this particular rant today is that I spent ages figuring out how to (finally) install C++ into JupyterHub so I could run C++ notebooks. Yesterday, I found it broken. The log complains about a library *supplied by the supporter of this C++ package* being the wrong date compared to what’s expected. It doesn’t matter. C++ in JupyterHub is now broken, and good luck finding anyone to respond with anything useful. Even less likely is that the C++ supplier will fix it anytime soon.

That’s the other problem with the Linux model. Everything is well documented and often supplied with tutorials. BUT… THEY ARE ALL YEARS OUT OF DATE. Worse, the stuff they describe has changed so much in the years since that you cannot follow the tutorial without being worse off than if you’d just thrown mud at a wall.

The biggest problem with the Linux model is that no one really cares. “I did this really cool thing in 2012 but now I’m bored and… who cares” seems to be the mantra of every developer. Nothing is maintained for long. It’s becoming obvious that nothing is really being used either; otherwise the failures would be noted and (hopefully) fixed.

Overall, it’s a really depressing time to be trying to actually do anything on a Linux box.

I’m rapidly on my way to becoming an old codger. This Christmas Break I soldered together a couple of hardware kits that emulate some old and older computers. One was an Altair 8800 copy, which in its day was one of the very first “personal computers” ever sold. The other kit was a PDP11/70 replica, which was some of the first “big iron” I ever programmed on.

Now, as testament to my codgerhood, my first computer experience was at the UofC on a CDC Cyber 170, followed by the Honeywell Multics system that replaced the CDC at the UofC a few years later.

My first job post-graduation was at a company using two IBM 3033 mainframes, each of which filled a large room. The laser printer filled an equally large room, but that’s another story (it was VERY fast).

From there I worked with various other systems, including the above (actual) PDP 11/70’s and even at one point some time on a Cray YMP.

But this isn’t about “big iron”; it’s about the personal computer. My first was a TRS-80 Model I. I bought a bare silk-screened expansion board, sourced the parts, and soldered it together myself, as I could not afford the “official” one. Later I bought a TRS-80 Model III, then the 4, and finally a Model 4P, which I still have complete with all manuals and software.

But in amongst that time came the IBM PC. It changed the world simply because it was IBM and it seemed *everyone* (or every company) bought one.

I never owned an IBM PC, nor a clone PC. My first foray into “modern” (i.e. post-IBM) PC ownership came when Tandy brought out the Model 2000. This was based on the 80186 chip, which was a “hybrid” – not an 8086 and not an 80286, but something in between. It was a great machine, and much more affordable (for the time) than a “286”.

As I struck out on my own consulting, I bought one of the newest “386” machines, and it cost me $6,000. But for the time it was the greatest, fastest machine you could buy.

I lived, worked, and owned PCs through the 486 era, and into the “Pentium” machines. By then the operating systems were firmly Windows based. I skipped Windows 1 through 3, but with Windows 3.1 it finally came into its own. Windows for Workgroups (WFW 3.11) was a really nice system at the time, and I did quite a bit of work on it.

Then came Windows 95, which “changed the world”. Certainly it brought the internet to the common computer owner, as well as a pretty decent OS. Buggy, but decent. Then came Windows 98 and Windows ME (pronounced “meh” – as in “what the hell is this piece of crap???”). By then I’d gravitated to Windows NT, which had one great feature – it worked and worked well.

Through this we had Pentiums. They got faster, but they were Pentiums.

Eventually, sometime after 2000, Intel started putting out the Core i series – i3, i5, i7. Each one had more cores and was faster than its predecessor. AMD also had multi-core chips, and there was, for a time, a nice “arms race” of computing horsepower.

At the end of April 2012, I built my current PC system. It uses a quad-core Intel Core i7-3770K, an Asus Sabertooth Z77 ATX motherboard, 16 GB of RAM, a couple of fancy graphics cards, a fancy case with water cooling, two SSDs and a Blu-ray writer. All state-of-the-art for early 2012. I bought the components and assembled it myself, and it was (and is) a very nice system.

It was also considered very fast and high performance. That particular Intel i7 (the 3770K) was quad-core, and fast.

But what I’ve noticed since then is… nothing. I *think* you can buy processors with more cores, and probably faster ones, but today I realized that although I still get tech-type feeds, I haven’t actually heard much in the past few years about “newer, faster, better” processors.

It’s as if we’ve exhausted that particular line of “faster, better” in personal computing. I suspect that for 99% of the market, ANYTHING you buy today is plenty fast enough. The other 1% is gamers, and perhaps if I got gaming feeds or magazines I would hear more about “faster gaming machines”, but I do wonder.

Have we really reached the end of the “faster, better” in computing hardware?

I also wondered: if I wanted to find out what the FASTEST computer you could use today is, how would I even go about finding it? Yea, there’s “the google”, but I’ve also started noticing that, between all the “targeted results” based on what you like, it’s getting harder and harder to find any REAL information on the internet these days.

<sigh> I guess I really am becoming an old codger.

I bought a couple of Vintage Computer replica kits in the summer, but did not have time to work on them due to home renovations. One kit was a replica PDP11/70, the other a replica Altair 8800.

I decided that they would be perfect “Christmas Break” projects and so kept them until then.

Over the Christmas Break, I got them out and started building them. I started first with the PDP11/70 kit, or PiDP11 as it’s called. It features a manufactured plastic case and switches that create the complete look of a vintage PDP11/70. There is also a front panel, a professional-grade circuit board and all the components (switches, resistors, LEDs, diodes, etc.). The kit uses a Raspberry Pi (Model 3B recommended) running software called simh to drive the replica. Basically, you run the Pi’s Linux and simh runs as a process on top of that, reading the switches and driving the LEDs.

The kit was straightforward to solder together, and ended up taking most of one afternoon and evening to build. When complete, it looks and works very much like the PDP11/70’s I have used in the past, minus the loud whirring noise of the giant disk packs and fans.

The second kit was the Altair 8800 replica, which again featured a case (bamboo this time), front panel, circuit board and the bags of components. The Altair 8800 emulates the 8080 of the early computer days using an Arduino Due, rather than a Raspberry Pi. This kit was more complex, and took an entire day to solder together and assemble.

I had a few initial issues with the Altair kit, as it features a Bluetooth serial port as well as an SD card reader to hold various “disk pack” images. At first I could not get either the Bluetooth or the SD reader working. Some email discussion with the kit designer indicated the Bluetooth card, though powered, was not initialized unless you manually configured it in the software setup. Once that was done, the Bluetooth worked perfectly and has become my preferred communication channel with the replica. The SD reader was more interesting, in that the metal ‘can’ protecting the pins was bent, preventing full insertion of the SD card. Once that was fixed the SD reader worked perfectly, as did the replica.

It’s been fun keying in a few simple programs into both replicas using the front panel switches, but the real power comes from all the operating systems both replicas support.

The Altair 8800 replica, or “AltairDuino”, offers CP/M, Altair DOS, many games and other amusements. The PiDP11 offers RTS11, BSD 2.11, three flavors of Unix and a real-time OS once used in SCADA industrial control.

I really enjoy playing with these old machines. Given the current state of obsolescence and the love of many to consign everything unwanted to dumpsters, I’ll likely never own full-size originals, but these are a lot of fun.

I’ve continued to work with JupyterHub since my last post, and have made significant progress towards my overall goal of creating a real system for developing a programming course.

The first development was to recreate my work to date on a new server: Ubuntu 18.04 Server, as opposed to Desktop, which I had been using. I also moved this server to VirtualBox (now V6) on a different machine. The new machine acts as a file server and has capacity to spare, plus stays on “as a server” all the time.

Installing Ubuntu 18.04 Server on the machine was not difficult, and following my scripts I was able to create JupyterHub on the new server, with full encryption and networked through “huntrods.com”. I also recreated the various demo logins to allow me to share this work with other colleagues.

I finished developing “Unit 0” for my Java programming course, as well as exploring other resources such as using it for my Network Java Programming course. There were some issues, but most of the programs work.

I also found some significant shortcomings in SciJava, so I contacted the developers for more documentation. Their response was “move to BeakerX, as it has a full Java implementation”. They also informed me that SciJava might be End-Of-Life soon, which would be unfortunate.

However, I installed BeakerX on my single-user Ubuntu Desktop, following guidelines from a developer. It worked, so I then tried installing it on the Ubuntu Server. After one set of instructions failed, I reverted to the method that had worked for many of the other packages, and it worked.
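
For reference, that method was a straight conda-forge install into the Hub’s environment; roughly as below (the package spec comes from the BeakerX install instructions, so treat this as a sketch rather than my exact command):

# install BeakerX and the widget support it needs from conda-forge
conda install -c conda-forge ipywidgets beakerx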

I now have a full-featured Java running on JupyterHub under BeakerX. There is one outstanding issue that affects both BeakerX-Java as well as SciJava: neither will accept user input from the keyboard.

Another limitation of BeakerX-Java is that it won’t run fragments of code that aren’t real Java. For example, SciJava will evaluate “10+23” and output “33”, while BeakerX-Java gives an error, just as “real” Java would (which is what BeakerX provides).

It turns out (from the developer) that SciJava is really a Java+Groovy hybrid, which is great for what I’d been doing, but isn’t really “real” Java.

Either I modify my Unit 0, or I go with SciJava in some notebooks and BeakerX-Java in others.

However, it’s great to have full-blown Java available in my notebooks.

I started working with Jupyter Notebooks in late November (2018), and was rewarded fairly quickly with the ability to create notebooks for Java (SciJava), Chemistry (rdkit), Engineering (scipy), graphics (matplotlib) and Geography (Basemap).

However, the real sticking point was that these were all notebooks running under a single local user account, on a VirtualBox Ubuntu Linux server (18.10) that I’d created.

The real goal was to create a Jupyter system that would work for multiple users, so that I could use it for my new revision of “Introduction to Computing – Java” for Athabasca University. This meant running JupyterHub.

Along the way I moved to Ubuntu 18.04 LTS (a long-term support release) and spent hours on Google, YouTube and the plethora of Jupyter (and JupyterHub) pages. There were many frustrations along the way, from a complete communications breakdown in forums while trying to get a site certificate (letsencrypt), to documentation and tutorials written in 2015 and never updated when everything (and I do mean everything) changed in the time since.

By December 5, I was able to create a functioning JupyterHub on huntrods.com with the proper certificate. The only kernel running was Python3, but it featured either PAM (local user) authentication or OAuth (Github login) authentication, so I was pretty happy.
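
For the GitHub-login side, the usual route is the oauthenticator package; the sketch below reflects its documented settings rather than my exact configuration, and the URL and IDs are placeholders.

# install the GitHub OAuth authenticator alongside JupyterHub
pip install oauthenticator
# then, in jupyterhub_config.py (Python, shown here as comments):
#   c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'
#   c.GitHubOAuthenticator.oauth_callback_url = 'https://hub.example.com/hub/oauth_callback'
#   c.GitHubOAuthenticator.client_id and client_secret come from the GitHub OAuth app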

BUT… (and this is huge) I really needed SciJava, or writing a Java course would be a bust.

The breakthrough came this week – yesterday, in fact. After repeated ‘banging head against the wall’ attempts, I was able to install SciJava for all users. With that success, it was relatively simple to install the other libraries (noted above) so that all my single-user demonstration notebooks ran in the new JupyterHub.

I was off and running, and quickly wrote my first notebook for the Java course. It’s everything I wanted, and more. It’s really a new way of “doing programming”, a mix of documentation and program code that works almost seamlessly together. Instead of a book with dead code examples, the code is ‘alive’ – press run and it executes. Better still, the student can change the ‘book code’ and immediately see the change take effect. It’s brilliant!

Today I worked on getting the Hub automated with supervisor. My next project is to store the notebook pages in a Git repository, either GitHub or local to the server, and then refresh them whenever users log in to the Hub.
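
The supervisor piece amounts to one small program entry plus a reload; here is a hedged sketch, with the conda and config paths as assumptions for illustration.

# write a minimal supervisor entry for the Hub (paths are assumptions)
sudo tee /etc/supervisor/conf.d/jupyterhub.conf > /dev/null <<'EOF'
[program:jupyterhub]
command=/opt/anaconda3/bin/jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
autostart=true
autorestart=true
user=root
EOF
# tell supervisor to pick up the new entry and start it
sudo supervisorctl reread
sudo supervisorctl update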

Eventually I’ll use Git for notebook version management for all users, but one step at a time.


# This is a new SciJava Notebook

What’s happening

• The notebook is viewed and running from a browser on a completely different machine
• Jupyter Lab is running on Ubuntu in the background as user anaconda
• Jupyter Lab is accessible over the local network via https & secured with a password

this is really very cool
Why?

• because it is
• because I said so
• did I mention it’s really cool?
In [1]:
System.out.println("Numbers:");
for(int i = 0; i < 100; i++) {
System.out.print(i + " ");
}
System.out.println("Done.");

Numbers:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 Done.


And now… The Sieve of Eratosthenes…

• the code was on my local pc
• the code was just cut and pasted into this notebook
• the code ran first time (well, it did on the PC as well)…
In [2]:
public class Sieve {

    static int MAX = 1000;

    public static void main(String[] args) {
        int[] stones = new int[MAX+1];

        // initialize: stone i starts out holding the value i
        for(int i = 2; i <= MAX; i++) {
            stones[i] = i;
        }

        // remove non-primes by zeroing every multiple of each i
        for(int i = 2; i <= MAX/2; i++) {
            for(int j = i+i; j <= MAX; j += i) {
                stones[j] = 0;
            }
        }

        // display the primes (the stones still standing)
        System.out.println("Primes");
        for(int i = 2; i <= MAX; i++) {
            if(stones[i] > 0) System.out.print(stones[i] + " ");
        }
        System.out.println();
    }
}

Primes
2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 211 223 227 229 233 239 241 251 257 263 269 271 277 281 283 293 307 311 313 317 331 337 347 349 353 359 367 373 379 383 389 397 401 409 419 421 431 433 439 443 449 457 461 463 467 479 487 491 499 503 509 521 523 541 547 557 563 569 571 577 587 593 599 601 607 613 617 619 631 641 643 647 653 659 661 673 677 683 691 701 709 719 727 733 739 743 751 757 761 769 773 787 797 809 811 821 823 827 829 839 853 857 859 863 877 881 883 887 907 911 919 929 937 941 947 953 967 971 977 983 991 997


a web link using regular old html…
Huntrods Zone




I have a Canon 7D Mk 1. Two actually. I bought one with an underwater housing, and it’s awesome. I bought another locally to use as a spare in case there’s a problem with the UW camera. Housings cost a lot more than cameras for most models, so it’s good insurance to have a spare camera.

Anyway, some time ago the 7D (land) model started to fail. I got “Err 20” errors when I’d try to take a photo. At first it seemed to correlate with using the pop-up flash, but later it simply happened at any time, and eventually all the time.

I sent the camera to a Victoria camera shop that could fix Canons, as it was a lot cheaper than sending it to Ontario, home of the only authorized Canon service center in Canada.

It was diagnosed as a shutter failure. The quoted repair price was good, so I had them proceed. After just over a month, it came back “good as new”. Except, after a couple of weeks, the “Err 20” returned.

I sent it back under repair warranty, and the tech said “I don’t know”. So they sent it back to Canon Canada (in Ontario) to have it diagnosed. It turned out to be the mirror box, which was replaced. Fortunately, it was under repair warranty so I didn’t have to pay anything.

As it turns out, there is a very rare case where both the shutter and mirror box fail, but you can’t diagnose the mirror box until you clear (i.e. fix) the shutter problem. That was the case for my camera.

At any rate, it’s back now and working perfectly. I’ve been using it to take my renovation photos since it came back.

Nothing to report. Literally. I’ve been working on the main bathroom renovation since late June, and then in July Linda broke her ankle. Between the two, I’ve had no time at all to even enter the glass shop.

Maybe in September I can check the wiring connection and put things back together, but not right now.