The future is here, but am I? (web pages, blogs & RSS)

While the topic is rather broad and full of possibilities, what I’m really going to talk about here is the future of “telling stories on the internet”.

I have had web servers since the mid-1990s. Since then I have created several web sites to document my activities, from university courses to my teaching career and now my hobbies and “other stuff”. At the moment I have three active sites: one for glassblowing, one for scuba diving, and one for general “stuff”, including general work around the house.

Recently I installed and tried out WordPress (this blog), which I do like. But it’s not the same as those hand-crafted web pages with photos and links to YouTube (or Vimeo) videos.

My conundrum right now is that I have started the process of building a 3D printer from scratch. It will be open hardware, and the build is helped by a colleague at AU who has also built one. I would like to document the build in detail, but I’m caught in a bounty of “hows”. I could build a web page, as I’ve done in the past, or I could just blog the adventure here. I would like to connect to the Landing at AU, as I’ve created a robotics group there which might benefit from the posts. That requires RSS feeds.

WordPress has RSS feeds built in, and I’ve already connected that to my main Landing blog. This very post will appear on the Landing soon. I can easily connect the RSS feed to the robotics group as well. So using WordPress is OK, but I’m not convinced yet that WordPress blogs are where I want to do this.

Which means back to my static web pages. The problem is, they don’t generate RSS without significant intervention by a program, a service, or manual labor. I’ve had a look at the on-line tools, and am in the process of digesting whether Java can do the job easily or not.
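For what it’s worth, the mechanics of a feed are simple enough that even a shell script can stitch one together. Here’s a minimal sketch of generating an RSS 2.0 file for a static site — the site URL and the page list are made-up placeholders, not my real pages:

```shell
#!/bin/sh
# Minimal RSS 2.0 generator for a hand-built static site.
# The site URL and page list below are placeholders.
site="https://example.org"
now=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')   # RFC 822 date format RSS expects

{
  cat <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
  <title>Huntrod's Zone</title>
  <link>$site</link>
  <description>Hand-crafted project pages</description>
  <lastBuildDate>$now</lastBuildDate>
EOF

  # One <item> per page, from "path|title" records.
  for page in "printer/frame.html|3D printer: frame build" \
              "printer/electronics.html|3D printer: electronics"; do
    path=${page%%|*}
    title=${page#*|}
    cat <<EOF
  <item>
    <title>$title</title>
    <link>$site/$path</link>
    <pubDate>$now</pubDate>
  </item>
EOF
  done

  printf '</channel>\n</rss>\n'
} > rss.xml

cat rss.xml
```

Point a feed reader (or the Landing) at the generated rss.xml and re-run the script whenever a page is added.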

Stay tuned…

Success – git mysteries on Apple resolved (finally!)

Today I finally achieved success performing the desired git merge on my GasRad project on my MacBook. As mentioned previously, GasRad is my SCUBA gas blending app written for iOS devices (iPhone, iPad) using Xcode on my MacBook.

When we last looked in, I had a branch ‘alt_testing…’ many commits removed from the master branch. As hard as I tried, I could not figure out how to merge the ‘alt’ branch back to master using either Xcode or SourceTree on the Mac.

I spent the last two days, and yesterday in particular, tackling this problem. First I made sure that my Xcode project was backed up using the Mac’s Time Machine program. This was critical, as it turned out. <hint: Always back up your work before doing ‘stuff’. ALWAYS>. With the project backed up, I tried various means and methods to merge master with ‘alt’. I read and re-read the Pro Git PDF book, and checked many solutions on-line (again, thanks Stack Exchange).

Ultimately, it was one problem that kept things from working: Xcode changed the way it tracks its own UI state between versions; the .gitignore generated by newer versions of Xcode handles this, but .gitignore files from older projects don’t. The result was that every time I switched to master in Xcode, it wanted to store (commit) the UI state file. Over and over and over. Kind of a catch-22.

After messing about during the day, I finally had an epiphany late last night. Actually, TWO.

  1. My project was not really branched at all. I had created the branch ‘alt’ back in May 2014, and had many commits on that branch, but absolutely zero commits on master. Really, it was just a simple linear series of commits, and only the branch ‘name’ was different. As my experimentation during the day showed, the actual merge was simple (no conflicts) if ONLY I could solve the problem of the persistent UI commit when in the master branch.
  2. Other testing during the day showed that using the terminal to issue git commands worked perfectly on the Mac, and did not conflict at all with either Xcode’s implementation of git or SourceTree’s. The epiphany was that if I started in the ‘alt’ branch with Xcode not running, I could switch to master using terminal git commands and not involve the UI file at all.

Knowing these two things, this morning I set about my final cunning plan. I made sure the ‘alt’ branch was fully committed and working. I then quit Xcode and SourceTree. I started up the terminal, moved to the GasRad directory, and typed ‘git checkout master’, which switches to the master branch. Then ‘git merge alt…’, which quickly and cleanly merged ‘alt’ into master. Using SourceTree I confirmed that master and ‘alt’ were now the same. I used SourceTree to tag the old master location as V2.0.0 and the new master location as V2.0.1, just to remind me where these branches had been. Finally I removed the ‘alt’ branch, as it was now the same as master. The result is a clean branch tree with only the master branch, fully committed. Of course I finished up by using Time Machine to back up this clean version.
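The whole sequence can be reproduced in a throwaway repository to show why the merge was so clean. This is just a sketch: the branch and tag names are from my project, but the repository, file and commit messages are invented for the demo:

```shell
#!/bin/sh
# Reproduce the 'linear history' situation in a scratch repo and show that the
# merge is a clean fast-forward. Branch/tag names are from the posts; the file
# and commit messages are invented.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git symbolic-ref HEAD refs/heads/master   # use 'master' regardless of git defaults
git config user.name demo
git config user.email demo@example.org

echo "v2.0.0" > version.txt
git add version.txt
git commit -q -m "initial commit"          # master's one and only commit

git checkout -q -b alt_param_testing       # branch off; all later work lands here
echo "v2.0.1" > version.txt
git commit -q -a -m "parameter-testing work"

git checkout -q master
git tag V2.0.0                             # remember where master used to be
git merge alt_param_testing                # zero commits on master -> fast-forward
git tag V2.0.1                             # remember the new tip
git branch -d alt_param_testing            # now identical to master, so drop it

git log --oneline --decorate
```

Because master had zero commits of its own, git performs a fast-forward: it simply moves the master pointer up to the tip of ‘alt_param_testing’, so there is nothing to conflict.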

The real secret here was using the terminal to bypass quirks in Xcode due to having git branches that spanned a major iOS/Xcode upgrade. Lesson learned: ensure any git project is where I want it to be BEFORE I upgrade iOS or Xcode. I also learned that the real value of SourceTree is in its graphical view of the project’s git history, and its ability to easily add tags to any commit.

I am a much, much happier Apple camper today.

Xcode and git (another comment on Apple)

I’ve been using Git as a source version control system for some time on my MacBook from within Xcode. It was Xcode and a tutorial that got me really started with Git.

Then this past spring I spent several days importing all the zip archives of an enterprise production system (source in Java) into Git repositories on my PC. The book “Pro Git” was invaluable.

Most of my Git work has been utterly routine – add a feature or perform some maintenance, then commit the changes. Push the changes to the main repository. Simple. Works.

However, on my big Xcode project, GasRad, which I mentioned in another blog post today, I didn’t do things simply. Many months ago I faced a fork in the road when implementing some testing in the new version of GasRad. There were two almost equally valid ways to send parameters to a testing routine, so I tried both. From MASTER, I created a branch “alt_param_testing”. So far, so good.

Then, due to the problems with Xcode versions and iOS versions I wrote about previously, my program quit running on my iPhone. So I gave up for a time.

Fast forward a couple of months, and I added Doxygen comments to my GasRad program. Very nice. Highly recommended.

Fast forward to this past weekend, when I had to modify the program in two small spots to allow it to compile in the newest Xcode / iOS version. One was some dumb nonsense about unallocated assets (Launch Images), and the other was some new nonsense Apple forced on apps for the latest version (needing root windows). My rant on this stuff was posted earlier today, so enough said about that.

What does matter is that I now have a project in Git with MASTER many, many commits older than “alt_param_testing”. What’s worse, the current active branch really isn’t “alt” anything now; it’s the version of param testing that worked. Really, it’s MASTER.

However, using Xcode and another Mac program I was using, SourceTree, trying to follow standard Git practices to merge “alt” and “master” into a new single master branch proved impossible. It got so bad that the project in Git was a write-off. All gone. Thankfully I had a Time Machine backup to recover all but the last few days’ work, which I remembered – so all was not lost. Not really.

But the whole point of Git is that you aren’t supposed to be ABLE to mess up a project so badly in Git that you can’t recover back to where you wanted.

However, no such luck with Xcode or SourceTree. They hide just enough of the workings that the merge simply stalled, but without the A/B editor windows (or a clear process) to resolve any conflicts.

I re-read the Pro Git book, but it’s not really much help in this case. It’s not Git, it’s the tools that are lacking.

The other infuriating thing is that every time I start Xcode, there’s yet another binary file sitting in the pile of uncommitted things. Even after a commit. Why a binary ANYTHING is in the uncommitted files is beyond me. Xcode sets up Git and writes the .gitignore, so it should be aware of what must and what must not go into a Git repository.

For the moment I’m stuck in Git hell – with a project where I cannot merge the two branches at this time without some essential, missing piece of information. Yippee.

UPDATE (5pm): After seeing this one ‘new’ file that changed and wanted a new commit, I decided to investigate. I copied the file name to the clipboard, then pasted it into Google. The first 5 hits were all from Stack Overflow, and all referred to this file and “why is it showing up and needing commits?”.

It turns out it’s another UPGRADE ARTIFACT created by Apple. If you create a new project and git repository in the latest Xcode, this file is already in the new .gitignore file. Of course Apple does not offer to “fix” this in existing projects, so the solution offered was to run “git clean -f -d” to remove the file and then commit the change. It appears to have worked.

The file in question is a system file that stores the location of windows, what’s open, etc. for Xcode on a per-project basis. While it might be nice to have different Xcode windows, settings, etc. in each project, it’s not really a “source” file to be saved, in most people’s opinion. Certainly not, since it’s a file that will be updated EVERY time you run Xcode and do anything at all.
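The fix can be sketched in a scratch repository. I haven’t named the file above; the one usually reported in those Stack Overflow threads is UserInterfaceState.xcuserstate under xcuserdata/, so treat that path (and the project layout) as an assumption in this demo:

```shell
#!/bin/sh
# Sketch of the fix in a scratch repo. The state-file path is the one commonly
# reported on Stack Overflow (UserInterfaceState.xcuserstate under xcuserdata/)
# and the project layout is invented, so treat both as assumptions.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.name demo
git config user.email demo@example.org
git commit -q --allow-empty -m "initial commit"

# Simulate the per-project state file Xcode keeps rewriting (still untracked).
mkdir -p GasRad.xcodeproj/project.xcworkspace/xcuserdata
touch GasRad.xcodeproj/project.xcworkspace/xcuserdata/UserInterfaceState.xcuserstate

git clean -f -d                   # drops untracked files and directories

# Then ignore it so future copies never show up as uncommitted changes.
echo 'xcuserdata/' >> .gitignore
git add .gitignore
git commit -q -m "ignore Xcode per-user state"

git status --short                # prints nothing: working tree is clean
```

Note the ordering: `git clean -f -d` only removes untracked files, so the state file is cleaned first and the ignore rule is committed afterward to keep fresh copies out of the “uncommitted” pile.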

Glad one mystery / annoyance has been cleared up. Now back to the merge…

My current love / hate with iOS and Xcode (Apple developer stuff)

Some time ago I wrote a rather neat little app for the iPhone & iPad (iOS) called GasRad. It’s a program that assists in blending SCUBA breathing gases using pure oxygen and helium (nitrox, trimix or heliox). You input the pressures and blends you have and what you desire, and it tells you how much O2 or He to add in addition to topping the tank up with a compressor. I wrote it to replace the cranky spreadsheets I used to use, which replaced pencil and paper, calculator and formulae.
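For a taste of the arithmetic involved, here is the standard partial-pressure blending calculation — not necessarily the exact method GasRad implements, and the numbers are made up:

```shell
#!/bin/sh
# Standard partial-pressure nitrox blending arithmetic (not necessarily the
# exact method GasRad uses). Example: fill an empty tank to 230 bar of EAN32,
# adding pure O2 first and then topping up with air.
out=$(awk 'BEGIN {
  pf = 230;  ff = 0.32   # target pressure (bar) and O2 fraction (EAN32)
  ps = 0;    fs = 0.21   # starting pressure and O2 fraction (empty tank)
  ftop = 0.21            # O2 fraction of the top-off gas (plain air)

  # Solve ff*pf = fs*ps + x + ftop*(pf - ps - x) for x, the pure O2 to add:
  x = (ff*pf - fs*ps - ftop*(pf - ps)) / (1 - ftop)
  printf "add %.1f bar of pure O2, then top with air to %d bar", x, pf
}')
echo "$out"
```

Reading the result: add about 32 bar of pure O2 to the empty tank, then top up with air to 230 bar to get EAN32. The helium side of a trimix fill works the same way, just with one more unknown.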

The app got to the stage where I actually released version 1.0 to the App Store ($0.99 – a real deal).

After a while, I wanted to update the app as there was a small bug in it, plus I wanted to allow other gas blends (i.e. heliox) and generally improve it. It’s been over two years and counting, and it’s still not ready to release version 2.

Not that it’s all my fault. I was pretty much done in May 2014. However, every time I went to release the app, Apple had come out with yet another version of iOS. The new iOS changed the Apple (mandatory) programming environment, Xcode, AND introduced new features and requirements for apps. As each new iPhone or iPad comes out, the list of necessary “things” an app requires grows. More icons, more Launch Images, more screen shots for the store… It was a moving target that a small app developer busy with other stuff simply could not manage. Apple docs for this process are virtually nonexistent, and there’s no help from Apple – after all, they are dealing with the big fish in the App Store, not one-shot dudes like me.

Then my iPhone 4 went obsolete. No more iOS upgrades. Since Aug 2014, my iPhone builds would not run on the phone. Launch, then crash. No message; it just would not work. At least the iPad version would run… until the latest iOS upgrade.

With some resignation to the inevitable, last weekend I decided to recompile the app and see it on one of the many virtual test devices. By accident I didn’t switch the “compile to” setting from my iPhone. Imagine my shock when the program not only compiled, but RAN. On my iPhone. 4!

I tried compiling for the iPad, but now I got some semi-gibberish error message. Fortunately I’ve learned over the years: copy the error message, plug it into Google, then read the top 3 hits from Stack Overflow. Sure enough, there were numerous people with the same error, and the usual horde of well-meaning folk with advice – usually bad, or involving a dozen weird steps. However, I saw one that simply said “add this line to your app delegate and it will work”. I did that and, lo and behold, the app now compiles and runs on the iPad without errors. Better yet, it runs on the iPhone too (still!).

Why it would not run for over a year is still a mystery, but I am glad that the latest iOS/Xcode versions do again work with my devices. I can’t say I’m happy with the niggling, fiddly little changes that Apple makes to their stuff that force a perfectly working program to require code changes to compile and run.

I hate to say it, but Apple could learn a thing or two from Microsoft. I have programs I wrote in 1990 for Windows (3.1 at the time) that STILL RUN TODAY on Windows 7 (and 8, I’m told). I can still compile a simple Windows program written in C in the 1990s with a new compiler (MinGW), and it still compiles and runs. Sure, I have to spend a few days figuring out compiler options with the new compiler, but it’s not that hard.

So why does Apple keep breaking WORKING PROGRAMS when new versions of iOS come out, and then require non-trivial code changes to get them to compile and run again?

Why does Apple HATE backwards compatibility?

Social media has landed… (on the Huntrod’s Zone)

In my attempts to embrace some new technologies, which include this WordPress blog site and more recently my Moodle site and courses, I spent this afternoon installing Elgg on my server. Elgg is a social media platform that offers much control and freedom. The base Elgg is rather basic, but there are tons and tons of plugins that offer many customized features.

The Landing at Athabasca University is an example of an Elgg site. It supports discussion and three of my courses (COMP444 – Robotics, COMP348 – Java Network Programming, and COMP601 – Survey of Computing and Information Systems). It’s quite a rich environment, and I wanted to play with such stuff, hence the Elgg installation.

It’s located on my server, but doesn’t have much yet. Mostly it’s a sandbox for me to learn.

I tried installing Elgg a while ago, and it simply didn’t work. As it turned out today, thanks to some expert help from colleague Jon Dron, it was a permissions problem (both group membership and file permissions). Once that was cleared up, the installation proceeded uneventfully.

More on Moodle (and my recent course revision at AU)

I posted this recently to the revision site for our courses…

Just a short “lessons learned” from this particular revision.

I have Moodle 2.0 installed on my own server at home. After all, I’m a “computer person” and have been working directly with computers since 1979. There are few computing “jobs” that I haven’t done, from programming to building and selling the horrid things (worst. job. ever.)

So when I want to learn a technology, the best way forward for me is to get the latest packages and install them locally. That way I can experience the whole deal, from being the installer/admin through all the various user roles. That certainly has been the case with Moodle.

Unfortunately, some technical issues prevented me having a full working copy of Moodle 2.0 until very recently, and it was only yesterday I could really play.

What I learned by creating one of my very old (1990s) C programming courses is that there are a lot of ways to do things in Moodle, and many of them are easy and fun.

Were I to repeat this revision, I would immediately erase the Alfresco links to the study guide and place the content into Moodle pages, which I am fully capable of editing – even remotely. This is in contrast to the impossible situation I found trying to remotely edit Alfresco documents. In the end I had to give up completely on Alfresco. It’s probably great if you are local (i.e. in an office in Edmonton or Athabasca), but horrible from where I am.

I don’t think my course would have lost anything by putting the content back into Moodle (where it was in the beginning), and what I would have gained in time and ease of revision would have been phenomenal.

As I said, just a few thoughts on the experience to date.


My reason for posting is twofold. First, the obvious: the edit process had some horrid bits (Alfresco access) and I wanted to mention them again.

The second reason is more important. I want to counteract the notion that we academics just like to complain. Speaking for myself and others I know in Comp. Sci., we often spend a lot of our own time experimenting with technology. Keeping current in our chosen field is essential, and that includes the teaching technologies in our field.

Playing with Moodle

I’ve been playing with Moodle for a few years now. I first installed Moodle 1.9 on my server and then tried creating a couple of courses, which turned out to be fairly painless. The courses I chose were based on my notes from teaching C in the 1990s.

Fast forward to last year, when I tried updating to Moodle 2.0. All did not go well, as the “brains of Moodle” decided to remove some packages from Moodle itself, requiring the user to have them pre-installed on the server instead. I was able to find and install all but the zip file support. Without all the packages, Moodle would not install.

I eventually upgraded the server to a newer version of the OS, which had the zip package. Once that was done, installation of Moodle 2.0 was quite painless. Unfortunately, I lost the courses in the process.

This week I decided to try adding the courses again. The new Moodle has a nicer look and feel, and much improved tools. It also has a plethora of options and choices, making some decisions much more difficult. Fortunately there is a really good help system that offers tips as you work, so deciding things like “page or lesson?” is reasonable.

In the end it took under 2 hours to create and fully populate my two courses (C I and C II) from my old MS Word notes. There are some quirks (why do some list entries appear with a shadow border and others without?), but all in all it was fun, and I have my courses on-line again.

One reason for installing Moodle on my server and playing with it has to do with Athabasca University using Moodle as its primary course delivery mechanism. It pays to know the tools, but you can’t just do anything with someone else’s servers. So building my own allows me full rein to play and learn.

One thing I did learn, albeit too late for my current course revision, is that it’s really easy to create curriculum pages in Moodle.

At AU we have a blend of content – some in Moodle, some in a thing called Alfresco. Sadly, editing Alfresco content remotely is nigh-on impossible. For the current course, I had to resort to having the Alfresco content cut-and-pasted into MS Word documents by local experts, then emailed to me for editing, then cut and pasted back into the Alfresco documents. Yikes, what a process!

As I said, I wish I’d known how easy it is to create “pages” for content in Moodle, as I would have put all the content back into Moodle (via pages) and erased the Alfresco links. It would have turned a rough multi-week editing process into a few days’ work.

Live and learn.

Drysuits are fun (except when they’re not)

I have three drysuits because the ocean is wet and cold. When I first started diving in 2000, we learned in wetsuits – 7mm neoprene suits that overlapped on the torso to give 14mm. I felt like that tire mascot and could hardly move. The first dive was nice, the second horrible, because you were wet and cold, and evaporation during the surface interval chilled you even more.

On my advanced course we did a drysuit dive; I bought one the next week.

Fast forward to today, where I now own 3 very high quality (a.k.a. expensive) drysuits. All are from DUI, a very good manufacturer. Two are their TLS350 suits (nylon tri-laminate shell suit) and the newest one is a Flex Extreme (polypropylene tri-laminate shell suit). I also have expensive dive underwear.

When the suits don’t leak, they are wonderful. I still have trouble with cold hands, as I have never found truly warm glove liners, but overall it’s great.

But – when the suits leak, they aren’t fun at all. Leaks range from seeping due to very small holes (usually in the feet), to full floods due to suit failure (i.e. the zipper needs replacing), to a wrinkled neck seal. Full floods can start anywhere during the dive, but you really notice them at the end, when you stand up to remove the gear – and all the water accumulated in the suit rushes in a chilly torrent to soak your legs and feet.

Finding the leaks (if it’s not the zipper or the neck seal) is even less fun than being wet. There are numerous methods for finding leaks, and all of them work about equally well, which is to say poorly. My experience is that any leak “good enough” to show itself in a normal leak test is a BIG leak. Most leaks are of the weeping/seeping kind, and are almost impossible to accurately locate.

My current (as of today’s post) situation is that all 3 suits are “questionable”. The oldest suit (2003) needs a new shoulder exhaust valve, as that leaked quite well on the last dive. I was also very wet on my rear end; it may be a leak, or it may be a wrinkled neck seal. Of course the leak testing was negative for finding any leak, so I’ll just have to replace the valve and dive it to see. (Yay, maybe dry, maybe wet.)

The second suit (2009) was my teaching suit, and it had two rather large holes in the feet. The holes were large enough that I found the leaks easily, and I have now patched both with Aquaseal. All that remains is a test dive. The feet were getting so soaked from the first step in the water that I’m not really keen on the test dive. Maybe next course.

The last suit was bought just this year, and is essentially perfect… except that twice now I’ve managed to wrinkle the fancy silicone neck seal and end up with a FULL suit flood. Last Sunday was the worst, as water poured in from the moment I entered the ocean. Of course I did the dive anyway, but it was short and very wet and very, very cold. Since I thought I was very careful donning the neck seal, I’m really puzzled at this time as to what’s going on.