Success – git mysteries on Apple resolved (finally!)

Today I finally pulled off the git merge I'd been after on my GasRad project. As mentioned previously, GasRad is my SCUBA gas blending app, written for iOS devices (iPhone, iPad) using Xcode on my MacBook.

When we last looked in, I had a branch ‘alt_testing…’ many commits removed from the master branch. As hard as I tried, I could not figure out how to merge the ‘alt’ branch back to master using either Xcode or SourceTree on the Mac.

I spent the last two days, and yesterday in particular, tackling this problem. First I made sure my Xcode project was backed up using the Mac's Time Machine. This turned out to be critical. <hint: Always back up your work before doing 'stuff'. ALWAYS>. With the project backed up, I tried various ways of merging master with 'alt'. I read and re-read the Pro Git PDF book, and checked many solutions online (again, thanks Stack Exchange).

Ultimately, one problem kept things from working: Xcode had changed the way it tracks its own per-project UI state. The .gitignore that newer versions of Xcode generate handles this, but the one in my older project didn't. The result was that every time I switched to master in Xcode, it wanted to commit the UI state file. Over and over and over. Kind of a catch-22.

After messing about during the day, I finally had an epiphany late last night. Actually, TWO.

  1. My project was not really branched at all. I had created a branch 'alt' back in May 2014, and had many commits on that branch, but absolutely zero commits on master. Really, it was just a simple linear series of commits, and only the branch 'name' was different. As my experimentation during the day showed, the actual merge was simple (no conflicts – in git terms, a fast-forward) if ONLY I could solve the problem of the persistent UI commit while on the master branch.
  2. Other testing during the day showed that using Terminal to issue git commands worked perfectly on the Mac, and did not conflict at all with either Xcode's implementation of git or with SourceTree's. The epiphany was that if I started on the 'alt' branch with Xcode not running, I could switch to master using Terminal git commands and never involve the UI file at all.

Knowing these two things, this morning I set about my final cunning plan. I made sure the 'alt' branch was fully committed and working. I then quit Xcode and SourceTree. I started up Terminal, moved to the GasRad directory, and typed 'git checkout master', which switches to the master branch. Then I typed 'git merge alt…', which quickly and cleanly merged 'alt' into master. Using SourceTree I confirmed that master and 'alt' were now the same. I used SourceTree to tag the old master location as V2.0.0 and the new master location as V2.0.1, just to remind me where these branches had been. Finally I removed the 'alt' branch, as it was now the same as master. The result was/is a clean branch tree with only the master branch, fully committed. Of course I finished up by using Time Machine to back up this clean version.
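For the record, the entire Terminal session boils down to a handful of commands. This is a sketch rather than a transcript – 'alt' stands in for the full branch name, the project path is assumed, and the tags can be added from the command line just as easily as from SourceTree:

    cd ~/Projects/GasRad   # path assumed; move into the project's working directory
    git checkout master    # switch to the master branch
    git tag V2.0.0         # mark the old master tip before merging
    git merge alt          # fast-forwards master up to the 'alt' branch, no conflicts
    git tag V2.0.1         # mark the new master tip
    git branch -d alt      # delete the now-redundant branch

Because master had no commits of its own since the branch point, the merge is a pure fast-forward, which is why it completes instantly and without conflicts.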

The real secret here was using Terminal to bypass quirks in Xcode due to having git branches that spanned a major iOS/Xcode upgrade. Lesson learned: make sure any git project is where I want it to be BEFORE I upgrade iOS or Xcode. I also learned that the real value of SourceTree is in its graphical view of the project's git history, and its ability to easily add tags to any commit.

I am a much, much happier Apple camper today.

Xcode and git (another comment on Apple)

I've been using Git for source version control for some time on my MacBook, from within Xcode. It was Xcode and a tutorial that really got me started with Git.

Then this past spring I spent several days importing all the zip archives of an enterprise production system (source in Java) into Git repositories on my PC. The book "Pro Git" was invaluable.

Most of my Git work has been utterly routine – add a feature or perform some maintenance, then commit the changes. Push the changes to the main repository. Simple. Works.

However, on my big Xcode project, GasRad, which I mentioned in another blog post today, I didn't do things simply. Rather, many months ago I faced a fork in the road when implementing some testing in the new version of GasRad. There were two almost equally valid ways to send parameters to a testing routine, so I tried both. From master, I created a branch "alt_param_testing". So far, so good.
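The command behind that step – whether you type it in Terminal or let Xcode do it behind the scenes – is just:

    git checkout -b alt_param_testing   # create the branch at the current master commit and switch to it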

Then, due to the Xcode and iOS version problems I've written about previously, my program quit running on my iPhone. So I gave up for a time.

Fast forward a couple of months, and I added Doxygen comments to my GasRad program. Very nice. Highly recommended.

Fast forward to this past weekend, and I had to modify the program in two small spots to allow it to compile in the newest Xcode/iOS version. One was some dumb nonsense about unallocated assets (Launch Images) and the other was some new nonsense Apple forced on apps for the latest version (windows now needing a root view controller). My rant on this stuff was posted earlier today, so enough said about that.

What does matter is that I now have a project in Git with master many, many commits behind "alt_param_testing". What's worse, the current active branch really isn't "alt" anything now; it's the version of param testing that worked. Really, it's master in all but name.

However, using Xcode and another Mac program, SourceTree, trying to follow standard Git practice to merge "alt" and "master" back into a single master branch proved impossible. It got so bad that the project's Git repository was a write-off. All gone. Thankfully I had a Time Machine backup to recover all but the last few days' work, which I remembered – so all was not lost. Not really.

But the whole point of Git is that you aren't supposed to be ABLE to mess up a project so badly that you can't recover back to where you wanted to be.
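A hedged aside, with hindsight: plain git does give you escape hatches even when the GUI tools hide them. Had I known to try it at the time, something like this will usually get you back to a known-good state (the hash is a placeholder you would pick out of the reflog listing):

    git reflog                       # list every commit HEAD has recently pointed at, newest first
    git checkout -b rescue abc1234   # branch off a known-good commit from that list
    git merge --abort                # or, if a merge has simply gone sideways, back out of it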

However, no such luck with Xcode or SourceTree. They hide just enough of the workings that the merge simply stalled, without the A/B editor windows (or any clear process) for resolving conflicts.

I re-read the Pro Git book, but it's not really much help in this case. It's not Git that's lacking, it's the tools.

The other infuriating thing is that every time I start Xcode, there's yet another binary file sitting in the pile of uncommitted things. Even after a commit. Why a binary ANYTHING is in the uncommitted files is beyond me. Xcode sets up Git and writes the .gitignore, so it should be aware of what must and must not go into a Git repository.

For the moment I'm stuck in Git hell – with a project where I cannot merge the two branches without some essential, missing piece of information. Yippee.

UPDATE (5pm): After seeing this one 'new' file that changed and wanted a new commit, I decided to investigate. I copied the file name to the clipboard, then pasted it into Google. The first 5 hits were all from Stack Overflow, and all referred to this file and "why is it showing up and needing commits?".

It turns out it's another UPGRADE ARTIFACT created by Apple. If you create a new project and git repository in the latest Xcode, this file is already in the new .gitignore file. Of course Apple does not offer to "fix" this in existing projects. So the solution offered was to run "git clean -f -d" to remove the file and then commit the change. It appears to have worked.

The file in question is a system file that stores the location of windows, what's open, etc. for Xcode on a per-project basis. While it might be nice to have different Xcode windows, settings, etc. open in each project, it's not really a "source" file worth saving, in most people's opinion. Certainly not when it's a file that will be updated EVERY time you run Xcode and do anything at all.
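For anyone hitting the same thing, the cleanup I pieced together from those Stack Overflow hits amounts to getting the file out of Git's view for good. This is a sketch of the .gitignore route (which is what brand-new Xcode projects get by default) rather than the git clean route above; the xcuserdata path is my assumption about where the UI state file lives, so adjust it to whatever 'git status' actually shows:

    # keep Xcode's per-project UI state out of the repository from now on
    echo "xcuserdata/" >> .gitignore

    # drop the file from the index if it was ever tracked (path is an assumption)
    git rm -r --cached --ignore-unmatch GasRad.xcodeproj/project.xcworkspace/xcuserdata

    # commit the .gitignore change (plus the removal, if there was one)
    git add .gitignore
    git commit -m "ignore Xcode per-project UI state"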

Glad one mystery/annoyance has been cleared up. Now back to the merge…

My current love/hate with iOS and Xcode (Apple developer stuff)

Some time ago I wrote a rather neat little app for the iPhone & iPad (iOS) called GasRad. It's a program that assists in blending SCUBA breathing gases using pure oxygen and helium (nitrox, trimix or heliox). You input the pressures and blends you have and what you desire, and it tells you how much O2 or He to add in addition to topping the tank up with a compressor. I wrote it to replace the cranky spreadsheets I used to use, which replaced pencil and paper, calculator and formulae.

The app got to the stage where I actually released version 1.0 to the App Store ($0.99 – a real deal).

After a while, I wanted to update the app, as there was a small bug in it, plus I wanted to allow other gas blends (e.g. heliox) and generally improve it. It's been over two years and counting, and version 2 is still not ready to release.

Not that it's all my fault. I was pretty much done by May 2014. However, every time I went to release the app, Apple had come out with yet another version of iOS. And the new iOS changed the Apple (mandatory) programming environment, Xcode, AND the new iOS introduced new features and requirements for apps. As each new iPhone or iPad comes out, the list of necessary "things" an app requires grows. More icons, more Launch Images, more screenshots for the store… It was a moving target that a small app developer busy with other stuff simply could not manage. Apple docs for this process are virtually nonexistent, and there's no help from Apple – after all, they are dealing with the big fish in the App Store, not one-shot dudes like me.

Then my iPhone 4 went obsolete. No more iOS upgrades. Since Aug 2014, my iPhone compiles would not run on the phone. Launch and then crash. No message, just would not work. At least the iPad version would run… until the latest iOS upgrade.

With some resignation to the inevitable, last weekend I decided to recompile the app and see it on one of the many virtual test devices. By accident I didn’t switch the “compile to” setting from my iPhone. Imagine my shock when the program not only compiled, but RAN. On my iPhone. 4!

I tried compiling for the iPad, but now I got some semi-gibberish error message. Fortunately I've learned over the years: copy the error message, plug it into Google, then read the top 3 hits from Stack Overflow. Sure enough, there were numerous people with the same error, and the usual horde of well-meaning folk with advice – usually bad, or involving a dozen weird steps. However, I saw one that simply said "add this line to your app delegate and it will work". I did that and, lo and behold, the app now compiles and runs on the iPad without errors. Better yet, it runs on the iPhone too (still!).

Why it would not run for over a year is still a mystery, but I am glad that the latest iOS/Xcode versions do again work with my devices. I can't say I'm happy with the niggling, fiddly little changes that Apple makes to their stuff that force a perfectly working program to require code changes to compile and run.

I hate to say it, but Apple could learn a thing or two from Microsoft. I have programs I wrote in 1990 for Windows (3.1 at the time) that STILL RUN TODAY on Windows 7 (and 8, I'm told). I can still take a simple Windows program written in C in the 1990s and compile it with a new compiler (MinGW), and it still compiles and runs. Sure, I have to spend a few days figuring out compiler options with the new compiler, but it's not that hard.
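To show what I mean (the file name is made up, and the exact flags depend on the program), a simple 1990s-era Win32 C program still builds with MinGW along these lines:

    # build an old Win32 GUI program with MinGW's gcc; -mwindows targets the Windows GUI subsystem
    gcc -mwindows -o oldapp.exe oldapp.c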

So why does Apple keep breaking WORKING PROGRAMS when new versions of iOS come out, and then require non-trivial code changes to get them to compile and run again?

Why does Apple HATE backwards compatibility?

Social media has landed… (on the Huntrod’s Zone)

In my attempts to embrace some new technologies, which include this WordPress blog site and, more recently, my Moodle site and courses, I spent this afternoon installing Elgg on my server. Elgg is a social media platform that offers a lot of control and freedom. It's from elgg.org. The base Elgg install is rather basic, but there are tons and tons of plugins that offer many customized features.

The Landing at Athabasca University is an example of an Elgg site. It supports discussion and three of my courses (COMP444 – Robotics, COMP348 – Java Network Programming, and COMP601 – Survey of Computing and Information Systems). It’s quite a rich environment, and I wanted to play with such stuff, hence the Elgg installation.

It's located at http://elgg.huntrods.com but doesn't have much yet. Mostly it's a sandbox for me to learn.

I tried installing Elgg a while ago, and it simply didn't work. As it turned out today, thanks to some expert help from colleague Jon Dron, it was a permissions problem (both group membership and file permissions). Once that was cleared up, the installation proceeded uneventfully.
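I didn't keep the exact commands, but the fix was along these lines – a sketch assuming an Apache-style setup where the web server runs as www-data and Elgg (plus its data directory) lives under /var/www; your server's user, group and paths will differ:

    # give the web server's group ownership of the Elgg tree and its data directory
    chown -R www-data:www-data /var/www/elgg /var/www/elgg-data

    # let that group write where Elgg needs to write during installation
    chmod -R g+w /var/www/elgg /var/www/elgg-data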

More on Moodle (and my recent course revision at AU)

I posted this recently to the revision site for our courses…

Just a short “lessons learned” from this particular revision.

I have Moodle 2.0 installed on my own server at home. After all, I'm a "computer person" and have been working directly with computers since 1979. There are few computing "jobs" that I haven't done, from programming to building and selling the horrid things (worst. job. ever.)

So when I want to learn a technology, the best way forward for me is to get the latest packages and install them locally. That way I can experience the whole deal, from being an installer/admin through all the various other roles. That certainly has been the case with Moodle.

Unfortunately, some technical issues prevented me from having a full working copy of Moodle 2.0 until very recently, and it was only yesterday I could really play.

What I learned by creating one of my very old (1990s) C programming courses is that there are a lot of ways to do things in Moodle, and many of them are easy and fun.

Were I to repeat this revision, I would immediately erase the Alfresco links to the study guide and place the content into Moodle pages, which I am fully capable of editing – even remotely. This is in contrast to the impossible situation I found when trying to remotely edit Alfresco documents. In the end I had to give up on Alfresco completely. It's probably great if you are local (i.e. in an office in Edmonton or Athabasca), but horrible from where I am.

I don't think anything would have been lost by putting my course's content back into Moodle (where it was in the beginning), but what I would have gained in time and ease of revision would have been phenomenal.

As I said, just a few thoughts on the experience to date.

-R

My reason for posting is twofold. First, the obvious: the edit process had some horrid bits (Alfresco access) and I wanted to mention them again.

The second reason is more important. I want to counteract the notion that we academics just like to complain. Speaking for myself and others I know in Comp. Sci., we often spend a lot of our own time experimenting with technology. Keeping current in our chosen field is essential, and that includes teaching technologies in our field.

Linux distros: your initial boot process is broken

EVERY LINUX DISTRO SHOULD BOOT TO NETWORK SSH LOGIN READY, especially those customized for hardware boards.

Nearly every new hardware board that has come out in recent months is accompanied by a custom 'distro' (short for distribution) of Linux. This would be a great thing, except that the distros are always crippled by one fatal flaw.

Specifically, every new hardware distro assumes that every user wants to plug in a keyboard, a monitor and maybe a mouse, then fire up the new board and CONFIGURE the thing from the 'console'.

WHY? THIS IS THE 21st CENTURY.

Here is what I think should be the default for every new Linux distro:

1. boot using DHCP. While I prefer static IPs on my own internal network, I still have a DHCP server ready for just this occasion. If the developer writes the MAC address on the board, I can even associate a specific IP with the MAC address.

2. boot with SSH active and provide a default root password. Even better: provide a default user account with a password, which can use sudo or 'su -' (to root) with a supplied sudo or root password.

In other words, boot to network-login-ready (a sketch of what that might look like follows the list below). With that, I can take it from there and customize the config any way I want.

3. bonus round: boot to a graphical user interface if you like, so that those who simply must have a keyboard, mouse and monitor can still get their jollies.
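To make the wish concrete, here is a hedged sketch of what the image builder would have to bake in – assuming a systemd-based distro; the service names, user name and password are illustrative, not taken from any particular board:

    # enable networking and remote login in the shipped image (service names vary by distro)
    systemctl enable ssh dhcpcd

    # create a documented default account that can sudo to root
    useradd -m -G sudo admin
    echo 'admin:changeme' | chpasswd

Print that default account and the board's MAC address on the box, and everything above follows; I can take it from there over the network.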

Linux is ubiquitous, and why that’s not always a good thing

As someone who has been using and working with Unix and Unix-like operating systems since the early 1980s, I am growing increasingly frustrated with Linux.

Linux has become the de facto industry-standard server platform for all things web – certainly for any open-source project. The problem is that everyone who develops on the Linux platform seems to assume that because it's 'almost good enough' with respect to security, developing with Linux assumptions is good enough for everyone.

But that's not true. It's not that Linux is insecure, but rather that many of the choices made in creating the popular Linux distros entail less security than could be achieved. And there's the problem. Try to install a 'produced on Linux' product on a more secure operating system, or an operating system with higher security settings, and the install will fail.

Examples include WordPress, Moodle and Elgg – all latest versions, and all of which fail to install on a stock OpenBSD (ultra-secure) OS. The problem is with permissions, ownership and groups. In order to install one of the above packages on OpenBSD, one is forced to change groups and file permissions from secure settings to much less secure settings before the install will succeed.
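To make that concrete – a hedged sketch, assuming OpenBSD's stock layout with web content under /var/www/htdocs and the server running as the www user – the installers typically refuse to finish until you do something like this, which is exactly the blanket write access a secure default tries to avoid:

    # loosen ownership and permissions so the web installer can write its own files
    chown -R www:www /var/www/htdocs/wordpress
    chmod -R g+w /var/www/htdocs/wordpress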

It's all very frustrating. Taking an ultra-secure operating system and intentionally crippling some of its security just to get popular Linux-developed packages to install and run.

It's not that Linux itself is necessarily at fault, but rather the typical developer mentality of "it worked on my machine, so the problem is you". This trend seems to pervade much of modern software development. And that is not a good thing.

The Internet

(originally posted Dec 3, 2008)

I hate the internet.

Well, actually, I love the internet.

I’m just glad as hell that I didn’t have the internet in:

  1. pre-school
  2. grade school
  3. junior high
  4. high school
  5. university

Because if the internet had existed in its present form back when I was doing 1-4 (above), then today I would most likely be over 50 (which I am), working at a job (when I could break away from the internet), asking "…would you like fries with that?".

I'd also probably be over 1,000 lbs, unable to move (except to surf the web) and eating only Doritos or Cheez-Its or some similar plastic-cheeze-flavored, deep-fried, extruded-paste pseudo-food.

Yep. Thanks to having NO INTERNET as a kid, teen and young adult, I actually got to do things like play outside, read books, and GRADUATE.

I'm also very grateful that there were no computers available when I was growing up in rural Canada (at least not until university), and ESPECIALLY no COMPUTER GAMES. Just thinking about all the time I wasted on computer games AFTER I had a career is scary stuff. Imagine if I had had even the rickety games from the 1980s back in school. Scary stuff, kids!

PHP

(originally posted Jun 8, 2010)

I am not very fond of PHP.

Really.

PHP is one of the "go-to" languages for web development. Actually, I suspect it's the "go-to" language of the same bunch that embraced VB (Visual Basic) in the '90s.

You know – the ones that can program well enough to get into REAL trouble, but not well enough to make code that is elegant.

Sure, most PHP code (like older VB code) works, but has almost no ability to handle anything outside the meagre boundaries of the original problem. Give it some weird input, and it crashes like Windows ME.

I've been resurrecting an older PHP project this week, and that's what got me thinking about all this. The code I've found (it was buried in a rather non-obvious place on the server) works, but it looks like crap (visually). The logic is a true cobbled-together nightmare, with every single thing being a separate source file.

Worse, and this is perhaps the crux of my argument, the code is a mish-mash of programming "stuff". And it's 100% "good" PHP. There are regular expressions right next to weird function calls right next to cryptic commands, arguments and stuff that looks like it came out of some horrid bash script.

The real problem with PHP is that it's a utility language, and that means it's been cobbled together from bits and pieces of all the other utility languages that came before it… shell scripting, awk, grep (and other Unix 'stuff'), Perl and who knows what other languages… all thrown together in a washing machine's spin cycle to tumble around into PHP.

In short – horrid. Certainly in danger of becoming a “write only” language.

Software “Engineering”

(originally posted Oct 9, 2012)

… is NOT Engineering. It’s not really software either. It’s mostly age-old project management drivel in a shiny new wrapper.

If you examine the discipline closely, you will notice it's not really about software. Students may take a C++ course or two, but aside from a few projects they don't delve as deeply into programming and programming topics as the older 'Computer Science' discipline does. What you get instead is methodologies. Not just any methodologies, but the newest and shiniest 'agile'-type methodologies. Any exposure to older methodologies is as 'bad example' object lessons. In the end, you are not being taught software, you are being taught management. They might just as well call the program "Computer MBA".

I took Engineering in the '80s. Back then, there were four main disciplines: Chemical, Civil, Electrical and Mechanical. Each one had a common core for the first two years, in which everyone took the same courses. We all took Math (lots of math), Statics, Dynamics, Physics, Materials, Design and Drafting, Economics (yuk!) and options. After second year, if you passed, you went on to third and fourth year, where you declared your specialty and took two years of courses specific to that discipline.

One thing was constant through all four years, no matter the discipline. You learned how to solve problems, not how to memorize or regurgitate classroom lectures. Almost every Engineering exam was open book, and most allowed you to bring in a 'formula sheet', not that it did much good. You were expected to take what had been discussed in the lectures and text, plus the assignment work, and extrapolate solutions to novel problems posed on the exams. Assignments were the same – adapt, derive, extrapolate, solve. Partial marks for showing your work were worth more than the correct answer. Knowing HOW to get the correct answer was as important as getting the correct answer. (In fact, if all you wrote down was the correct answer you would receive a mark of zero in some courses – showing the path to the solution was that important to some professors.)

So what does this have to do with so-called 'software engineering'? Lots. People enrolled in software engineering at many colleges and universities do not have to take ANY core Engineering courses. They take some computer science courses instead, but only the ones with the 'special' appellation of "Software Engineering – xxx" in the name. Basically, they take methodology courses. While I'm sure this produces good methodology majors, it does not produce an Engineer. An Engineer solves problems. Engineers arrive at this capability by taking all those courses in their four years (especially in the first two years) that introduce Engineering and problem solving to the students, and that require them to master some of these skills to advance.

Until and unless "software engineering" programs require all of their students to take the first two years of the Engineering core before branching into the software side of things, their graduates are emphatically NOT, in my opinion, Engineers.