
Dear Journalists: Bits and Bytes

Dear Journalist-type person,

There's something we must discuss. You see, you've been making a very basic mistake in many of your articles when it comes to writing about the Internet, and specifically Internet speeds. Let's take a look at a small quote:

...unless you have an internet connection of impossible speeds. (Mine is nominally 10MB, by the way, which in practice means maximum download speeds of 1.4 megabytes per second).
(source: Rock-Paper-Shotgun).

Can you spot the problem here? Internet speeds are measured in megabits per second. The symbol for 'bits' is a lower-case 'b', so an Internet connection that's 10 Megabits per second could be written as "10 Mbps". I guess if you're feeling lazy you could leave off the "ps", and end up with "10 Mb" (although it's a really sloppy thing to do), but NEVER "10 MB" - that means something else entirely.

Modern PCs use bytes that contain 8 bits. The correct symbol for a byte is an upper-case 'B', so "10 MB" means "ten mega-bytes", not mega-bits, which is probably what you meant when you were describing the speed of your Internet connection.
Back to our Internet connection that runs at 10 Mbps. It's unfortunate that speeds are measured in bits, because a much more useful measure is bytes per second, since that's how we deal with data sizes. We know that a CD ISO image is likely to be around 700 MB, an MP3 file around 3 MB, and an image from a digital camera around 1 MB. To convert our 10 Mbps connection speed to megabytes per second, we divide by 8, and get 1.25 MBps. However, this is the theoretical maximum speed, and there's a lot of overhead in any network connection, so in practice it's unlikely you will experience anything close to this maximum speed.

If your eyes glazed over, or perhaps you felt light-headed reading that, here are a few take-home points to make it easier for you:
  • Connection speeds are measured in megabits-per-second. The correct unit symbol for this is "Mbps".
  • Files are measured in Megabytes.
  • A byte has 8 bits. So to turn your connection speed into something useful, divide the number by 8 and make the unit symbol "MBps".
I would be honoured if you'd consider this small point next time you go to write online. Some of us are acutely sensitive to these matters, and you really don't want to upset the geeks of this world.

Kind Regards,

Another Microsoft UI Fail

...this time in Microsoft Outlook. Could the programmer who was too lazy to write their code properly please go back to school? There is no excuse for this:

You want me to take this string, split it on comma characters, and re-join it with semi-colons instead? Sure, I'll do that for you, with my meat-fingers, because I'm really good at that, and computers are known to be terrible at SIMPLE TEXT MANIPULATION.

Visual Studio Fail

Perhaps this is a symptom of the underlying operating system, rather than the Visual Studio IDE. In either case it sucks:

I like to keep my files organised in folder hierarchies. Now I'm being forced to use a flat, wide folder tree by my IDE.

Not. Happy. At. All.

Why your Python editor sucks

I'm doing a reasonable amount of Python coding work these days. It would help me to have an editor that doesn't suck. My requirements are:

  1. Small & Fast. I'm not after a massive clunky IDE, just an editor with enough smarts to make editing multiple Python files easier.
  2. Sensible syntax highlighting.
  3. Understands Python indentation, PEP8 style. Specifically, indents with 4 spaces, and the backspace key can be used to unindent.
  4. Can be integrated with one or more lint checkers. Right now I use a wonderful combination of pep8, pyflakes and pylint. I want the output of these to be integrated with the editor so I can jump to the file & line where the problem exists.
That's it. I don't think I'm asking too much. Here are the editors I've tried, and why they suck:

  1. KATE. I love Kate; it's my default text editor for almost everything. However, there is no way to integrate lint checkers. I could write a plugin, but that's yet another distraction from actually doing my work.
  2. Vim. I'm already reasonably skilled with vim, and Alain Lafon's blog post contains some great tips to make vim even better. My problem with vim is simply that it's too cryptic. Sure, I could spend a few years polishing my vim skills, but I want it to just work. Vim goes in the "kind of cool, but too cryptic" basket.
  3. Eric. When you launch eric for the first time it opens the configuration dialog box. It looks like this:
    How many options do I really need for an editor? An over-stuffed options dialog is the first sign of trouble. It gets worse, however: once you dismiss the settings window, the editor looks like this:

    Need I say more?
  4. Geany. Looks promising, but no integration with lint checkers.
  5. pida. Integrates with vim or emacs for the editor component. Looks promising, although the user interface is slightly clunky in places. Pida suffers from exactly the same problems as vim does, but I may end up using it anyway.
There are a few options I have not tried, and probably won't:
  1. Eclipse & pydev. Eclipse is a huge, hulking beast. I want a small, fast, lean editor, not an IDE.
  2. Emacs. Can't be bothered learning another editor. Doesn't look that much different to vim, so what's the point in learning both?
  3. KDevelop. Same reason as Eclipse, above.

I suspect there's a market for a simple Python editor that just works. Please! Someone build it!

Visual Studio Exception Woes

Microsoft, in their infinite wisdom, have decided to make programming easier. How? By setting the default behavior for Visual Studio 2010 Ultimate to ignore (i.e., not break on) exceptions thrown from non-user code. Behold the default settings for exceptions in a brand new C# project:
Try as I might, I have not yet discovered a way to change the default for these settings for all projects. How am I supposed to teach students about exception handling when Microsoft are doing their best to get rid of them?

Bah.

Attention all Programmers:

As a user of open source software, I like to try and give something back to the community whenever I can. As a somewhat proficient programmer I can do this more often than most, but one of the most effective ways of giving back for non-programmers is by filing bug reports.


Unfortunately, there are two main issues with this:
  1. Submitting a bug report is often incredibly painful. Most software bug trackers I have seen require an account, which means registering a new username & password (I can't wait for more non-essential services like bug trackers to start using OpenID), activating my account... all this can take 30 minutes or more. Submitting a bug report should be a fire-and-forget affair, taking 10 minutes tops: any longer and I can't afford to spend my time.

    Many bug trackers ask users for information that is hard to obtain, or intimidating to non-programmers. How many users know their CPU architecture? Or distribution? Or even the software version they're using? One way around this is to have the bug-reporting done from within the application on the client machine itself, but still - bug trackers should be as friendly to users as possible. How about posting some simple instructions on how to obtain this information for non-technical users?

  2. Even after navigating the multiple hurdles involved in submitting a bug, you then have to deal with the programmers fielding the bug report. This is where it gets tricky. Many programmers view bug reports as a personal insult to them (perhaps subconsciously). Many programmers will triage bugs that they don't want to fix, giving excuses like "It's like that by design", or simply "Low priority, won't fix".

    Here's the thing though: The customer is (nearly) always right.

    If a user has taken the time to navigate your awful bug tracking software and submit a bug, it must be a big deal to them. If the matter at hand really is like that "by design", your design is probably screwy. If you won't fix it because it's low priority then you need to stop adding new features, and fix the ones you already have.
Open source software seems to suffer from these problems more than commercial software. I guess it's because we're not trying to extract money from our clients. Can you imagine a professional code shop telling a paying customer "I'm sorry, we're not going to fix that bug you reported, because we intended it to work like that"? Yeah, right.

So how do we fix this for the open source world?

There's no simple answer that I can fathom. It requires programmers to be a bit smarter and have a bit more empathy for the mere mortals who have to use their software. As a programmer, I include myself in this category.

That is all, thank you.

Project Documentation

Why is it that most open source project pages are so terrible at documenting their own project?


I'm not talking about API or technical documentation - I'm talking about telling new visitors to your site what the hell your code is about.

Project authors, here are some handy tips:
  1. On your project front page, right at the top, put a simple explanation of what your code does (or what you hope it will do someday). Remember that your audience may not have the same level of technical experience as you do. Examples (screenshots, code snippets) are a MUST. A picture speaks a thousand words and all that...

  2. Make sure you include the development status of your project. I can't count the number of times I've spent 30 minutes looking at a project only to realize that it's not nearly complete enough to be usable to me. There's no shame in saying "this library is working, but not production ready. It is missing features X, Y, Z"

  3. Inject some enthusiasm! How many boring, dull, dry project descriptions do I have to read through? Most sound like the authors aren't passionate about their product. Sell your project; inject some enthusiasm, and maybe your viewers will become more enthusiastic in the process!
Well, that's my rant for the day. Now I must go update my project documentation...

Design and Implementation

One of the key tenets in good software design is to separate the design of your product from its implementation.


In some industries, this is much harder to do. When designing a physical product, the structural strength & capabilities of the material being used must be taken into account. There's a reason most bridges have large columns of concrete and steel going down into the water below. From a design perspective, it'd be much better to not have these pillars, thereby disturbing the natural environment less and allowing shipping to pass more easily.

Photo by NJScott. An example of design being (partially) dictated by implementation.

Once you start looking for places where the implementation has "bubbled up" to the design, you start seeing them all over the place. For example, my analogue wristwatch has a date ticker. Most date tickers have 31 days, which means manual adjustment is required after a month with fewer than 31 days. I'm prepared to live with this. However, the date ticker on my watch is made up of two independent wheels - and it climbs to 39 before rolling over, which means manual intervention is required every month! What comes after day 39? Day 00, of course!




It's easy to understand why this would be the case - it's much simpler to create a simple counting mechanism that uses two rollers and wraps around at 39 than it is to create one that wraps at the appropriate dates. I have yet to see an analogue wristwatch that accounts for leap-years.

Software engineers have a much easier time; our materials are virtual - ideas, concepts and pixels are much easier to manipulate than concrete and steel. However, there are still limitations imposed on us - for example, data can only be retrieved at a certain speed. Hardware often limits the possibilities open to us as programmers. However, these limitations can often be avoided or disguised. Naive implementations often lead to poor performance.

A classic example of this is Microsoft's Notepad application. Notepad will load the entire contents of the file into memory at once, which can take a very long time if the file you are opening is large. What's worse is that it will prevent the user from using the application (Notepad hangs, rendering it unusable) while this loading is happening. For example, opening a 30MB text file takes roughly 10 seconds on this machine. This seems particularly silly when you consider that you can only ever see a single page of the data at a time - why load the whole file when such a small percentage of it is required at any one time? I guess the programmers who wrote Notepad did not intend for this use case, but the point remains valid: an overly-simple implementation led to poor performance.
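To make that last point concrete, here's a minimal sketch of the alternative approach (the read_page helper and its parameters are invented for illustration): seek to the part of the file the user is actually looking at and read only that much, rather than loading everything before showing anything.

#include <fstream>
#include <string>
#include <vector>

// Read just enough of a (possibly huge) file to fill one screen,
// instead of slurping the whole thing into memory up front.
std::string read_page(const std::string &path, std::streamoff offset, std::size_t page_size)
{
    if (page_size == 0)
        return std::string();
    std::ifstream file(path.c_str(), std::ios::binary);
    if (!file)
        return std::string();
    file.seekg(offset);
    std::vector<char> buffer(page_size);
    file.read(&buffer[0], static_cast<std::streamsize>(buffer.size()));
    return std::string(&buffer[0], static_cast<std::size_t>(file.gcount()));
}

A real editor would wrap this in a proper buffering layer, but the principle stands: how the data is read is an implementation detail, and it shouldn't bubble up to the user as a frozen window.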

The unfortunate state of affairs is that the general population have been conditioned to accept bad software as the norm. There really is no excuse for software that is slow, crashes, or is unnecessarily hard to use. It's not until you use a truly incredible piece of software that you realise what can be achieved. So what needs to change? Two things:
  1. Developers need to be given the tools we need to make incredible software. These tools are getting better all the time. My personal preference for the Qt framework just paid off with the beta release of Qt 4.7 and Qt Creator 2.0. I plan on writing about the new "Quick" framework in the future: I anticipate it making a substantial difference to the way UI designers and developers collaborate on UI design and construction.

  2. Users need to be more discerning and vocal. As an application developer it can be very hard to know what your users think. If you don't get any feedback, are your users happy, or just silent? We need a better way for users to send feedback to developers; it needs to be low-effort, fast and efficient.

My Morning thus far

My morning thus far:

Woke up. Noticed it had been snowing. Roughly 4-5 centimetres on the ground, and still coming down, although it's more ice crystals than snow.

Since today is the first day that my company is exhibiting at the BETT trade show in London, got dressed in snazzy new company shirt, and trudged my way (30 minutes) to train station.

Bought £21 ticket. Went into station, just in time to hear announcement that all trains were terminating at Woking, and there would be no services to London. Damn!

Walked home through park. Snow quite pretty, but cold and wet as well:

Finally got back home. Roads didn't look too bad, so I thought I could at least drive into the office. Cleared car of snow and ice, drove 2 metres forward and got stuck, half in and half out of the driveway!

So here's the thing: If the gulf stream breaks down, or becomes more erratic, this will happen more and more. We need infrastructure to cope with the bad weather. How do other countries deal with this?

Sexism in IT?

Mark Shuttleworth recently copped some flak for allegedly sexist content in a talk. I wasn't there, and haven't seen the talk, so I can't really comment on the material itself, but a few things struck me about some of the online responses:
  1. Many of the people complaining weren't there - they watched the video footage online. Why would you do this? If you suspect that there's going to be content that offends you, don't watch it. If you do decide to watch it, I'm not sure you can complain too loudly when (surprise surprise) you are offended by it.

  2. Yes, IT is a male dominated field - for whatever reason (there's lots of research discussing why this is, but that's for you to find). That's not to say that sexism should be inherent, or even tolerated, but it is to be expected. Anyone shocked by this statement should try working in other male-dominated fields, such as construction or engineering. No, it's not right, but it's how it is.
I met Mark briefly at a Linux conference a number of years ago and he seemed to me to be a straight-talking, reasonably honest, good natured kind of guy. I'm sure he made an honest mistake, and regrets his choice of words. I would urge Mark to apologise, and urge everyone who complained to spend the same amount of energy protesting equally important matters such as software patents, or advocacy for open, sane standards.

Spolsky loses his cool


Today I stumbled across Joel Spolsky's article "The Duct Tape Programmer". Essentially it's a thousand word rant to make this simple point:

A 50%-good solution that people actually have solves more problems and survives longer than a 99% solution that nobody has because it’s in your lab where you’re endlessly polishing the damn thing. Shipping is a feature. A really important feature. Your product must have it.

Of course he's right - however, his post is ten agonising paragraphs wherein he rants about design patterns, extended C++ features such as template classes (wait, they've been around for a while now - can we still call them "extended" features?), and multi-threading (!!!), and finally one succinct paragraph in which he makes his point (most of which I have quoted above). Now don't get me wrong - I am by no means criticising his writing style ("people in glass houses..." and all that) - all I'm suggesting is that someone with Joel's reputation may wish to think a little harder before posting this sort of tripe online, lest he tarnish his otherwise good reputation. Let me give an example:

One principle duct tape programmers understand well is that any kind of coding technique that’s even slightly complicated is going to doom your project. Duct tape programmers tend to avoid C++, templates, multiple inheritance, multithreading, COM, CORBA, and a host of other technologies that are all totally reasonable, when you think long and hard about them, but are, honestly, just a little bit too hard for the human brain.
So Joel Spolsky is seriously suggesting that C++, templates, multiple inheritance and multi-threading are invariably going to "doom your project"? Come on. Multi-threading is critical to the success of many projects - without it, or something similar, a huge portion of applications simply wouldn't exist, or at least would be a lot more complicated. I challenge Joel to write a print spooler as part of an interactive application in a single thread. I challenge Joel to write a tool for scientific analysis that must process lots (gigabytes? exabytes?) of data while maintaining an interactive user interface.
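For what it's worth, the "long-running job plus responsive application" combination Joel dismisses is bread-and-butter stuff. Here's a minimal sketch in modern C++, purely as an illustration - std::async stands in for whatever threading facility your framework provides, and run_print_job is a made-up placeholder for the real work:

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Stand-in for the long-running task (spooling a print job, crunching data, ...).
int run_print_job()
{
    std::this_thread::sleep_for(std::chrono::seconds(2));
    return 42; // pages printed, say
}

int main()
{
    // Hand the job off to another thread...
    std::future<int> job = std::async(std::launch::async, run_print_job);

    // ...while the "interactive" part of the application keeps running.
    while (job.wait_for(std::chrono::milliseconds(200)) != std::future_status::ready)
        std::cout << "still responsive...\n";

    std::cout << "printed " << job.get() << " pages\n";
    return 0;
}

Is there more to get right in a real application (cancellation, error handling, shared state)? Of course - which is exactly why the technique should be understood rather than banned.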

As I mentioned earlier, Joel has a point - however, instead of suggesting that any slightly-complicated technology be banned outright, I'll instead suggest that any slightly complicated technology had better be understood by your programmers before you use it in your project. Don't use multi-threading because it sounds cool, use it because it's the right tool for the job.


Microsoft's unpaid testers

I just discovered this charming little quote in the Windows 7 blog:

To date, with the wide usage of the Windows 7 Beta we have received a hundreds [sic] of Connect (the MSDN/Technet enrolled beta customers) bug reports and have fixes in the pipeline for the highest percentage of those reported bugs than in any previous Windows development cycle.

So you're publicly advertising the fact that your product was very buggy when you launched the beta test phase, and you're scrambling to fix all the bugs at the last minute? Whatever happened to internal testing? Who will test all the bugs introduced with your bug fixes?


Bah, my dislike of the Microsoft software mill continues! Hooray for uninformed opinion!

Henry lives on

After complaining about the poor state of the web browser on the KDE platform, I have to report with mixed emotions that I've bitten the bullet and installed Firefox. I'm not a huge fan of Firefox - yes, it's open source, and seems to work fairly well, but it's also slow and a huge resource hog.

Who here remembers when Firefox first came out? It was supposed to be a stripped-down version of the Mozilla web browser. The idea was that by removing the mail client, IRC chat application, and god knows how many other applications we'd end up with a smaller, faster, lighter browser. To some extent it worked. However, I'm starting to wonder if they'd have been better off starting from scratch.

I challenge anyone reading this to use Chrome for Windows for a week and then switch back to Firefox for good - I guarantee you'll be pulling your hair out within a week; Firefox is slow! I always assumed that the reason my browsing experience was so poor was down to my slow Internet connection, but it turns out that a fair amount of the delay is the browser.

So I have Firefox - the GTK theme KDE installs looks awful, and several web sites look rubbish, but at least I can check my email...

Well, that's it for now. More to come soon (and this time I'll lose the Shakespearean titles).

My Kingdom for a Browser!

This post is set to be one of the most painful entries I have ever written on this weblog. Not because the subject matter is particularly difficult, but because the technology has let me down.

The story starts with me upgrading my laptop to Kubuntu 8.10. It's been out for a while, and I'm a big fan of KDE 4, but I hadn't had a sufficiently quiet weekend in which to take the plunge. I was previously running Kubuntu 8.04, so I could have just downloaded the latest packages, but I wanted to start from scratch, for a couple of reasons:

  1. I wanted to remove all the rubbish that I had installed over the last six months. I frequently download and install applications, only to find that they're not quite what I want. I rarely uninstall them, so over time my hard disk fills with cruft.

  2. I wanted to wipe away all the stale config, especially as my window manager would be changing from KDE 3.x to KDE 4. Besides, there's a certain pleasure to be derived from configuring a brand new KDE installation.

The install was a breeze, and for the first time ever all my laptop hardware was detected and configured correctly without any hacking on my part - even the weird web-cam, which doesn't even work in Windows XP. Life was good, until I went to browse the Internet.

KDE ships with Konqueror as its default web browser. As far as web browsers go it's fairly nice - it lacks the large "Add-Ons" repository that Firefox has, but many of the plugins I can't live without when using Firefox are included as standard in Konqueror.



Konqueror is more than just a web browser though - the integration between Konqueror and the rest of KDE is truly stunning (as an aside: this is why I prefer KDE over other desktops. Technologies like KParts and D-Bus are the future of desktop applications, and KDE is leading the charge in this area). As an example, if you want to search Google for something, but don't have your browser window open, what can you do? Easy! Just press Alt + F2 to open the "Run Command" dialog, and type "gg: " followed by your desired keywords. Hit Enter and you'll launch Konqueror with the Google results right there waiting for you.


Konqueror also has extensive protocol support. For example, SCP and SFTP are supported by default. Try typing something like "fish://user@host" - Konqueror will ask for the user's password, and will then behave like a file browser for the remote machine.

These two examples hardly scratch the surface of what Konqueror can do. However - there are some very serious problems with it. Using GMail with Konqueror is torturous. First Google will give you the plain-old-HTML-only mode, since Konqueror isn't officially supported. Then, if you ask for the full version anyway you get all sorts of weirdness - and a completely unusable inbox. The solution seems to be to set the user agent to Safari 2.0, but even then my inbox seems to be incredibly slow.

Members of the KDE community have pointed out that GMail plays fast-and-loose with web standards, so it's understandable that Konqueror misses a few tricks. The Google engineers must have tested the javascript enhanced version of GMail with the most popular browsers, and left Konqueror out in the cold - and fair enough. However, the KDE developers are missing the point: no matter how good their browser is technically - no matter how standards compliant it is, it simply does not work for me - the user. I now have a browser that I cannot use to check my email (no, using the HTML-only version is not an option).

So what are the alternatives?

Before I upgraded Kubuntu I had Firefox installed. However, when I went to install it, I nearly had a heart attack. In order to install Firefox, I had to install 63 other packages - most of them gnome or GTK packages. The reason for this is simple: Firefox uses the GTK toolkit to provide a UI. I knew this already, but this early on in my new Kubuntu install I wasn't about to pollute my OS install with GTK packages.


What can I do? There are a few other options available to me:

There's been talk of a Firefox port to Qt. However, nothing usable has materialised yet, so that's off the cards.

There's the Arora browser - this is a Qt browser running the Webkit engine (which is included as standard in later Qt distributions). A quick install told me what I needed to know: also not really usable as my default browser.

Finally there's Google's offering: Chromium. However, this has not yet been ported to Linux.

So what's the underlying cause of my troubles? Without hacking the code directly, I have no idea. Perhaps this is part of the KHTML vs Webkit debacle - There's a good article outlining the whole issue here, but I'd like to quote a couple of paragraphs:

So, what's the situation? Well, it appears that KHTML will remain the web rendering engine for Konqueror going into KDE 4.0, and that it could be changed to qtWebkit as of KDE 4.1. That does not seem to be officially settled, so much as the most likely scenario. It appears that the KHTML team seems hesitant about the proposition, while many KDE developers and users alike have expressed a very receptive attitude toward seeing Konqueror user qtWebkit. And Rusin made clear to a reader that he believes the KHTML team should continue their work as long as they like.

The challenge is that Webkit, which comes from Apple, is widely tested, and is thus known to work well with a large number of websites. KHTML is not as widely tested, and, for example, GMail doesn't work well with Konqueror. Many Konqueror fans have expressed regret at having to keep Firefox around just for sites like GMail, that don't recognize KHTML. Using Webkit would solve these problems, enabling many users to stick to one browser.

In other words: "The developers are dragging their feet to implement a fix that would arguably make Konqueror a better browser". Of course, the developers involved are free to do as they please with their code, but they're dragging down the rest of the KDE platform - I now have to have multiple browsers installed to do the most basic of day-to-day tasks.

While the situation is frustrating in itself, the unfortunate fact is that similar things are happening all over the open source scene. Frequently developers get too caught up in making sure that their code is "right" (that may mean designed correctly, stable, cool, standards compliant, well integrated, or anything else the developer feels is important), and not enough time is spent making sure that the product is usable. I suppose this is one of the drawbacks of a development methodology where there is no external pressure to develop your product.

Usability is king, and trumps all other concerns in a product. If it's not usable, it's no good.

WiiWare: Innovation and mistakes


I bought a Nintendo Wii earlier this year. I've never actually owned a console before, but have a reasonably strong loyalty to Nintendo. They appear to publish the best games (of course, that's entirely subjective). My game catalogue now includes the following titles:

You may have noticed that I'm not a big fan of the more lighthearted "party" style games out there - I prefer the more focused, single-player games. Once I had purchased those titles I began to look for something else, but quickly found that there's not a whole lot of choice out there right now. Most new Wii games tend to be in the "party" category.

Thankfully, Nintendo have launched WiiWare. WiiWare is a collection of titles created by third party developers. There are many different titles to choose from, and each title costs around £10. I ended up purchasing two titles:
These are both splendid games. However, once again, the pool of good games in the WiiWare collection is very limited - the main reason for this, as far as I can see, is that it's incredibly difficult to get your hands on the tools required to develop games for the Wii. For a start, Nintendo are only selling their development kit to well-established development houses (you need a registered business, proper offices, previously published titles etc.). Their application form states that:

The Application includes a NonDisclosure Agreement (NDA). Once the Application and NDA are
submitted by you, we will email you a copy of the Application and NDA for your records. Please
note that your submission of an Application and NDA does not imply that your company is approved,
or will be approved, as an Authorized Developer for the platforms above.

...
If the Application is approved by Nintendo, we will notify you by email. At this point, your
company will be considered an Authorized Developer for the platform(s) specified. If your company
is approved for Wii, this also includes WiiWare. If approved the appropriate SDKs can be downloaded
from Warioworld, and development kits can be purchased from Nintendo of America.

So first you need to sign an NDA. Then, if you are accepted, you need to purchase the development kit (priced at over US$1,000). All this makes it incredibly hard for "joe programmer" to start cutting code for the Wii.

I really think Nintendo have missed a trick here; imagine the community that could form behind a free development kit. Think about the success of the Apple AppStore for the iPhone, but with games instead. The Wii is a revolutionary platform, with a unique control interface: surely lowering the barriers to entry can only be a good thing?

There's another side to this as well: The Wii Homebrew team have already done a lot of work reverse engineering the Wii, to the point where there is already an SDK available for use. Is it usable? I haven't tried it myself yet (perhaps when I finish some of my current projects I'll play with it), but there are already a fair number of games available for the homebrew channel: I count more than 70 games listed, as well as a number of utilities, emulators and other bits and pieces.

The free development kit is based on the gcc PPC port, and comes bundled with everything you need to start development. GNU gcc has been a well-established player on the compiler scene for a long time, so it's not like we're playing with untested technology here.

Given that many of the secrets of the Wii are out (or are being reverse engineered even as you read this), wouldn't it be prudent for Nintendo to officially welcome third party developers to the fold? More importantly, for other, future consoles, imagine a world where:

  • The original manufacturer (Nintendo, Microsoft, Sony or whoever) uses an open source toolchain from the beginning. I assume that Nintendo have spent a lot of time and money developing their toolchain, which seems a little wasteful to me, when an open source solution already exists. Sure, it may need to be tailored for the Wii, but I'm sure there are plenty of people who would embrace these changes. An open source toolchain lowers development costs, and lowers the barrier to entry for third party developers.
  • Third party developers are encouraged to write applications themselves, and the cost to entry is kept as low as possible. The manufacturer supplies the hardware, points to a pre-packaged toolchain of open source applications, and provides a development SDK with decent documentation. If all you need to test your games is a copy of the console itself, that would be great. However, why not build an emulator that can run on a standard PC?
  • The manufacturer provides bug-fixes for the SDK when needed, and creates a community-oriented website for budding developers.
  • The manufacturer provides a free (or very cheap) means of distributing third party applications via the internet, and offers the option of DRM routines, should the initial authors wish to make use of them.

I believe this setup could bring about a number of beneficial changes to the console gaming market:
  • An overall increase in the diversity and quality of available games.
  • A vibrant community of developers who help the manufacturer maintain the platform SDK and development toolchain by submitting bugs, feature requests and other suggestions.
  • Increased popularity for the platform (I'd buy any platform that offered all of the above).
Unfortunately, I can't see it happening any time soon. It seems to me that the big three console manufacturers are still engrossed in the "proprietary hardware, closed source" paradigm. Still, a guy can dream, right?

Teaching Programming mk. 2

I blogged before about what I think we should teach programming students, and almost immediately wished I hadn't. Sometimes I feel that my blog posts are somewhat pointless meanderings through the garbage that inhabits my sleep-deprived brain. At other times I feel that I have contributed something useful to the general public. The post in question is firmly in the former category - but what can I do? I won't start deleting articles as soon as I fall out of favor with them, so I'm hereby correcting my earlier mistakes (at least, attempting to). Illiad Frazer knows how I feel:

The whole point of the previous post was that I felt that most graduate students were under-prepared for work in industry. My main evidence of this is that it seems to take a long time, and more importantly a lot of interviews before one strikes "candidate gold" when recruiting for a new programmer.
I will admit that this could be for many reasons: perhaps our expectations are too high, perhaps we are not paying enough to attract the kind of graduate we're looking for, or perhaps the industry we're in isn't desirable enough to attract the better candidates. The list goes on endlessly - and yet I cannot ignore the fact that most graduates I meet are not up to scratch.

So what prompted this revision of a past article? I happened to read E. W. Dijkstra's article entitled "On the cruelty of really teaching computing science". In it, he postulates that the methods used by most universities are fundamentally flawed when it comes to teaching computer science, and more specifically when teaching computer programming. I'd like to quote part of this article:

...we teach a simple, clean, imperative programming language, with a skip and a multiple assignment as basic statements, with a block structure for local variables, the semicolon as operator for statement composition, a nice alternative construct, a nice repetition and, if so desired, a procedure call. To this we add a minimum of data types, say booleans, integers, characters and strings. The essential thing is that, for whatever we introduce, the corresponding semantics is defined by the proof rules that go with it.

Right from the beginning, and all through the course, we stress that the programmer's task is not just to write down a program, but that his main task is to give a formal proof that the program he proposes meets the equally formal functional specification. While designing proofs and programs hand in hand, the student gets ample opportunity to perfect his manipulative agility with the predicate calculus.
This method of programming - approaching the programming language as a kind of "predicate calculus" - has its advantages. It demands that the students pay attention to the features, rules, regulations and guarantees that the language provides. Whichever language is used (and to a certain extent it does not matter), the rules and regulations of that language are going to dictate the structure of the program. This is similar to the fact that the laws of math dictate the form of any mathematical proof; ignore the laws of the language, and your program (or proof, if you will) no longer makes sense. In the domain of integer mathematics, 2 + 3 will always equal 5. In the domain of C++, local variables are destroyed in the reverse order that they were created in (insert whatever rule of the language you want there).
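As a tiny illustration of that last kind of rule, here's a minimal C++ sketch (the Tracer class is invented for illustration); the output order is guaranteed by the language, not by luck:

#include <iostream>

struct Tracer
{
    const char *name;
    explicit Tracer(const char *n) : name(n) { std::cout << "construct " << name << "\n"; }
    ~Tracer() { std::cout << "destroy " << name << "\n"; }
};

int main()
{
    Tracer a("a");
    Tracer b("b");
    // Guaranteed: "destroy b" is printed before "destroy a", because local
    // objects are destroyed in the reverse order of their construction.
    return 0;
}

Build your programs on guarantees like that and they behave predictably; build them on whatever happened to work today and they don't.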

Consider for a moment my previous post; I listed 11 things which I thought were essential for any programming student to know. Looking back, I notice that the top five items are all specific to C++ (since that's the language I talk in). Is it a coincidence that the five most important things any programming student can know are specific to the language they are using? I think not.

Rather, I believe that to be a great programmer, one must have a deep understanding of the language at hand, and how that language allows you to express logical problems. One must approach a program like a mathematical problem - that is, one must know the rules of the language, and then use those rules to design a proof that conclusively solves the logical problem using the language at hand.

That last point is worth reiterating: Anyone can write a program that appears to solve a problem most of the time. However, for non-trivial problems it becomes much harder to guarantee that the program will solve the problem 100% of the time. As we get further into the "edge cases" of the application logic it becomes less likely such a naive implementation will work correctly. However, a program that has been built from the ground up using the guaranteed behavior of the language can still contain bugs, but it's much more likely that they are logic errors introduced by the programmer, rather than subtle bugs introduced through language misuse.

At this point I must point out that I do not believe that Dijkstra's idea is as good as he makes it sound. He addresses one point - that students should understand the rules of the language - but a "love of the language" is only half the picture. There are also many non-language-related skills that come into play. Consider debugging, for example; there are formal techniques that can be used to debug certain types of errors. Knowing these techniques, and knowing when to employ them, is a powerful aid in any language, and these are skills that should be taught, rather than learned in an ad hoc fashion.

So, my top-10 list of things every programming student should know can now be revised into this, much shorter form:
  1. Know your language. I don't care what your language is - if you want a job it had better be something that's being used, but you can be a great programmer even if all you know is an out-dated language. Not only do you need to know your language, you need to have a passion for knowing your language - you must actively want to extend your knowledge of the language and how it works, what guarantees it provides and which it doesn't. This knowledge will translate into programs that use the features of the language to create minimal, efficient, well structured and error-free programs.

  2. Be willing to learn new techniques. There are so many useful techniques and skills for a new programmer to have that I cannot list them all here, and course designers cannot possibly include them all in their course material.

That's it - two things. Much better than the self-absorbed tripe I rattled off a few weeks ago. To anyone who actually bothered to read that, I apologize profusely.

Ten Things to Teach Programming Students

While talking to a friend recently, we began discussing the role of graduates in the industry. My belief is that employers employ graduates and expect them to have the same skill level as their existing, trained employees (I have certainly seen this first-hand). Having been on the "other side" of the problem I appreciate that graduates are rarely fit for the tasks set for them without further training.

This got me thinking: If there were 10 things graduates should know before graduating, what should they be? What short list of skills can graduates teach themselves to become better than their competition (and getting that first job is just that: a competition)? That train of thought spawned the following list:

Ten things programming students should know before graduating:
  1. Inheritance & Composition. In the land of OO, you must know what inheritance does for you. In C++, this means that you must know what public, protected and (rarely used) private inheritance means. If class A is publicly inherited from class B, what does that tell you about the relationship between A and B? What about if the inheritance was protected, rather than public? In a similar vein, what does virtual inheritance do, and when would you want to use it? Sooner or later a graduate programmer will discover a complex case of multiple inheritance, and they need to be able to cope with it in a logical fashion. Knowing the answers to the above questions will help.
    Unfortunately, a lot of the time inheritance is over-used. Just because we have access to inheritance, doesn't mean we should use it all the time! Composition can be a useful tool to provide clean code where inheritance would muddy the waters. Composition is such a basic tool that many graduates don't even think of it as a tool. Experience will teach when to use composition and when to use inheritance. Graduates have to know that both can be solutions to the same problem.

  2. Memory Allocation. So many graduates do not understand the importance of cleaning up after yourself. Some do not fully appreciate the difference between creating objects on the stack and on the heap. Some know that but fail to understand how memory can be leaked (exceptions are a frequent cause of memory leaks in novice programmers). Every programmer should know the basic usage of new, new[], delete and delete[], and should know when and how to use them.

  3. Exceptions. Most programmers share a love/hate relationship with exceptions; you gotta know how to catch them, but at the same time you tend to avoid using them yourself. Why? Because exceptions should be... exceptional! There's a reasonably large amount of overhead associated with throwing and catching exceptions. Using exceptions as return values or as flow-control constructs are two examples of exception misuse. Exceptions should be thrown only when the user (or programmer) does something so bad that there's no way to easily fix or recover from it. Running out of resources (whether it be memory, disk space, resource IDs or whatever) is a common cause for exceptions to be thrown.

  4. Const correctness. Const correctness is so simple, yet so many programmers just don't bother with it. The big advantage of const-correctness is that it allows the compiler to check your code for you. By designating some methods or objects const you're telling the compiler "I don't want to change this object here". If you do accidentally change the object the compiler will warn you.

  5. Threading. Threading is hard. There's no simple way around this fact. Unfortunately, the future of PC hardware seems to be CPUs with many cores. Programs that do not make use of multiple threads have no way to make use of future hardware improvements. Even with libraries like Qt that make it ridiculously easy to create threads and pass data between them, you still need to understand what a thread is, and what you can and cannot do. A very common thing I see in new programmers is a tendency to use inadequate synchronization objects in threads. Repeat after me: "A volatile bool is not a synchronization object!" (there's a short sketch of what I mean just after this list).

  6. Source control. Every programmer on the planet should know how to use at least one version control system. I don't care if it's distributed or not, whether it uses exclusive locks or not, or even if it makes your tea for you. The concepts are the same. Very few professional programmers work alone. Graduates must be able to work in a team - that includes managing their code in a sensible fashion.

  7. Compiler vs Linker. Programmers need to understand that compiling an application is a two-step process. Compilation and linking are two discrete, and very different, steps. Compiler errors and linker errors mean very different things, and are resolved in very different ways. Programmers must know what each tool does for them, and how to resolve the most common errors.

  8. Know how to debug. When something goes wrong, you need to know how to fix it. Usually, finding the problem is 90% of the work, fixing it is 5% of the work, and testing it afterwards is another 10%. No, that's not a typo - it does add up to more than 100%, which is why there's a lot of untested code out there! Of course, if you were really good you wouldn't write any bugs in the first place!

  9. Binary Compatibility. This one is for all those programmers that write library code, or code that gets partially patched over time. As you probably already know, shared libraries contain a table of exported symbols. If you change that table so a symbol is no longer available (or its signature changes), code that uses that symbol will no longer work. There's a list of things you can and cannot do while maintaining binary compatibility, and it's very hard not to break those rules, even if you know what you're doing. I've blogged about this before, and linked to the KDE binary compatibility page on techbase - worth a read!
    The main method of maintaining binary compatibility is to program to an interface, rather than to an implementation. Once you start paying attention to binary compatibility, you'll quickly realise that it's a very bad idea to export your implementation from a shared library, for one simple reason: If you want to change your implementation you're stuck with the restrictions placed upon you by the need to maintain binary compatibility. If all you export is a pure interface and a means to create it (possibly via a factory method) then you can change the implementation to your heart's content without having to resort to pimpl pointers. There's a bare-bones sketch of this pattern just after the list.

  10. Read the right books. There are a few movers and shakers in the programming industry that it pays to keep an eye on. There are many books worth reading, but I'm going to recommend just two. The first is "Design Patterns: Elements of Reusable Object-Oriented Software", and the second is the "Effective C++" series. Neither is considered great bedtime reading, but both are packed from cover to cover with things that will help you out in everyday situations. Any programmer worth his or her salt will own a copy of at least one of these books, if not both. Of course, there are books on UI design and usability, threading, text searching, SQL and database maintenance, networking, hardware IO, optimisation, debugging... the list goes on.

  11. Networking. What's this? An 11th item? That's right: it's in here because it cannot be ignored in most programming tasks. It's getting harder and harder to avoid networking. Most graduates will have to write code that sends data over a network sooner or later, so they'll need to know the differences between UDP, TCP and IP, as well as what the basic network stack looks like (think "Please Do Not Touch Steve's Pet Alligator"), and what each layer does. Being familiar with tools like Wireshark helps here.
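Two quick sketches to back up points 5 and 9 above. First, what I mean by "a volatile bool is not a synchronization object" - the example below uses std::atomic from modern C++ purely as an illustration; use whichever primitive your threading library actually provides:

#include <atomic>
#include <thread>

std::atomic<bool> stop_requested(false); // safe to read and write from several threads
// volatile bool stop_requested = false; // NOT safe: volatile gives no atomicity or ordering guarantees

void worker()
{
    while (!stop_requested)
    {
        // do one unit of work...
    }
}

int main()
{
    std::thread t(worker);
    stop_requested = true; // guaranteed to become visible to the worker thread
    t.join();
    return 0;
}

Second, a bare-bones sketch of "export a pure interface plus a factory" from point 9 (the Logger name and everything inside it are invented for illustration):

// Visible to library clients: a pure interface and a factory function.
class Logger
{
public:
    virtual ~Logger() {}
    virtual void log(const char *message) = 0;
};

Logger *createLogger(); // the only way clients obtain a Logger

// Hidden inside the library: the implementation can be rewritten between
// releases without breaking binary compatibility, because clients only
// ever talk to the Logger interface.
class FileLogger : public Logger
{
public:
    virtual void log(const char *message) { (void)message; /* append to a log file... */ }
};

Logger *createLogger() { return new FileLogger(); }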

What's not in the list:

You may notice that I haven't included any specific technologies in this list. That's because I firmly believe that it really doesn't matter. Sure, there are some libraries that are better than others (I'd bet my life on a small set of libraries), but the programmer next to me has a different set. I care not one groat whether a graduate knows how to program in .NET, Qt, wxWidgets or anything else - as long as they're willing to learn something new (whatever I'm using on my project).

Which brings me nicely to the conclusion: The single quality I see in all the programmers I admire is a sense of curiosity; a restlessness and a sense of adventure. Our industry is constantly shifting. The best programmers are able to ride the changes and come out better for it.

Is this post horribly self-indulgent and boring? Probably, but it had to be done. Have I forgotten anything? Things you feel should be on the list that are missing? Remember that the point of the exercise is to keep a small list - I could list every programming skill and technology required under the sun, but that would not be very useful would it?


VMWare Server 2: Worse Than Failure


OK, so this is hardly breaking news, but I thought I'd share this mini-rant with you now any way.

At work we deal with multiple operating systems (WinXP, Vista, Linux), and multiple programming environments. A few weeks ago I decided to take the plunge and do all my development work inside virtual machines. The advantage of this approach is that it's very fast to switch from one environment to another (much faster than a whole machine reboot).

There are two problems with this approach:
  1. Performance. Programming is a pretty CPU-intensive task. Well, compiling the code is anyway. Compiling our code base takes around an hour on a physical machine with top-of-the-line specs (this makes compiling the Linux kernel seem fast). On a virtual machine, that time doubles. This is an inescapable truth about software virtualization: there will always be some overhead.
  2. Hardware. The applications I program need access to physical hardware, over a variety of interfaces, including RS232, RS422, USB, Parallel, TCP/IP, UDP/IP and a few others besides. Any software virtualization package must be able to forward all these hardware interfaces through to the virtual machine.
In my experience, the only software virtualization package that meets requirement 2 is VMWare. I know I've raved about Virtualbox before, and I was very tempted to use it again now, but it lacks the hardware support I need.

So, VMWare Server it is then. The next question then becomes: which version? I've used version 1.x before, and it fulfilled all my needs at the time. However, I noticed that version 2 is now available. I thought "In software, bigger numbers are better, right?"

After downloading the server, and registering for a free license key, I spent a busy 30 minutes clicking through the ubiquitous license agreements and installation options (does anyone ever read these things?). Everything was installed. I went to fire the application up, only to have it launch Mozilla Firefox. What's going on here? Then it hit me: a wave of fear and horror. What were they thinking?

The management interface for VMWare Server 2 is web based.

That's right... you want to use your virtual machine? You need to run a web browser. The interface is slow, the browser plugin that supports the interface is buggy... I could go on, but some other people have detailed the problems with the software far better than I ever could (Yes, I realise that link is talking about the beta release. Trust me, nothing much has changed).

Maybe I'm missing something, some gold nugget of understanding that would make Server 2 more usable for me, but right now I just don't get it. Why would you decide that the primary interface to a virtual machine should reside inside a browser? Browsers are notorious for implementing different standards, and for being generally slow, memory-hogging apps that occasionally crash. Who in their right mind would want to use a browser for their virtual machines?


Until then, I'll stick with VMWare server 1, thank you very much.

pointer quiz

One of my pet peeves with regards to C++ is how very few people understand how the delete operator handles NULL pointers. Let's see if you pass the test!

Question: What happens when you do this:


myType *ptr = NULL;
delete ptr;



Well? Your choices are:

a) Crash, bang, boom, your computer is now a very heavy paperweight.
b) Nothing - the delete line ignores the ptr to be deleted.


It turns out that deleting a null pointer is safe. Section 5.3.5/2 of the C++ standard states that:

"In either alternative, if the value of the operand of delete is the
null pointer the operation has no effect."


This has been a pet peeve of mine for a while now. I can't count the number of times I've seen programmers write something like this in class destructors or cleanup methods:


if (pMyPtr)
    delete pMyPtr;


This is stupid for several reasons. For a start, the if statement is redundant - if the pointer is NULL the delete will do nothing. Secondly, the programmer never sets the pointer to NULL after deleting it, which means that if this code were to be called again you'd be deleting the same pointer twice - and that is undefined behaviour.

In order to avoid these issues, and avoid angering me if I ever see your code, you should:

  1. Always initialize pointers to NULL if you're not going to set them to something else straight away (i.e., if the pointer isn't given a real value on every code path).

  2. Always set them to NULL after you delete them, especially if there's a chance that the delete can be called twice (a short example follows).
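Putting those two rules together, here's a minimal sketch of the pattern I'd rather see (the class and member names are made up for illustration):

#include <cstddef> // for NULL

class Widget
{
public:
    Widget() : pData(NULL) {} // rule 1: start at NULL until there's something real to point at

    ~Widget() { cleanup(); }

    void cleanup()
    {
        delete pData; // safe even when pData is NULL
        pData = NULL; // rule 2: reset, so a second call to cleanup() is harmless
    }

private:
    int *pData;
};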

Apathy, Apples and Understanding

You may notice that it's been a long time since my last post. Truth be told I've been lazy. It's tempting to say that I've been busy, but that's a sugar coating on apathy. For that I apologise! Hopefully I can get back into the habit of regular posting again soon.

Inspiration struck the other day when Jeff Atwood posted an interesting article on his blog. "Dealing With Bad Apples" talks about how single members of a programming team can be difficult, often working against the rest of the team.

Atwood quotes Robert Miesen:

I was part of a team writing an web-based job application and screening
system (a job kiosk the customer called it) and my team and our
customer signed on to implementing this job kiosk using Windows,
Apache, PHP5, and the ZendFramework -- everyone except one of our team
members, who I will refer to as "Joe". Joe kept advocating the use of
JavaScript throughout the technology deliberation phase, even though
the customer made it quite clear that he expected the vast majority of
the job kiosk to be implemented using a server-side technology and all
the validation should be done using server-side technology.

The fact that the customer signed off on this, however, did nothing
to deter Joe from advocating JavaScript -- abrasively. Every time our
project hit a bump in the road, Joe would go off on some tirade on how
much easier our lives would be if we were only writing this job kiosk
in JavaScript. Joe would constantly bicker about how we were all doing
this all wrong because we weren't doing it in JavaScript, not even
bother to learn the technologies we were actually using, and, whenever
fellow teammates would try and gently bring him back into the fold
(usually via email), Joe would just flame the poor guy. At the height
of Joe's pro-JavaScript bigotry, he would regularly belt off comments
like, "Well, if we had only done it in JavaScript," to such an extent
that the team would have been better off if he had just quit (or was
reassigned or fired.)

Jeff then goes on to suggest that perhaps the problem here is a "bad apple" - a team member that is doing more harm than good in a team. He's probably right, but I have a slightly different angle.

Perhaps the real problem here is poor team leadership/management? Without meeting "Joe" personally, I cannot make any accurate assessment of the situation, but it seems to me that perhaps Joe feels undervalued in his team? I say this because I recognize that behavior pattern - in myself.

In any team of programmers, each member will have different backgrounds, strengths and weaknesses. Joe obviously has experience using Javascript, and feels the need to share his expertise in that field. I'm not saying that this is a good thing, but perhaps the underlying problem is a lack of cohesion and understanding between team members?

So, further to Atwood's list of warning signs for detecting "bad apples", I have a list of actions team leaders could consider taking when dealing with a so-called "bad apple":
  • Listen to them. Most geeks (I use the term with all possible affection) are reasonable people. If a team member is repeating themselves, perhaps they feel that their point was never seriously considered in the first place? I can't count the number of times I've made contributions in meetings that were ignored, only to hear (usually six weeks later) "hey, we should have done X, what a pity it's too late now...". It always seems petty to point out that I suggested X from the start. Bear with me here - I'm certainly not suggesting that I'm always right - far from it; my point is that you ignore contributions from your team members at your own risk!

    Finally, I'm not suggesting that team leaders always act on suggestions from their team members, but listening is a good start.

  • Once you start listening to bad apples, you may find that some of your team members have strengths you didn't expect. Can you use these strengths in the future? This depends a lot on your business model and workload. From my own experience I can understand that programming code that doesn't interest you, week in, week out, can be incredibly draining. Perhaps bad apples can be encouraged to pull together with the team?
  • Finally - I don't know what the IT job market is like where Jeff lives, but you can't fire programmers and expect to get a replacement any time soon. Jeff writes:

    You should never be afraid to remove -- or even fire -- people who do not have the best interests of the team at heart. You can develop skill, but you can't develop a positive attitude.


    I say "bollocks" to that - it's incredibly expensive to fire and replace someone. Not only is there the cost of looking for, and hiring someone new, but there's the training overhead, and there's no guarantee that you can find someone with the appropriate skill set any time soon. From where I'm sitting it looks like we have to wait around 6-8 weeks between looking for, and hiring a new programmer. That's almost two months of productivity down the drain! Suggesting that you can't develop a positive attitude in your team-members is incredibly negative and close-minded. I'm certainly glad I'm not on a team with a leader like that!


That said, I do hope that the current skill shortage in this country fosters a greater appreciation of the worker. I suspect that most companies vastly underestimate the value of their skilled (and unskilled) workers.
Next time you have a problem with someone, consider the massive cost of replacing them, and - more importantly - consider the huge amounts of good work they've done, before you concentrate on the bad.