
Dear Journalists: Bits and Bytes

Dear Journalist type person,

There's something we must discuss. You see, you've been making a very basic mistake in many of your articles when it comes to writing about the Internet, and specifically Internet speeds. Let's take a look at a small quote:

...unless you have an internet connection of impossible speeds. (Mine is nominally 10MB, by the way, which in practice means maximum download speeds of 1.4 megabytes per second).
(source: Rock-Paper-Shotgun).

Can you spot the problem here? Internet speeds are measured in megabits per second. The symbol for 'bit' is a lower-case 'b', so an Internet connection running at 10 megabits per second could be written as "10 Mbps". I guess if you're feeling lazy you could leave off the "ps" and end up with "10 Mb" (although that's a really sloppy thing to do), but NEVER "10 MB" - that means something else entirely.

Modern PCs use bytes that contain 8 bits. The correct symbol for a byte is an upper-case 'B', so "10 MB" means "ten megabytes", not megabits, which is probably what you meant when you were describing the speed of your Internet connection.
Back to our Internet connection that runs at 10 Mbps. It's unfortunate that speeds are measured in bits, because bytes per second would be a much more useful measure - that's how we deal with data sizes. We know that a CD ISO image is likely to be around 700 MB, an MP3 file around 3 MB, and an image from a digital camera around 1 MB. To convert our 10 Mbps connection speed to megabytes per second, we divide by 8 and get 1.25 MBps. However, this is the theoretical maximum speed, and there's a lot of overhead in any network connection, so in practice you're unlikely to experience anything close to it.
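For the curious, the conversion is simple enough to sketch in a few lines of C++ (the figures are just the ones used above):

    #include <iostream>

    int main() {
        const double speed_mbps = 10.0;              // advertised speed: 10 Mbps
        const double speed_MBps = speed_mbps / 8.0;  // 8 bits per byte -> 1.25 MBps

        std::cout << speed_mbps << " Mbps = " << speed_MBps << " MBps\n";

        // At that (theoretical) rate, a 700 MB CD image takes at least:
        std::cout << 700.0 / speed_MBps << " seconds\n"; // 560 seconds
        return 0;
    }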

If your eyes glazed over, or perhaps you felt light-headed reading that, here are a few take-home points to make it easier for you:
  • Connection speeds are measured in megabits-per-second. The correct unit symbol for this is "Mbps".
  • Files are measured in Megabytes.
  • A byte has 8 bits, so to turn your connection speed into something useful, divide the number by 8 and make the unit symbol "MBps".
I would be honoured if you'd consider this small point next time you go to write online. Some of us are acutely sensitive to these matters, and you really don't want to upset the geeks of this world.

Kind Regards,

Attention all Programmers:

As a user of open source software, I like to try and give something back to the community whenever I can. As a somewhat proficient programmer I can do this more often than most, but one of the most effective ways of giving back for non-programmers is by filing bug reports.


Unfortunately, there are two main issues with this:
  1. Submitting a bug report is often incredibly painful. Most software bug trackers I have seen require an account, which means registering a new username & password (I can't wait for more non-essential services like bug trackers to start using OpenID), activating my account... all this can take 30 minutes or more. Submitting a bug report should be a fire-and-forget affair, taking 10 minutes tops: any longer and I can't afford the time.

    Many bug trackers ask users for information that is hard to obtain, or intimidating to non-programmers. How many users know their CPU architecture? Or distribution? Or even the software version they're using? One way around this is to have the bug-reporting done from within the application on the client machine itself, but still - bug trackers should be as friendly to users as possible. How about posting some simple instructions on how to obtain this information for non-technical users?

  2. Even after navigating the multiple hurdles involved in submitting a bug, you then have to deal with the programmers fielding the bug report. This is where it gets tricky. Many programmers view bug reports as a personal insult (perhaps subconsciously). Many programmers will close bugs they don't want to fix, giving excuses like "It's like that by design", or simply "Low priority, won't fix".

    Here's the thing though: The customer is (nearly) always right.

    If a user has taken the time to navigate your awful bug tracking software and submit a bug, it must be a big deal to them. If the matter at hand really is like that "by design", your design is probably screwy. If you won't fix it because it's low priority then you need to stop adding new features, and fix the ones you already have.
Open source software seems to suffer from these problems more than commercial software. I guess it's because we're not trying to extract money from our clients. Can you imagine a professional code shop telling a paying customer "I'm sorry, we're not going to fix that bug you reported, because we intended it to work like that"? Yeah, right.

So how do we fix this for the open source world?

There's no simple answer that I can fathom. It requires programmers to be a bit smarter and have a bit more empathy for the mere mortals who have to use their software. As a programmer, I include myself in this category.

That is all, thank you.

Design and Implementation

One of the key tenets of good software design is to separate the design of your product from its implementation.


In some industries, this is much harder to do. When designing a physical product, the structural strength and capabilities of the material being used must be taken into account. There's a reason most bridges have large columns of concrete and steel going down into the water below. From a design perspective, it'd be much better not to have these pillars, thereby disturbing the natural environment less and allowing shipping to pass more easily.

Photo by NJScott. An example of design being (partially) dictated by implementation.

Once you start looking for places where the implementation has "bubbled up" into the design, you start seeing them all over the place. For example, my analogue wristwatch has a date ticker. Most date tickers have 31 days, which means manual adjustment is required after a month with fewer than 31 days. I'm prepared to live with this. However, the date ticker on my watch is made up of two independent wheels - and it climbs to 39 before rolling over, which means manual intervention is required every month! What comes after day 39? Day 00, of course!

It's easy to understand why this would be the case - it's much simpler to build a counting mechanism from two wheels that wraps around at 39 than one that wraps at the appropriate dates. I have yet to see an analogue wristwatch that accounts for leap years. If you're curious, the sketch below shows why the two wheels behave this way.
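A minimal sketch of the mechanism, with the wheel sizes inferred from the 39-to-00 roll-over described above:

    #include <cstdio>

    int main() {
        // A tens wheel with four faces (0-3) and a units wheel with ten (0-9).
        // The mechanism advances the units wheel daily and carries into the
        // tens wheel, so the display cycles 00, 01, ..., 39, 00, ...
        int tens = 3, units = 9; // the display currently reads "39"
        for (int day = 0; day < 3; ++day) {
            units = (units + 1) % 10;
            if (units == 0)
                tens = (tens + 1) % 4; // four faces, so the carry wraps 39 to 00
            std::printf("%d%d\n", tens, units); // prints 00, 01, 02
        }
        return 0;
    }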

Software engineers have a much easier time; our materials are virtual - ideas, concepts and pixels are much easier to manipulate than concrete and steel. There are still limitations imposed on us, though - data can only be retrieved at a certain speed, and hardware often limits the possibilities open to us as programmers. However, these limitations can often be avoided or disguised, and naive implementations often lead to poor performance.

A classic example of this is Microsoft's Notepad application. Notepad loads the entire contents of a file into memory at once, which can take a very long time if the file is large. What's worse is that it prevents the user from using the application while this loading is happening - Notepad simply hangs. Opening a 30 MB text file takes roughly 10 seconds on this machine. This seems particularly silly when you consider that you can only ever see a single page of the data at a time - why load the whole file when such a small percentage of it is required at any one moment? I guess the programmers who wrote Notepad did not anticipate this use case, but the point remains valid: an overly simple implementation led to poor performance. A sketch of the alternative follows.
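To be clear, this is not how Notepad actually works internally, and the file name and page size are invented for the example - it's a minimal sketch of reading only the visible "page" of a file on demand:

    #include <fstream>
    #include <iostream>
    #include <string>

    // Fetch just one slice of the file; everything else stays on disk
    // until the user scrolls to it.
    std::string read_window(const std::string& path,
                            std::streamoff offset, std::size_t length) {
        std::ifstream file(path.c_str(), std::ios::binary);
        std::string buffer(length, '\0');
        if (file) {
            file.seekg(offset);
            file.read(&buffer[0], static_cast<std::streamsize>(length));
            buffer.resize(static_cast<std::size_t>(file.gcount()));
        } else {
            buffer.clear(); // missing file: return an empty window
        }
        return buffer;
    }

    int main() {
        // Display the first 4 KB of a (hypothetical) 30 MB file instantly,
        // instead of loading all 30 MB up front.
        std::cout << read_window("big.txt", 0, 4096) << '\n';
        return 0;
    }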

The unfortunate state of affairs is that the general population have been conditioned to accept bad software as the norm. There really is no excuse for software that is slow, crashes, or is unnecessarily hard to use. It's not until you use a truly incredible piece of software that you realise what can be achieved. So what needs to change? Two things:
  1. Developers need the tools to make incredible software. These tools are getting better all the time. My personal preference for the Qt frameworks just paid off with the beta release of Qt 4.7 and Qt Creator 2.0. I plan on writing about the new Qt Quick framework in the future: I anticipate it making a substantial difference to the way UI designers and developers collaborate on UI design and construction.

  2. Users need to be more discerning and vocal. As an application developer it can be very hard to know what your users think. If you don't get any feedback, are your users happy, or just silent? We need a better way for users to send feedback to developers; it needs to be low-effort, fast and efficient.

Live Forever!

Ray Kurzweil suggests that most, if not all, technical development and evolution happens on an exponential scale, rather than a linear one. What does this mean? It means, amongst other things, that by the year 2020 we will have access to technologies far beyond anything we've thought about to date.

Makes sense to me!

Sexism in IT?

Mark Shuttleworth recently copped some flak for allegedly sexist content in a talk. I wasn't there, and haven't seen the talk, so I can't really comment on the material itself, but a few things struck me about some of the online responses:
  1. Many of the people complaining weren't there - they watched the video footage online. Why would you do this? If you suspect that there's going to be content that offends you, don't watch it. If you do decide to watch it, I'm not sure you can complain too loudly when (surprise surprise) you are offended by it.

  2. Yes, IT is a male dominated field - for whatever reason (there's lots of research discussing why this is, but that's for you to find). That's not to say that sexism should be inherent, or even tolerated, but it is to be expected. Anyone shocked by this statement should try working in other male-dominated fields, such as construction or engineering. No, it's not right, but it's how it is.
I met Mark briefly at a Linux conference a number of years ago and he seemed to me to be a straight-talking, reasonably honest, good natured kind of guy. I'm sure he made an honest mistake, and regrets his choice of words. I would urge Mark to apologise, and urge everyone who complained to spend the same amount of energy protesting equally important matters such as software patents, or advocacy for open, sane standards.

Henry lives on

After complaining about the poor state of web browsers on the KDE platform, I have to report with mixed emotions that I've bitten the bullet and installed Firefox. I'm not a huge fan of Firefox - yes, it's open source, and seems to work fairly well, but it's also slow and a huge resource hog.

Who here remembers when Firefox first came out? It was supposed to be a stripped-down version of the Mozilla browser suite. The idea was that by removing the mail client, IRC chat application, and god knows how many other applications, we'd end up with a smaller, faster, lighter browser. To some extent it worked. However, I'm starting to wonder if they'd have been better off starting from scratch.

I challenge anyone reading this to use Chrome for Windows for a week and then switch back to Firefox for good - I guarantee you'll be pulling your hair out within a week; Firefox is slow! I always assumed that the reason my browsing experience was so poor was my slow Internet connection, but it turns out that a fair amount of the delay is the browser.

So I have Firefox - the GTK theme KDE installs looks awful, and several web sites look rubbish, but at least I can check my email...

Well, that's it for now. More to come soon (and this time I'll lose the Shakespearean titles).

WiiWare: Innovation and mistakes


I bought a Nintendo Wii earlier this year. I've never actually owned a console before, but have a reasonably strong loyalty to Nintendo. They appear to publish the best games (of course, that's entirely subjective). My game catalogue now includes the following titles:

You may have noticed that I'm not a big fan of the more lighthearted "party" style games out there - I prefer the more focused, single-player games. Once I had purchased those titles I began to look for something else, but quickly found that there's not a whole lot of choice out there right now. Most new Wii games tend to be in the "party" category.

Thankfully, Nintendo have launched WiiWare. WiiWare is a collection of titles created by third party developers. There are many different titles to choose from, and each title costs around £10. I ended up purchasing two titles:
These are both splendid games. However, once again, the pool of good games in the WiiWare collection is very limited. The main reason for this, as far as I can see, is that it's incredibly difficult to get your hands on the tools required to develop games for the Wii. For a start, Nintendo are only selling their development kit to well-established development houses (you need a registered business, proper offices, previously published titles etc.). Their application form states that:

The Application includes a NonDisclosure Agreement (NDA). Once the Application and NDA are submitted by you, we will email you a copy of the Application and NDA for your records. Please note that your submission of an Application and NDA does not imply that your company is approved, or will be approved, as an Authorized Developer for the platforms above.

...

If the Application is approved by Nintendo, we will notify you by email. At this point, your company will be considered an Authorized Developer for the platform(s) specified. If your company is approved for Wii, this also includes WiiWare. If approved the appropriate SDKs can be downloaded from Warioworld, and development kits can be purchased from Nintendo of America.

So first you need to sign an NDA. Then, if you are accepted, you need to purchase the development kit (priced at over $1000 USD). All this makes it incredibly hard for "Joe Programmer" to start cutting code for the Wii.

I really think Nintendo have missed a trick here; imagine the community that could form behind a free development kit. Think about the success of Apple's App Store for the iPhone, but with games instead. The Wii is a revolutionary platform, with a unique control interface: surely lowering the barriers to entry can only be a good thing?

There's another side to this as well: the Wii Homebrew team have already done a lot of work reverse engineering the Wii, to the point where there is already an SDK available for use. Is it usable? I haven't tried it myself yet (perhaps when I finish some of my current projects I'll play with it), but there are already a fair number of games available for the homebrew channel: I count more than 70 games listed, as well as a number of utilities, emulators and other bits and pieces.

The free development kit is based on the gcc PPC port, and comes bundled with everything you need to start development. GNU gcc has long been a well-established player on the compiler scene, so it's not like we're playing with untested technology here.

Given that many of the secrets of the Wii are out (or are being reverse engineered even as you read this), wouldn't it be prudent for Nintendo to officially welcome third party developers to the fold? More importantly, for other, future consoles, imagine a world where:

  • The original manufacturer (Nintendo, Microsoft, Sony or whoever) uses an open source toolchain from the beginning. I assume that Nintendo have spent a lot of time and money developing their toolchain, which seems a little wasteful to me when an open source solution already exists. Sure, it may need to be tailored for the Wii, but I'm sure there are plenty of people who would embrace these changes. An open source toolchain lowers development costs, and lowers the barrier to entry for third party developers.
  • Third party developers are encouraged to write applications themselves, and the cost of entry is kept as low as possible. The manufacturer supplies the hardware, points to a pre-packaged toolchain of open source applications, and provides a development SDK with decent documentation. If all you need to test your games is a copy of the console itself, that would be great. Better still, why not build an emulator that can run on a standard PC?
  • The manufacturer provides bug-fixes for the SDK when needed, and creates a community-oriented website for budding developers.
  • The manufacturer provides a free (or very cheap) means of distributing third party applications via the internet, and offers the option of DRM routines, should the original authors wish to make use of them.

I believe this setup could bring about a number of beneficial changes to the console gaming market:
  • An overall increase in the diversity and quality of available games.
  • A vibrant community of developers who help the manufacturer maintain the platform SDK and development toolchain by submitting bugs, feature requests and other suggestions.
  • Increased popularity for the platform (I'd buy any platform that offered all of the above).
Unfortunately, I can't see it happening any time soon. It seems to me that the big three console manufacturers are still entrenched in the "proprietary hardware, closed source" paradigm. Still, a guy can dream, right?

Teaching Programming mk. 2

I blogged before about what I think we should teach programming students, and almost immediately wished I hadn't. Sometimes I feel that my blog posts are somewhat pointless meanderings through the garbage that inhabits my sleep-deprived brain. At other times I feel that I have contributed something useful to the general public. The post in question is firmly in the former category - but what can I do? I won't start deleting articles as soon as they fall out of favor with me, so I'm hereby correcting my earlier mistakes (or at least attempting to). Illiad Frazer knows how I feel:

The whole point of the previous post was that I felt most graduates were under-prepared for work in industry. My main evidence for this is that it seems to take a long time - and, more importantly, a lot of interviews - before one strikes "candidate gold" when recruiting for a new programmer.
I will admit that this could be for many reasons: perhaps our expectations are too high, perhaps we are not paying enough to attract the kind of graduate we're looking for, or perhaps the industry we're in isn't desirable enough to attract the better candidates. The list goes on endlessly - and yet I cannot ignore the fact that most graduates I meet are not up to scratch.

So what prompted this revision of a past article? I happened to read E. W. Dijkstra's article entitled "On the cruelty of really teaching computing science". In it, he postulates that the methods used by most universities are fundamentally flawed when it comes to teaching computer science, and more specifically when teaching computer programming. I'd like to quote part of this article:

...we teach a simple, clean, imperative programming language, with a skip and a multiple assignment as basic statements, with a block structure for local variables, the semicolon as operator for statement composition, a nice alternative construct, a nice repetition and, if so desired, a procedure call. To this we add a minimum of data types, say booleans, integers, characters and strings. The essential thing is that, for whatever we introduce, the corresponding semantics is defined by the proof rules that go with it.

Right from the beginning, and all through the course, we stress that the programmer's task is not just to write down a program, but that his main task is to give a formal proof that the program he proposes meets the equally formal functional specification. While designing proofs and programs hand in hand, the student gets ample opportunity to perfect his manipulative agility with the predicate calculus.
This method of programming - approaching the programming language as a kind of "predicate calculus" - has its advantages. It demands that students pay attention to the features, rules, regulations and guarantees that the language provides. Whichever language is used (and to a certain extent it does not matter), the rules and regulations of that language are going to dictate the structure of the program. This is similar to the way the laws of mathematics dictate the form of any mathematical proof; ignore the laws of the language, and your program (or proof, if you will) no longer makes sense. In the domain of integer mathematics, 2 + 3 will always equal 5. In the domain of C++, local variables are destroyed in the reverse order in which they were created (insert whatever rule of the language you want there).
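That destruction-order guarantee is easy to observe directly; a minimal sketch:

    #include <iostream>

    struct Tracer {
        const char* name;
        explicit Tracer(const char* n) : name(n) {
            std::cout << "construct " << name << '\n';
        }
        ~Tracer() {
            std::cout << "destroy " << name << '\n';
        }
    };

    int main() {
        Tracer a("a");
        Tracer b("b");
        Tracer c("c");
        // The language guarantees destruction in reverse order of creation,
        // so on scope exit this prints: destroy c, destroy b, destroy a.
        return 0;
    }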

Consider for a moment my previous post; I listed 11 things which I thought were essential for any programming student to know. Looking back, I notice that the top five items are all specific to C++ (since that's the language I talk in). Is it a coincidence that the five most important things any programming student can know are specific to the language they are using? I think not.

Rather, I believe that to be a great programmer, one must have a deep understanding of the language at hand, and how that language allows you to express logical problems. One must approach a program like a mathematical problem - that is, one must know the rules of the language, and then use those rules to design a proof that conclusively solves the logical problem using the language at hand.

That last point is worth reiterating: anyone can write a program that appears to solve a problem most of the time. For non-trivial problems, however, it becomes much harder to guarantee that the program will solve the problem 100% of the time; as we get further into the "edge cases" of the application logic, it becomes less likely that a naive implementation will work correctly. A program that has been built from the ground up on the guaranteed behavior of the language can still contain bugs, but it's much more likely that they are logic errors introduced by the programmer, rather than subtle bugs introduced through language misuse.

At this point I must point out that I do not believe Dijkstra's idea is as good as he makes it sound. He addresses one point - that students should understand the rules of the language - but a "love of the language" is only half the picture. There are also many non-language-related skills that come into play. Consider debugging, for example; there are formal techniques that can be used to debug certain types of errors. Knowing these techniques, and knowing when to employ them, is a powerful aid in any language, and these are skills that should be taught, rather than picked up ad hoc.

So, my top-ten list of things every programming student should know can now be revised into this much shorter form:
  1. Know your language. I don't care what your language is - if you want a job it had better be something that's being used, but you can be a great programmer even if all you know is an outdated language. Not only do you need to know your language, you need to have a passion for knowing your language - you must actively want to extend your knowledge of the language and how it works, what guarantees it provides and which it doesn't. This knowledge translates directly into minimal, efficient, well-structured and error-free programs.

  2. Be willing to learn new techniques. There are so many useful techniques and skills for a new programmer to have that I cannot list them all here, and course designers cannot possibly include them all in their course material.

That's it - two things. Much better than the self-absorbed tripe I rattled off a few weeks ago. To anyone who actually bothered to read that, I apologize profusely.

Ten Things to Teach Programming Students

While talking to a friend recently, I got onto the subject of the role of graduates in the industry. My belief is that employers hire graduates and expect them to have the same skill level as their existing, trained employees (I have certainly seen this first-hand). Having been on the "other side" of the problem, I appreciate that graduates are rarely fit for the tasks set for them without further training.

This got me thinking: if there were 10 things graduates should know before graduating, what should they be? What short list of skills can graduates teach themselves to become better than their competition? (And getting that first job is just that: a competition.) That train of thought spawned the following list:

Ten things programming students should know before graduating:
  1. Inheritance & Composition. In the land of OO, you must know what inheritance does for you. In C++, this means that you must know what public, protected and (rarely used) private inheritance mean. If class A is publicly inherited from class B, what does that tell you about the relationship between A and B? What if the inheritance were protected, rather than public? In a similar vein, what does virtual inheritance do, and when would you want to use it? Sooner or later a graduate programmer will discover a complex case of multiple inheritance, and they need to be able to cope with it in a logical fashion. Knowing the answers to the above questions will help.
    Unfortunately, inheritance is often over-used. Just because we have access to inheritance doesn't mean we should use it all the time! Composition can be a useful tool to provide clean code where inheritance would muddy the waters. Composition is such a basic tool that many graduates don't even think of it as a tool. Experience will teach when to use composition and when to use inheritance. Graduates have to know that both can be solutions to the same problem (there's a short sketch of this after the list).

  2. Memory Allocation. So many graduates do not understand the importance of cleaning up after themselves. Some do not fully appreciate the difference between creating objects on the stack and on the heap. Some know the difference, but fail to understand how memory can be leaked (exceptions are a frequent cause of memory leaks in novice programmers; see the second sketch after the list). Every programmer should know the basic usage of new, new[], delete and delete[], and should know when and how to use them.

  3. Exceptions. Most programmers share a love/hate relationship with exceptions; you've got to know how to catch them, but at the same time you tend to avoid throwing them yourself. Why? Because exceptions should be... exceptional! There's a reasonably large amount of overhead associated with throwing and catching exceptions. Using exceptions as return values or as flow-control constructs are two examples of exception misuse. Exceptions should be thrown only when the user (or programmer) does something so bad that there's no way to easily fix or recover from it. Running out of resources (whether it be memory, disk space, resource IDs or whatever) is a common cause for exceptions to be thrown. (The second sketch after the list also shows how an exception can cause a memory leak.)

  4. Const correctness. Const correctness is so simple, yet so many programmers just don't bother with it. The big advantage of const correctness is that it allows the compiler to check your code for you. By declaring some methods or objects const you're telling the compiler "I don't want to change this object here". If you do accidentally change the object, the compiler will refuse to compile your code (the third sketch after the list shows this in action).

  5. Threading. Threading is hard. There's no simple way around this fact. Unfortunately, the future of PC hardware seems to be CPUs with many cores, and programs that do not make use of multiple threads have no way to take advantage of future hardware improvements. Even when using libraries like Qt, which make it ridiculously easy to create threads and pass data between them, you still need to understand what a thread is, and what you can and cannot do. A very common thing I see in new programmers is a tendency to use inadequate synchronization objects in threads. Repeat after me: "A volatile bool is not a synchronization object!" (The fourth sketch after the list shows one correct alternative.)

  6. Source control. Every programmer on the planet should know how to use at least one version control system. I don't care if it's distributed or not, whether it uses exclusive locks or not, or even if it makes your tea for you. The concepts are the same. Very few professional programmers work alone. Graduates must be able to work in a team - that includes managing their code in a sensible fashion.

  7. Compiler vs Linker. Programmers need to understand that building an application is a two-step process. Compilation and linking are two discrete, and very different, steps. Compiler errors and linker errors mean very different things, and are resolved in very different ways. Programmers must know what each tool does for them, and how to resolve the most common errors (the fifth sketch after the list illustrates the difference).

  8. Know how to debug. When something goes wrong, you need to know how to fix it. Usually, finding the problem is 90% of the work, fixing it is 5% of the work, and testing it afterwards is another 10%. No, that's not a typo - it does add up to more than 100%, which is why there's a lot of untested code out there! Of course, if you were really good you wouldn't write any bugs in the first place!

  9. Binary Compatibility. This one is for all those programmers who write library code, or code that gets partially patched over time. As you probably already know, shared libraries contain a table of exported symbols. If you change that table so that a symbol is no longer available (or its signature changes), code that uses that symbol will no longer work. There's a list of things you can and cannot do while maintaining binary compatibility, and it's very easy to break those rules, even if you know what you're doing. I've blogged about this before, and linked to the KDE binary compatibility page on techbase - worth a read!
    The main method of maintaining binary compatibility is to program to an interface, rather than to an implementation. Once you start paying attention to binary compatibility, you'll quickly realise that it's a very bad idea to export your implementation from a shared library, for one simple reason: if you want to change your implementation, you're stuck with the restrictions placed upon you by the need to maintain binary compatibility. If all you export is a pure interface and a means to create it (possibly via a factory method), then you can change the implementation to your heart's content without having to resort to pimpl pointers (the final sketch after the list shows the shape of this).

  10. Read the right books. There are a few movers and shakers in the programming industry that it pays to keep an eye on. There are many books worth reading, but I'm going to recommend just two: the first is "Design Patterns: Elements of Reusable Object-Oriented Software", and the second is the "Effective C++" series. Neither is great bedtime reading, but both are packed from cover to cover with things that will help you out in every-day situations. Any programmer worth his or her salt will own a copy of at least one of these books, if not both. Of course, there are also books on UI design and usability, threading, text searching, SQL and database maintenance, networking, hardware IO, optimisation, debugging... the list goes on.

  11. Networking. What's this? An 11th item? That's right: it's in here because it cannot be ignored in most programming tasks. It's getting harder and harder to avoid networking. Most graduates will have to write code that sends data over a network sooner or later, so they'll need to know the differences between UDP, TCP and IP, as well as what the basic network stack looks like (think "Please Do Not Touch Steve's Pet Alligator"), and what each layer does. Being familiar with tools like Wireshark helps here.
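A few of the items above deserve a concrete illustration, so here are some minimal sketches, with invented names throughout. First, the inheritance-versus-composition point from item 1: a Car is-a Vehicle, but has-an Engine.

    #include <iostream>

    class Engine {
    public:
        void start() { std::cout << "engine started\n"; }
    };

    class Vehicle {
    public:
        virtual ~Vehicle() {}
        virtual void describe() const { std::cout << "some vehicle\n"; }
    };

    // Public inheritance models "is-a": a Car can be used wherever a
    // Vehicle is expected.
    class Car : public Vehicle {
    public:
        virtual void describe() const { std::cout << "a car\n"; }
        void drive() { engine_.start(); }
    private:
        Engine engine_; // composition models "has-a": a car is not an engine
    };

    int main() {
        Car car;
        Vehicle& v = car; // fine, because the inheritance is public
        v.describe();     // prints "a car" via the virtual call
        car.drive();
        return 0;
    }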
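Second, for items 2 and 3: the new/delete pairing rules, and how an exception can silently leak memory.

    #include <stdexcept>

    void mayThrow() { throw std::runtime_error("boom"); }

    void leaky() {
        int* p = new int(42);
        mayThrow();  // if this throws, the delete below never runs: a leak
        delete p;
    }

    int main() {
        int* single = new int(7);
        int* many   = new int[10];
        delete   single; // 'new'   pairs with 'delete'
        delete[] many;   // 'new[]' pairs with 'delete[]'

        try {
            leaky();
        } catch (const std::runtime_error&) {
            // The int allocated in leaky() is now unreachable: leaked.
        }
        return 0;
    }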
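Third, the const correctness point from item 4 - the compiler enforcing a read-only promise (Account is an invented example class):

    class Account {
    public:
        explicit Account(double balance) : balance_(balance) {}

        // 'const' promises the compiler this method won't modify the object.
        double balance() const { return balance_; }

        void deposit(double amount) { balance_ += amount; }

    private:
        double balance_;
    };

    // A const reference documents, and enforces, read-only access.
    double peek(const Account& account) {
        // account.deposit(10.0); // compile error: deposit() is not const
        return account.balance(); // fine: balance() is const
    }

    int main() {
        Account a(100.0);
        return peek(a) > 0.0 ? 0 : 1;
    }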
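Fourth, the threading point from item 5. This sketch uses std::atomic from the C++11 standard library (which post-dates this post; Qt's threading classes offer equivalents) as one correct alternative to the volatile bool:

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    // A volatile bool gives no visibility or ordering guarantees between
    // threads; std::atomic<bool> does.
    std::atomic<bool> stop_requested(false);

    void worker() {
        while (!stop_requested.load()) {
            std::this_thread::sleep_for(std::chrono::milliseconds(10)); // "work"
        }
        std::cout << "worker saw the stop flag and exited cleanly\n";
    }

    int main() {
        std::thread t(worker);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        stop_requested.store(true); // safely published to the worker thread
        t.join();
        return 0;
    }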
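Fifth, the compiler-versus-linker distinction from item 7:

    // A minimal sketch: the compiler checks each translation unit in
    // isolation; the linker stitches the compiled objects together and
    // resolves symbols.

    int declaredButNeverDefined(); // the compiler is happy with this...

    int main() {
        // ...but uncommenting the call below produces a *linker* error
        // (e.g. "undefined reference to declaredButNeverDefined"), because
        // no object file defines the symbol:
        // return declaredButNeverDefined();

        // A *compiler* error, by contrast, is caught before linking ever
        // happens - e.g. a missing semicolon or an undeclared name.
        return 0;
    }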
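Finally, the shape of the binary compatibility advice from item 9: export only a pure interface and a factory, and keep the implementation private. Codec and createCodec are invented names; in a real shared library they would be the only exported symbols.

    #include <cstring>

    // --- public header shipped to clients: its shape never changes ---
    class Codec {
    public:
        virtual ~Codec() {}
        virtual int encode(const char* in, int len, char* out) = 0;
    };

    Codec* createCodec(); // the factory: the only way clients obtain a Codec

    // --- private implementation, hidden inside the library ---
    class CodecImpl : public Codec {
    public:
        virtual int encode(const char* in, int len, char* out) {
            std::memcpy(out, in, len); // trivial stand-in for real work
            return len;
        }
        // New members and methods can be added here in a later release
        // without breaking clients, because they only ever see Codec.
    };

    Codec* createCodec() { return new CodecImpl; }

    int main() {
        Codec* c = createCodec();
        char out[5];
        c->encode("hello", 5, out);
        delete c;
        return 0;
    }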

What's not in the list:

You may notice that I haven't included any specific technologies in this list. That's because I firmly believe it really doesn't matter. Sure, there are some libraries that are better than others (I'd bet my life on a small set of libraries), but the programmer next to me has a different set. I care not one groat whether a graduate knows how to program in .NET, Qt, wxWidgets or anything else - as long as they're willing to learn something new (whatever I'm using on my project).

Which brings me nicely to the conclusion: The single quality I see in all the programmers I admire is a sense of curiosity; a restlessness and a sense of adventure. Our industry is constantly shifting. The best programmers are able to ride the changes and come out better for it.

Is this post horribly self-indulgent and boring? Probably, but it had to be done. Have I forgotten anything? Are there things you feel should be on the list that are missing? Remember that the point of the exercise is to keep the list short - I could list every programming skill and technology under the sun, but that would not be very useful, would it?


Apathy, Apples and Understanding

You may notice that it's been a long time since my last post. Truth be told I've been lazy. It's tempting to say that I've been busy, but that's a sugar coating on apathy. For that I apologise! Hopefully I can get back into the habit of regular posting again soon.

Inspiration struck the other day when Jeff Atwood posted an interesting article on his blog. "Dealing With Bad Apples" talks about how single members of a programming team can be difficult, often working against the rest of the team.

Atwood quotes Robert Miesen:

I was part of a team writing an web-based job application and screening system (a job kiosk the customer called it) and my team and our customer signed on to implementing this job kiosk using Windows, Apache, PHP5, and the ZendFramework -- everyone except one of our team members, who I will refer to as "Joe". Joe kept advocating the use of JavaScript throughout the technology deliberation phase, even though the customer made it quite clear that he expected the vast majority of the job kiosk to be implemented using a server-side technology and all the validation should be done using server-side technology.

The fact that the customer signed off on this, however, did nothing to deter Joe from advocating JavaScript -- abrasively. Every time our project hit a bump in the road, Joe would go off on some tirade on how much easier our lives would be if we were only writing this job kiosk in JavaScript. Joe would constantly bicker about how we were all doing this all wrong because we weren't doing it in JavaScript, not even bother to learn the technologies we were actually using, and, whenever fellow teammates would try and gently bring him back into the fold (usually via email), Joe would just flame the poor guy. At the height of Joe's pro-JavaScript bigotry, he would regularly belt off comments like, "Well, if we had only done it in JavaScript," to such an extent that the team would have been better off if he had just quit (or was reassigned or fired.)

Jeff then goes on to suggest that perhaps the problem here is a "bad apple" - a team member that is doing more harm than good in a team. He's probably right, but I have a slightly different angle.

Perhaps the real problem here is poor team leadership / management? Without meeting "Joe" personally, I cannot make any accurate assessment of the situation, but it seems to me that perhaps Joe feels undervalued in his team? I say this because I recognize that behavior pattern - in myself.

In any team of programmers, each member will have different backgrounds, strengths and weaknesses. Joe obviously has experience using JavaScript, and feels the need to share his expertise in that field. I'm not saying that this is a good thing, but perhaps the underlying problem is a lack of cohesion and understanding between team members?

So, further to Atwood's list of warning signs for detecting "bad apples", I have a list of actions team leaders could consider taking when dealing with a so-called "bad apple":
  • Listen to them. Most geeks (I use the term with all possible affection) are reasonable people. If a team member is repeating themselves, perhaps they feel that their point was never seriously considered in the first place? I can't count the number of times I've made contributions in meetings that were ignored, only to hear (usually six weeks later) "hey, we should have done X, what a pity it's too late now...". It always seems petty to point out that I suggested X from the start. Bear with me here - I'm certainly not suggesting that I'm always right - far from it; my point is that you ignore contributions from your team members at your own risk!

    Finally, I'm not suggesting that team leaders always act on suggestions from their team members, but listening is a good start.

  • Once you start listening to bad apples, you may find that some of your team members have strengths you didn't expect. Can you use these strengths in the future? This depends a lot on your business model and workload. From my own experience I can understand that programming code that doesn't interest you, week in, week out, can be incredibly draining. Perhaps bad apples can be encouraged to pull together with the team?
  • Finally - I don't know what the IT job market is like where Jeff lives, but you can't fire programmers and expect to get a replacement any time soon. Jeff writes:

    You should never be afraid to remove -- or even fire -- people who do not have the best interests of the team at heart. You can develop skill, but you can't develop a positive attitude.


    I say "bollocks" to that - it's incredibly expensive to fire and replace someone. Not only is there the cost of looking for, and hiring someone new, but there's the training overhead, and there's no guarantee that you can find someone with the appropriate skill set any time soon. From where I'm sitting it looks like we have to wait around 6-8 weeks between looking for, and hiring a new programmer. That's almost two months of productivity down the drain! Suggesting that you can't develop a positive attitude in your team-members is incredibly negative and close-minded. I'm certainly glad I'm not on a team with a leader like that!


That said, I do hope that the current skill shortage in this country leads to a greater appreciation of the worker. I suspect that most companies vastly underestimate the value of their skilled (and unskilled) workers.
Next time you have a problem with someone, consider the massive cost of replacing them and - more importantly - consider the huge amount of good work they've done, before you concentrate on the bad.

On Idealism and creating your career

I enjoy working with open source software - I guess that comes as no surprise to those of you who know me. There are many reasons why I'm drawn to the open source model, including the following:

  • I like making stuff, and have the skills to do so - and it's getting easier, too. Application developers are realizing that making it easy for their users to extend and customize their products can only be a good thing. KDE 4.0 does this very well - you can write many simple KDE extensions in a language of your choice. Of course there are more ways to extend an application than by programming - but that's what I'm good at.
  • Most of the time, I enjoy the community. Like any community, there are always going to be your garden-variety blockheads, who seem to live for the sole purpose of making everyone else's lives difficult. The open nature of an open source community makes it harder to deal with these people, but I guess that's the price you pay for freedom. On the other hand, where else can you mingle with thousands of industry experts for free? It's like being at a huge tech conference, twenty-four hours a day.
  • Open != unprofitable. Many open source projects have gone on to be the basis for a successful business model. Sure, it's harder to make truckloads of cash by treating your customers poorly, but it is possible to grow a successful business by releasing open source software. The list of companies is huge - Trolltech springs to mind: they've just been bought by Nokia for some vast sum of money, so they must be doing something right.
  • Finally, I enjoy the fact that there are no boundaries. If you have an idea for a piece of software, you can make it. There are no closed protocols to get in the way, there are no commercial pressures forcing you to take shortcuts; you are free to write your software as you wish. If you feel that writing a product that consumes massive amounts of memory and randomly crashes is a good idea - go ahead. The measure of success will be how many users use your software.
    Another point here is that this openness and strong competition lead to some very careful planning of software features. Take KDE 4 for example. Some very intelligent people have sat down together and thought about the best new features they need to make KDE even better. Don't believe me? See Aaron Seigo's KDE 4 release keynote speech; it'll knock your socks off.
In some ways, that last point leads to a kind of Darwinian evolution amongst software packages. Good packages survive because they're popular, and thus more developers work on them. Bad packages languish and die. Sometimes packages are forked and sometimes packages are merged. This seething "package soup" has given us a very rich mix of packages to choose from. I can name a dozen web browsers, at least fifteen email clients, and twenty text editors off the top of my head. They're all slightly different, but they're all good software. Some might argue that we're spoiled for choice, but that's another blog post for another day!

Working as a volunteer on an open source project takes skill, commitment and determination. There are very few external motivating factors. If the project you're working on doesn't interest you, chances are you won't complete the work. The reward at the end of the tunnel is the gratitude of your fellow developers and users; not to mention some new skills you can take to your next project.

I've been involved in open source software for the last 10 years in one way or another. In that time I've picked up many skills that I can be proud of. Unfortunately, in my professional life these skills aren't recognized by my peers - and understandably so: I have no formal qualification in the subject area, they've never seen me apply my skills in a practical manner, and there may even be a lack of understanding that it's possible to gain new skills outside professional development or formal qualifications.

Which brings me to the point of this post. I want the open attitudes of the open source world to migrate to the commercial software development world. I want to have an "anything is possible" attitude towards our commercial products. We've seen time and time again how products that started out as garden-shed projects with this open attitude have grown into multi-billion-dollar products. Why can't we replicate that in a real business?

Obviously these Utopian ideals need to be tempered with the reality of running a business, but I refuse to believe that the two trains of thought are mutually exclusive. I encourage any of the readers of this blog to try to develop an "anything is possible" attitude towards product development. If you set your mind to achieve greatness, it'll happen. Why settle for anything less?


I realize that this may sound a little naive - but that's the whole point. Naivety isn't something to be ashamed of; that idealism is what makes us great. Our ability to see the world as it should be, and to strive towards that distant goal, is (as far as I'm concerned) fundamental to what makes us human.