
Visualising the Ubuntu Package Repository

Like most geeks, I like data. I also like making pretty pictures. This weekend, I found a way to make pretty pictures from some data I had lying around.

The Data: Ubuntu Packages

Ubuntu is made up of thousands of packages. Each package contains a control file that provides some meta-data about the package. For example, here is the control data for the autopilot tool:

Package: python-autopilot
Priority: optional
Section: universe/python
Installed-Size: 1827
Maintainer: Ubuntu Developers 
Original-Maintainer: Thomi Richards 
Architecture: all
Source: autopilot
Version: 1.2daily12.12.10-0ubuntu1
Depends: python (>= 2.7.1-0ubuntu2), python (<< 2.8), gir1.2-gconf-2.0, gir1.2-glib-2.0, gir1.2-gtk-2.0, gir1.2-ibus-1.0, python-compizconfig, python-dbus, python-junitxml, python-qt4, python-qt4-dbus, python-testscenarios, python-testtools, python-xdg, python-xlib, python-zeitgeist
Filename: pool/universe/a/autopilot/python-autopilot_1.2daily12.12.10-0ubuntu1_all.deb
Size: 578972
MD5sum: c36f6bbab8b5ee10053b63b41ad7189a
SHA1: 749cb0df1c94630f2b3f7a4a1cd50357e0bf0e4d
SHA256: 948eeee40ad025bfb84645f68012e6677bc4447784e4214a5512786aa023467c
Description-en: Utility to write and run integration tests easily
 The autopilot engine enables to ease the writing of python tests
 for your application manipulating your inputs like the mouse and
 keyboard. It also provides a lot of utilities linked to the X server
 and detecting applications.
Homepage: https://launchpad.net/autopilot
Description-md5: 1cea8e2d895c31846b8d3482f96a24d4
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu

As you can see, there's a lot of information here. The bit I'm interested in is the 'Depends' line. This lists all the packages that are required in order for this package to work correctly. When you install autopilot, your package manager will install all its dependencies, and the dependencies of all those packages, and so on. This is (in my opinion) the best feature of a modern Linux distribution, compared with Windows.
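As a quick aside (this snippet isn't part of the original tooling), the python-debian library will happily parse a control stanza like the one above and hand back the raw Depends string - it's the same Deb822 class the script below leans on, so treat this as a minimal sketch of the idea:

# Minimal sketch: fetch the control stanza for a package and read its
# Depends field with python-debian's Deb822 class.
from subprocess import check_output
from debian.deb822 import Deb822

stanza = check_output(['apt-cache', 'show', 'python-autopilot'])
control = Deb822(stanza)

print control['Package']
print control.get('Depends', '')  # the raw, comma-separated dependency string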

Packages and their dependent packages form a directed graph. My goal is to make pretty pictures of this graph to see if I can learn anything useful.

Process: The Python

First, I wanted to extract the data from the apt package manager and create a graph data structure I could fiddle with. Using the excellent graph-tool library, I came up with this horrible horrible piece of python code:

#!/usr/bin/env python

from re import match, split
from subprocess import check_output
from debian.deb822 import Deb822
from graph_tool.all import *

graph = Graph()
package_nodes = {}
package_info = {}


def get_package_list():
    """Return a list of packages, both installed and uninstalled."""
    output = check_output(['dpkg','-l','*'])
    packages = []
    for line in output.split('\n'):
        parts = line.split()
        if not parts or not match('[uirph][nicurhWt]', parts[0]):
            continue
        packages.append(parts[1])
    return packages


def get_package_info(pkg_name):
    """Get a dict-like object containing information for a specified package."""
    global package_info
    if pkg_name in package_info:
        return package_info.get(pkg_name)
    else:
        try:
            yaml_stream = check_output(['apt-cache','show',pkg_name])
        except:
            print "Unable to find info for package: '%s'" % pkg_name
            package_info[pkg_name] = {}
            return {}
        d = Deb822(yaml_stream)
        package_info[pkg_name] = d
        return d


def get_graph_node_for_package(pkg_name):
    """Given a package name, return the graph node for that package. If the graph
    node does not exist, it is created, and its meta-data filled.

    """
    global graph
    global package_nodes

    if pkg_name not in package_nodes:
        n = graph.add_vertex()
        package_nodes[pkg_name] = n
        # add node properties:
        pkg_info = get_package_info(pkg_name)
        graph.vertex_properties["package-name"][n] = pkg_name
        graph.vertex_properties["installed-size"][n] = int(pkg_info.get('Installed-Size', 0))
        return n
    else:
        return package_nodes.get(pkg_name)


def get_sanitised_depends_list(depends_string):
    """Given a Depends string, return a list of package names. Versions are
    stripped, and alternatives are all shown.

    """
    if depends_string == '':
        return []
    parts = split('[,\|]', depends_string)
    return [ match('(\S*)', p.strip()).groups()[0] for p in parts]

if __name__ == '__main__':
    # Create property lists we need:
    graph.vertex_properties["package-name"] = graph.new_vertex_property("string")
    graph.vertex_properties["installed-size"] = graph.new_vertex_property("int")

    # get list of packages:
    packages = get_package_list()

    # graph_nodes = graph.add_vertex(n=len(packages))
    n = 0
    for pkg_name in packages:
        node = get_graph_node_for_package(pkg_name)
        pkg_info = get_package_info(pkg_name)
        # purely virtual packages won't have a package info object:
        if pkg_info:
            depends = pkg_info.get('Depends', '')
            depends = get_sanitised_depends_list(depends)
            for dependency in depends:
                graph.add_edge(node, get_graph_node_for_package(dependency))
        n += 1
        if n % 10 == 0:
            print "%d / /%d" % (n, len(packages))

    graph.save('graph.gml')


Yes, I realise this is terrible code. However, I also wrote it in about ten minutes, and I'm not planning on using it for anything serious - this is an experiment!

Running this script gives me a 2.6MB .gml file (it also takes about half an hour - did I mention that the code is terrible?). I can then import this file into gephi, run a layout algorithm over it for the best part of an hour (during which time my laptop starts sounding a lot like a vacuum cleaner), and start making pretty pictures!
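(If you'd rather not wait for gephi, graph-tool itself can spit out a quick degree-coloured rendering of the saved graph. The snippet below is only a rough sketch - I haven't tuned the layout at all, and sfdp_layout will also take a good while on a graph this size.)

#!/usr/bin/env python
# Rough sketch only: reload the saved graph and draw it, colouring each node
# by its total (in + out) degree. Layout parameters are left at their defaults.
from graph_tool.all import load_graph, sfdp_layout, graph_draw

g = load_graph('graph.gml')
deg = g.degree_property_map('total')
pos = sfdp_layout(g)
graph_draw(g, pos=pos, vertex_fill_color=deg,
           output_size=(2000, 2000), output='packages.png')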

The Pretties:

Without further ado - here's the first rendering. This is the entire graph. The node colouring indicates the node degree (the number of edges connected to the node) - blue is low, red is high. Edges are coloured according to their target node.

These images are all rendered small enough to fit on the web page. Click on them to get the full image.
A few things are fairly interesting about this graph. First, there's a definite central node, surrounded by a community of other packages. This isn't that surprising - most things (everything?) rely on the standard C library eventually.
The graph has several other distinct communities as well. I've produced a number of images below that show the various communities, along with a short comment.

C++

These two large nodes are libgcc1 (top), and libstdc++ (bottom). As we'll see soon, the bottom-right corner of the graph is dominated by C++ projects.

Qt and KDE

This entire island of nodes is made up of the Qt and KDE libraries. The higher nodes are the Qt libraries (QtCore, QtGui, QtXml etc), and the nodes lower down are KDE libraries (kcalcore4, akonadi, kmime4 etc).

Python

The two large nodes here are 'python' and 'python2.7'. Interestingly, 'python3' is a much smaller community, just above the main python group.

System

Just below the python community there's a large, loosely-connected network of system tools. Notable members of this community include the Linux kernel packages, upstart, netbase, adduser, and many others.

Gnome


This is GNOME. At its core is 'libglib', and it expands out to libgtk, libgdk-pixbuf (along with many other libraries), and from there to various applications that use these libraries (gnome-settings-daemon for example).

Mono

At the very top of the graph, off on an island by themselves are the mono packages.

Others

The wonderful thing about this graph is that the neighbourhoods are fractal. I've outlined several of the large ones, but looking closer reveals small clusters of related packages. For example: multimedia packages:

This is Just the Beginning...

This is an amazing dataset, and really this is just the beginning. There's a number of things I want to look into, including:
  • Adding 'Recommends' and 'Suggests' links between packages - with a lower edge weight than Depends (there's a rough sketch of this just after the list).
  • Colour coding nodes according to which repository section the package can be found in.
  • Trying to categorise libraries vs applications - do applications end up clustered like libraries do?
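The first of these would be a fairly small change to the script above. Something like the fragment below would do it - the weights are arbitrary values I've picked purely for illustration:

# Sketch only. The edge weight property is created once, next to the
# existing vertex properties:
graph.edge_properties["weight"] = graph.new_edge_property("double")

# ...and inside the main package loop, in place of the Depends-only handling:
for field, weight in [('Depends', 1.0), ('Recommends', 0.5), ('Suggests', 0.25)]:
    for target in get_sanitised_depends_list(pkg_info.get(field, '')):
        e = graph.add_edge(node, get_graph_node_for_package(target))
        graph.edge_properties["weight"][e] = weight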
I'm open to suggestions however - what do you think I should look into next?

Sloecode: now with 100% more website

Sloecode now has a real website! It's a rather minimalist affair at present, but it's better than nothing!

Since I last blogged, the setup instructions for the Sloecode server have changed a lot. The new instructions are available on the new site.

bzr-sloecode Installer for Windows

There's now a basic installer for the bzr-sloecode Bazaar plugin for Windows. It currently does the following:
  1. Installs the plugin to the right place.
  2. Sets the SLOECODE_SERVER environment variable to whatever you want.
There are a number of aspects that require improvement however:
  1. It doesn't check that Bazaar is installed first.
  2. It doesn't ask if the user wants to install the plugin for everyone or just the current user. Currently the SLOECODE_SERVER setting is system-wide, which requires admin permissions to set.
  3. No validation on the Bazaar server address.
...So it's a work in progress! Patches are more than welcome. You can grab the installer from the bzr-sloecode launchpad page.

Console Hacking

Several news outlets are reporting that the PS3 has been compromised. What strikes me as odd is that most of the time, the people doing the hacking have no interest in piracy (at least, that's their claim). Instead, their motives seem to be towards allowing home-brew app creation. This is a noble goal, and one that will surely become more and more popular, as we start moving away from passive entertainment towards a more participatory model.


My question is this: Why do console manufacturers still struggle to prevent home-brew projects? The time spent trying to prevent these sorts of exploits must be incredible, and so far, none of them have succeeded.

Personally, I'd love a console that allowed me to run whatever code I want on it - imagine the uses! MythTV running on an XBox? Awesome!

Attention all Programmers:

As a user of open source software, I like to try and give something back to the community whenever I can. As a somewhat proficient programmer I can do this more often than most, but one of the most effective ways of giving back for non-programmers is by filing bug reports.


Unfortunately, there are two main issues with this:
  1. Submitting a bug report is often incredibly painful. Most software bug trackers I have seen require an account, which means registering a new username & password (I can't wait for more non-essential services like bug trackers to start using OpenID), activating my account... all this can take 30 minutes or more. Submitting a bug report should be a fire-and-forget affair, taking 10 minutes tops: any longer and I can't afford to spend my time.

    Many bug trackers ask users for information that is hard to obtain, or intimidating to non-programmers. How many users know their CPU architecture? Or distribution? Or even the software version they're using? One way around this is to have the bug-reporting done from within the application on the client machine itself, but still - bug trackers should be as friendly to users as possible. How about posting some simple instructions on how to obtain this information for non-technical users?

  2. Even after navigating the multiple hurdles involved in submitting a bug, you then have to deal with the programmers fielding the bug report. This is where it gets tricky. Many programmers view bug reports as a personal insult to them (perhaps subconsciously). Many programmers will triage bugs that they don't want to fix, giving excuses like "It's like that by design", or simply "Low priority, won't fix".

    Here's the thing though: The customer is (nearly) always right.

    If a user has taken the time to navigate your awful bug tracking software and submit a bug, it must be a big deal to them. If the matter at hand really is like that "by design", your design is probably screwy. If you won't fix it because it's low priority then you need to stop adding new features, and fix the ones you already have.
Open source software seems to suffer from these problems more than commercial software. I guess it's because we're not trying to extract money from our clients. Can you imagine a professional code shop telling a paying customer "I'm sorry, we're not going to fix that bug you reported, because we intended it to work like that"? Yeah, right.

So how do we fix this for the open source world?

There's no simple answer that I can fathom. It requires programmers to be a bit smarter and have a bit more empathy for the mere mortals who have to use their software. As a programmer, I include myself in this category.

That is all, thank you.

Project Documentation

Why is it that most open source project pages are so terrible at documenting their own project?


I'm not talking about API or technical documentation - I'm talking about telling new visitors to your site what the hell your code is about.

Project authors, here are some handy tips:
  1. On your project front page, right at the top, put a simple explanation of what your code does (or what you hope it will do someday). Remember that your audience may not have the same level of technical experience as you do. Examples (screenshots, code snippets) are a MUST. A picture speaks a thousand words and all that...

  2. Make sure you include the development status of your project. I can't count the number of times I've spent 30 minutes looking at a project only to realize that it's not nearly complete enough to be usable to me. There's no shame in saying "this library is working, but not production ready. It is missing features X, Y, Z"

  3. Inject some enthusiasm! How many boring, dull, dry project descriptions do I have to read through? Most sound like the authors aren't passionate about their product. Sell your project; inject some enthusiasm, and maybe your viewers will become more enthusiastic in the process!
Well, that's my rant for the day. Now I must go update my project documentation...

Sexism in IT?

Mark Shuttleworth recently copped some flack for allegedly sexist content in a talk. I wasn't there, and haven't seen the talk, so I can't really comment on the material itself, but a few things struck me about some of the online responses:
  1. Many of the people complaining weren't there - they watched the video footage online. Why would you do this? If you suspect that there's going to be content that offends you, don't watch it. If you do decide to watch it, I'm not sure you can complain too loudly when (surprise surprise) you are offended by it.

  2. Yes, IT is a male dominated field - for whatever reason (there's lots of research discussing why this is, but that's for you to find). That's not to say that sexism should be inherent, or even tolerated, but it is to be expected. Anyone shocked by this statement should try working in other male-dominated fields, such as construction or engineering. No, it's not right, but it's how it is.
I met Mark briefly at a Linux conference a number of years ago and he seemed to me to be a straight-talking, reasonably honest, good natured kind of guy. I'm sure he made an honest mistake, and regrets his choice of words. I would urge Mark to apologise, and urge everyone who complained to spend the same amount of energy protesting equally important matters such as software patents, or advocacy for open, sane standards.

Webkit / Konqueror issue raised again

Just thought I'd point out that I'm not the only one who would like Konqueror to use webkit rather than KHTML:

WebKit in Konqueror


...It's a pity the comments are 90% flame, and 10% content.


KDE 4.2: First Impressions

As you may already know, KDE4.2 was released a few days ago. I was interested in writing a plasmoid in python (more on that in a future post), which meant that I had to upgrade.

So what did I think of KDE4.2 after experiencing KDE4.1?

In a word: Brilliant!

KDE4.2 is a breath of fresh air after 4.1. Many of the crashes I experienced in KDE4.1 have been fixed. For example, in 4.1, every time I opened the "Display" configuration dialog my laptop would freeze, and needed to be hard rebooted in order to get it working again. Every time I launched a full-screen application (like a game) my laptop would freeze. I couldn't kill the X server with Ctrl+Alt+Backspace without my laptop freezing... there's a whole list.

KDE4.2 fixes almost all these bugs, and even throws in some nice performance tweaks at the same time. Things feel more responsive - menus are faster to open, and some applications seem faster to load (although I haven't done any actual timing tests - so this is all subjective). Finally, it looks very nice:


It looks good, and it's responsive on my three year old laptop!

A big congratulations to all the KDE developers that made this happen. I believe I've committed a whopping 10 lines of code to KDE4, so I'm glad that other people have more commitment than I do ;)

Henry lives on

After complaining about the poor state of the web browser on the KDE platform, I have to report with mixed emotions that I've bitten the bullet and installed Firefox. I'm not a huge fan of Firefox - yes, it's open source, and seems to work fairly well, but it's also slow and a huge resource hog.

Who here remembers when Firefox first came out? It was supposed to be a stripped down version of the Mozilla web browser. The idea was that by removing the mail client, IRC chat application, and god knows how many other applications we'd end up with a smaller, faster, lighter browser. To some extent it worked. However, I'm starting to wonder if they'd have been better off starting from scratch.

I challenge anyone reading this to use Chrome for Windows for a week and then switch back to Firefox for good - I guarantee you'll be pulling your hair out within a week; Firefox is slow! I always assumed that the reason my browsing experience was so poor was down to my slow Internet connection, but it turns out that a fair amount of the delay is the browser.

So I have Firefox - the GTK theme KDE installs looks awful, and several web sites look rubbish, but at least I can check my email...

Well, that's it for now. More to come soon (and this time I'll lose the shakespearean titles).

My Kingdom for a Browser!

This post is set to be one of the most painful entries I have ever written on this weblog. Not because the subject matter is particularly difficult, but because the technology has let me down.

The story starts with me upgrading my laptop to Kubuntu 8.10. It's been out for a while, and I'm a big fan of KDE 4, but I hadn't had a sufficiently quiet weekend in which to take the plunge. I was previously running Kubuntu 8.04, so I could have just downloaded the latest packages, but I wanted to start from scratch, for a couple of reasons:

  1. I wanted to remove all the rubbish that I had installed over the last six months. I frequently download and install applications, only to find that they're not quite what I want. I rarely uninstall them, so over time my hard disk fills with cruft.

  2. I wanted to wipe away all the stale config, especially as my window manager would be changing from KDE 3.x to KDE4. Besides, there's a certain pleasure to be derived from configuring a brand new KDE installation.

The install was a breeze, and for the first time ever all my laptop hardware was detected and configured correctly without any hacking on my part - even the weird web-cam, which doesn't even work in Windows XP. Life was good, until I went to browse the Internet.

KDE ships with Konqueror as its default web browser. As far as web browsers go it's fairly nice - it lacks the large "Add-Ons" repository that Firefox has, but many of the plugins I can't live without when using Firefox are included as standard in Konqueror.



Konqueror is more than just a web browser though - the integration between Konqueror and the rest of KDE is truly stunning (as an aside: this is why I prefer KDE over other desktops. Technologies like KParts and D-Bus are the future of desktop applications, and KDE is leading the charge in this area). As an example, if you want to search Google for something, but don't have your browser window open, what can you do? Easy! Just press Alt + F2 to open the "Run Command" dialog, and type "gg: " followed by your desired keywords. Hit enter and you'll launch Konqueror with the Google results right there waiting for you.


Konqueror also has extensive protocol support. For example, SCP and SFTP are supported by default. Try typing something like "fish://user@host" - Konqueror will ask for the user's password, and will then behave like a file browser for the remote machine.

These two examples hardly scratch the surface of what Konqueror can do. However - there are some very serious problems with it. Using GMail with Konqueror is torturous. First Google will give you the plain-old-HTML-only mode, since Konqueror isn't officially supported. Then, if you ask for the full version anyway you get all sorts of weirdness - and a completely unusable inbox. The solution seems to be to set the user agent to Safari 2.0, but even then my inbox seems to be incredibly slow.

Members of the KDE community have pointed out that GMail plays fast-and-loose with web standards, so it's understandable that Konqueror misses a few tricks. The Google engineers must have tested the javascript enhanced version of GMail with the most popular browsers, and left Konqueror out in the cold - and fair enough. However, the KDE developers are missing the point: no matter how good their browser is technically - no matter how standards compliant it is, it simply does not work for me - the user. I now have a browser that I cannot use to check my email (no, using the HTML-only version is not an option).

So what are the alternatives?

Before I upgraded Kubuntu I had Firefox installed. However, when I went to install it, I nearly had a heart attack. In order to install Firefox, I had to install 63 other packages - most of them gnome or GTK packages. The reason for this is simple: Firefox uses the GTK toolkit to provide a UI. I knew this already, but this early on in my new Kubuntu install I wasn't about to pollute my OS install with GTK packages.


What can I do? There are a few other options available to me:

There's been talk of a Firefox port to Qt. However, nothing usable has materialised yet, so that's off the cards.

There's the Arora browser - this is a Qt browser running the Webkit engine (which is included as standard in later Qt distributions). A quick install told me what I needed to know: also not really usable as my default browser.

Finally there's Google's offering: Chromium. However, this has not yet been ported to Linux.

So what's the underlying cause of my troubles? Without hacking the code directly, I have no idea. Perhaps this is part of the KHTML vs Webkit debacle - There's a good article outlining the whole issue here, but I'd like to quote a couple of paragraphs:

So, what's the situation? Well, it appears that KHTML will remain the web rendering engine for Konqueror going into KDE 4.0, and that it could be changed to qtWebkit as of KDE 4.1. That does not seem to be officially settled, so much as the most likely scenario. It appears that the KHTML team seems hesitant about the proposition, while many KDE developers and users alike have expressed a very receptive attitude toward seeing Konqueror user qtWebkit. And Rusin made clear to a reader that he believes the KHTML team should continue their work as long as they like.

The challenge is that Webkit, which comes from Apple, is widely tested, and is thus known to work well with a large number of websites. KHTML is not as widely tested, and, for example, GMail doesn't work well with Konqueror. Many Konqueror fans have expressed regret at having to keep Firefox around just for sites like GMail, that don't recognize KHTML. Using Webkit would solve these problems, enabling many users to stick to one browser.

In other words: "The developers are dragging their feet to implement a fix that would arguably make Konqueror a better browser". Of course, the developers involved are free to do as they please with their code, but they're dragging down the rest of the KDE platform - I now have to have multiple browsers installed to do the most basic of day-to-day tasks.

While the situation is frustrating in itself, the unfortunate fact is that similar things are happening all over the open source scene. Frequently developers get too caught up in making sure that their code is "right" (that may mean designed correctly, stable, cool, standards compliant, well integrated, or anything else the developer feels is important), and not enough time is spent making sure that the product is usable. I suppose this is one of the drawbacks to a development methodology where there is no external pressure to develop your product.

Usability is king, and trumps all other concerns in a product. If it's not usable, it's no good.

WiiWare: Innovation and mistakes


I bought a Nintendo Wii earlier this year. I've never actually owned a console before, but have a reasonably strong loyalty to Nintendo. They appear to publish the best games (of course, that's entirely subjective). My game catalogue now includes the following titles:

You may have noticed that I'm not a big fan of the more lighthearted "party" style games out there - I prefer the more focused, single player games. Once I had purchased those titles I began to look for something else, but quickly found that there's not a whole lot of choice out there right now. Most new Wii games tend to be in the "party" category.

Thankfully, Nintendo have launched WiiWare. WiiWare is a collection of titles created by third party developers. There are many different titles to choose from, and each title costs around £10. I ended up purchasing two titles:
These are both splendid games. However, once again, the pool of good games in the WiiWare collection is very limited - the main reason for this, as far as I can see, is that it's incredibly difficult to get your hands on the tools required to develop games for the Wii. For a start, Nintendo are only selling their development kit to well-established development houses (you need a registered business, proper offices, previously published titles etc.). Their application form states that:

The Application includes a NonDisclosure Agreement (NDA). Once the Application and NDA are
submitted by you, we will email you a copy of the Application and NDA for your records. Please
note that your submission of an Application and NDA does not imply that your company is approved,
or will be approved, as an Authorized Developer for the platforms above.

...
If the Application is approved by Nintendo, we will notify you by email. At this point, your
company will be considered an Authorized Developer for the platform(s) specified. If your company
is approved for Wii, this also includes WiiWare. If approved the appropriate SDKs can be downloaded
from Warioworld, and development kits can be purchased from Nintendo of America.

So first you need to sign an NDA. Then, if you are accepted, you need to purchase the development kit (priced at over $1000 USD). All this makes it incredibly hard for "joe programmer" to start cutting code for the Wii.

I really think Nintendo have missed a trick here; imagine the community that could form behind a free development kit. Think about the success of the Apple AppStore for the iPhone, but with games instead. The Wii is a revolutionary platform, with a unique control interface: surely lowering the barriers to entry can only be a good thing?

There's another side to this as well: The Wii Homebrew team have already done a lot of work reverse engineering the Wii, to the point where there is already an SDK available for use. Is it usable? I haven't tried it myself yet (perhaps when I finish some of my current projects I'll play with it), but there are already a fair number of games available for the homebrew channel: I count more than 70 games listed, as well as a number of utilities, emulators and other bits and pieces.

The free development kit is based on the gcc PPC port, and comes bundled with everything you need to start development. GNU gcc is a well-established player on the compiler scene, so it's not like we're playing with untested technology here.

Given that many of the secrets of the Wii are out (or are being reverse engineered even as you read this), wouldn't it be prudent for Nintendo to officially welcome third party developers to the fold? More importantly, for other, future consoles, imagine a world where:

  • The original manufacturer (Nintendo, Microsoft, Sony or whoever) use an open source toolchain from the beginning. I assume that Nintendo have spent a lot of time and money developing their toolchain, which seems a little wasteful to me, when an open source solution already exists. Sure, it may need to be tailored for the Wii, but I'm sure there are plenty of people who would embrace these changes. An open source toolchain lowers development costs, and lowers the barrier to entry for third party developers.
  • Third party developers are encouraged to write applications themselves, and the cost to entry is kept as low as possible. The manufacturer supplies the hardware, points to a pre-packaged toolchain of open source applications, and provides a development SDK with decent documentation. If all you need to test your games is a copy of the console itself, that would be great. However, why not build an emulator that can run on a standard PC?
  • The manufacturer provides bug-fixes for the SDK when needed, and creates a community-oriented website for budding developers.
  • The manufacturer provides a free (or very cheap) means of distributing third party applications via the internet, and offers the option of DRM routines, should the initial authors wish to make use of them.

I believe this setup could bring about a number of beneficial changes to the console gaming market:
  • An overall increase in the diversity and quality of available games.
  • A vibrant community of developers who help the manufacturer maintain the platform SDK and development toolchain by submitting bugs, feature requests and other suggestions.
  • Increased popularity for the platform (I'd buy any platform that offered all of the above).
Unfortunately, I can't see it happening any time soon. It seems to me that the big three console manufacturers are still engrossed in the "proprietary hardware, closed source" paradigm. Still, a guy can dream, right?

Releases Galore!

It would be negligent of me if I did not point out that today several important software releases were made:

The first is the project formerly known as project greenhouse - now known as Qt Creator. I've blogged about this before. You can now download a technical preview. I'm very excited about this - having played with it in its beta state I can't wait to use it with some of my active projects. Unfortunately I won't be doing any coding this weekend, as I'm off to Switzerland for a long weekend. I'll have to try it out when I get back.

The second big release event today is the Ubuntu family of distributions. That's right, version 8.10 is out now. I'm a Kubuntu man myself, so I'll be trying this out after my long weekend as well.

That's all from me - I have several projects in the wings to blog about in the coming weeks, but for now I need to catch some sleep - my taxi arrives at 5:00 AM tomorrow.

Cheers,

Buffalo Routers Rock!

Waaaaay back in January I mentioned that I might build my own wireless router. Well, like most of my projects I got half way then got distracted. The other day I got fed up with the lack of wireless (I needed to get my Wii online, apart from anything else), so I splashed out and bought a Buffalo Air-Station (£45).

This baby comes pre-loaded with the open source dd-wrt firmware. It was dead-easy to set up, and boy, what a difference the open firmware makes! This is easily the best router firmware I have ever seen - it boasts more features than you can shake a stick at. Better yet - it's aimed at people who know what they're doing, and doesn't try to hide its functionality behind restrictive setup "wizards".

All in all, I'm a very happy man with my new wireless network ;)

piwup: A Picasaweb Image Uploader for Linux

One of my pet peeves has always been that unless you want to run Google's Picasa application under Linux, the only way to upload photos to your picasaweb account is via a clunky web interface that only allows you to select 5 images at a time. When I come back from a trip I have hundreds of photos, so this gets tiresome very quickly.

There is a kipi plugin that is supposed to be able to do this, but it has not yet hit the Linux distribution I am using, and I'm not about to start compiling plugins from source. Besides, half the fun is in making the application!

This is definitely not a finished application! I got it to the point where I could upload my images in a batch, but it needs more work before it's useful to anyone else. Here's a few sample screen shots:

Selecting images to upload.

Uploading the first image.

The application still has a long way to go. Just some of the things yet to complete are:
  • Remove hard coded items from the code (account details, service host, album name), and make these configurable via a nice configuration dialog. Make sure password is stored in a secure form - via the KDE wallet perhaps.
  • Make the GUI half-decent. Originally I just wanted something to work - I need to go back and do it again with a proper menu and image thumbnail support.
  • Bug fixes too numerous to mention here... this is some rough, cheap and nasty code!

Perhaps, once I get all this done I will attempt to get it officially released into some distros. I think it's a useful application, and the kipi plugin version doesn't seem to be moving along much. Yes, I realize that I'd be better off spending my time improving the kipi plugin, but to be honest I can't be bothered right now - this was a learning experiment for me as much as it was about making an application that solved one of my problems.

The entire application is written in C++ and Qt4. The more I use Qt the more I like it. This application was simplicity itself to make, and I look forward to continued development.

Yes, we DO care!

Jeff Atwood writes a brilliant blog over at coding horror. My only gripe is that he's a .NET kinda guy - which is fine, most of the time. However, his recent post, entitled "Why Doesn't Anyone Give a Crap About Freedom Zero?" seems a little short sighted. For those of you who have never heard of Freedom 0, it's one of the four freedoms that the Free Software Foundation have set out to protect. The FSF define it as:

The freedom to run the program, for any purpose.


I don't have a problem with Jeff's love of .NET and Microsoft in general, but when you post a blog with a title like that, it sidelines and marginalises everyone who does care.

Jeff - there are many of us who do care about Freedom 0 - and the other three. Take some time out of your busy schedule to investigate Linux and the BSDs. You'll find a community full of bright, innovative developers who all want to make a better computing platform.

This brings me to my second point. Okay, so this is probably really obvious to the rest of you, but I had a sudden realization the other day. It went something like this:

The reason why there are so many technically brilliant innovations found only in open source platforms (Yes, I genuinely believe that to be true) is that the pressures of software development aren't there. Commercial software vendors are always struggling to maintain a balance between allocating time to develop new features, maintain existing features, test / QA the product, and get it out the door. I don't believe that there's a single software product that was 100% complete when it was shipped. That's the nature of commercial software development. Too often the "right" way of doing things is sacrificed for a cheaper, faster way of doing things.

On the one hand this means that deadlines and schedules are met (everyone gets paid), but on the other hand the software quality is lacking. In an open source project however, there are no commercial pressures at all. Developers are left to their own devices to code what they want. Package maintainers are allowed to choose their own schedule - and who better to choose that balance between release frequency and code quality than the programmers who know the product best?


The more I look into the KDE project the more I realize that I'm right. There are some truly brilliant bits of code in there; It's a pity that they're never appreciated by the general public.