Writing UI Tests with Autopilot

At the UDS-r sprint in Copenhagen, I'll be running a session for anyone interested in Autopilot. There will be a demo, and Autopilot "experts" on hand to answer any questions you might have.

Autopilot is a tool for automating UI testing. We've been using it with great success over the last two Ubuntu cycles, and we're starting to support testing traditional Qt4, Qt5, QML, and GTK applications.
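
To give a flavour of what this looks like in practice, here's a minimal sketch of an Autopilot test for a hypothetical GTK application - the application name ('my-app'), the 'GtkWindow' lookup, and the window title are all made up for the example:

from autopilot.matchers import Eventually
from autopilot.testcase import AutopilotTestCase
from testtools.matchers import Equals


class MyAppTests(AutopilotTestCase):

    def test_main_window_title(self):
        # Launch the application under test and get an introspection proxy for it.
        app = self.launch_test_application('my-app')
        # Look up the main window in the application's introspection tree.
        window = app.select_single('GtkWindow')
        # Assert against the window's properties, waiting for the UI to settle.
        self.assertThat(window.title, Eventually(Equals('My App')))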

I hope to see you all there!

So you missed PyCon US...

If you're anything like me, you've watched another PyCon US come and go. Living in New Zealand makes attending overseas conferences an expensive proposition. So you've missed the conference. You've watched all the talks on pyvideo.org, but it's still not enough. You'd love to attend a PyCon in person, perhaps one in an exotic location (what a great opportunity for a family vacation). Of course, I have a solution: Come to Kiwi PyCon!

In contrast to the US PyCon, Kiwi PyCon is a smaller, more intimate affair, with a few hundred delegates, two streams, and plenty of chances to meet other python hackers from the Australia/New Zealand/Pacific region. Places are limited, and registrations are open, so here's what you need to do to beat the post-PyCon blues:
  1. Go to nz.pycon.org, and register for the conference. While you're there, check out the sponsorship options!
  2. If you're feeling brave, submit a talk proposal!
  3. Book accommodation and flights (we will soon have accommodation options listed on the website).
  4. Count down the days to the conference!
It's that simple. Do it now!

Experimenting with C++ std::make_shared

C++11 is upon us, and one of the more utilitarian changes in the new standard is the inclusion of the new smart pointer types: unique_ptr, shared_ptr and weak_ptr. An interesting related feature is std::make_shared - a function that returns a std::shared_ptr wrapping a type you specify. The documentation promises efficiency gains from using this function:

This function allocates memory for the T object and for the shared_ptr's control block with a single memory allocation. In contrast, the declaration std::shared_ptr<T> p(new T(Args...)) performs two memory allocations, which may incur unnecessary overhead.

I was curious: how much faster is make_shared than using new yourself? Like any good scientist, I decided to verify the claim.

I wrote a small program and tested it. Here's my code:

#include <memory>
#include <string>

class Foo
{
public:
    typedef std::shared_ptr<Foo> Ptr;

    Foo()
    : a(42)
    , b(false)
    , c(12.234)
    , d("FooBarBaz")
    {}

private:
    int a;
    bool b;
    float c;
    std::string d;
};

const int loop_count = 100000000;
int main(int argc, char** argv)
{
    for (int i = 0; i < loop_count; i++)
    {
#ifdef USE_MAKE_SHARED
        // One combined allocation for the Foo object and the control block.
        Foo::Ptr p = std::make_shared<Foo>();
#else
        // Two allocations: one for the Foo object, one for the control block.
        Foo::Ptr p = Foo::Ptr(new Foo);
#endif
    }
    return 0;
}
This is pretty simple: we either allocate 100 million pointers using new manually, or we use std::make_shared. I wanted my 'Foo' class to be simple enough to fit into a couple of lines, but to contain a number of different types, including at least one complex type. I built both variants of this small application with g++, and used the 'time' utility to measure the execution time. I realise this is a pretty crude measurement, but the results are interesting nonetheless:

My initial results were confusing - it appeared as if std::make_shared was slower than using new. Then I realised that I had not enabled any optimisations. Sure enough, adding '-O2' to the g++ command line gave me some more sensible results:

OK, so make_shared only seems to be faster with optimisations turned on, which is interesting in itself. At this point I started wondering how other compilers would fare, so I decided to pick on clang and run exactly the same tests once more:

Once again we see a very similar pattern between the optimised and non-optimised code. We can also see that clang is slightly slower than g++ (although it was significantly faster at compiling). For those of you who want the numbers:
Now I have evidence to convince people to use make_shared in favour of new!

Kiwi PyCon Sponsorship Drive

Kiwi PyCon is approaching! You probably think that's a good thing, but if you're one of the poor volunteer organisers, that's a scary thought. Why? We have bills to pay, and very little income. That means it's time to shill for some cash! Below is an excerpt from our public sponsorship announcement email. If your company is willing to sponsor a good cause, please get in touch with me.

Kiwi PyCon is organised by the New Zealand Python User group - a not-for-profit organisation. We don’t make any profit from the conference, and the organisers donate their free time to make the event a success. We rely entirely on companies’ sponsorship to pay the bills.

Sponsorship has several advantages for you:

  • It’s an opportunity to get brand exposure in front of the foremost Python experts from New Zealand and around the world.
  • It's a fantastic networking opportunity if you're looking to employ engineers now or in the future.
  • Align yourself with market leaders and past sponsors such as GitHub, Weta Digital, Catalyst IT, and Mozilla. Become known as a Python promoter and industry leader.
  • Gold sponsors receive five complimentary tickets to the conference and their logo on the conference shirt and all print materials.

If you’d like to sponsor the conference, a document describing sponsorship opportunities is available here.

To get in touch, email kiwipycon-sponsorship@nzpug.org.

Python GObject Introspection oddness

I recently ported indicator-jenkins to Gtk3 using the python GObject Introspection Repository (gir) bindings. Ted Gould did most of the work; I just cleaned some bits up and made sure everything worked. One issue that puzzled me for a while is that the GObject library changed the way its "notify" signal works between GObject 2 and GObject 3. I've not seen any documentation of this change, so I'll describe it here.

For this example, let's make a very simple class that has a single property:

import gobject

class MyClass(gobject.GObject):
    prop = gobject.property(type=int)

...and a very simple callback function that we want to call whenever the value of 'prop' changes:

def cb(sender, prop):
    print "property '%s' changed on %r." % (prop.name, sender)

Finally, with GObject 2 we can create an instance of 'MyClass' and connect to the 'notify' signal like this:

inst = MyClass()
inst.connect("notify", cb)
inst.prop = 42

When we run this simple program we get the following output:
property 'prop' changed on .
... which is what we expected. However, if we port this code to GObject 3, it should look like this:

from gi.repository import GObject

class MyClass(GObject.GObject):
    prop = GObject.property(type=int)


def cb(sender, prop):
    print "property '%s' changed on %r." % (prop.name, sender)


inst = MyClass()
inst.connect("notify", cb)
inst.prop = 42

However, running this gives an error:

/usr/lib/python2.7/dist-packages/gi/_gobject/propertyhelper.py:171: Warning: g_value_get_object: assertion `G_VALUE_HOLDS_OBJECT (value)' failed
  instance.set_property(self.name, value)
Traceback (most recent call last):
  File "gobject3.py", line 8, in cb
    print "property '%s' changed on %r." % (prop.name, sender)
AttributeError: 'NoneType' object has no attribute 'name'

The 'prop' parameter in the callback is set to None.

There is a solution, however: connecting the callback to a more specific notification signal works as expected:

from gi.repository import GObject

class MyClass(GObject.GObject):
    prop = GObject.property(type=int)


def cb(sender, prop):
    print "property '%s' changed on %r." % (prop.name, sender)


inst = MyClass()
inst.connect("notify::prop", cb)
inst.prop = 42

It took me a while to figure this out - hopefully I've saved someone else that work.

Indicator-jenkins is now even more awesome

My latest hobby project, indicator-jenkins, is now even better (I wrote about this previously, in case you missed it).

New features since my last blog post:

  • The code is now much nicer, and will be much easier to extend. My long-term goal is to support other types of CI servers (I'll probably have to change the project name I guess).
  • Desktop notifications are generated for each new build of a monitored project. The notification includes the status and health report of the last build (the sketch after this list gives a rough idea of how such a notification can be raised).
  • LOTS of bug-fixes, especially around the settings UI. I'm still not happy with the settings dialog UI, but it's at least usable now.
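
As a rough illustration only - this is not the actual indicator-jenkins code, and the job name, status, health text, and icon name are placeholders - a notification like the one described above could be raised through the GObject Introspection Notify bindings:

from gi.repository import Notify

# Illustrative sketch: placeholder job name, status, health text and icon.
Notify.init("indicator-jenkins")

def notify_build(job_name, status, health):
    # Summarise the latest build of a job in a desktop notification.
    summary = "%s: build %s" % (job_name, status)
    body = "Health: %s" % health
    notification = Notify.Notification.new(summary, body, "dialog-information")
    notification.show()

notify_build("my-project", "SUCCESS", "Build stability: no recent builds failed")
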
To get it installed, do the following:

$ sudo add-apt-repository ppa:thomir/indicator-jenkins
$ sudo apt-get update
$ sudo apt-get install indicator-jenkins

Then launch indicator-jenkins from the unity dash or command line (Note: Launching it from the command line generates a LOT of debug output - I will turn this off in future releases).

How to Compile Unity from Source

These instructions will help you build unity from source. However, there are a few things to consider:

  • I recommend that you never copy anything you've built locally outside your home directory. Doing so is asking for trouble, especially as we're building the entire desktop shell. If you manage to ruin your system-wide desktop shell you'll be a very sad programmer!
  • I'm assuming that you're running the precise Ubuntu release (still in alpha at the time of writing, but very usable).
  • I'm also assuming that you want to build unity from trunk (that is, lp:unity).
Without further ado, let's get to it:

Getting the source code

If you don't already have Bazaar installed, install it now:

sudo apt-get install bzr

You may want to make yourself a folder for the unity code. I tend to do something like this:

mkdir -p ~/code/unity
cd ~/code/unity

Let's grab the code from launchpad:

bzr branch lp:unity trunk

This may take a while. If you prefer to use Bazaar checkouts instead of branches, that's fine too.

Installing Build Dependencies

We need to get the build dependencies for unity. Thankfully, apt-get makes this trivial:

sudo apt-get build-dep unity

Compiling Unity

I have a set of bash functions that makes this step significantly easier. To use them, copy the following bash code into a file in your home directory called ".bash_functions":

# Remove and recreate the build directory, then cd into it.
function recreate-build-dir()
{
   rm -r build
   mkdir build
   cd build
}

# Configure, build and install an autotools-based project (such as nux)
# into the local staging directory.
function remake-autogen-project()
{
    ./autogen.sh --prefix=/home/thomi/staging --enable-debug
    make clean && make && make install
}

# Do a full clean rebuild of unity, installing into the staging directory.
function remake-unity()
{
    recreate-build-dir
    cmake .. -DCMAKE_BUILD_TYPE=Debug -DCOMPIZ_PLUGIN_INSTALL_TYPE=local -DCMAKE_INSTALL_PREFIX=/home/thomi/staging/ -DGSETTINGS_LOCALINSTALL=ON
    make && make install
}

# Point the current shell at the locally-built unity in the staging directory.
function unity-env
{
 export PATH=~/staging/bin:$PATH
 export XDG_DATA_DIRS=~/.config/compiz-1/gsettings/schemas:~/staging/share:/usr/share:/usr/local/share
 export LD_LIBRARY_PATH=~/staging/lib:${LD_LIBRARY_PATH}
 export LD_RUN_PATH=~/staging/lib:${LD_RUN_PATH}
 export PKG_CONFIG_PATH=~/staging/lib/pkgconfig:${PKG_CONFIG_PATH}
 export PYTHONPATH=~/staging/lib/python2.7/site-packages:$PYTHONPATH
}

Note: You will need to replace all instances of "/home/thomi" with your own home directory path!

Now run this in a terminal:

echo ". ~/.bash_functions" >> ~/.bashrc

This ensures that the next time you open a bash shell the functions listed above will be available to you. To avoid having to close and re-open a terminal, we can source the file manually just this once:

. ~/.bash_functions

You should now be able to run:

remake-unity

from the trunk/ directory we created earlier. That's it - you're building unity!

Not so Fast!

Chances are, while trying to build unity, you found that it needed a newer version of one of its supporting projects than you had installed. At the time of writing, you can't compile unity without first building nux from source. Thankfully, that's pretty easy with the functions you now have set up.

First we get the source code:

mkdir -p ~/code/nux
cd ~/code/nux
bzr branch lp:nux trunk
cd trunk

Then we need to get the build dependencies for nux.

sudo apt-get build-dep nux

Unfortunately there are a few packages missing, so you'll want to install them as well:

sudo apt-get install gnome-common libibus-1.0-dev libgtest-dev google-mock libxtst-dev

Then we use the functions above to build nux:
 
remake-autogen-project

That's it! You can then go back and build unity - hopefully this time with better success.

Build Notes

You may have noticed that the remake-* functions do a complete rebuild every time. If you'd prefer to just build the files that have changed since last time, change to the trunk/build/ directory, and run:

make && make install

Running Unity

If you'd like to run the version of unity you've built, rather than the system-wide version, open a terminal and run the following commands:

unity-env
unity --replace &

The first line patches several environment variables such that unity will subsequently be launched from your local staging directory. These environment variables will remain changed until you close the terminal, so you need only run unity-env once.

Introducing: indicator-jenkins

For my day job I've been monitoring a jenkins instance (specifically, the public Ubuntu QA jenkins instance) using my web browser. This is obviously suboptimal - I'd like to be able to see the state of the jenkins jobs I'm interested in at a glance, without having to open my web browser.

I couldn't find a solution to my problem, so I created one: indicator-jenkins is a panel indicator for your desktop manager of choice. It allows you to select one or more jobs from a jenkins server, and follow the job state without having to open your browser.

The project is hosted on launchpad, and is built daily into my PPA. To install it (I'm assuming you're running Ubuntu):

$ sudo add-apt-repository ppa:thomir/indicator-jenkins
$ sudo apt-get update
$ sudo apt-get install indicator-jenkins

Once it's installed you can launch it in a couple of different ways:

  • From a terminal - just run indicator-jenkins. If you run it on the terminal you'll get a lot of debugging output (useful if you want to submit a patch, or figure out why it's not working).
  • From the unity dash - open the dash, and search for 'jenkins' - it's probably going to be the first link.

Once it's running you should see the jenkins icon in your panel. To set it up, click the icon to open the settings dialog, enter the URL of the jenkins instance you want to look at, hit the refresh button, and pick the job(s) you want to monitor. Click OK and you're done! It's simpler than it sounds - see for yourself:

Right now it's pretty rough-and-ready - there are many features I'd like to add:
  • Integrate desktop notifications, so you can be alerted when a job's state changes.
  • Customise the panel icon based on job state (e.g. show a red icon if any of the monitored jobs are failing).
  • Allow the user to customise the refresh period.
  • Allow the user to monitor jobs from more than one jenkins server.
  • Show more information about a jenkins job. For a start, show stability as well as current status, and maybe in the future show unit test pass/failure rates for projects that have that information.
How it's Made

The entire application is written in python. We make use of several python modules:
  • The python-appindicator package gives us the ability to place an icon in the panel, and the python gtk2 bindings are used to create the menu and settings dialog.
  • The python 'multiprocessing' module is used to spin up worker processes to fetch the data from jenkins. Initially the application used threads for this, but python's threading support isn't great, and it was taking too long. 
  • The python 'json' module is used to save and load settings.
  • The python-jenkins package is used to communicate with the jenkins server.
This is a great example of python's "batteries included" approach to programming. All you have to do is provide the glue between already-existing python modules - the sketch below gives a rough idea of what that glue looks like.
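
To illustrate how some of those pieces fit together, here's a small sketch of fetching job state in worker processes with python-jenkins and multiprocessing. This is not the indicator-jenkins source; the server URL and job names are placeholders:

import json
import multiprocessing

import jenkins  # provided by the python-jenkins package


def fetch_job_state(job_name):
    # Worker function: fetch the current state of one jenkins job.
    server = jenkins.Jenkins('http://jenkins.example.com')
    info = server.get_job_info(job_name)
    # 'color' encodes the job status (e.g. 'blue' for success, 'red' for failure).
    return job_name, info['color']


if __name__ == '__main__':
    # Settings would normally be loaded from a JSON file on disk.
    settings = json.loads('{"jobs": ["project-a", "project-b"]}')

    # Fetch all monitored jobs in parallel worker processes.
    pool = multiprocessing.Pool(processes=len(settings['jobs']))
    for name, color in pool.map(fetch_job_state, settings['jobs']):
        print "%s: %s" % (name, color)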