How Bazaar

Updated: 2018-03-07T04:36:36.279+13:00


It has been too long


Well, it has certainly been a lot longer since I wrote a post than I thought.

My work at Canonical still has me on the Juju team. Juju has come a long way in the last few years, and we are on the final push for the 2.0 version. This was initially intended to come out with the Xenial release, but unfortunately was not ready. Xenial has 2.0-beta4 right now, soon to be beta6. Hopefully we'll soon step through the release candidates to a final release. This will be SRU'ed into both Xenial and Trusty.

I plan to do some more detailed posts on some of the Go utility libraries that have come out of the Juju work. In particular, talking again about loggo, which I moved under the "" banner, and the errors package.

Recent work has had me look at the database agnostic model representations for migrating models from one controller to another, and also at gomaasapi - the Go library for talking with MAAS. Perhaps more on that later.

2013 in review


2013 started with what felt like a failure, but in the end, I believe that the best decision was made. During 2011 and 2012 I worked on and then managed the Unity desktop team. This was a C++ project that brought me back to my hard-core hacker side after four and a half years on Launchpad. The Unity desktop was a C++ project using glib, nux, and Compiz. After bringing Unity to be the default desktop in 12.04 and ushering in the stability and performance improvements, the decision was made to not use it as the way to bring the Ubuntu convergence story forward. At the time I was very close to the Unity 7 codebase and I had an enthusiastic, capable team working on it. The decision was to move forward with a QML based user interface. I can see now that this was the correct decision, and in fact I could see it back in January, but that didn't make it any easier to swallow.

I felt that I was at a juncture and I had to move on. Either I stayed with Canonical and took another position, or I found something else to do. I do like the vision that Mark has for Ubuntu and the convergence story, and I wanted to hang around for it even if I wasn't going to actively work on the story itself. For a while I had been interested in learning a new programming language, and Go was considered the new hotness, so I looked for a position working on Juju. I was lucky to be able to join the juju-core team.

After a two week break in January to go to a family wedding, I came back to work and started reading around Go. I started with the language specification, then read around and experimented with the Go playground. Then I started on the Juju source.

Go was a very interesting language to move to from C++ and Python. No inheritance, no exceptions, no generics. I found this quite a change. I even blogged about some of these frustrations.

As much as I love the C++ language, it is a huge and complex language. One where you are extremely lucky if you are working with other really competent developers.
C++ is the sort of language where you have a huge amount of power and control, but you pay other costs for that power and control. Most C++ code is pretty terrible.

Go, as a contrast, is a much smaller, more compact language. You can keep the entire language specification in your head relatively easily. Some of this is due to specific decisions to keep the language tight and small, and others I'm sure are due to the language being young and immature. I still hope for generics of some form to make it into the language, because I feel that they are a core building block that is missing.

I cut my teeth in Juju on small things. Refactoring here, tweaking there. Moving on to more substantial changes. The biggest bit that leaps to mind is working with Ian to bring LXC containers and the local provider to the Go version of Juju. Other smaller things were adding much more infrastructure around the help mechanism, adding plugin support, refactoring the provisioner, extending the logging, and recently, adding KVM container support.

Now for the obligatory 2014 predictions...

I will continue working on the core Juju product, bringing new and wonderful features that will only be beneficial to that very small percentage of developers in the world who actually deal with cloud deployments.

Juju will gain more industry support outside just Canonical, and will be seen as the easiest way to deploy OpenStack clouds.

I will become more proficient in Go, but will most likely still be complaining about the lack of generics at the end of 2014.

Ubuntu phone will ship. I'm guessing on more than just one device and with more than one carrier.
Now I do have to say that these are just personal predictions, and I have no more insight into the Ubuntu phone process than anyone outside Canonical.

The tablet form-factor will become more mature, and all the core applications, both those developed by Canonical and all the community contributed core applications, will support the form-factor switching on the fly.

The Unity 8 desktop that will be based on the same codebase [...]

loggo - hierarchical loggers for Go


Some readers of this blog will just think of me as that guy that complains about the Go language a lot. I complain because I care.

I am working on the Juju project. Juju is all about orchestration of cloud services: getting workloads running on clouds, and making sure they communicate with other workloads that they need to communicate with. Juju currently works with Amazon EC2, HP Cloud, Microsoft Azure, local LXC containers for testing, and Ubuntu's MAAS. More cloud providers are in development. Juju is also written in Go, so that was my entry point to the language.

My background is from Python and C++. I have written several logging libraries in the past, but always in C++ and with reasonably specific performance characteristics. One thing I really felt was missing from the standard library in Go was a good logging library. Features that I felt were pretty necessary were:

  - a hierarchy of loggers
  - the ability to specify different logging levels for different loggers
  - loggers inherit the level of their parent if not explicitly set
  - multiple writers can be attached
  - defaults should "just work" for most cases
  - logging levels should be easily configurable
  - the user shouldn't have to care about synchronization

Initially this project was hosted on Launchpad. I am trialing moving the trunk of this branch to github. I have been quite isolated from the git world for some time, and this is my first foray into git, and specifically git and Go. If I have done something wrong, please let me know.

Basics

There is an example directory which demonstrates using loggo (albeit relatively trivially).

  import ""
  ...
  logger = loggo.GetLogger("project.area")
  logger.Debugf("This is debug output.")
  logger.Warningf("Some error: %v", err)

In Juju, we normally create one logger for the module, and the dotted name normally reflects the module. This logger is then used by the other files in the module.
Personally I would have preferred file local variables, but Go doesn't support that, not where they are private to the file, so as a convention, we use the variable name "logger".

Specifying logging levels

There are two main ways to set the logging levels. The first is explicitly for a particular logger:

  logger.SetLogLevel(loggo.DEBUG)

or with chained calls:

  loggo.GetLogger("some.logger").SetLogLevel(loggo.TRACE)

Alternatively you can use a function to specify levels based on a string:

  loggo.ConfigureLoggers("<root>=INFO; project=DEBUG; project.some.area=TRACE")

The ConfigureLoggers function parses the string and sets the logging levels for the loggers specified. This is an additive function. To reset logging back to the default (which happens to be "<root>=WARNING"), you call

  loggo.ResetLoggers()

You can see a summary of the current logging levels with

  loggo.LoggerInfo()

Adding Writers

A writer is defined using an interface. The default configuration is to have a "default" writer that writes to Stderr using the default formatter. Additional writers can be added using loggo.RegisterWriter and reset using loggo.ResetWriters. Named writers can be removed using loggo.RemoveWriter. Writers are registered with a severity level; logging below that severity level is not written to that writer.

More to do

I want to add a syslog writer, but the default syslog package for Go doesn't give the formatting I want. It has been suggested to me to just take a copy of the library implementation and make it work how I want.

I also want to add some filter-ability to the writers, both inclusive and exclusive, so you could say when registering a writer, "only show me messages from these modules", or "don't show messages from these other modules".

This library has been used in Juju for some time now, and fits most of our needs. For now at least. [...]

A personal thank you


Yesterday evening I had a wonderful IM conversation with a previous team member. He moved on from Canonical to new challenges last year, and he was just getting in touch to let me know that I had been a significant positive influence in his professional development. This gave me nice warm fuzzies, but also made me think of those that had helped me over the years. This post is dedicated to those people, who I am going to attempt to recall in roughly historical order. I am however going to try to keep this limited to significant remembered events, otherwise this list may get too huge (it may well anyway).

Firstly I'd like to thank Jason Butler. You taught me an important lesson very early on. Jason and I worked together as interns (as close a term as I can work out) while at university. Jason taught me this: just because someone talks slowly, doesn't mean that they think slowly.

I'd like to thank Jason Ngaio for my first real exposure to C++. Jason was the instructor of the C++ course that my first employers sent me on. This was my first real job, and the first time that I think I really got object oriented programming.

I'd like to thank Derrick and Pam Finlayson, Arran Finlayson, Blair Crookston, Jenny Cohen, Mathew Downes and Rachel Saunders. You guys helped me develop personally. The confidence and people skills that I learnt while around you have undoubtedly helped me in my professional career in software development.

David Cittadini from the then Sapphire Technology company based in Wellington really expanded my vision and understanding of developing complex systems. David also got me back into reading around the programming topic. My technical library started there.

Working with Chris Double helped me understand what it is like to work with someone else in synergy. Our joint output, I'm sure, was a lot more than what we would both have produced independently added together.

David Ewing made a significant impression on me around knowing my worth, and helped in contract negotiations. David has a wonderful way of dealing with people.

Moving over to London gave me the opportunity to meet up with some truly awesome people. Getting involved with ACCU was great for me. I worked briefly with Allan Kelly at Reuters, but learned a lot in a brief time. I also had the opportunity to work with Giovanni Asproni and Alan Griffiths at Barclays Capital. Working with you two really helped me understand the power that developers hold when talking to the business. A few other people I'd like to make a personal note of from this time in the UK are Kevlin Henney, Roger Orr and Pete Goodliffe.

From my early time at Canonical, I'd like to personally thank Jonathan Lange, Robert Collins and Michael Hudson-Doyle. You guys really helped me understand the importance of writing good tests, and test driven development. Also the hammering in the code reviews, teaching me how to write those tests well.

There are so many other people that I have had great connections with over my professional career, and I'd like to thank you all. Work is more than just what you produce; it is also the friendships and connections you make with the people you are creating things with. [...]

Stunned by Go


The original working title for this post was "Go is hostile to developers". This was named at a time of extreme frustration, and it didn't quite seem right in the cooler light of days later. Instead I've settled on the term "stunned", because I really was. I felt like the built-in standard library had really let me down.

Let's take a small step back in time to the end of last week as I was debugging a problem. In our codebase, we had an open file that we would read from, seek back to the start, and re-read, sometimes several times. This file was passed as an io.Reader into another of our interfaces which had a Put method. This stored the content of the io.Reader in a remote location. I was getting this succeeding the first time, but then erroring out with "bad file descriptor". The more irritating bit was that the same code worked perfectly as expected with one of our interface implementations but not another. The one that failed was our "simple" one. All it used was the built-in http library to serve a directory using GET and PUT http commands.

@TheMue suggested that our simple storage provider must be closing the file somehow. Some digging ensued. What I found had me a little exasperated. The standard http library was calling Close on my io.Reader. This is not expected behaviour when the interface clearly just takes an io.Reader (which exposes one and only one method, Read). This clearly breaks the "Principle of Least Astonishment":

  People are part of the system. The design should match the user's experience, expectations, and mental models.

Developers and maintainers are users of the development language. As an experienced developer, it is my expectation that if a method says it takes an interface that exposes only Read, then only Read will be called. This is not the case in the Go standard library. While I have found just one case, I have been informed that this is common in Go, and that interfaces are just a "minimum" requirement.
@howbazaar you'll also find it's pretty common. io.Copy() will call ReadFrom and WriteTo methods, WriteString() is called to avoid copy. — Jesse McNelis (@jessemcnelis) July 13, 2013

It seems to me that Go uses the interface casting mechanism as a way to allow the function implementation to see if the underlying structure supports other methods, or to check for actual concrete implementation types so the function can take advantage of extra knowledge. It is one thing to call methods that don't modify state; however, calling a mutating function that the original function did not express an intent to call is so much more than just unexpected, it is astonishing.

The types of the parameters being passed into a function form a contract. This has been formalized in a number of languages, particularly D and Eiffel.

I found myself asking the question "Why do they do this?" The answer I came up with was two things:

  - to take advantage of extra information about the underlying object to make the execution of the function more efficient
  - to work around the lack of function overloading

Now the second point is tightly coupled to the first, because if there was function overloading, then you could clearly have another function that took a ReadCloser, and it would be clear that the Close method may well be called.

My fundamental issue here is that the contract between the function and the caller has been broken. There was not even any documentation to suggest that the contract may be broken. In this case, the calling of the Close method on my io.Reader broke our code in unexpected ways. As a language that is supposed to be used for systems programming, this just seems crazy. [...]

juju switch


The switch command has recently landed in trunk, and will be included in the next juju-core release.

juju switch is another way to specify the current working environment. Current precedence for environment lookup still holds, but this now sits between the JUJU_ENV environment variable and the default value in environments.yaml.

If you have multiple environments defined, there are several different ways to tell juju which environment you mean when executing commands.

Prior to switch, there were three ways to specify the environment.

The first and default way to specify the environment is to use the default value in the environments.yaml file.  This was always the fallback position if none of the other ways was specified.

Another way was to be explicit for some commands, and use the -e or --environment command line argument.

$ juju bootstrap -e hpcloud

There is also an environment variable that can be set which will override the default specified in the environments.yaml file.

$ export JUJU_ENV=hpcloud
$ juju bootstrap          # now bootstraps hpcloud
$ juju deploy wordpress   # deploys to hpcloud

The switch option effectively overrides what the default is for the environments.yaml file without actually changing the environments.yaml file. This means that -e and the JUJU_ENV options still override the environment defined by switch.

$ juju help switch
usage: juju switch [options] [environment name]
purpose: show or change the default juju environment name
-l, --list  (= false)
    list the environment names
Show or change the default juju environment name.
If no command line parameters are passed, switch will output the current
environment as defined by the file $JUJU_HOME/current-environment.
If a command line parameter is passed in, that value is stored in the
current environment file if it represents a valid environment name as
specified in the environments.yaml file.
aliases: env

It works something like this:

$ juju env
Current environment: "amazon-ap"
$ juju switch
Current environment: "amazon-ap"
$ juju switch -l
Current environment: "amazon-ap"

$ juju switch amazon
Changed default environment from "amazon-ap" to "amazon"
$ juju switch amazon
Current environment: "amazon"
$ juju switch
Current environment: "amazon"

If you have JUJU_ENV set, then you get told that the current environment is defined by this.  Also if you try to use switch to change the current environment when the environment is defined by JUJU_ENV, you will get an error.

$ export JUJU_ENV="amazon-ap"
$ juju switch
Current environment: "amazon-ap" (from JUJU_ENV)
$ juju switch amazon
error: Cannot switch when JUJU_ENV is overriding the environment (set to "amazon-ap")


The Go Language - My thoughts


I've been using the Go programming language for almost three months now, pretty much full time. I have moved from the Desktop Experience team working on Unity into the Juju team. One of the main reasons I moved was to learn Go. It had been too long since I had learned another language, and I felt it was better to dive in than to just mess with it on my own time.

A friend of mine had poked around with Go during a hack fest and blogged about his thoughts. This was just before I really started poking around. Interestingly, the main issues that Aldo found frustrating, the errors for unused variables and unused imports, I have found not to be such a big deal. Passing gripes, sure, but not a big issue. Having the language enforce what is often a lint checker in other languages I see as an overall benefit. Also, even though I don't agree with the Go formatting rules, enforced by gofmt, it doesn't matter. It doesn't matter because all code is formatted by the tool prior to commit. As an emacs user, I found go-mode to be extremely helpful, as I have it formatting all my code using gofmt before saving. I never have to think about it. One thing I couldn't handle though was the eight character tabs. Luckily emacs can hide this from me.

  ;; Go bits.
  (require 'go-mode-load)
  (add-hook 'before-save-hook #'gofmt-before-save)
  (add-hook 'go-mode-hook (lambda () (setq tab-width 4)))

There are some nice bits to Go. I very much approve of channels being first class objects, and the use of channels to communicate between concurrently executing code. Go routines are also nifty, although I've not used them too much myself yet. Our codebase does, but I've not poked into all the nooks and crannies yet.

However there are several things which irritate the crap out of me with Go.

Error handling

The first one I guess is a fundamental design decision which I don't really agree with.
That is around error handling being in your face so you have to deal with it, as opposed to exceptions, which are all too often not thought about. Now if our codebase is in any way representative of Go code out there, this is just flat out wrong. The most repeated lines of code in the codebase would have to be:

  if err != nil {
      return err
  }

This isn't error handling. This is just passing it up the chain, which is exactly what exception propagation does, only Go makes your codebase two to three times larger due to needing these three lines after every line of code that calls into another function. This is one thing I really dislike, but it is unlikely to change.

As a user of a language though, there are other things that could be added at the language level to make things slightly nicer. Syntactic sugar, as it is often known, makes the code easier to read. If the language is wanting to keep the explicit handling of errors in the current way, how about some sugar with that. Instead of

  func magic() (*thing, error) {
      something, err := somefunc("blah")
      if err != nil {
          return nil, err
      }
      otherThing, err := otherfunc("blah")
      if err != nil {
          return nil, err
      }
      return foo(something, otherThing), nil
  }

if we had some magic sugar, say a built-in method like raise_error, which interrogated the function signature, returned zeroed values for all non-error types plus the error on failure, and otherwise returned only the non-error values, we could have this:

  func magic() (*thing, error) {
      something := raise_error(somefunc("blah"))
      otherThing := raise_error(otherfunc("blah"))
      return foo(something, otherThing), nil
  }

The range function

There are several different issues I have with the range function. range returns one or two parameters, but the language doesn't allow any user defined functions to return one or two parameters, [...]

Unity 5.8 issues and workarounds


Well... with the release of Unity 5.8 and associated dependencies, we got the extra testing we were after in precise, and with it a number of bugs. The positive side to this is that with the extra information from our wonderful beta-testers we have been able to work out how to reproduce a number of the issues. As any developer would tell you, being able to reproduce your user's problems is often the biggest hurdle.

Over the weekend I noticed a number of issues around the release of Unity 5.8, and this morning while going through the bug reports, I was happy to notice that we had some way to work around most of them.

Unity 5.8: Flickering and corruption on Unity UI elements - a fix for many is "unity --reset". The cause appears to be how compiz is dealing with plug-ins that are no longer around. For some there have been plug-ins that existed with Oneiric that are no longer around in Precise, and the reset caused them to be removed from the list to load.

Unity 5.8: Login to blank screen (all black or just wallpaper) - some have been fixed by "unity --reset", but the underlying cause of this one is still a bit of a mystery.

Unity 5.8: Can't login to Unity since upgrade to 5.8 - some have found that disabling "Unity MT Grab Handles" compiz plug-in fixes this issue. We still need to work out what the underlying problem is.

white box randomly shows up at top left corner blocking applications from using stuff under it - this one appears to be triggered by chromium desktop notifications. There have been reports that disabling the animations plug-in in compiz, and then re-enabling it fixes this. We are still investigating why.

If you are getting these issues, you can try the workarounds suggested here.

Guilt reduction


So it is now Monday morning and I'm sitting next to Thomi.  We are going to pair program on this test stuff.  Partly because I think that pair programming is really cool, and partly due to Thomi knowing the autopilot test infrastructure really well, and that'll make this go much faster.

The bug in question related to the launcher getting into a very confused state where it thought there were multiple active applications.  And clicking on a launcher icon that was in this confused state caused a new application to be started rather than switching to the one that was running.

The first step in making all this work then, is to create a branch that is based off a revision that was before the fix.  This way we can write a test that fails first.  A key part of tests is to make sure they fail first.  Then when they start passing, you know it isn't by mistake, and that you have tested what you think, not just created something that passes.

Firstly, find that revision...

$ bzr log | less

The fix is revision 1977, so let's make a branch of trunk from revision 1976.

$ bzr cbranch trunk -r 1976 hud-ap-test
$ cd hud-ap-test/
$ bzr revno

I use light weight checkouts for the unity repo, hence cbranch rather than branch.

At this revision, there is a HUD test that really just checks the reveal. Let's make sure it passes...

$ cd tests/autopilot/
$ python -m autopilot.tests.test_hud
Tests running...
No handlers could be found for logger "autopilot.emulators.X11"

Ran 1 test in 4.238s

I deleted a bunch of gtk warnings, they don't add any value for what I'm trying to show here.  Would be great if someone fixed them though :-)

Now I need to actually build and run my local unity (and test the autopilot test again).

Found out that my machine was failing to build for other reasons, so we switched to Thomi's.  The existing test still passed (of course it did), so the next step was to write a test that encapsulated the broken behaviour that we had found during the many hours of analysis.

That can be found at lp:~thomir/unity/autopilot-hud-triple-hit.

The test failed with the old revision, we then merged trunk, rebuilt, and ran the test again.  Test passed.  Job done.

That guilty feeling


Today had been a frustrating day.  I had been quick to anger and my family bore the brunt of that. It wasn't until I was confronted with this that I actually took a minute to think why I was feeling this way.  It came back to something I read on IRC this morning, where I read that some people I deeply respect were disappointed with the test coverage with Unity 5.4.

I took this disappointment the way people often take it from their parents.  Remember how, as a child, one of the worst things you could feel was the disappointment of your parents?  Well, I guess that is how I felt.

I took over the engineering manager position of the unity team at the end of last year, and I tend to take criticism of the project and team personally.

So... why the guilty feeling?

Well, back around the time I took over managing the team, the general acceptance criteria for getting Canonical projects into Ubuntu changed.  This includes Unity.  There were a number of automated tests for Unity, and a series of distro acceptance tests that were manually executed.  What we needed to do was to really change the team culture to one where tests were not only written, but expected.  New features needed test coverage, bug fixes needed test coverage.  The idea here, for all those that understand test driven development, and automated testing, was to make sure that bugs that were fixed, and new features, didn't get broken accidentally by new changes.

The guilt really came from knowing that I had allowed code reviews through the process without enforcing the need for tests.  And that as a senior person on the team, others took a lead from what I did.  If I was letting things through, so would others.  This is where the feeling really came from.

It is very easy to land fixes to crashes quickly when under pressure.  Especially when you've spent the last eight hours debugging in gdb, and auditing all the recently landed code looking for that change that would contribute to the broken behaviour that you have been trying to fix.  When you finally find that one line fix, it is so tempting to just commit the one line.  You know it works, you've just spent the last freaking eight hours looking at the weird behaviour.  What you haven't done however, is stopped it from happening again, by encapsulating the behaviour in an automated test.

I plan to spend some of Monday going back and adding an automated test to cover the particular behaviour that we fixed the other day.  I'll also write up what, and how this test gets written.  Hopefully by writing this, not only will Unity get better test coverage, but I'll personally feel better knowing that I've done the right thing.

6 months on Unity


We have just finished another design sprint prior to UDS-P.

While talking with some others I realise that I have worked on Unity for six months, and not changed a single pixel on the output.  No graphical changes, no moving widgets, no changing colours.

So what have I been doing?

First step was getting some new coding standards accepted by the team, which was much easier than I was expecting.

I added some property classes to nux, and did some general clean up in the code of nux and unity.

Refactored the indicator internals for the panel service which started off the shared unity core library for sharing code between the 2D and 3D code-bases.

Then I focused primarily on fixing memory leaks and crashes.

Once we hit final freeze, I did a little more refactoring internally, and now we are on to Precise Pangolin.

Properties in C++


Once you have done any development in a language that natively supports properties, like Python or C#, going back to C++ and not having them often feels like a real pain. I've just proposed my second attempt at C++ properties for the nux library.

This change leans heavily on a paper written by Lois Goldthwaite: SC22/WG21/N1615 - C++ Properties -- a Library Solution. I added change notifications using sigc++. I found that using sigc::slot was nicer than templating the properties on the class and member function pointer. This also meant that I could provide a way for a simple property to get its own custom setter method while still having a sensible default.

Compiling C++ templates still gives absolutely horrendous error messages that can take a while to mentally parse. I guess one advantage of having done a lot of template programming in the past is that I don't get too fazed by copious quantities of error messages, especially for templates. As one example today, I had just forgotten to change a template arg in a test function, and got way too many lines to sensibly look at. One benefit of that was it caused me to look at what I was doing, and I ended up simplifying my tests a little more.

Thank you Lois for the time you spent writing up the C++ properties proposal, it was a fantastic starting point for me.

Getting back into C++


I have to admit that unit testing in C++, even with google-test, is so much more of a PITA than in python. Especially when checking string output. Simple string matching using split and regular expressions has really spoiled me.

Another thing that I've noticed is that I spend more time thinking about object design, and what should that object really be able to do, and what should it allow others to do to it.

It is an interesting time as I realise how much I still have to learn for our current domain. Most of my previous C++ experience has been on server side processing. Drawing stuff on the screen, real end user stuff, is still relatively new for me.

DX Sprint


What a week. I've spent the last week in Budapest sprinting with the rest of the Desktop Experience (DX) team. This week was also my first official week with the DX team, as I have now moved from the Launchpad team to the DX team. This was a good week. I had met some of the DX team before at other company get-togethers, but not really talked to them much. A really important part of any new job is meeting the people that you are working with. This always ends up happening when you work in the same office with them, but for a distributed company like Canonical, you can end up working on the same team with people that you don't get to meet for months.

It was great to meet different sub-teams of DX, especially those that I'll be working closely with. I'll be hanging out in the #ayatana irc channel now, but I'll also still be in #launchpad and #launchpad-dev. There are some very interesting plans for oneiric, and it will be interesting to see how much we can end up getting done. In the normal way we have "too much to do" and the gauntlet has been thrown.

So... I'll be hacking on the unity stack. Please don't ask me to fix any particular bugs yet as it'll no doubt take me time to find my way through the code :-)

Now for the 40+ hour journey home.

Launchpad and stacked branches


As I'm sure most of you are aware, Launchpad hosts Bazaar branches. One early design decision we made for Launchpad was that branches should be able to be renamed and moved between people and projects without limitation. This is one reason why each branch that was pushed to Launchpad had a complete history. We wanted to make sure that one person couldn't block another from pushing branches, and that people couldn't get at revisions that they shouldn't be able to.

The Bazaar library, bzrlib, gives you awesome power to manipulate the internals, giving you access to the repository of revisions through the branch. This can be a blessing and a curse: if you have private revisions, they can't be in a public repository.

Having a complete copy of the data for every branch became a severe limitation, especially for larger projects, of which Launchpad itself is one. A solution to this was a change in Bazaar itself that allowed a fallback repository containing some of the revisions. This is what we call stacked branches. The repository for the branch on Launchpad falls back to another repository, which is linked to a different Launchpad branch. We ideally wanted all of this to be entirely transparent to the users of Launchpad. What it means is that when you are pushing a new branch to Launchpad, the bzr client asks for a stacked location. If there is a development focus branch specified for the project, this is offered back to the client. The new branch then only adds revisions to its repository that don't exist in the development focus branch's repository. This makes for faster pushes, and smaller server-side repositories.

The problem though was what to specify as the stacked-on location. When we created the feature, we used absolute paths from the transport root. What that meant was that we stored the path aspect of the branch.
For example, lp:wikkid gets translated to bzr+ssh:// or depending on whether the bzr client knows your Launchpad id. The absolute path stored would be /~wikkid/wikkid/trunk. This information was then stored in the branch data on the file system.

The problem however was that the web interface allows you to rename branches. The actual branch on disk is referred to using a database id, which is hidden from the user by a virtual file system with rewrite rules for http and at the bazaar transport level. However, since the stacked-on location refers to a full branch path, changing any part of that path, whether the branch owner, the branch name, or the project or package that the branch is for, would break any branches stacked on the renamed branch: bug 377519.

In order to fix this we had to change the location that the branch is stacked on to be independent of the branch path. The best solution here is to use the database id. I really didn't want to expose the user to this opaque id, but one opaque id is as good as another. Now when pushing branches to Launchpad, when it is creating a stacked branch you'll see a message like:

Created new stacked branch referring to /+branch-id/317141.

Existing branches still have their old branch paths saved for now. We'll run a migration script early next week to fix all these up, and hopefully we'll have seen the last of this bug.[...]
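The difference between the two stacking schemes can be sketched in Python. The function names here are illustrative only, not Launchpad's actual code:

```python
# Sketch of why a path-based stacked-on location breaks on rename,
# while an id-based one does not. Illustrative names, not Launchpad code.

def stacked_on_path(owner, project, name):
    """Old scheme: absolute branch path, e.g. /~wikkid/wikkid/trunk."""
    return "/~%s/%s/%s" % (owner, project, name)

def stacked_on_id(branch_id):
    """New scheme: opaque database id, e.g. /+branch-id/317141."""
    return "/+branch-id/%d" % branch_id

# Renaming the branch changes the path-based location, stranding any
# branch whose repository falls back to the old path...
old = stacked_on_path("wikkid", "wikkid", "trunk")
new = stacked_on_path("wikkid", "wikkid", "devel")
assert old != new

# ...but the id-based location survives any rename.
assert stacked_on_id(317141) == "/+branch-id/317141"
```

The database id never changes when a branch is renamed or reassigned, which is exactly why it makes a stable stacked-on reference.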

Blueprint magic


Just landed in qastaging is some itch-scratching work I did adding AJAX widgets to the main blueprint page. This has passed QA and will end up in production with the next no-downtime rollout (which should be real soon now).

This work was adding a bunch of the lazr-js wrapped widgets. Now we can update the following without reloading the primary page:

  • title - the H1 heading
  • summary
  • whiteboard
  • assignee
  • drafter
  • approver
  • priority
  • implementation status
  • definition status

Using the new custom events that the page raises when the context object changes (using YUI magic and API PATCH requests), when you change the title of the blueprint, the document title (title bar) and the breadcrumbs also change. When the implementation status is updated, the overall status updates, and the "started by" and "completed by" are shown or hidden as appropriate.

This is work that I've wanted to see done for almost a year, and recent other changes I've done adding more widget wrappers and javascript goodness have made this possible without adding copious amounts of custom javascript.

A side-effect of these changes is that there are now more fields exported over the API for blueprints.

Announcing sloecode


Sloecode is a simple Bazaar hosting project.

Last year I tried to set up my home server to offer a place for three people to have shared, private access to a bazaar repository for a project. I found it really awkward. I felt that there had to be a simpler way.

Launchpad is an awesome place to host Bazaar branches. However Launchpad is for open source projects and personal branches are public.

I was approached towards the end of last year by Thomi who suggested we create a simple Bazaar hosting project for Otago Polytechnic to provide a place for the students to host their senior year projects. Since it was something I also cared about, and hadn't found a reasonable solution elsewhere, I agreed to help.

Our initial requirements went something like this:

  • Users defined in a database and no local login needed on the hosting server
  • Private repositories for the students to host personal branches
  • Lecturers should be able to see the repositories of the students
  • Projects have private repositories only visible to the members of the project team, and the Lecturers
  • Simple URLs for getting access to the branches
  • Scalability isn't a priority

In the end we went with the new pyramid libraries for the web application. We tried briefly with django, but I found the framework blocked me whenever I tried to do something. I had worked a lot with zope, and repoze.bfg was something we looked at. When repoze.bfg and turbogears merged into pyramid we felt that we had found a good match for us.

Installing and running the sloecode server is still a bit messy. We'd love to get it to the stage where you can just run an installer and magic happens. But we are not there yet.

The application server runs on the same machine as the filesystem hosting the branches and repositories. Shared repositories are created for each user as they are added. When a project is created so is its shared repository, and a trunk branch which is set to append only.

Users log in and add their public SSH key.
Users should also install the bzr-sloecode client plugin. Initially the plugin had hard-coded site names for the Otago Polytechnic, but looking forward I decided that we should just use an environment variable. This allows access to the sloecode server using a shorthand:

  • sc:my-project - gets access to the trunk branch of the project called my-project
  • sc:my-project/trunk - also gets access to the trunk branch
  • sc:my-project/some-branch - gets access to the branch some-branch of my-project
  • sc:~myid/personal-branch - gets access to the branch personal-branch on my repository

The sc translation is done client-side, like the lp expansion for Launchpad.

We run a custom twisted SSH server on the hosting machine. This does the SSH key lookup against our user database, and doesn't allow password login. It also restricts the commands that can be executed on the server side to what it expects Bazaar to ask for. No shells are given. The server then launches a subprocess that uses a wrapped smart server with a virtual filesystem to translate the requested paths to the underlying filesystem. This operates in a similar way to Launchpad, but much more trivially. The repository hosting configuration just needs two paths: one for the project repositories, and one for the personal repositories. The smart server code also handles the privacy aspect, not allowing unauthorized users access to repositories they shouldn't see.

The future

We'd like to have some form of code browsing functionality added. Whether we use loggerhead, or something else, is still up in the air. We'd also like to integrate wikis for the projects. Wikkid would be a good fit (another of my personal projects) and [...]
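The client-side sc expansion described above can be sketched like this. This is a simplified stand-in for the actual bzr-sloecode plugin; the SLOECODE_SERVER environment variable name and the bzr+ssh path layout are assumptions for illustration:

```python
import os

# Simplified sketch of sc: shorthand expansion, in the spirit of the
# bzr-sloecode plugin. SLOECODE_SERVER and the URL layout here are
# assumptions, not necessarily the plugin's actual names.

def expand_sc(url, server=None):
    """Expand an sc: shorthand into a full bzr+ssh URL."""
    if not url.startswith("sc:"):
        return url  # not ours; leave other URL schemes alone
    server = server or os.environ.get("SLOECODE_SERVER", "example.com")
    path = url[len("sc:"):]
    return "bzr+ssh://%s/%s" % (server, path)

print(expand_sc("sc:my-project/trunk", server="code.example.com"))
# bzr+ssh://code.example.com/my-project/trunk
```

Real bzr URL shorthands like lp: are registered as directory services in the bzr client, so the server never sees the shorthand, only the expanded URL.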

More responsive recipe builds


SteveK has recently landed a change that at the same time makes our admins happy, and should give our recipe users more responsive builds.

Daily builds were previously kicked off at 23:00 UTC. A rather arbitrary time that put quite a load on the build farm as recipes became more popular.

The new change has the script that kicks off daily recipe builds run much more often, and uses a cunning mix of magic and smarts to schedule the jobs (ok, not so much magic).

When new changes are pushed to Launchpad for branches that are used in source package recipes, the recipe is marked as dirty. Dirty recipes are candidates for daily builds. If a recipe has not been built into the daily build PPA within the last 24 hours, a new recipe build job is created very soon after Launchpad notices the new changes. If there has been a build, then the new build job isn't created until 24 hours have passed since the last build. Manual builds into other PPAs do not affect the daily build time check.
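The decision logic just described can be sketched as follows. This is a simplified model of the behaviour, not Launchpad's actual scheduler code:

```python
from datetime import datetime, timedelta

# Simplified model of the daily-build decision described above;
# not Launchpad's actual scheduler code.

def should_schedule_build(dirty, last_daily_build, now):
    """A dirty recipe gets a new daily build job, but at most once per
    24 hours. Manual builds into other PPAs are not counted here."""
    if not dirty:
        return False
    if last_daily_build is None:
        return True  # never built into the daily build PPA: go now
    return now - last_daily_build >= timedelta(hours=24)

now = datetime(2011, 1, 10, 12, 0)
assert should_schedule_build(True, None, now)                          # never built
assert not should_schedule_build(True, now - timedelta(hours=2), now)  # built recently: wait
assert should_schedule_build(True, now - timedelta(hours=30), now)     # over 24h ago: build again
assert not should_schedule_build(False, None, now)                     # not dirty: nothing to do
```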

What this means is that if you have a daily build recipe, and tend to change the branch less often than every day, then when you do change it, the package is built much more quickly.

Refactoring Launchpad's lazrjs widgets


Just landed (r12285 on devel) is some refactoring I've been doing on the lazr-js widget wrappers. After hacking on the picker at Dallas, I felt the other editor widgets needed some attention as well. The primary documentation can be found in the source tree at lib/lp/app/doc/lazr-js-widgets.txt, in fact this is probably the best thing to read.

These changes did several things:

  • fixed the multi-line editor so you don't need extra HTML tags surrounding the widget in the page template
  • changed the widgets to actually use page templates to render the content instead of fiddling with string substitution
  • refactored the class inheritance - the multi-line editor no longer inherits from the single line editor, but instead they both inherit from a common base
  • changed the way the editable attribute is defined - from being the name of the attribute to being the field from the interface
  • the initial text for the multi-line editor is now determined from the object itself, not a parameter passed through to the constructor
  • the permission checks were unified, which fixed a problem with the text editors using mutator functions exposed through lazr.restful
  • you now have to be explicit about the id used for the HTML tag surrounding the editor - this wasn't much of an ask as all call sites were already doing this

All in all I'm pretty happy with this change.

Code Blue at the Thunderdome


I am sitting in the hotel at the end of the Launchpad Thunderdome. Really this was just a sprint for all of the Launchpad developers, but someone decided it needed a neat name, and Thunderdome stuck.

It is at this sprint that the Launchpad team is transitioning away from the old application-based teams to "squads". There are five squads, each given a colour for now: Blue, Green, Orange, Yellow, and Red. So my "Code" team has been split up and I have a new squad, "Blue". Before we even switched to the squads, one of my new members resigned to move on to something new, so I have an opening. Actually right now we have two openings. One squad is primarily based in Australasia (with one in New York) and the other has people from North America and Europe.

The majority of our time has been spent getting to know our new squads. Three of the squads are working on features, and two are on maintenance. The Blue squad (called "Code Blue" from now on) is finishing off the recipe feature. This allows packages to be built automatically from a source package assembled from one or more branches on Launchpad. This feature was started over a year ago and needs to be finished off. Most of what we are doing now is polishing the user experience and interface. The feature has been available since around the middle of the year, and we have almost 250 recipes configured to build packages automatically. Once we finish this off, Code Blue will move on to maintenance until some other feature teams finish up.

We have done some pretty impressive work on the webservice this week, primarily due to trying to make nice AJAX popups for changing the owner and PPA of a recipe. This resulted in much yak shaving and refactoring. The work we did makes it trivially easy to add an AJAX popup to any place on Launchpad where a single value is chosen from a list of options.
We also fixed a number of bugs in lazr.restful and very soon there will be a release that will add a 'web_link' property to the entries in launchpadlib that will refer to the actual object on Launchpad.[...]



I've been working for Canonical for around four years now, and for the vast majority of that time I've worked out of a room in my house. You need to have a certain style of discipline to work effectively from home and I like to think that I have that.

More recently I've been feeling a little down about work in general. I think it is because I am insulated and tend to get a lot of information from blogs, a number of feeds, twitter, facebook and email lists. A lot of the information that I then get exposed to is often from the vocal minority, and this can be a little draining.

Last Friday I started spending some of my work day at the distiller lab. I have to say it has been great for my general state of mind. Today is my fourth day in a row spending some time here and so far, so good.

Wikkid Wiki


As with any creation being released, I'm writing this with some trepidation. I'd like to announce the first release, 0.1, of Wikkid Wiki.

What is it?

Wikkid is a wiki that uses the Bazaar DVCS as an underlying storage model. Wiki pages are text files in a branch.

Why another wiki, surely there are enough already?

There is the obvious reason: because I felt like it. But this is not the primary reason. Since I started working for Canonical I've come to appreciate the whole culture of free, libre, and open source software more. During the day I work on Launchpad, primarily in the area of integrating the Bazaar DVCS. Launchpad doesn't have a wiki integrated, and it is my plan to see wikkid be the wiki that is integrated.

I have to admit to having very strong opinions myself on what I wanted for wikis in Launchpad. I've tried to encapsulate that in the vision below. Since no one else was looking at it, I took it upon myself. Discussions started last year between a small group of Launchpad developers, but there was no traction. At the start of March I started writing the Bazaar-backed wiki. It needed a name though.
Thankfully after trying several I got the perfect name from Aaron Bentley - wikkid.

The wikkid vision

  • Any wikkid wiki can be branched locally for offline editing
  • Any branch can be viewed using wikkid - not limited to branches created through wikkid
  • A local wikkid server can be run using a Bazaar plugin
  • Local commits use the local user's Bazaar identity
  • Wikkid can be configured to operate in a stand-alone, public facing mode where it has a database of users
  • Wikkid can be integrated as a library into other python applications
  • Wikkid uses standard wiki markup languages - not inventing its own

What does Wikkid Wiki offer?

Right now, wikkid offers basic page editing, rendering and browsing:

  • ReST is the default wiki format
  • Creole 1.0 is also supported (as long as the first line is "# creole")
  • source files are syntax highlighted using pygments
  • you get to see your gravatar for your local Bazaar identity
  • no page locks are used, but instead a three way merge
  • conflicts due to concurrent editing are shown for the user to resolve

Where to from here?

Well here is just the beginning.
The TODO file is quite long already, and that is just a simple brain dump. Things that I want to have done for 0.2 include:

  • Change the underlying server from twisted.web to WSGI
  • Change the generated URL format
  • Add the stand-alone user database code, along with sessions and logins and email validation
  • Add a view to show changes for a page
  • Allow the reverting of any historical change
  • Daily build of trunk into the wikkid developer's PPA

Ideally for the 0.2 release I'd like to provide everything that is needed for wikkid to be deployed as a stand-alone, public facing, wiki.

Wow, how can I help?

  • Wikkid uses Launchpad for collaboration and project tracking - you can get a copy of trunk using 'bzr branch lp:wikkid'
  • File bugs for any issues you find playing with it
  • Join the development and discussion mailing list
  • If you feel so inclined, you could implement a feature or fix a bug, push the branch to Launchpad and propose the merge
  • Come and chat in #wikkid on freenode, nothing fires developers up like having encouraging users

[...]

Launchpad code update


We've been very busy over the last couple of months with lots of changes that most people will never notice.

Reduced latency

Branches pushed to Launchpad are now immediately available over http to anonymous readers of the branch, which includes the loggerhead code browser.

Code review email changes

When proposing a branch for review, the initial emails and subsequent comments will now arrive in order. Previously, if someone commented before the script that generated the diff was run, the comment would be emailed out first; now it isn't.

Teams requested to review now get email

Everyone in the team that is requested to review will get email now. This is a blessing for those that want it, and almost a curse for those that aren't interested. Launchpad adds a number of email headers to help users with filtering of email. Here is an example from an email I received:

X-Launchpad-Message-Rationale: Reviewer @drizzle-developers
X-Launchpad-Notification-Type: code-review
X-Launchpad-Project: drizzle

Since it was a team that was requested to review, there is the @drizzle-developers added to the X-Launchpad-Message-Rationale. If I was personally asked to review, the header would just say Reviewer.
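Those headers make client-side filtering straightforward. Here is a sketch of routing review mail using Python's stdlib email module; the bucket names are just illustrative:

```python
from email.message import EmailMessage

# Sketch: classifying Launchpad code-review mail by its filtering
# headers. The bucket names are illustrative, not anything official.

def review_bucket(msg):
    """Route a Launchpad notification based on its rationale header."""
    if msg.get("X-Launchpad-Notification-Type") != "code-review":
        return "other"
    rationale = msg.get("X-Launchpad-Message-Rationale", "")
    if rationale.startswith("Reviewer @"):
        return "team-review"      # asked via team membership
    if rationale == "Reviewer":
        return "personal-review"  # asked personally
    return "other"

msg = EmailMessage()
msg["X-Launchpad-Message-Rationale"] = "Reviewer @drizzle-developers"
msg["X-Launchpad-Notification-Type"] = "code-review"
msg["X-Launchpad-Project"] = "drizzle"
assert review_bucket(msg) == "team-review"
```

The same distinction drives procmail or sieve rules equally well: match on the `@team-name` suffix to separate team requests from personal ones.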

Build branch to archive

This was the original name of the feature, but it is more about recipes now. A recipe is a collection of instructions on how to build a source package. We are still testing this internally, but I'm hoping to get this enabled on edge very soon. This will be extended to add daily builds.

What does this really mean?

Let's say you want to have a daily build of a project, like gwibber. You would then create a recipe that uses trunk as a base branch, merges in the packaging info, and says "Please build this every day into my PPA". And Launchpad will.

Trivial bugs


This is just a quick note really. One thing I've been trying to do more and more is to address simple bugs in a more timely manner.

I use the tag "trivial" to indicate to me that the bug is very simple to fix. By this I mean that I should be able to have the fix and the test all written in under an hour, and normally under 30 minutes.

Personally I'm (hopefully) fixing one trivial bug a day in addition to other work. This way the simple bugs get some attention, and I get the feeling of accomplishing something when other things are in the pipeline that take longer to get completed.

My scheduling of trivial bugs is somewhat arbitrary. Often the most recently commented on trivial bug will get my attention.

Don't wait for perfection


Way back in July I was thinking of writing a post about the new branch listings in Launchpad. I was working on making branch listings for distributions and distroseries work, for example Package branches for Ubuntu and Package branches for Ubuntu Lucid. But the code wasn't entirely optimised. Then, as things happen, the optimisation got pushed back... and back. And finally when I did get the optimisation in, I didn't feel it was worthy of talking about.

I guess the thing to remember is: don't wait for perfection. Sure it wasn't perfect, but if more people were accessing the pages, the optimisation may have happened sooner.

One thing going on at the moment is more integration of the lazr-js widgets. The main merge proposal page now has an in-page multi-line editor for the commit message. Sure, it needs tweaking, but the main functionality is there. More ajaxy goodness is finding its way into Launchpad.

One of the things that I'm thinking about right now is splitting the concepts of code reviews and merge proposals. At the moment we almost use the term interchangeably, which does cause some confusion. I'd like to have the merge proposal reflect the meta-data and information around the intent to have work from one branch be landed or merged into another branch (normally the trunk branch), and the code review the conversation that goes on around the merge proposal. Merge proposals may have an associated code review, but right now, a code review must be associated with a merge proposal.

Associated with this, I'd like to extract some state information. Currently merge proposals have only one status, which really reflects two things. I'd like to break this out into two: review status; and merge status. Review status would be one of: work in progress; needs review; approved; rejected; superseded; and maybe withdrawn. Merge status would be one of: proposed; merged; queued; and merge failed. Queued relates to the merge queues which are currently partially implemented in the Launchpad codebase, and merge failed is the state that a proposal would be set to when a landing robot like PQM or Tarmac attempt to land the branch but it fails due to either merge conflicts or test failures.
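The split might be modelled as two independent sets of states. This is just a sketch of the idea, not Launchpad code, though the status values come straight from the lists above:

```python
# Sketch of splitting merge-proposal state into two independent
# statuses, as proposed above. Not Launchpad code; the values are
# taken from the lists in the text.

REVIEW_STATUSES = {
    "work in progress", "needs review", "approved",
    "rejected", "superseded", "withdrawn",
}
MERGE_STATUSES = {"proposed", "merged", "queued", "merge failed"}

class MergeProposal:
    def __init__(self):
        self.review_status = "work in progress"
        self.merge_status = "proposed"

    def set_review_status(self, status):
        assert status in REVIEW_STATUSES, "unknown review status"
        self.review_status = status

    def set_merge_status(self, status):
        assert status in MERGE_STATUSES, "unknown merge status"
        self.merge_status = status

# With two statuses, a proposal can be approved for review while a
# landing robot like PQM or Tarmac has failed to merge it (due to
# conflicts or test failures), without one state masking the other.
mp = MergeProposal()
mp.set_review_status("approved")
mp.set_merge_status("merge failed")
assert (mp.review_status, mp.merge_status) == ("approved", "merge failed")
```

The point of the split is exactly this independence: today a single combined status can only express one of these facts at a time.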

My goal for the next six months is to write more often, talk about ideas more, and not wait for perfection.