Launchpad, please stop mailing me mine own comments on bugs. I know what I said.
Rethinking annotate: I was recently reminded of Bonsai for querying VCS history. GNOME runs a Bonsai instance. This got me thinking about 'bzr annotate', and more generally about the problem of figuring out how code came to be.
It seems to me that 'bzr annotate' is, like every annotate I've seen, pretty poor at really explaining how things came to be - you have to annotate several versions, cross-reference revision history, and so on. 'bzr gannotate' helps, but still isn't awesome.
I wondered whether searching might be a better metaphor for getting some sort of handle on what is going on. Of course, we didn't have a search fast enough for bzr to make this plausible.
So I wrote one in my hobby time: bzr-search (my work time is entirely devoted to landing shallow branches for bzr, which will make a huge difference to pushing new branches to hosting sites like Launchpad). bzr-search is alpha quality at the moment (though there are no bugs that I'm aware of). It's mainly missing optimisation and useful features and capabilities, like meaningful phrase searching, stemming, and optional case insensitivity on individual searches.
That said, I've tried it on some fairly big projects - like my copy of python here:
time bzr search socket inet_pton (about 30 hits, the first one up in 1 second):
real 0m2.957s
user 0m2.768s
sys 0m0.180s
The index run takes some time (as you might expect, though as I noted, it hasn't really been optimised yet). Once indexed, a branch will be kept up to date automatically on push/pull/commit operations.
I realise search is a long slope to get good results on, but hey - I'm not trying to compete with Google :). I wanted something that had the following key characteristics:
* Works when offline
* Simple to use
* Easy to install
Which I've achieved - I'm extremely happy with this plugin.
What's really cool, though, is that other developers have picked it up and already integrated it into loggerhead and bzr-eclipse. I don't have a screenshot for loggerhead yet, but here's an old one. This old one shows neither the path of a hit nor the content summaries, which current bzr-search versions create.
Recently I read about a cool bugfix for gdb in the Novell bugtracker on planet.gnome.org. I ported the fix to the Ubuntu gdb package, and Martin Pitt promptly extended it with an amd64 fix as well.
I thought I would contribute the enhanced patch back to the Novell bugtracker. This required creating a new Novell login, as my old CNE details are so far back I can't remember them at all.
However, I hit a hard stop when I saw this at the bottom of the form:
"By completing this form, I am giving Novell and/or Novell's partners permission to contact me regarding Novell products and services."
No thank you, I don't want to be contacted. WTF.
So, the last lazyweb question I asked had good results. Time to try again:
What's a good text index system that is: python-accessible; cross-platform and trivially installable (Windows users); flexible (we have plain text, structured data, etc., and a back-end storage area which is only accessible via the bzr VFS in the general case); and fast (upwards of 10^6 documents)?
pylucene fails the trivially-installable test (apt-cache search lucene -> no python bindings), and the bindings are reputed to be SWIG :(. xapian might be a candidate, though from the reading I have done so far I suspect SWIG is there as well - and we'd have to implement our own BackEndManager subclass back in python. That might be tricky: my experience with python bindings is that folk tend to think only of trivial consumers, not of python providing core parts of the system :(.
So I'm hoping there is a Better Answer just lurking out there...
Updates: sphinx looks possible, but it is about the same as xapian - it would need a custom storage backend. Google Desktop is out (apart from anything else, there is no way to change the location where documents are stored, nor any indication of a python API to control what is indexed).
It looks like I need to be considerably clearer :). I'm looking for something to index historical bzr content, such that indices can be reused broadly (e.g. index a branch on your webserver), are specific to a branch/repository (so you don't get hits for e.g. the working tree of a branch), with a programmatic API (so that the bzr client can manage all of this), and with no requirement for a daemon (low barrier to entry for non-admin users).
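To illustrate the shape I'm after - an index that's just a library call away, no daemon involved - here's a toy pure-Python inverted index. All names here are hypothetical and nothing is bzr-specific; a real implementation would persist its posting lists through the bzr VFS rather than holding them in memory:

```python
# Toy inverted index: maps each term to the set of document ids
# containing it. Shows the "programmatic API, no daemon" shape;
# a real index would persist posting lists to storage.
from collections import defaultdict


class TinyIndex:
    def __init__(self):
        # term -> set of document ids containing that term
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        # Naive tokenisation: lowercase whitespace split.
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, *terms):
        # Intersect posting sets: documents containing all terms.
        sets = [self.postings[t.lower()] for t in terms]
        if not sets:
            return set()
        return set.intersection(*sets)


idx = TinyIndex()
idx.add("rev-1", "import socket module")
idx.add("rev-2", "socket inet_pton support")
print(sorted(idx.search("socket", "inet_pton")))  # ['rev-2']
```

The key property is that the index is an ordinary object your client code constructs and queries directly - exactly the low-barrier-to-entry model I want, as opposed to talking to a separately-administered server process.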
So I've been playing with Mnemosyne recently, using it to help brush up on my woeful Latin vocabulary. I thought it would be a good idea to get some of that data out of my head and into Ubuntu (which has a Latin translation).
Imagine my surprise when, after installing the Latin language pack (through the GUI), I could not log into Ubuntu in Latin?!
It turns out that there is no Latin locale in Ubuntu, or indeed in glibc. This is kind of strange (there is an Esperanto locale). Remember that locales combine language and location - they describe how to format money, numbers, telephone details and so on.

So clearly, I needed to add a Latin locale. I could add one for just me (e.g. la_AU), or I could add a generic one (helpfully using AU values) on the betting chance that at this point there are not enough folk wishing to log in in Latin (after all, you can't!) for us to need one per country. Even more so, doing la_AU doesn't make a lot of sense - there isn't a pt_AU locale even though there are Portuguese speakers living in Australia. (The root issue here is that location and language are conflated. POSIX, I hate thee.) So, a quick crash course in locales and some copy and paste later, and there is a Latin locale.
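For the curious, a glibc locale source file is mostly plain text. Here is a heavily abridged sketch of what such a definition could look like - the names and values are illustrative only, not the actual locale I submitted:

```
comment_char %
escape_char /
% Hypothetical, abridged sketch of a Latin locale source file
% (e.g. under /usr/share/i18n/locales/). Real files define many
% more categories: LC_TIME, LC_NUMERIC, LC_MESSAGES, and so on.
LC_IDENTIFICATION
title     "Latin locale"
language  "Latin"
END LC_IDENTIFICATION

LC_MONETARY
% Reuse an existing locale's formatting rather than redefining it.
copy "en_AU"
END LC_MONETARY
```

Such a file then has to be compiled with localedef before the system (and gdm) can offer it at login.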
Installing that on my system got me a Latin locale, but gdm still wouldn't let me select it. It turns out that gdm feels the urge to maintain its own list of which locales exist and what to call them. I thought duplication in software was a bad idea, but perhaps I don't understand the problem space well enough. Anyhow, time to fix it.
And because this is something other people may be interested in, and the patches are not yet in Ubuntu because upstream glibc may choose a different locale code (e.g. la_AU), I've finally had reason to activate my PPA on Launchpad - so there are now binary packages for hardy for anyone who wants to play with this!
This week I've been at UDS in Prague, and looking at some possible ways to deploy bzr for packaging (which is a hot topic: developers don't want to change workflows without a concrete benefit, and definitely don't want to pay a cost for doing so - e.g. having to have all of history locally just to make a trivial change).
One of the discussions inspired a scalability test for bzr - not how we think we'd deploy bzr for Ubuntu developers, just a test to understand how it would scale *if* we did it this way.
Lars Wirzenius has a habit of testing VCS systems' capabilities in various ways, including importing the Debian/Ubuntu source archive into them. He kindly ran a test using bzr, creating a single shared repository with one branch in it per source package.
This took a few hours to generate (I'm not sure of the exact figure, we forgot to time it, but it was started in the afternoon and finished in the morning). The resulting repository has 21GB in its .bzr/repository/packs directory, and 500MB in its .bzr/repository/indices directory. There are 30 pack files, the largest of which is 16GB, and the smallest a few hundred kB.
In general VCS terms, this repository has 16000 heads and 16000 commits (because we didn't import deep archive history).
But what about performance? It's currently copying to a machine where I can do some serious benchmarks with this repository, but I do have some quick and dirty figures. Branching a single package (libyanfs-java) from its branch within the repository to a new standalone branch with a cold cache took ~5 seconds. Branching again from the repository, now that the needed data is in the page cache, took 0.6 seconds. Branching from the newly created branch to another new standalone branch took 0.3 seconds.
There is a clear slowdown occurring here. Including startup costs, the time to make a new branch is doubled by having the source branch in the repository. However, as the repository is 16000 times the size, the scaling factor (2/16000) is pretty darn good. I'm stoked at this result, as I think it demonstrates just what the underlying pack store is capable of. We are working on streamlining the upper layers of bzr to make better and better use of the underlying store; for instance, John Meinel has just done this for 'bzr missing' and 'bzr uncommit'.
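Working those figures as a quick sanity check (the numbers are the ones quoted above; this is back-of-the-envelope arithmetic, not a benchmark):

```python
# Back-of-the-envelope check of the scaling claim, using the
# warm-cache timings quoted in the post.
warm_branch_in_repo = 0.6   # seconds: branch from the shared repository
standalone_branch   = 0.3   # seconds: branch from a lone branch
branches = 16000            # branches sharing the one repository

# Warm in-repo branching is about 2x the standalone case...
slowdown = warm_branch_in_repo / standalone_branch
# ...but the repository holds 16000x as much data, so the
# per-unit-of-size scaling cost is tiny.
scaling_factor = slowdown / branches
print(slowdown)        # 2.0
print(scaling_factor)  # 0.000125
```

In other words, a repository 16000 times larger costs only a factor of two on this operation - which is the result I'm stoked about.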
Now I must go, time for breakfast!
I'm very happy to announce that Canonical are hosting a Squid meetup in London this coming Saturday and Sunday, the 1st and 2nd of March. Any developers (in the broad sense - folk doing coding/testing/documenting/community support) are very welcome to attend. As it is a weekend and the office building has security, you need to contact me to arrange to come - just rocking up won't work :). We'll be there all Saturday, and Sunday through to mid-afternoon.
The Canonical London office is in Millbank Tower http://en.wikipedia.org/wiki/Millbank_Tower.
So if you want to come by please drop me a mail.
We'll be getting very technical very quickly, I expect - for folk wanting a purely social meetup, I'm going to pick a reasonable place to meet for food and (optionally) alcohol on Saturday evening. I'll post details here mid-Friday.
Just an observation on the user interface of mobile phone chargers. My phone runs flat all the time. And it's all UI.
The phone manual sayeth: "Do not leave the phone on the charger once it is charged; doing so will reduce battery life." Yet the phone gives no signal when it's charged.
So I have to stand over the phone while it's charging, to ensure I unplug it at the right moment.
As a result: I don't charge it when I'm in a rush; and it never gets charged when I'm about to do something else - I will forget it.
Blech. How much can a 'disconnect when charged' circuit really cost?