Older blog entries for company (starting at number 139)

video hackfest

This is the result of applying my recent gstreamer-cairo hacking to a real-world application – Webkit. It’s very fast here and could record that video you see there while playing it without breaking a sweat. If you want to run the demo yourself, try http://people.freedesktop.org/~company/stuff/video-demo.html. (Be warned: That link downloads roughly 200MB of movie data. And it’ll likely only run in recent Epiphany or Safari releases.)

Also, thanks to funding by the X foundation and hosting by Collabora, we will be doing a video hackfest in Barcelona from the 19th to the 22nd next month. GStreamer, Cairo, X, GL and driver hackers will focus on stabilizing these features so that movies will be first-class citizens in the future of your desktop, no matter what you intend to do with them, including doing video browsing lowfat style.
I’m both excited and anxious about this hackfest; I’m not used to being in charge when lots of awesome hackers meet in one place.

Syndicated 2009-10-14 21:22:41 from Swfblag

Cairo is slow

Here’s the pipeline:

gst-launch-0.10 \
    filesrc location=/home/lvs/The\ Matrix\ -\ Theatrical\ Trailer.avi ! decodebin ! queue ! \
    cairomixer \
        sink_0::xx=0.5 sink_0::xy=0.273151 sink_0::yx=-0.273151 sink_0::yy=0.5 \
        sink_0::alpha=0.6 sink_0::xpos=72 sink_0::ypos=48 \
        sink_2::xx=1.38582 sink_2::xy=-0.574025 sink_2::yx=0.574025 sink_2::yy=1.38582 \
        sink_2::alpha=0.7 sink_2::xpos=20 sink_2::ypos=150 sink_2::zorder=10 \
        sink_1::xpos=300 sink_1::ypos=100 ! \
    video/x-cairo,width=800,height=500 ! pangotimeoverlay ! cairoxsink \
    filesrc location=/home/lvs/the_incredibles-tlr_m480.mov ! decodebin ! \
        cairocolorspace ! video/x-cairo ! queue ! cairomixer0. \
    filesrc location=/home/lvs/transformers.trailer.480.mov ! decodebin ! \
        cairocolorspace ! video/x-cairo ! queue ! cairomixer0.

Here’s the result:

CPU utilization when playing this is roughly 30%, which I attribute mostly to the video decoding. The Intel 945 GPU takes 25% doing this. If I use the sync=false property, the video is done after 59s. It’s also completely backwards compatible when no hardware acceleration is available. In fact I used the same pipeline to record the video, just replacing the sink with a theoraenc.
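Roughly, the tail of the recording pipeline then looks like this (a sketch, not the exact command; the oggmux/filesink part and the cairocolorspace conversion back to raw video before the encoder are assumptions):

... ! video/x-cairo,width=800,height=500 ! pangotimeoverlay ! \
    cairocolorspace ! theoraenc ! oggmux ! filesink location=video-demo.ogg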

Implementation details are here and here. The total amount of code written is roughly 10,000 lines, put into the right spots in GStreamer, Cairo, pixman and the X server. Of course, the code is not limited to GStreamer. I expect Webkit and Mozilla will use it too, once it’s properly integrated. And then we can do these effects in Javascript.

Syndicated 2009-10-05 10:53:11 from Swfblag

Byzanz 0.2.0

I wanted to do something “easy”. By that I mean hacking on code that doesn’t need reviews by other people or tests or API/ABI stability. But I wanted to do something useful. And I found something: I dug out Byzanz, the desktop-to-GIF recorder tool I wrote almost 4 years ago.

The first thing that stood out to me was the ugly functions this code uses. Functions like gdk_drawable_copy_to_image() or gnome_vfs_xfer_uri() are really not to be used if you want maintainable code. So I replaced GDK with Cairo functions and gnome-vfs with gvfs. It looks much nicer now and is more powerful, too. So if you still have code that uses outdated libraries, make the code use modern ones. It’s worth it.
This whole process comes at a bit of a cost though: Byzanz absolutely requires bugfixes that are only in the (so far) unreleased git master trees of Cairo, Gtk and GStreamer. So for now Byzanz is the most demanding application available on the GNOME desktop!
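To give an idea of what replacing GDK drawing calls with Cairo means in practice, here is a minimal sketch that grabs the screen into a Cairo image surface via the xlib backend and writes it out as a PNG. It is illustrative only, not actual Byzanz code: error handling is omitted, and a real recorder tracks damage regions and hands frames to an encoder instead of writing PNG files.

#include <cairo.h>
#include <cairo-xlib.h>
#include <X11/Xlib.h>

int
main (void)
{
  Display *dpy = XOpenDisplay (NULL);
  Window root = DefaultRootWindow (dpy);
  XWindowAttributes attrs;
  cairo_surface_t *screen, *copy;
  cairo_t *cr;

  XGetWindowAttributes (dpy, root, &attrs);

  /* wrap the root window in a cairo surface ... */
  screen = cairo_xlib_surface_create (dpy, root, attrs.visual,
                                      attrs.width, attrs.height);
  /* ... and copy its contents into a plain image surface */
  copy = cairo_image_surface_create (CAIRO_FORMAT_RGB24,
                                     attrs.width, attrs.height);
  cr = cairo_create (copy);
  cairo_set_source_surface (cr, screen, 0, 0);
  cairo_paint (cr);
  cairo_destroy (cr);

  cairo_surface_write_to_png (copy, "frame.png");

  cairo_surface_destroy (copy);
  cairo_surface_destroy (screen);
  XCloseDisplay (dpy);
  return 0;
}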

Next I realised that I have learned a lot about coding in the last 4 years. My code looked really ugly back then. So I made it stop using bad things like nested main loops (do NOT ever use gtk_dialog_run()), made it follow conventions (like the GIO async function handling) and refactored it to have a sane structure. While doing that I easily got rid of all the bugs people complained about. That was fun.
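For reference, the GIO async convention mentioned above is the _async()/_finish() pair with a callback, so there is no nested main loop anywhere. A minimal sketch of the pattern using g_file_read_async() (not Byzanz code, just the shape of it):

#include <gio/gio.h>

static void
read_ready_cb (GObject *source, GAsyncResult *result, gpointer user_data)
{
  GError *error = NULL;
  GFileInputStream *stream;

  /* collect the result of the operation started below */
  stream = g_file_read_finish (G_FILE (source), result, &error);
  if (stream == NULL)
    {
      g_printerr ("reading failed: %s\n", error->message);
      g_error_free (error);
      return;
    }
  /* ... use the stream ... */
  g_object_unref (stream);
}

static void
start_read (GFile *file)
{
  /* kick off the operation and return to the main loop immediately */
  g_file_read_async (file, G_PRIORITY_DEFAULT, NULL /* cancellable */,
                     read_ready_cb, NULL);
}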

Then I made sure to document the goals that guided my design of Byzanz. From the README:

  • purpose

    Byzanz records animations for presentation in a web browser. If something doesn’t fit this goal, it should not be part of Byzanz.
  • correctness

    When Byzanz provides a feature, it does this correctly. In particular, it does not crash or corrupt data.
  • simplicity

    The user interface and programming code are simple and easy to understand and don’t contain any unnecessary features.

    Byzanz does not attempt to be smarter than you are.
  • unobtrusiveness

    Byzanz does not interfere with the task you are recording, neither by keeping a large settings window around nor by consuming all your CPU during a recording.

Those goals are a really useful thing, because they ensured I didn’t add features just because I could. For example, I’d really like to add visual cues for key presses or mouse clicks. But I couldn’t find a way to make this work in a simple and unobtrusive way. And making people fiddle with settings before a recording to enable or disable the feature is bad.

But I added a feature: Byzanz can now not only record GIF images, but also Theora or Flash video. The web has changed in the last 5 years and supports video formats now, so it’s only following Byzanz’s design goals to add these formats. Fwiw, I only added Flash video because it’s the only lossless format. So if you wanna do post-processing in Pitivi (like adding text bubbles), you wanna use the Flash format.

I also updated the UI: It asks for the filename before starting the recording, so it can save animations to your FTP while you record. And I added a bunch of niceties like remembering the last filename. Repeating a recording that doesn’t look quite right is 3 clicks: Click record button in panel, return (to select the same file), return (to confirm overwriting). Nifty that.

So, if you have a jhbuild: Get the shiny new Byzanz 0.2.0.

Syndicated 2009-08-30 21:54:10 from Swfblag

semantic desktop

I named this post “Tracker” first as I started writing from that perspective, but the problems I’m about to talk about are more related to what is called the “semantic desktop” and not specific to Tracker, which is just the GNOME implementation of that idea.
This post is a collection of my thoughts on this whole topic. What I originally wanted to do was improve Epiphany’s history handling. Epiphany still deletes your history after 10 days for performance reasons. When people suggested Tracker, I started investigating it, both for this purpose and in general.

How did this all start?

It gained traction when people realized that a lot of places on the desktop refer to the same things, but they all do it incompatibly. (me, me, me, me, me and I might be in your IRC, mail program and feed reader, too.) So they set out to change it. Unfortunately, such a change would have required changes to almost all applications. And that is hard. An easier approach is to just index all files and collect the information from them without touching the applications. And thus, Tracker and Beagle were born and competed on doing just that.
However, indexing has lots of problems. Not only do you need to support all those ever-changing file formats, you also need to do the indexing. And that takes lots of CPU and IO and is duplicated work and storage. So the idea was born to instead write plugins that grab the information from the applications while they are running.
But still, people weren’t convinced, as the only thing they got from this is search tools, even if those update automatically. And their data is still duplicated.

What’s a semantic desktop anyway?

Well, it’s actually quite simple. It’s just a bunch of statements in the form <subject> <predicate> <object> (called triples), like “Evolution sends emails”. Those statements come complete with a huge spec and lots of buzzwords, but it’s always just about <subject> <predicate> <object>.
Unfortunately, statements don’t help you a whole lot, there’s a huge difference between “Evolution sends emails” and “I send emails”. You need a dictionary (called ontology). The one used by Tracker is the Nepomuk ontology.
And when you have lots of triples stored according to your ontologies, then you can query them (using SPARQL). See Philip’s posts (1, 2) for an intro.
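To make that concrete, here is a tiny sketch. The Nepomuk-style terms (nco:PersonContact, nco:fullname) are meant as illustration only; check the ontology spec for the exact predicates.

# A contact stored as triples (Turtle-style syntax):
#   <urn:contact:kelly>  a             nco:PersonContact ;
#                        nco:fullname  "Kelly Hildebrand" .
#
# And a SPARQL query asking for everything with that name:
SELECT ?contact
WHERE {
  ?contact a nco:PersonContact ;
           nco:fullname "Kelly Hildebrand" .
}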

So why is that awesome?

If all your data is stored this way, you can easily access information from other applications without having to parse their formats. And you can easily talk to the other applications about that data. So you can have a button in evolution for IM or one in empathy to send emails. Implementing something like Wave should be kinda trivial.
And of course, you get an awesome search.

No downsides?

Of course there are downsides. For a start, one has to agree on the ontologies: How should all the data be represented? (Do we use a model like OOXML, ODF or HTML for storing documents?) Then there is also the question of security. (Should all apps be able to get all passwords? Or should everyone be able to delete all data?) It’s definitely not easy to get right.

How does Tracker help?

Tracker tries to solve 2 problems: it tries to supply a storage and query daemon for all the data (called a triple store) and it tries to provide the infrastructure for indexing files. The storage backend makes sense. Its architecture is sound, it’s fast and you can send useful queries its way. It has a crew of coders developing it that know their stuff. So yes, it’s the thing you want to use. Unless you don’t buy into the semantic desktop hype.

What about the indexing?

Well, the whole idea of indexing the files on my computer is problematic. The biggest problem I have is that the Tracker people lack the expertise to know what data to index and how. It doesn’t help a whole lot if Tracker parses all JPEG files in the Pictures/ folder when the real data is stored in F-Spot. It doesn’t help a whole lot when you have Empathy and Evolution plugins that both have a contact named Kelly Hildebrand, but you don’t know if they’re talking about the same person. You just end up with a bunch of unrelated data.

There was something about hype?

Yeah, the semantic desktop has been an ongoing hype for a while without showing great results. Google Desktop is the best example; Beagle was supposed to be awesome but hasn’t had a release for a while, let alone Dashboard; and it hasn’t caught on in GNOME either, even though we have been talking about it for more than 3 years.
But then, Nokia still builds on Tracker for Harmattan, and the Zeitgeist team tries to collect data from multiple applications and make use of it in innovative ways. People are definitely still trying. But it’s not clear to me that anyone has figured out a way to make use of it yet.

Now, do I want to use it in my application or not?

Tracker is not up to the quality standards people are used to from GNOME software. It’s an exciting and rapidly changing code base with lots of downright idiotic behaviors - like crashing when it accidentally takes more than 80MB of memory while parsing a large file - and unsolved problems. Some parts don’t compile, the API is practically undocumented and the dependency list is not small (at least if you wanna hack on it). It also ships a tool to delete your database. Which is nice for debugging, but somewhat like shipping a tool that does rm -rf ~. In summary, I feel reminded of the GStreamer 0.7 or Gtk 1.3 days: products with solid foundations and a potentially bright future ahead, but not there yet. So it’s at best beta quality. I’d call it alpha.
There is an active development team, but that team is focused on the next Maemo release and not on desktop integration. This is quite important, because it means that the development focus will likely only be on new applications for the Maemo platform and not on porting old applications (in particular GNOME ones) to use Tracker. And that in turn means there will not be deeper desktop integration. Unless someone comes up and works on it.

So I don’t want to use Tracker?

The idea of the semantic desktop has great potential. If every application makes its data available for every other application in a common data store that everybody agrees on, you can get very nice integration of that data. But that requires that it’s not treated as an add-on that crawls the desktop itself, but that applications start using Tracker as their exclusive primary data store. Until EDS is just a compatibility frontend for Tracker, it’s not there yet.
So if you use Tracker, you will not have to port your application to use it in the future, when GNOME requires it. You also get rid of the Save button in your application and gain automatic backup, crash recovery and full text search. But if you don’t use Tracker, you don’t save your data on unfinished software, and you don’t have to rip it out when Nokia (or whoever) figures out that Tracker is not the future.

Conclusion

I have no idea if Tracker or the semantic desktop is the right idea. I don’t even know if it is the right idea for a new Epiphany history backend. It’s probably roughly the same amount of work I have to do in both cases. But I’m worried about its (self)perception as an add-on instead of as an integral part of every application.

Syndicated 2009-07-23 19:34:55 from Swfblag

treeview tips

While doing my treeview-related optimizations, I encountered quite a few things that help improve the performance of tree views. (Performance here means both CPU usage and responsiveness of the application.) So in case you’re using tree views or entry completions and have issues with their performance, these tips might help. I often found them to be noticeable for tree views with more than 100 rows, though they were usually measurable before that. With 1000 rows or more they do help a lot.

Don’t use GtkTreeModelSort

While GtkTreeModelSort is a very convenient way to implement sorting for small amounts of data, it has quite a bit of overhead, both with handling inserts and deletions. If you’re using a custom model, try implementing GtkTreeSortable yourself. When doing so, you can even use your own sort functions to make sorting really fast. If you are using GtkTreeModelFilter, you should hope this bug gets fixed.

Use custom sort functions whenever possible

As sorting a large corpus of data calls the sort function at least O(N*log(N)) times, it’s a very performance-critical function. The default sort functions operate on the GValues and use gtk_tree_model_get(), which copies the values for every call. For strings, it even uses g_utf8_collate(). This is very bad for sort performance. So especially when sorting strings, it’s vital to have the collated strings available in your model and avoid using gtk_tree_model_get() to get good performance with thousands of rows.
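A sketch of what that can look like: keep a precomputed collate key (from g_utf8_collate_key()) next to the display string and compare only the keys in the sort function. The column layout here is made up, and with a fully custom model you would hand the keys out directly instead of copying them through gtk_tree_model_get().

#include <string.h>
#include <gtk/gtk.h>

enum {
  COL_NAME,      /* G_TYPE_STRING: the string the user sees */
  COL_NAME_KEY,  /* G_TYPE_STRING: g_utf8_collate_key (name, -1) */
  N_COLUMNS
};

static gint
compare_by_collate_key (GtkTreeModel *model, GtkTreeIter *a, GtkTreeIter *b,
                        gpointer user_data)
{
  gchar *key_a, *key_b;
  gint result;

  /* strcmp on precomputed collate keys is much cheaper than calling
   * g_utf8_collate() on the display strings */
  gtk_tree_model_get (model, a, COL_NAME_KEY, &key_a, -1);
  gtk_tree_model_get (model, b, COL_NAME_KEY, &key_b, -1);
  result = strcmp (key_a, key_b);
  g_free (key_a);
  g_free (key_b);
  return result;
}

static void
setup_sorting (GtkListStore *store)
{
  gtk_tree_sortable_set_sort_func (GTK_TREE_SORTABLE (store), COL_NAME,
                                   compare_by_collate_key, NULL, NULL);
  gtk_tree_sortable_set_sort_column_id (GTK_TREE_SORTABLE (store),
                                        COL_NAME, GTK_SORT_ASCENDING);
}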

Do not use gtk_tree_view_column_set_cell_data_func()

Unless you know exactly what you are doing, use gtk_tree_view_column_set_attributes(). Cell data functions have to fulfill some assumptions (that probably aren’t documented) and are very performance-sensitive. The implicit assumption is that they set identical properties on the cell renderer for the same iter - until the row is changed with gtk_tree_model_row_changed(). I’ve seen quite a few places with code like this:

/* FIXME: without this the tree view looks wrong. Why? */
gtk_widget_queue_resize (tree_view);

The performance problem is that this function is called at least once for every row when figuring out the size of the model, and again whenever a row needs to be resized or rendered. So better don’t do anything that takes longer than 0.5ms. Certainly don’t load icons here or anything like this…
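For reference, the attribute-based setup looks like this. It is a minimal sketch; COL_NAME and COL_WEIGHT are hypothetical columns of the model.

#include <gtk/gtk.h>

enum { COL_NAME, COL_WEIGHT };  /* hypothetical model columns */

static void
add_name_column (GtkTreeView *tree_view)
{
  GtkCellRenderer *renderer = gtk_cell_renderer_text_new ();
  GtkTreeViewColumn *column = gtk_tree_view_column_new ();

  gtk_tree_view_column_pack_start (column, renderer, TRUE);
  /* map model columns straight to renderer properties instead of
   * computing them in a cell data function */
  gtk_tree_view_column_set_attributes (column, renderer,
                                       "text", COL_NAME,
                                       "weight", COL_WEIGHT,
                                       NULL);
  gtk_tree_view_append_column (tree_view, column);
}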

Don’t do fancy stuff in your tree model’s get_value function

This applies when you write your own tree model. The same restrictions as for the cell data functions apply here. Getting this wrong causes the same problems I described in the last paragraph, as this function is even more low-level; it gets called O(N*log(N)) times when sorting, for example. Luckily there’s a simple way around it: cache the values in your model, that’s what it’s supposed to do. And don’t forget to call gtk_tree_model_row_changed() when a value changes.

Batch your additions

This mostly applies when writing your own tree model or when sorting. If you want to add lots of rows, it is best to first add all the rows, then make sure all the values are correct, then resort the model and only then emit the “row-inserted” signal (gtk_tree_model_row_inserted()) for all the rows. When not writing a custom model, use gtk_list_store_insert_with_values() and gtk_tree_store_insert_with_values(). These functions do all of this already.
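For the common list store case that boils down to one call per row, and the row only gets announced to views once all its values are set. A sketch with made-up columns and data:

#include <gtk/gtk.h>

static GtkListStore *
create_store (const gchar **names, guint n_names)
{
  GtkListStore *store = gtk_list_store_new (2, G_TYPE_STRING, G_TYPE_INT);
  guint i;

  /* unlike append + set, this emits row-inserted only once per row,
   * after all values are in place */
  for (i = 0; i < n_names; i++)
    gtk_list_store_insert_with_values (store, NULL, -1,
                                       0, names[i],
                                       1, (gint) i,
                                       -1);

  return store;
}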

Fix sizes of your cell renderers if possible

This one is particularly useful when using tree views with only one column. If you know the width and height in advance, you can use gtk_cell_renderer_set_fixed_size() to fix the size of the cell renderer in advance. This will cause the tree view to not call the get_size function of the cell renderer for every row. And this will make text columns in particular a lot quicker, because there’s no need to lay out the text to compute its size anymore.
A good example for where this is really useful is GtkEntryCompletion when displaying text. As the width of the entry completion is known in advance (as big as the entry you are completing on) and the height of a cell displaying text is always the same, you can use this code on the cell renderer:

/* the popup is as wide as the entry anyway, so the requested width doesn't matter */
gtk_cell_renderer_set_fixed_size (text_renderer, 1, -1);
/* a single row of text, so the height can be computed from the font */
gtk_cell_renderer_text_set_fixed_height_from_font (text_renderer, 1);

Now no get_size functions will be called at all and the completion popup will still look the same.

And with these tips, there should be no reason to not stuff millions of rows into a tree view and be happy. :)

Syndicated 2009-07-10 17:35:14 from Swfblag

Neat hacks

I’ve been doing optimization tasks recently. The first one is covered on the gtk mailing list and bugzilla. It involves a filechooser, many thousands of files and deleting a lot of code. But I’m not gonna talk about it now.

What I’m gonna talk about started with a 37MB Firefox history from my Windows days and Epiphany. Epiphany deletes all history after 10 days, because according to its developers the location bar autocomplete becomes so slow it’s unusable. As I had configured the history on Windows to never delete anything, I had a large corpus to test with. So I converted the 100,000 entries into Epiphany’s format and started the application. That took 2 minutes. Then I started entering text. When the autocomplete should have popped up, the application started taking 100% CPU. After 5 minutes I killed it. Clearly, deleting all old history entries is a good thing in its current state.
So I set out to fix it. I wrote a multithreaded incremental sqlite history backend that populates the history on demand. I had response times of 2-20ms in my command-line tests, which is clearly acceptable. However, whenever I tried to use it in a GtkEntryCompletion, the app started “stuttering”: keys I pressed didn’t immediately appear on screen, the entry completion lagged behind. And all that without taking a lot of CPU or loading stuff from disk or anything noticeable in sysprof. It looked exactly like on this 6MB gif - when you let it finish loading at least. And that’s where the neatest of all hacks came in.

That hack is documented in bug 588139 and is what you see scrolling past in the terminal in the background. It tells you which callbacks in the gtk main loop take too long. So if you have apps that seem to stutter, you can download the patch, apply it to glib, LD_PRELOAD the lib you just built and you’ll get prints of the callbacks that take too long. I was amazed at how useful that small hack is. Because it told me immediately what the problem was. And it was something I had never thought would be an issue: redraws of the tree view.
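Using it boils down to something like this (a sketch; the exact path depends on where you built the patched glib, and the app name is just an example):

LD_PRELOAD=/path/to/patched/libglib-2.0.so epiphany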

And that gets me back to the beginning and my filechooser work, because stuff there was sometimes still a bit laggy, too. So I ran it with this tool, and the same functions were showing up. It looks like a good idea to look into tree view redraws and why they can take half a second.

Syndicated 2009-07-09 19:36:03 from Swfblag

vision

It’s early in the morning, I get up. During breakfast I check the happenings on the web on my desktop - my laptop broke again. People blog about the new Gnumeric release that just hit the repositories. I click the link and it starts. I play around in it until I need to leave.
On the commute I continue surfing the web on my mobile. I like the “resume other session” feature. Turns out there’s some really cute features in the new Gnumeric that I really need to try.
Fast forward, I’m at the university. Still some time left before the meeting, so I can play. I head to the computer facilities in the lab, log in and start Gnumeric. Of course it starts up just as I left it at home: same settings, same version, same document.
20 minutes later, I’m in the meeting. As my laptop is broken, I need to use a different computer for my slides. I grab a random one and do the usual net login. It looks just like at home. 3 clicks, and the right slides are up.

This is my vision of how I will be using my computer in the future. (The previous paragraph sounds awkward, but describing ordinary things always does, doesn’t it?) So what are the key ingredients here?

The key ingredient is ubiquity. My stuff is everywhere. It’s of course on my desktop computer and my laptop. But it’s also on every other computer in the world, including Bill Gates’ cellphone. I just need to log in. It will run on at least as many devices as current web apps like GMail do today. And not only does GMail run everywhere, but my GMail runs everywhere: It has all my emails and settings available all the time. I want a desktop that is as present as GMail.

So here’s a list of things that need to change in GNOME to give it ubiquity:

  • I need my settings everywhere

    Currently it is a hard thing to make sure one computer has the same panel layout as another one. I bet at least 90% of the people configure their panels manually and download their background image again when they do a new install. So roughly every 6 months. Most people don’t even know that copying the home directory from the old setup solves these issues. But I want more than that. Copying the home directory is a one-time operation but stuff should be continually synchronized. In GNOME, we could solve most of this already by making gconf mirror the settings to a web location, say gconf.gnome.org and sync whenever we are connected.

  • I need my applications everywhere

    I guess you know what a pain it is when you’ve set up a new computer and all the packages are missing. Gnumeric isn’t installed, half the files miss thumbnailers, bash tells you it doesn’t know all those often-used commands and no configure script finds all pkg-config files. Why doesn’t the computer just install the files when it needs them? Heck, you could probably implement it right now by mounting /usr with NFS from a server that has all packages installed. Say x86.jaunty.ubuntu.com? Add a huge 15GB offline cache on my hard disk using some sort of unionfs. Instantly all the apps are installed and run without me manually installing them. As a bonus, you get rid of nagging security update dialogs.

  • I really need my applications everywhere

    Have you ever noticed that all our GNOME applications only work on a desktop computer? GNOME Mobile has been busy duplicating all our applications for mobile devices: different browser, different file manager, different music player. And of course those mobile applications use other methods to store settings. So even if the settings were synced perfectly, it wouldn’t help against having to configure again. Instead of duplicating the applications, could we please just adapt the user interfaces?

When looking at this list, and comparing it to the web, it’s obvious to me why people prefer the web as an application delivery platform, even though the GNOME APIs are way nicer. The web gets all of those points right:

  • My settings are stored on some server, so they are always available.
  • Applications are installed on demand - without even asking the users. The concept is called “opening a web page”.
  • Device ubiquity is the only thing the web doesn’t get perfect by default. But a) the browser developers are hard at work ensuring that every web site works fine on all devices, no matter the input method or the screen size and b) complex sites can and do ship user interfaces adapted to different devices.

The web has its issues, too - like offline availability and performance - but it gets these steps very right.

So how do we get there? I don’t think it’s an important question. It’s not even an interesting question to me. The question that is interesting to me is: Do we as GNOME want to go there? Or do we keep trying to work on our single-user oriented desktop platform?

Syndicated 2009-05-12 18:47:11 from Swfblag

decision-making

I’ve recently been wondering about decision-making, namely how decisions are made. There were 5 questions in particular that I was thinking about, where I could not quite understand why things worked out the way they did and was unhappy with the given reasons. The questions were:

  • Why does GNOME switch to git?
  • Should the GNOME browser use Webkit or Mozilla?
  • Why has GNOME not had any significant progress in recent years but only added polish?
  • Why did Apple decide to base their browser on KHTML and not on Mozilla?
  • What is Google Chrome trying to achieve - especially on Linux?

If you have answers to any of these questions, do not hesitate to tell me, but make sure they are good answers, not the standard ones.

Because the first thing I realized was that all of these questions have been answered, usually in a lot of technical detail providing very good reasons for why a particular decision was taken. But when reading about the GNOME git migration it struck me: There were equally good technical reasons to switch to almost any other version control system. And I thought I knew the reasons for the git switch, but they were not listed. So are good technical reasons completely worthless? It turns out that for the other questions, there are equally good technical reasons for the other answer: There were good reasons for Apple to base their browser on Mozilla or buy Opera, for GNOME to be more innovative or for Google to not develop a browser or invest in Mozilla, just none of them are usually listed. Plus, I knew that if different people had decided on the GNOME vcs switch, we would not be using git now.

This led me to think that decisions are personal, not technical. People decide what they prefer and then use or make up good technical reasons for why that decision was taken. But this in turn means technical reasons are only a justification for decisions. Or even worse: they hide the real reasons from others. And that in turn means: don’t look at the given reasons, look at the people.

But apparently people will not tell you the real reasons. Why not? What were my reasons for preferring git as a GNOME vcs? I definitely did not do a thorough investigation of the options, I didn’t even really use anything but git. I just liked the way git worked. And there were lots of other people using it, so I could easily ask them questions. So in the end I prefer git because I used it first and others use it. Objectively, those are not very convincing reasons. But I believe those are the important ones for other people as well. I think GNOME switched to git because more hackers, in particular the “important” ones, wanted git. Why did it not happen earlier? I think that’s because a) they wanted to hack and not sysadmin their vcs of choice and b) they needed a good reason for why git is better than bzr to not get into arguments, and Behdad’s poll gave them one. So there, git won because more people want it. Why do more people want it? Probably because it’s the vcs they used first and there was no reason to move away. The only technical reason in here was “good enough”.

I don’t think technical reasons are completely unimportant. They are necessary to make an option reasonable. But once an option is a viable one, the decision process is likely not driven by the technical reasons anymore. After that it gets personal. With that in mind, things like Enron or the current bailout don’t seem that unbelievable anymore. And a lot of decisions suddenly make a lot more sense.

And because I asked the other questions above, let me give you the best answers I currently have for them.

  • First of all, GNOME should use Webkit and not Mozilla, because the GNOME people doing the actual work want to use that one. There’s a lot of people that prefer Mozilla, but they don’t do any work, so they don’t - and shouldn’t - matter. This was the easy question.
  • GNOME does not have any innovation because GNOME people - both hackers and users - don’t want any. Users want to use the same thing as always and most GNOME hackers these days are paid to ensure a working desktop. And they’re not students anymore so they prefer a paycheck to the excitement of breaking things.
  • For Safari I don’t have a satisfactory answer. I’m sure though it wasn’t because it was a “modern and standards compliant web browser”. Otherwise they would have just contributed to KHTML. Also, the Webkit team includes quite a few former Mozilla developers, who should feel more confident in the Mozilla codebase. So I guess again, this is personal.
  • For Google Chrome it’s the same: I don’t have a real clue. Google claims it wants to advance the state of the art in browsers, but this sounds backwards. Chrome would need a nontrivial market share to make it possible for their webapps to target the features of Chrome. Maybe it’s some people inside Google that wrote a browser as their 20% project. Maybe Google execs want more influence on the direction the web is going. I don’t know.

Syndicated 2009-04-01 10:41:46 from Swfblag

browsing in GNOME

This is a loose follow up to GNOME and the cloud. You might want to read that post first.

There are 2 ways to provide access to the web for GNOME. One is to integrate into an existing framework. The other is to write our own. I think I can safely ignore the “write our own browser” option without anyone complaining.
I know 2 Free projects providing a web browsing framework that we could try to integrate into GNOME: Webkit and Mozilla. I’ve recently talked to various people trying to figure out what the best framework to use would be. As it turns out, there is no simple answer. In fact, I didn’t even succeed in coming to an opinion for myself, so I’ll discuss the pros and cons of both options and let you form your own opinion.

current browser usage in GNOME

GNOME’s web browsing applications currently use a Mozilla bridge (epiphany, yelp, devhelp) or the custom gtkhtml library (evolution). However, none of these are actively developed any more, because roughly a year ago we started to switch to Webkit, trying to build the webkit-gtk library. This switch has not been completed. It was originally scheduled to be delivered with GNOME 2.26, but has been postponed to GNOME 2.28.
It should also be noted that vendors shipping GNOME mostly ignore the GNOME browser epiphany and ship Firefox instead, which is based on Mozilla. There is no indication that this will change soon.

Mozilla and Webkit in a nutshell

Mozilla is a foundation trying to Free the web. Most of the work done by the foundation’s members is focused on the Firefox web browser, which ensures that it is very cross-platform by shipping almost all functionality required internally, including their own UI toolkit. A smaller side-project is XULRunner, which aims to provide the framework Firefox uses for integrating into other applications. It is not related to GNOME in any way.

Webkit is a fork of KDE’s KHTML browser engine by Apple for creating the Safari browser and adding web functionality to their desktop. It is also used by Google Chrome and various other projects (Nokia S60 browser, Adobe AIR, Qt).
It only contains a web browsing component and a Javascript interpreter and requires the backends to provide integration into their platform. Among other things a rendering library, a network layer and an API to successfully write an application are required. The webkit-gtk project aims to provide this for GNOME.

development drivers

Mozilla’s development is driven in large part by employees of the Mozilla Corporation, which was founded and is owned by the Mozilla Foundation to develop the Firefox browser as its sole purpose.
Webkit’s core has been developed in large part by Apple after forking from KHTML. Other browsers built on it have usually been forks of the Webkit codebase. Recently this changed when the Google Chrome and Nokia Qt teams started to merge their code back into the Webkit codebase and to base their work on top of Webkit directly. I do expect to see more formalization of the governance model during the next year, though I have not seen any indication of this yet.

community

Both Mozilla and Webkit have an active development community that is roughly the size of GNOME (that’s estimated, I could be off by a factor of 2-5). A lot of them are open source developers. Members of the different projects know each other well; they hang out on the same IRC channels and mailing lists. A lot of the current Webkit developers have been former Mozilla contributors.
Mozilla has also managed to gain respect in the web development community, both by sponsoring the development of web tools and by creating channels of communication with them. Add-on development is a strong focus, too: Mozilla encourages the development of sophisticated customizations to drive the browser. Last but not least, they are reaching out to end users directly. As a result, Mozilla and in particular Firefox have become strong brands.
In Webkit country, outreach to web developers and end users is done by the browser vendors themselves. Google and Apple are well-known brands and market their browsers through traditional channels. As such, the name Webkit itself is relatively unknown and attracts relatively few external contributors from the web and add-on development communities. However, Webkit encourages porting the core for different purposes and as such attracts a lot of browser developers, including the webkit-gtk port.

compatibility with the Web

Mozilla probably has 20-30% of the world-wide market share, Webkit has 5-15%. Both numbers are big enough to ensure that websites test for the browsers, so that in general, websites should work.
However, no website will check for GNOME, so all features that are provided by us will not be tested. This means that all features we maintain exclusively for GNOME can cause issues if they don’t work exactly the same way as in the other backends. And from Swfdec development I know that this is incredibly hard to achieve, as it requires not only an identical featureset but also identical performance characteristics. This is less of a problem in Mozilla, where the GNOME-specific code is pretty small, but might become a big problem in Webkit, where a lot of code is solely written for the GNOME backend.

featureset

First a note: This point is subjective, you are free to have a differing opinion. If you do, please explain yourself in the comments.

Both browsers implement a similar set of features. I don’t know of a single (non-testcase) website that provides reduced functionality in either of the browsers. Still, the codebases are very different. Mozilla is a pretty old codebase that grew into what it is today and while there were some attempts to clean it up at times, there is still a lot of cruft around and it is easily the bigger of the two. I know lots of GNOME hackers that use Mozilla code as an example of scary code. On the other hand, Mozilla makes up for this by providing a lot of small niceties that Webkit does not provide and that make for a smoother experience in general. An example is kerning between tags: AVAVA (in Mozilla, the A and V letters will overlap; in Webkit they won’t). Another example I know of is windowless browser plugins.
Webkit in general is simpler, more compact and easier to understand code. I attribute this to KHTML being newer and focussing on maintainability and clarity. This shows when Webkit is able to deliver excellent performance. However, this quality does not hold everywhere. Parts that are not maintained by the core team are often very ad hoc, lack design and bitrot easily. In short, everything out of the ordinary can be very ugly. Also, I am not sure if the old goals are still held in the same high esteem by the current maintainers. Code quality doesn’t sell after all.

code handling

Mozilla is written in C++ and uses a heavily customized version of autotools as the build system. The source code has recently moved to Mercurial; older branches still use CVS. Webkit uses Subversion with an official mirror in git. The core is written in C++. Each port adds its own build system and API on top; Apple Webkit for example uses Objective C and Xcode, webkit-gtk uses GObject-style C and autotools.
Both Webkit and Mozilla use a review process where all features and bug fixes have an attached bug in the respective bugzillas (Webkit and Mozilla) and all patches need to be reviewed by at least one reviewer. Webkit uses buildbots to check that the various ports and branches still compile. It pales in comparison to Mozilla’s extensive QA that also includes performance regression testing and important policies that govern checkins.
Releases in Mozilla are handled centrally; all Firefox ports are released with a common version number at the same time. This is built on top of a core branch, which other projects built on Mozilla (Camino, Epiphany) use. This allows an easy matching of browser version to feature availability. Webkit releases are not coordinated between the different ports, which maintain their own branches, so Safari, Chrome, webkit-gtk and Webkit master may all support different features.

initial work required to integrate

The next two paragraphs again are heavily subjective. They assume that GNOME wanted to include a library for integrating the web.

For Webkit, work is being done to integrate it into GNOME. The problem is that a lot of base libraries are required that do not exist today. So not only does webkit-gtk still need to gain basic features, it also requires testing all the new code extensively for conformance to standards, stability, security and performance. None of this is being done today.
Were such a library to be based on Mozilla, much less code would need to be written. However, a lot of integration work would need to be done, so that GNOME had proper access to the library functions (like for manipulating the DOM, http access or the Javascript interpreter). A lot more work would be needed to convince the Mozilla developers to support such a stable API, as currently Mozilla does not guarantee stability on any internal APIs and even add-ons have to be adapted for every major Firefox release.

Regardless of which project were to be chosen, my expectation would be that if we were to start now, it would take 5 experienced GNOME developers roughly a year to get this work to a point where it would hold up against today’s requirements of the web. For Webkit, this would mostly require writing source code. For Mozilla, both writing code and evangelizing inside their community would be necessary.

predicted maintenance required once project is running

If the project in the above paragraph were done today, continuous maintenance of this library would be required. Not only for bugfixing and adding features, but also to keep up with the constant change in the browser cores, which refactor for performance and security or add new features for new web standards. I’d expect the current GNOME browser team to be able to do this.
However, we would need to build up respect in the browser community, both for ensuring our work was still necessary and for driving change in the browser’s core and the standards community to help our goals. Currently no GNOME developer that I know participates in any W3C working group. This would likely require people working on this project full-time, as respect is usually concentrated towards single people. Also, contributing to the core requires knowledge of not only the large code base, but also keeping up with the multiple different ports. I don’t see any developer of GNOME doing this currently.

view towards GNOME

Mozilla and Webkit developers pretty much share their opinion about GNOME: It doesn’t matter. There are two reasons for this. The first reason is that it doesn’t have a significant enough market share for them to be interesting. Even though they target the Linux desktop, none of the browsers target GNOME in particular, as they try to ship a browser that works everywhere on Linux, like KDE or CDE.
The second and more interesting reason is that the browser development communities think desktops are going to die sooner or later and the browser will be the only application that you run on your computer, so developing “desktop stuff” is not in any way interesting to them. Just like the GNOME community treats the web, the web developers treat the desktop.

view towards Free software

Mozilla has this in their manifesto:

Free and open source software promotes the development of the Internet as a public resource.

This is shown by the Mozilla code being tri-licensed under GPL 2 or later, LGPL 2.1 or later or MPL.
Webkit is de-facto licensed under the LGPL 2 or later, as that is the license of the original KHTML code. New contributions are encouraged under the BSD license, but code licensed as LGPL 2.1 is accepted, too. All Apple and Google contributions are BSD-licensed. However, while the companies follow the letter of those licenses, they don’t follow the spirit. Both Android phones and the iPhone contain a Webkit browser that is not updatable by users.

conclusion

This is the short version of things I evaluated. In short, Mozilla is the adult browser engine with clearly defined rules and goals, while Webkit is the teenager that is still trying to find its place in the browser world. I don’t know which one would benefit GNOME more, so feel free to tell me in the comments. Also, if you feel things in here are factually wrong, feel free to point them out there or privately.

Syndicated 2009-03-03 21:04:58 from Swfblag

read this - now

If you know what valgrind is, you want to make sure you read this blog post - now.

That is all.
Well, apart from the fact that I’d like to have Nicolas’ blog syndicated on Planet GNOME, but don’t dare to make an official request, as he has not a lot to do with GNOME - or does maintaining valgrind count?

Syndicated 2009-02-27 23:49:28 from Swfblag
