Older blog entries for rillian (starting at number 80)

recruiting

We're looking for someone to help out with Ghostscript integration in Free Software, on a part-time contract. Yes, that means paid work. Help sort, review and update patches from the distros into upstream, write a Firefox plugin, that sort of thing.

Please send interesting resumes to giles@ghostscript.com.

whacky medieval latin

raph and nymia, google suggests ojusdem might be eiusdem, the genitive singular of idem, "the same".

So inter omnes curvas eiusdem longitudinis might be "among all curves of the same length". But I know exactly enough Latin to be extremely dangerous with a dictionary. Caveat lector.

恭喜發財 (gong xi fa cai: "congratulations and prosperity", the traditional lunar new year greeting)

freetype, the comment about phasing out M4 was from atai, not me.

I don't actually mind M4. Its syntax is an odd marriage with sh, which made it hard to learn ("even more fun with quoting!"), but it's just a macro substitution language, and I'm not sure what a better alternative would be given the goal of outputting portable sh. You can't even count on defining subroutines in portable sh! (that's why configure scripts are so big)

No, as I've said before, the complexity comes from the fact that you're trying to write an expert system in a combination of sh, M4, and the code of the autotools themselves. Most of the knowledge is embedded in code, and in many different locations and formats. That makes it difficult, and brittle.

As far as replacement goes, I can see wrapping the old macros in a newer scripting language like perl or python, so that the original M4+sh still gets expanded and run in an external shell, but newer code could be written directly in a nicer language. That way you could port macros one at a time and not lose the vast store of knowledge accumulated in the GNU autotools and builds that use it.

autotools

freetype doesn't give any examples of what autoconf is used for that GNU make + pkg-config can't do, but when I said that I was thinking more about autotools as a whole, not just autoconf.

Re configure scripts in repositories, I guess I'd always considered "lack of portability" of configure scripts to be a bug; having autoconf generate a portable sh script is the whole point. The valid objection to checking them into a source repository is that they're not source.

As machine-generated code, it's ugly, irrelevant to review, and if developers are using different versions of autoconf et al. it can generate a huge amount of noise in the diffs. The only advantage I can see (besides working around buggy configure scripts, as mentioned) is that it removes some dependencies for people building straight out of the repository.

raph, your comments on GNU autotools are pretty much spot on, but I think your point number 3 is inaccurate. It's not that everyone uses the GNU toolchain; there are applications where other compilers are still interesting, Solaris is still alive as a vendor unix in the free software space, and as you point out, Apple has heavily modified the linker in MacOS X. So there are at least three common platforms that require special logic for building shared libraries, for example.

Rather, I would say that the reduced diversity as the vendor unices become less relevant brings the issue of dependency detection and configuration to the forefront. And those are precisely the areas where the autotools approach of a maximum-portability script that tests the local system configuration is weakest. The imake style, where the build tool knows what's installed, is much more efficient here.

In fact, if you already know sh (portable or not) and are willing to depend on pkg-config (as most gtk/gnome software now does) you can do almost as well with just GNU make + pkg-config. The text substitution operators of the former provide most of the convenience (as opposed to portability) features of automake, and the latter can do most of what autoconf is now used for. The rest of the configure script can just be implemented in the Makefile rules.
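As a rough sketch of what I mean (glib-2.0 here is just a stand-in for whatever your project actually depends on):

```make
# pkg-config supplies the compiler and linker flags that autoconf
# would otherwise probe the system for.
PKGS    := glib-2.0
CFLAGS  += $(shell pkg-config --cflags $(PKGS))
LDLIBS  += $(shell pkg-config --libs $(PKGS))

myapp: main.o util.o
	$(CC) $(CFLAGS) -o $@ $^ $(LDLIBS)

# even a configure-style dependency check is just another rule
check-deps:
	@pkg-config --exists $(PKGS) || \
	    { echo "missing dependency: $(PKGS)"; exit 1; }
```

The $(shell ...) substitution does at build time what a cached configure test would have done ahead of time, which is fine as long as pkg-config itself is a dependency you're willing to accept.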

Still, I'd also like to see one of the more modern build tools take off, especially something that would flatten the learning curve for these things. SCons seems the most promising of these, but it still has a ways to go before it becomes a compelling replacement for larger projects.

Recently, cinamod tacked on a rant about PDF being a non-free format. I'd like to hear more about that. Certainly, my views are probably coloured by working on Ghostscript, but I also think I care more about free formats than most people, and it's not really my perception that PDF is evil.

It's true that the format is controlled by Adobe and they don't have an open development process. On the other hand, there is good, freely available documentation of the format, and Adobe has generally behaved in a way consistent with being cognizant of its value as an interchange format with multiple vendor support. Both of these things contrast with the Microsoft Office document formats, the other example under discussion.

It is true that Adobe claims some patents on aspects of PDF. There is, however, a blanket grant for applications compliant with the spec.

It is also true that the PDF spec, especially in later versions, includes a number of non-free (or silly) formats by reference, like JBIG2 and JPEG 2000 (the latter is at least getting lower risk as time goes on). The latest release (PDF 1.6) even includes support for embedding the U3D format, which as near as we can tell hasn't been published in any form yet!

But, I don't actually see problems with the issues cinamod mentions directly. The LZW patent expired in the last jurisdiction a year ago. The Compression Labs patent on baseline JPEG (if that's what was being referred to) is generally considered invalid. The colourspace conversion issues are covered by the grant mentioned above.

Better, one advantage we have in Free and Open Source software is that we can choose the features we implement based on technical merit rather than the need to sell this year's upgrade to our software. This is, I think, where PDF really starts to look good. It's much easier to write a parser for than PostScript, which is/was the old standard, and the updated imaging model, compression, and portable rendering features make it a much better choice for everything we traditionally used PostScript (which was another Adobe-controlled standard) for. The only real drawback is that you can't generate simple files with printf() like you can with PostScript.

There's no compelling argument for us to be producing documents containing JBIG2 or JPEG 2000 images. The spec does contain support for LZW, but for the most part we continue to do what Ghostscript did until last year, and that's only support compressing with the free Deflate (zlib) filter.

So what exactly is non-free about it? It's easier to get the documentation and start implementing than with many "more" free formats, like baseline JPEG or TIFF fax compression. By producing files based on a sane subset we can avoid both the patent and technical excesses of the parent format. In fact, a number of such subsets are already defined and available for reference, like PDF/X or PDF/A.

Of course it's nice to have something controlled by an open community (like the Xiph multimedia codecs), but network effects are also very important. I think it's a good idea to just use PDF and make it our own.

redi, the disk does fill up that often. raph and I both use the same machine for email, and basically whenever there's a spike in the virus traffic the disk fills up.

Knowing this is the actual cause of the accounts going missing, I'll try a little harder to keep that from happening. As far as I knew it was something related to the disk corruption when the previous incarnation of the system died a fiery death last year. This explanation is much more reassuring. :)

Letting people make their own mistakes

rbultje complained that we at Xiph were being idiots trying to implement a new media framework badly, in reference to Arc's OggStream project. Please, give us a little more credit than that. We've never had any interest in implementing a media framework, and nowadays there's no reason to.

Xiph is an open source project, and people are free to contribute what they think is useful under our general umbrella. That doesn't mean the rest of us think just anything someone says they're doing 'for xiph' is a good idea, or will end up 'officially' recommended by the foundation per se. This is such a case. I think Arc has confused his personal interest in writing something with the community's need for it, that's all. And quite a number of people have told him so, but he's not one to be deterred by such. If you had talked to some of the calmer developers you might have gotten a different picture.

There is a need for a convenience library like vorbisfile that handles theora and probably all our other codecs, for someone who wants to use them but doesn't want to tie themselves to one of the big media frameworks. Arc's proposal grew out of that need. What most of us would rather see is something lightweight based on Conrad Parker's excellent liboggz, a libfishsight to go with his libfishsound.

Clearly we do have some kind of image problem, since it's become fashionable to talk about how lame Xiph is. But we are just a very small open source project trying to do something much harder than most people want to be bothered with. If you don't like how it's going, it's up to you to help fix it.

Ogg Chaining

rbultje, glad to hear the chaining bugs are finally getting fixed. It's a bit of a show stopper for me with totem. :)

As far as how to handle them in the ui, I can suggest three options. One is just to treat chain segments as independent clips. Each one gets its own playlist entry, and there's no need to upgrade the seek bar. This makes a lot of sense for things like saved streams and album-as-concatenated-files. There's an xmms patch for this.

Another is to treat them as subclips, and insert visual/jump boundaries in the seek bar, sort of the way older versions of iMovie worked. In totem's case this would mean developing a custom widget to replace the seek slider.

One could generalize the above two, where chained files get their own (sub) playlist, but a playlist (of any origin) can be treated as a whole in terms of the behavior of the seek slider, which would show the item boundaries and let you jump to any part of any item. I could see that being a nice feature for (shorter) playlists. You'd have some feedback about how many songs there were and where you were within the list without having to scroll through the playlist, as well as having a quick random access option.

The competing point of course is that Ogg chaining is sometimes just used for edits, and the divisions may be meaningless. In that case, it makes sense for a playback app to just ignore them. You may be able to make a guess about the appropriate behavior based on the associated metadata; if the title didn't change, or a segment has no metadata maybe it's just an edit. Likewise in DVD-Video, chapters could be treated as segments, but programs should always be like separate files.

Anyway, that's the direction I'd experiment with.

hating computers

So, advogato is back up-ish. Raph has moved it to the new machine we bought to replace the ghostscript.com host, which expired some time ago and survives now only as an artificial creature, dependent on life support.

There may be a few more glitches as we transition the machine's primary responsibilities, but hopefully things will stay up a little better now.

Sometimes I hate computers. I feel like I've spent more than half of the last 3 months fighting with broken servers. The hosts for both ghostscript.com and xiph.org imploded at about the same time, and getting replacements online was in both cases something of a nightmare. The lesson, at least for me, is that when you're trying to do things on the cheap, build the machine yourself and ship it; the kind of on-site support you need if it doesn't work costs more than the hardware, and you won't save anything by having someone build it at the remote end.

For Xiph.org we also ended up switching hosting providers. Our primary server is now with the very cool folks at the Oregon State University Open Source Lab. We were also inspired by the pain of the downtime and data loss to set up some redundancy, and in particular mirrors for the websites. If you'd like to help us out, email the xiph.org webmaster; we need both mid-bandwidth web hosts for the sites, and high-bandwidth mirrors for media content and release files.

Still, an end is hopefully in sight. At least if my home machine would also stop trashing its disk.

Ex Londonium

I finally received my Canadian immigration papers this past November, and officially became a resident on December 21st last year. After spending xmas with my partner's family in Kingston, we found a nice apartment in downtown Vancouver, then went back to London to pack up there. We came back ourselves at the end of February and the things we shipped finally arrived in June, so we're officially here. It's really nice to be back.

The reverse culture shock was interesting. When I first moved to Vancouver 10 years ago it was the biggest city I had ever lived in. After a while I got used to the scale and enjoyed taking advantage of all the things on offer. Then we went to London, one of the largest cities in the world. So when we came back, we were struck by how pleasant and friendly everyone was, how clean the streets were, but also how small it all was. Now Vancouver is just a place with only two cheese shops.

These things wear off though, and we're still happy with our decision to return. It was a great experience to be able to live in Europe for a few years, but London wasn't our first choice as a place to stay.

