Older blog entries for cdfrey (starting at number 58)

A little while ago, I wrote a summary on Linux sound. Since then, the Insane Coding blog posted a new summary of its own, which I'm linking for completeness.

The Canadian Conservatives Try Again...

    There's news today that the Conservatives are introducing new bills in parliament that would let police invade your privacy without a warrant.

    Michael Geist has some historical background on the bills, but it is still early. It looks like these bills are mostly copies of what the Liberals tried to pass in 2005.

    I'm sure there will be more news to come. Get ready to start writing letters to your member of parliament...

A Lean-Computing Curmudgeon's Thoughts On Linux Sound

    After much reading of the PulseAudio article on Wikipedia, the interviews linked there, the blog posts linked from those (2009/04/30), the main PulseAudio site, and the OSS4 threads on linux-kernel, I've come to the conclusion that PulseAudio has the potential to be a good thing (using less power on a laptop, for example), and also the potential to be a bloated pain if not handled properly. Its tendrils reach many places, as the developer himself admits, and it is hard to integrate into a distro without doing your homework.

    I'm partly willing to give PulseAudio the benefit of the doubt, though, if only for the power-saving potential. I find it disappointing that an entire daemon and library system has to be built on top of ALSA and OSS to achieve this, but since Linux has decided that mixing belongs in userspace, and that no floating point is allowed in the kernel (understandable), mixing has to be done somewhere, and PulseAudio looks like a good attempt by a sane developer saddled with the pre-existing sound mess that is Linux.

    The library situation looks interesting... if you just want to program against a library and forget the low level, use libao or libao2 (libao2 comes from the mplayer guys). Libao is cross-platform.

    Another library, called libsydney, is also intended to be cross-platform (Linux, Windows DirectSound, Mac, etc.). I haven't looked closely at it, but it is probably worth a look.

    See http://blogs.adobe.com/penguin.swf/linuxaudio.png for a graphic of other sound APIs and systems. As a programmer, I would aim to either program directly to OSS, which is portable across platforms, or use libao2. Maybe libsydney if it doesn't have too many dependencies. My needs as a sound programmer are nowhere near heavy-duty, so these options would work for me.
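
    As a rough illustration of how simple the library route can be, here is a minimal sketch of playing a tone through libao. The ao_* calls are the real libao API as best I know it, but the tone-generating loop and buffer sizes are made up for the example, so check the libao documentation before copying this anywhere serious.

        #include <ao/ao.h>
        #include <math.h>
        #include <string.h>

        int main(void)
        {
            int i;
            ao_sample_format format;
            ao_device *device;
            char buffer[44100 * 2 * 2];   /* one second, 16-bit stereo */

            ao_initialize();

            memset(&format, 0, sizeof(format));
            format.bits = 16;
            format.channels = 2;
            format.rate = 44100;
            format.byte_format = AO_FMT_LITTLE;

            /* let libao pick the default backend: alsa, oss, pulse, ... */
            device = ao_open_live(ao_default_driver_id(), &format, NULL);
            if (!device)
                return 1;

            /* fill the buffer with a 440 Hz sine wave, little-endian samples */
            for (i = 0; i < 44100; i++) {
                short s = (short)(0.5 * 32767.0 * sin(2 * M_PI * 440.0 * i / 44100.0));
                buffer[4*i]   = buffer[4*i+2] = s & 0xff;
                buffer[4*i+1] = buffer[4*i+3] = (s >> 8) & 0xff;
            }
            ao_play(device, buffer, sizeof(buffer));

            ao_close(device);
            ao_shutdown();
            return 0;
        }

    Something like "gcc -o tone tone.c -lao -lm" should build it. The point is that the application never has to care whether ALSA, OSS, or PulseAudio is sitting underneath.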

    As for PulseAudio's state of readiness, I think there is a definite reason why it ain't at version 1.0 yet! :-) But that's not a bad thing. For stable-loving users like myself, it is probably worth waiting a few distro release cycles until the bugs are worked out to a satisfactory degree. Newer PulseAudio releases will even stress ALSA drivers in new and interesting ways, so I expect there will be strain on the sound system for a good while yet, from the applications right on down to the kernel.

    If you don't need sound mixing, don't use PulseAudio yet. If you don't need power savings, don't use PulseAudio yet. If you use a laptop on battery, then PulseAudio may be very useful, but you may need a good chunk of memory to support it, and it will pay to stay as up to date as possible with kernel, distro, and PulseAudio.

    From a power standpoint, there are reports of PulseAudio not even showing up on powertop, even while playing music, which is a good thing. I don't know how mpg123 + ALSA or OSS would compare.

    There's a lot of flameage out there regarding ALSA and OSS3 and OSS4. It is easy to get caught up in it, and yes, I was caught up in it too... but my style of getting caught up in something usually involves wasting many hours on research and reading to get the facts, and at the end of all this I'm feeling less harsh toward everyone, even though my mpg123 + ALSA configuration sometimes uses 40% CPU on a P4. (grrr!) :-) I can see the history and the reasons why Linux sound has evolved the way it has, and while some things still look unfortunate, they are understandable, and people continue to work on improving the situation.

    I must say, though, that a piece of software going from "open" to closed, as OSS3 did, can cause much disruption in the Free Software community. Even KDE and Gnome were arguably split by licensing issues, though they also evolved in quite different technical directions. I think it would behoove the Free Software community to be more watchful of such situations and guard against such collateral damage. The side effects can last for decades.

badger, those pull and push settings are configurable. Usually when I do a git-fetch, it grabs everything. Check how your remotes are set up in .git/config, and edit to taste.

Note that pull and push are not two halves of the same coin. For that, use fetch and push. Pull is a mix of fetch and merge, and it doesn't make sense to merge using multiple branch targets all at once. In fact, that is impossible as far as I know. Whenever you do a merge, you are always merging some other branch into your current branch.
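
To make that concrete, here is roughly what I mean. The remote name and URL below are only the usual defaults, shown for illustration, so substitute your own:

    # .git/config -- a typical remote section; the "fetch" refspec is
    # what makes a plain "git fetch" grab every branch on the remote
    [remote "origin"]
            url = git://example.com/project.git
            fetch = +refs/heads/*:refs/remotes/origin/*

    # doing the two halves by hand...
    git fetch origin            # update remote tracking branches only
    git merge origin/master     # merge one of them into the current branch

    # ...is roughly what this does in a single step
    git pull origin master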

chalst and ncm: I confess that I still read the recentlog, nearly every day, without logging in. So a feature that tracked my reading habits would be slightly incorrect.

On Postel's Law

    This might brand me as a heretic, but I'll say it anyway.

    I like brittle software.

    Now, what I mean by that deserves some explanation, but whenever I think of ideal software that does a job for me, I think of it as a harmonious blend of the robust and the brittle.

    First, the robust. Software needs to check its buffer sizes, check OS error codes, behave defensively, and not crash even if given complete garbage as input. This side of the equation is very strongly linked to security. The software should be impossible to exploit for malicious purposes through a bug in the code.

    Another aspect of robustness is flexibility. It should put the user in control, while defending the user from mistakes he might make while learning the program, or in everyday use. It should be hard to misuse.

    The brittle side of software also shows up in its error detection, and is a critical part of putting the user in control. If there is an error, I want to know about it. If the program is not absolutely certain that my data is safe, I want to know. If an attacker is trying to use the program to harm me, I want it to complain loudly and often.

    I would rather have the program stop running than harm my data.

    Not every program can achieve these lofty goals, but this is the nebulous image my mind creates when I think of ideal software. Some of this comes from my background in writing industrial network firmware, where it was better for the firmware to halt completely and go to a known safe state than to assume it knew what you meant and just let that piston stay on your coworker's arm.

    So, bringing Postel's Law into the equation, there are places where I definitely do not want my software to be liberal in what it accepts. Take a git repository, for example. If there is any remote possibility of data corruption, I want to know, and I want git to refuse to proceed. Luckily, git does a fantastic job with data integrity, which is one of the reasons I like it so much.
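
    (As a concrete example of that brittleness: every object in a git repository is named by the SHA-1 hash of its contents, so corruption can't hide, and a check like the one below will complain loudly rather than quietly carry on. This is just stock git, nothing exotic.)

        # verify the SHA-1 and connectivity of every object in the repository
        git fsck --full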

    Postel's Law was originally about TCP, and so pertains to communication. Taking TCP as a specific example, and trying to apply Postel's Law, let's say that a packet arrives with the entire TCP header shifted by 1 bit. I am no TCP stack expert, but I would be surprised if any TCP stack could parse such a packet correctly. The only logical way of dealing with such a packet would be to look at the existing bits with the header format in mind, try to make sense of the nonsensical data, and send back an error if possible, or drop it on the floor.

    How does Postel's Law fix a situation where the very format of the data is not respected?

    I can understand tolerating a TCP flag that is out of place, or some other unambiguous abnormality in an incoming packet. But there are only a limited number of such possible error combinations. Things have actually progressed in the opposite direction: TCP stacks have gotten more strict to deal with various attacks, and firewalls are used to strictly guard inbound and outbound traffic.

    Being too liberal has a cost. And some costs are so high, they are seen as a detriment to society. Just look at the once-popular open SMTP relay.

    Even in places where it would seem obvious that being liberal in what you accept is a good thing, it turns out, upon further reflection, to be not so clear cut.

    Take, for example, a corrupt OpenOffice document. The user definitely wants to open it. Maybe some invalid XML is getting in the way, or maybe the file is only half there. What should OpenOffice do? It should make a best effort at retrieving what data it can, and it should open the file read-only. It should also warn the user loudly that the file was corrupt, invalid, and inaccurate. The user needs to know this. It may be a file from an outside source, in which case the error is not important. But it may also be the first warning the user receives about a dying hard disk, a bad network, or a corrupted filesystem, in which case restoring from backup is the next item on the agenda.

    So in my frame of reference, Postel's Law, which is also called the Robustness Principle, fits right in, but only to half of my ideal software equation. Yes, be robust in what you accept. Be able to process complete garbage input without crashing. I've heard of people who fed a program's own EXE into itself as test input. It should be safe to use input data as a baseball bat and bludgeon the program without it falling over.

    Yet the brittle half of ideal software doesn't change. It is still as loud as before, warning the user of potential pitfalls and trouble on the horizon, and it will refuse to proceed if it can't guarantee his data's safety or determine his intent without ambiguity.

    In the end, Postel's Law is too ambiguous to be a useful guide, let alone a law. Those that accept it are too liberal, and those that don't are too strict. :-)

apenwarr has written an excellent rebuttal to my original rebuttal. I'd like to clarify my different viewpoint.

I think the crux of the argument boils down to these two statements:

    Strict receiver-side validation doesn't actually improve interoperability, ever.

and

    If you didn't catch it, the precise error in cdfrey's argument is this: You don't create a new file format by parsing wrong. You create a new file format by producing wrong.

I obviously disagree with both of these statements, but I understand how you could think they were true.

On the surface, parsing doesn't seem to create a new format, but even in Avery's own example, the majority of browsers, by accepting an incorrectly quoted option, have indeed created a new format. It isn't a documented format. It is actually an anti-documented format, because the spec says it is wrong. But anyone writing a browser today would not be able to merely follow the specs and produce a functional browser... they would have to follow the behaviour of every other browser in existence as well. Just ask the developers of the now long-defunct, but exciting, Project Mnemonic.

Now, obviously this makes it easier for Average Joe to write his own webpage, and it probably did help advance the popularity of HTML and the rise of the web. But there is a de facto HTML standard out there because parsers were not strict enough. I don't know how you can deny that. (Part of the problem was that browsers were developed alongside the spec, so that contributed a lot too. The poor spec didn't stand a chance.) :-)

And strict receiver-side validation does improve interoperability. Can you imagine if the average C++ compiler allowed a relaxed syntax? Suddenly, code that "works" on one platform would not compile on another. I admit this is already a problem, though to a smaller extent than with HTML, but differences between compilers in how strictly they validate the C++ language spec are seen as compiler bugs, and rightly so.

This raises an interesting Option 4, to add to Avery's list: let the parser be forgiving with unambiguous syntax, but warn loudly.

This would be a huge improvement over what we have today. We need more web browsers that report the number of HTML errors in a page, by default, in the status bar. And the feature should be hard to disable, so that a site's non-compliance is widely seen and scorned. (And yes, some of my web pages would be scorned too.)

I believe Opera has a feature like this, if memory serves.

As for the side note claiming that parsing is not the problem, but the rendering is: my argument was based on XML as a data exchange format, and less as a way to display content in a browser. For example, the opensync project uses XML for data interchange and plugin configuration. These formats are defined in strict schemas, which are tested and used via libraries like libxml2, which I assume falls into Avery's Big XML Parser category. If these schemas were not correct, I would consider it a bug detrimental to future interoperability, and something that should be fixed.
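
For what it's worth, this is roughly what that kind of strict checking looks like with libxml2. The file names and function name are made up for the example, and error reporting is trimmed to the bone, but the xmlSchema* calls are the library's actual schema-validation API:

    #include <libxml/parser.h>
    #include <libxml/xmlschemas.h>

    /* Return 0 if xml_file validates against xsd_file, non-zero otherwise. */
    int validate_against_schema(const char *xml_file, const char *xsd_file)
    {
        int result = -1;
        xmlSchemaParserCtxtPtr pctx;
        xmlSchemaPtr schema = NULL;
        xmlDocPtr doc = NULL;

        pctx = xmlSchemaNewParserCtxt(xsd_file);
        if (pctx) {
            schema = xmlSchemaParse(pctx);      /* compile the schema */
            xmlSchemaFreeParserCtxt(pctx);
        }

        doc = xmlReadFile(xml_file, NULL, 0);   /* parse the document */

        if (schema && doc) {
            xmlSchemaValidCtxtPtr vctx = xmlSchemaNewValidCtxt(schema);
            result = xmlSchemaValidateDoc(vctx, doc);   /* 0 means valid */
            xmlSchemaFreeValidCtxt(vctx);
        }

        if (doc)
            xmlFreeDoc(doc);
        if (schema)
            xmlSchemaFree(schema);
        return result;
    }

A validating parse either succeeds or tells you exactly why it didn't, which is the whole point.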

The web itself is already so goofy that trying to apply XML to it now is like nailing jello to the wall. So in that respect, I can understand Avery's pain. It just bothered me that someone was boasting about an incomplete parser and claiming to interoperate with XML better than the big libraries do. It seemed to me that he was discounting the long-range goals at work in XML in order to avoid some short-term pain. Of course, in the real world of making the customer happy, such shortcuts are often needed, but they are something to hide away in that closet of programming hacks, not publish as an example on the web. That sounds harsher than I mean it to, but I like those long-range goals, and XML is a solid technical achievement in its own right, even if rather cumbersome. Hey, I'm a C++ programmer... I like strict syntax. :-) I believe strict syntax promotes accuracy, and that accuracy helps you down the road when projects get larger and more complex.

In some ways, you could say that the inaccuracy found all over the web results in an unstable foundation that is generally holding the web back from greater things.

I hope it is also clear that I'm not a fan of XML. (Referring to it as a monstrosity was probably a good hint.) It has its place, but I think a lot can be accomplished by drastically simpler documented formats, and I'm quite willing to hack up my own simple file format if I think it is appropriate. I just don't call it XML.

I have some thoughts percolating in my head about Postel's Law, but that will have to wait for another post.

movement: Yes, I was partly expecting this response. But I'm sure it's because my lawn is so pristine that you kids keep wanting to mess it up. :-)

Seriously though, I have no problem with Javascript as a language that people might want to use to get things done on the desktop. The problem is that, in almost all current implementations of Javascript, it is set up to run any random code from the internet that the user clicks on... or even code he doesn't click on, in some cases.

In order for me to consider using a web-enabled Gnome desktop, I need to be confident that I have the power to enforce this strict separation of church and state. My PC is the church, and the internet is the state. :-)

I need to be able to flip a setting that makes it impossible to run any javascript that comes from outside my machine, whether it be through email, the web, or various files left over in /tmp or .webgnome or /home/cdfrey/Desktop, and only run javascript that I've installed and authorized, such as through apt-get or /usr/local.

This is where my confidence in Gnome's security design falls apart, because history seems to show that it is always more tempting to enable the new shiny web than it is to lock it down securely.

Dear apenwarr,

    I'm sure you know all these things, but I'm afraid I'm as compelled to respond to you as you were compelled to write your own incompatible XML parser.

    I do believe you are missing the point of XML. Yes, it is a horrendous pile of textual complexity, piled high to the sky with syntax and nested markers. Yes, the available APIs, in order to deliver the promise of XML to the end user via the application programmer, are complex enough to fall out of your head as soon as you've finished implementing the feature.

    But if you're going to do XML, please do it right.

    Interoperability is hard. Anyone can write their own parsers. And everyone has. That's why the monstrosity called XML was invented in the first place.

    It all starts with someone writing a quick and dirty parser, thereby creating their own unique file format whether they realize it or not. And since they probably don't realize it, they don't document it. So the next person comes along, and either has to reverse engineer the parser code, or worse, guess at the format from existing examples. This is then documented, either in Engrish, or in yet more code, which is likely to have at least one bug in it that makes it incompatible with the original format. This bug means that the second programmer has, whether they realize it or not, created their own unique file format.....

    Oh dear.

    The DTD is the documentation. XML kinda forces you to create it. The nice thing about DTD is that the computer can check it, which means it is less ambiguous than the pages of documentation that came with the old format. And with DTD + XML data, the computer can verify both, guaranteeing that the next programmer who gets the data can parse it the way it was meant to be parsed.
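
    A tiny made-up example of what I mean (the element names are invented just for illustration):

        <!-- addressbook.dtd : the machine-checkable documentation -->
        <!ELEMENT addressbook (contact*)>
        <!ELEMENT contact (name, email)>
        <!ELEMENT name  (#PCDATA)>
        <!ELEMENT email (#PCDATA)>

        <!-- data.xml : the data, declaring which DTD it claims to follow -->
        <!DOCTYPE addressbook SYSTEM "addressbook.dtd">
        <addressbook>
          <contact><name>Joe</name><email>joe@example.com</email></contact>
        </addressbook>

    Feed that to a validating tool like xmllint (it ships with libxml2), via "xmllint --noout --valid data.xml", and the computer, not a human, decides whether the data matches the documentation.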

    If your customer had used this XML process, you could have used the Big Fancy Professional XML Library. But instead, your customer is using their own UnknownXML, and you've created ApenwarrXML. The next programmer to come along may be forced to create yet another ThirdXML version, and the pain keeps propagating.

Advogato posters: leech or seed?

    While I'm in the mood to make incendiary diary entries (insert good-natured chuckle here), I'd like to write about Advogato etiquette.

    I think the addition of syndicated blog entries, while bringing a chunk of good traffic, also carries a significant drawback. To some degree, it seems like people set up external blogs, turn on the Advogato syndication option, and then forget about Advogato.

    How many people actually read the recent log after they have enabled syndication for their own external blog?

    When I replied to posts that other people made, I used to be able to have a conversation in the recent log. That seems to be a more remote possibility now.

    Is it just my own laziness in not wanting to add a trackback entry in an ever-increasing list of external blog sites? Should I get off my backside and implement automatic email notifications in mod_virgule whenever someone posts a response? Probably.

    But it also seems to me that anyone who avails themselves of the Advogato community, and adds to their own readership by hitching their wagon to Advogato's star, also has a responsibility to participate more thoroughly than just tossing their output over the wall.
