Older blog entries for mbp (starting at number 230)

After a lot of fiddling, I got XFree86 4.2, GARNOME 0.18, Mozilla, and OpenOffice to all agree to use FreeType fonts. I have to say the results look very nice indeed, very smooth and nicely antialiased.

It took me ages to work out that I needed to re-run mkfontdir, because Debian had somehow forgotten. (I guess I probably have used that command once, years ago.) It's fundamental to friendly/reliable software design that caches should regenerate themselves if they're out of date. X11 is, as they say, a complete fucking flying circus.

I guess it's getting better, through FreeType and the GNOME font installer and so on, but at the moment the functionality is nice but the usability is very nasty indeed.

I'm sure Windows is just as ugly on the inside. A friend of mine, by installing a TrueType font using the approved method, managed to trash a WinCE device so completely that the motherboard needed to be replaced. (Soldered-on persistent memory, I suppose.)


linux.conf.au registrations are open; the programme should be announced any day.

I am really pleased with the distcc test suite. It catches a decent fraction of bugs, and new ones that are reported can usually be tested to prevent regression.

I think the trick to testing is to start early, and to be disciplined about adding tests as you go. Like washing up, if you leave it for three months then it looks very intimidating. :-)


being non-digital:

Canberra is accurately described as having pulse-width-modulated weather: Spring days fluctuate between maximums of about 12C and 25C, until eventually warmth wins out.

Last weekend was warm. I took my motorbike up over the Snowy Mountains highway from Cooma to Adaminaby. It was very beautiful indeed. It does feel like the roof of the continent.

Also, I just got the keys to my new apartment, and I'm going to start moving this weekend. Yay.

A year or two after emigrating, she happened to be in Paris on the anniversary of the Russian invasion of her country (Czechoslovakia). A protest march had been scheduled, and she felt driven to take part. Fists raised high, the young Frenchmen shouted out slogans condemning Soviet imperialism. She liked the slogans, but to her surprise she found herself unable to shout along with them. She lasted only a few minutes in the parade.

When she told her French friends about it, they were amazed. "You mean you don't want to fight the occupation of your country?" She would have liked to tell them that behind Communism, Fascism, behind all occupations and invasions lurks a more basic, pervasive evil and that the image of that evil was a parade of people marching with raised fists and shouting identical syllables in unison. But she knew she would never be able to make them understand. Embarrassed, she changed the subject.

-- Milan Kundera, The Unbearable Lightness of Being

Zaitcev has a good point about OPN being lilo's demonstrated delivery.

I guess it is hard to tell how much of the value comes from rlevin himself, and how much from the server and channel operators and users. But that's true to some extent for any large project: the founder or leader gets most of the credit, even if their work is made possible by many smaller contributions. Nothing particularly wrong with that.

So there you go.


Another interesting step towards microkernel design in Linux is kernel mode linux, which is supposed to allow user programs to execute in kernel context. I won't say the guy isn't on drugs, but it's an interesting hack.

I found this, by the way, at sweetcode, which is a nice little site.

the status is that rob levin should show us the code

(amateur social psychology for fun and profit)

rlevin writes

I agree completely that, when the issue is code, there is no substitute for at least providing a reference implementation that can be critiqued. My only point in commenting was to note that not every issue is a code issue. Sometimes some community people use the refrain "show me the code" in a way that denies the significance of social or procedural issues, and confuses implementation with goal-setting.

The motto "show me the code" doesn't literally mean that source code is the only acceptable demonstration. Other phrases that get at the same idea are "talk is cheap" and (my current favourite) "all hat and no cattle". (That doesn't, by the way, mean that you need to literally own cows to get respect.)

For a meritocracy to be meaningful, people's contributions have to be visible in some way that can be assessed by their peers. This can be in ways other than code: artwork, writing, speaking, web sites, and organizing conferences spring to mind.

The subtext of Rob's diary entries seems to be that he wants people to value his efforts in things such as goal-setting and procedures. Indeed, a little while ago, he claimed the work was so important, and he was so uniquely qualified to do it, that people ought to pay him to do it full time.

But this directly conflicts with the "show us the code" mindset, and I suspect this is why it irritated so many people. Nobody can actually see what it is Rob does, whether it's worthwhile, or whether he does it well. So who can tell? The default position encoded by "show us the code" is to assume that it's just vapor until demonstrated otherwise.

30 Sep 2002 (updated 30 Sep 2002 at 06:14 UTC) »


24 Sep 2002 (updated 24 Sep 2002 at 07:25 UTC) »

I reviewed the linux.conf.au papers, and put up a little advogato article regarding it. (This is for people, like me, who only read recentlog.) The standard this year should be excellent.

There are so many things I want to fix in the build system at work and not enough time.

Christian Reis wrote a good paper contrasting XP and open source.

proofs

I think some confusion is caused by the two different meanings that "proof" has in English. One is to "establish beyond all doubt", and the other is "mathematical argument".

I was recently saying that software cannot be established beyond doubt to be correct, in part because we cannot really completely define "correct" for practical programs. You can make an argument that nothing can ever be proved absolutely beyond doubt, only contingently.

On the other hand, raph and graydon correctly point out that formal arguments can be useful in trying to establish confidence, and I certainly agree with that.

I think there is less of a gap between quotidian testing techniques and proof than some people on both sides of the gap might suspect.

19 Sep 2002 (updated 25 Sep 2002 at 03:18 UTC) »
From: Anonymous-Remailer@See.Comment.Header (Simon Malantrap)
Subject: Who is Stephen Kapp?
Date: 18 Sep 2002 15:37:03 -0000
To: mbp@sourcefrog.net
Comments: This message probably did not originate from the above address.
        It was automatically remailed by one or more anonymous mail services.
        You should NEVER trust ANY address on Usenet ANYWAYS: use PGP !!!
        Get information about complaints from the URL below
X-Remailer-Contact: http://www.privacyresources.org/frogadmin/

Stephen Kapp was the "president" of the UK virus writing team "ARCV"

Here's a quote from "The Risks Digest - Volume 17: Issue 16 Friday 2 June 1995":

"In 1993, another English virus writer, Stephen Kapp, was arrested in connection with telephone fraud charges. Kapp was known as the "President of ARCV," or ARCV virus writing group which stood for Association of Really Cruel Viruses."

Just do a web search for "Stephen Kapp ARCV virus" and all will be told!

Now you *know* why a bastard like this ripped off your code.


A well-wisher...

(The RISKS article mentions another previously prominent virus author, Clinton Haines, who I knew a little in Brisbane. He died of a heroin overdose a while ago.)

eating pudding

Seeing is deceiving. It's eating that's believing. -- James Thurber

I'm surprised by graydon being annoyed at the statement that it is impossible to be completely sure that a program is completely free of bugs.

graydon describes bugs as being states that we do not want to reach, and he suggests that we can do proofs to demonstrate that the program does not reach them. I agree that this is useful, however: (a) proofs alone cannot demonstrate that a program will not reach a bug state, and in fact (b) nothing can demonstrate that beyond doubt.

We always have an imperfect understanding of the way our system will behave in the real world. Therefore we are not able to completely identify ahead of time which states ought to be considered bugs, let alone which bugs are most important.

For distcc, I've tried to make a good effort both at demonstrating ahead of time that the design would work, and at testing it thoroughly. However, during the course of testing I discovered a kernel bug and an undocumented behaviour in gcc, amongst other things. distcc had bugs because it needed to work around these limits.

These are states which turn out to be bugs but which were initially believed not to be. Unless we make unrealistic assumptions of omniscience, I don't see how these could have been discovered other than through testing. (This is even more serious in Samba, where the protocol it's meant to implement is undocumented and very complex.)

Let me try to state it more precisely. Represent our program as a function f(i) => o from input to output -- for interesting programs, the input and output will be large ensembles of values, and so are astronomical in number. Now define a correctness function cf(i), which is true for inputs where f is correct.

The proposition P is that "there are no bugs in program X", which is to say that for every i, cf(i) is true. As I understand it, Popper's point of view is that we cannot ever be sure that we have proved P is universally true. There are too many inputs to examine them all, and if we make a theoretical argument it may be flawed, or it may not correspond to reality. Some people disagree with him.

Rather than trying to gain confidence by piling up evidence in favour of P, Popper says that it is better to try hard to falsify it, either by finding inputs for which it is not true, or by showing that P is internally inconsistent. Considering a formal description of f is one good way to find ways in which it is not always correct.

Popper describes science as a stepwise process of formulating theories and then trying to falsify them. This seems broadly like the way people do software. However, perhaps we can do better science by making programs which better support falsification by either tests or proofs. For example, being well factored, well documented, and deterministic helps both approaches. Popper's results don't stop us finding scientific theories that are both useful and almost certainly true. Similarly, never being absolutely sure that a program is bug-free ought not to discourage us from developing good software and being appropriately confident in it.

minor bug

Every year in developing countries, a million people die from urban air pollution and twice that number from exposure to stove smoke inside their homes. Another three million unfortunates die prematurely every year from water-related diseases. [The Economist]

Wow. (Yes, I am taking those numbers with a grain of salt, and yes, everybody has to die of *something*, but still.)

giants, shoulders of

graydon, you might like distcc. It's somewhat similar to your doozer program, but perhaps a bit faster (in C, not Perl) and easier to set up (it doesn't need a shared filesystem). (Or perhaps not.)

12 Sep 2002 (updated 12 Sep 2002 at 03:29 UTC) »
testing and proof

raph refers to Dijkstra's famous quote that "testing can only show the presence of bugs, never their absence."

This rather reminds me of Karl Popper's sensible and insightful arguments that scientific theories can never be proved, only disproved. No number of experiments can prove that F = mg, or that there are no bugs in TeX. However, one (valid) case in which gravity is not proportional to mass demands that the theory be discarded or improved.

This leads, at first, to a kind of intellectual nihilism: we can never know anything about the outside physical world absolutely for certain.

However, as Popper pointed out, it is reasonable to gradually gain confidence in a theory as people try and fail to disprove it. People attacking a theory have several possible strategies: they can find experimental conditions under which it fails, or they can demonstrate that the theory is inconsistent with itself or some other theory, or they can perhaps demonstrate that the theory is "unfalsifiable": not amenable to reasonable testing. The proposition that there is an undetectable invisible tiger in my study falls into this category: it may or may not be true, but it is certainly not scientific.

Quantifying the level of confidence is hard, but broadly this method seems to work reasonably well. The more bona-fide attacks that a theory survives, the stronger it appears -- but of course never above reproach.

So Dijkstra and Popper would agree that we can never prove that a program has no bugs. But (from my limited knowledge of them) Popper seems to have some more useful ideas about how as non-omniscient humans, we can rationally gain increased confidence.

Neither empirical methods (e.g. testing) or theoretical methods (e.g. proofs) ought to be privileged. In either case, it is possible that the trial might be carried out incorrectly, or that it may not be as strong an attack as was thought. Both have their place. In addition, the boundary between them is not entirely clear: experiments are formulated from theory, and theory is based on previous experiments. So for example, in software, we can annotate with preconditions and postconditions, as in Eiffel. This stands neatly between empirical measurement and reasoning.

Theoretical proofs have the advantage that they build upon extremely well-established mathematical theorems. But they do perhaps have the disadvantage that they can't test that the program fulfills its real-world requirements in the same way that, say, interactive free-form testing can.

Tests, obviously, ought to be designed to prove that the program is *incorrect*. It's too easy to construct experiments which confirm a theory, but Popper would say that they are worthless, particularly if designed by the theory's author.

So it seems to me the crucial question of testing is "how can we economically produce the most excruciating tests?" I think there is a body of knowledge about this which is not really completely captured, by either the XP people or the QA theory people.

One well-known approach is to write test cases for bugs when they're reported and perhaps before they're completely diagnosed. We know that there is some kind of weakness in that area: possibly trying out different tests will discover a related fault.

Another one I have been thinking about recently is "defect injection". Suppose you intentionally temporarily change a line of code to be incorrect -- off-by-1, for example. Do your tests catch the mistake? If not, they're obviously too weak. If you injected 100 mistakes, how many would be caught? That gives some idea of the strength of testing. If you have everything in CVS, and a good "make check", then doing this is actually pretty easy.

Now this is all very well in theory, but in practice humans get attached to their ideas, and don't want to see evidence that contradicts them. This applies to programmers and scientists alike.

Knuth is supposed to have had a comment along the lines of

(* WARNING: I have only proved this code correct, not tested it. *)


raph, Java's bytecode system allows the implementation to use any garbage collection system it wants, from never-free through refcounting through to a mark-and-sweep gc. Various different ones might be appropriate depending on the target: for short-lived tasks, or v0.1 interpreters, being able to use a simpleminded gc is very nice. The "generational gc" that was current last time I looked behaves a lot like Apache pools.

All of this is done with no compile-time knowledge, and with a C API that looks a lot like Python's.

(Perhaps I'm wrong, it is a long time since I looked.)


The Linux kernel map that I did with Rusty and Karim Lakhani is now up on OSDN.

On the one hand, I'm pretty happy with how it turned out, and that it's performing pretty well under heavy load. On the other hand, I find it a bit strange that BCG wanted to do it, when the map is pretty but not actually useful as a way to navigate or understand the kernel.

Amongst all the usual garbage on Slashdot there were at least three insightful comments: why is BCG interested in open source hackers?, and here, and here.
