Older blog entries for mbp (starting at number 240)

theregister.co.uk

As Raph said a while ago, Google's response time is fairly competitive with that of DNS.

Mistyping or misremembering a domain name (or even using an old URL) is likely to take you to a very shonky javascript-infested porn or scam site: exhibit 1, 2. Google basically never makes this mistake, and indeed handles typos in a very friendly manner.

Partially reputation-based ranking seems in fact to be *more* democratic than ICANN.

On the other hand, a single source for essential services is undesirable.

see figure 1

jdub demonstrates that sarcastic elitism is always entertaining.

fonts

Linux font support is getting better. I probably have at least fifty fonts installed on my home machine now without really trying. Choosing fonts by picking a name from a list is really not a scalable solution.

I'd like to see GNOME acquire a "more like this" manipulation dialog as a standard widget, and use it for choosing fonts. I don't know if there's a proper name for them -- you see them sometimes in graphics programs, where they demonstrate the effect of varying several parameters at once, such as brightness, saturation, and contrast.

So the first page would show broad styles of font: serif, cursive, sans-serif -- or perhaps display versus text. You could then drill down towards, say, different variations on roman serif fonts, or cursive fonts, or cartoony fonts.

I really need a mocked-up dialog to show what I mean. And I must remember not to smoke drugs while writing about GUIs.

Threads

The other thing I should have said earlier is that, of course, sometimes ugly performance hacks are the only way to get the job done with the tools available. So, for example, Apache's use of threads on NT is a necessary concession to that platform's poor fork implementation.

What most recently got me thinking about this was the internal Microsoft whitepaper on MSSecrets, in which they admit that implementing IIS as shared-everything threads was an enormous mistake.

I fairly often attach gdb to a single Apache process to see what's going on. Since the process handling a single TCP connection is pretty much isolated from all the rest, this is quite straightforward and it doesn't interfere with anything else on the machine. The writer complains that this is impossible on IIS, because it would jam up all other threads in the process.

Similarly, if a particular process dies because of a bug it doesn't necessarily affect anything else.

pphaneuf, I had the impression that Ulrich might have said that in private conversation with bje, but I will check later. (Unless one of them responds here. :-)

MichaelCrawford, the thing about "using SMP" is that nobody really wants to just "use SMP" unless they're a "how about a beowulf cluster of those" slashdot weenie. People want to get a task done more quickly. We have to ask first of all, is the task parallelizable, and how? For example, if the system wants to handle incoming network requests, then you can do that using either threads or isolated processes. Or if you have a lot of data to digest you can divide it up and work in parallel.

What I'm asking about is how a user program can do SMP via state machines without the use of threads. Saying to run two state machines in different processes isn't the right answer. That's the same as using two threads and presents all the same difficulties.

Well, I would say that it presents many fewer difficulties: the processes are isolated, so they don't affect each other if they crash, they can be debugged separately, and so on. As pphaneuf points out, shared-everything threads will possibly cause more SMP contention than processes that use special mechanisms to share only what is necessary.

I think things like tridge's tdbs that provide a simple safe abstraction on top of shared memory are an advance in this direction. So too are rusty's futexes (fast user-space mutexes): they give you mutual exclusion and rescheduling *faster* (IIRC) than most thread implementations, even if you're using processes. (Incidentally, rusty and tridge will both be at linux.conf.au.)
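For concreteness, here is a hypothetical sketch of "share only what is necessary" on Linux -- not tdb or futex code itself, but the same idea: a process-shared pthread mutex (which newer glibc builds on top of futexes) guarding one counter in an anonymous shared mapping, with everything else private to each process.

    /* Hypothetical sketch: two forked processes share one counter and
       its lock in an anonymous shared mapping; all other data stays
       private.  Assumes a libc with process-shared mutexes (newer
       glibc implements them on futexes).  Compile with: cc -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared {
        pthread_mutex_t lock;
        long counter;
    };

    int main(void) {
        /* The one and only region the two processes will have in common. */
        struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (s == MAP_FAILED)
            return 1;

        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&s->lock, &attr);
        s->counter = 0;

        if (fork() == 0) {                  /* child */
            for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&s->lock);
                s->counter++;
                pthread_mutex_unlock(&s->lock);
            }
            _exit(0);
        }
        for (int i = 0; i < 100000; i++) {  /* parent */
            pthread_mutex_lock(&s->lock);
            s->counter++;
            pthread_mutex_unlock(&s->lock);
        }
        wait(NULL);
        printf("counter = %ld\n", s->counter);  /* prints 200000 */
        return 0;
    }

The point is that a crash, or a debugger, in one process can only disturb the other through that one struct; everything else is safely private.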

If the only way to represent your problem is as a single tightly integrated state machine then that suggests that perhaps it is not parallelizable at all.

lukeg, I think what Alan was getting at is that there is no getting away from the fact that mainstream CPUs *are* state machines. (They have registers, a PC, etc.)

Since consensus is no fun, let me suggest that both threads and state machines have advantages and disadvantages.

I didn't mean so much to structure programs explicitly as state machines, but rather to suggest that data should be private by default and shared where there is a good reason, rather than the shared-everything model used by threads in C. I think often only a few data structures will need to be shared to get an appropriate degree of parallelism.

I don't know Erlang as well as I would like, but I suspect lazy functional languages are more or less an exception to the idea of threads being bad, because there the threads are not something the programmer deals with directly.

By the way, Squid is a fascinating example of continuation-passing in C, because it wants to do select-based async IO without using threads. It's clever, though I think it demonstrates C is not well suited to the problem.
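To give the flavour, here is a made-up miniature of the pattern (the names are only loosely modelled on Squid's comm layer, not its real code): instead of blocking in read(), the caller registers a callback and an opaque state pointer, and a single select() loop invokes the continuation when the descriptor becomes readable.

    /* Made-up miniature of continuation-passing async IO in C: each
       pending read carries a callback and opaque state, driven by one
       select() loop.  No threads anywhere. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/select.h>

    typedef void (*read_cont)(int fd, char *buf, int nread, void *state);

    static struct pending { read_cont cont; void *state; } table[FD_SETSIZE];

    /* "Call me back with the data": the continuation replaces a return value. */
    static void comm_read(int fd, read_cont cont, void *state) {
        table[fd].cont = cont;
        table[fd].state = state;
    }

    static void event_loop(void) {
        for (;;) {
            fd_set rfds;
            int fd, maxfd = -1;
            FD_ZERO(&rfds);
            for (fd = 0; fd < FD_SETSIZE; fd++)
                if (table[fd].cont) { FD_SET(fd, &rfds); maxfd = fd; }
            if (maxfd < 0 || select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
                return;                     /* nothing pending, or error */
            for (fd = 0; fd <= maxfd; fd++)
                if (table[fd].cont && FD_ISSET(fd, &rfds)) {
                    char buf[4096];
                    int n = (int)read(fd, buf, sizeof buf);
                    read_cont c = table[fd].cont;
                    void *st = table[fd].state;
                    table[fd].cont = NULL;  /* one-shot: re-arm explicitly */
                    c(fd, buf, n, st);      /* "return" through the continuation */
                }
        }
    }

    /* Example continuation: echo stdin to stdout until EOF. */
    static void on_data(int fd, char *buf, int nread, void *state) {
        if (nread > 0) {
            fwrite(buf, 1, (size_t)nread, stdout);
            comm_read(fd, on_data, state);  /* ask to be continued again */
        }
    }

    int main(void) {
        comm_read(0, on_data, NULL);
        event_loop();
        return 0;
    }

Squid has many more layers, but the shape is the same: every function that would block instead takes "what to do next" as an argument -- and that is precisely what makes it hard to follow in C.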

Thanks for the pointer to Communicating Sequential Processes. I'll look out for it.

Perhaps you'd like to post a precis of how threads are used by Erlang?

20 Dec 2002 (updated 20 Dec 2002 at 05:45 UTC) »

bje had a good quote from Ulrich: "threads and stupid people attract each other." It goes with Alan Cox: "A computer is a state machine. Threads are for people who can't program state machines."

We thought at lunch the other day: except for very rare cases where you really do want to simulate many asynchronous processes, it's hard to see threads as anything but a performance hack. Instead of using threads, you really want:

  • Cheap structured IPC and sharing, so that data can be explicitly shared as necessary, rather than sharing everything (a small sketch of the message-passing side follows this list).
  • Good async IO.
  • Good flow-control mechanisms for doing background tasks.
  • ....
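On the first point, even plain pipes between forked processes get you part of the way: message passing with nothing shared implicitly. A minimal, hypothetical sketch:

    /* Hypothetical sketch of share-nothing message passing: a parent
       sends fixed-size request structs to a forked worker over a pipe.
       Only the messages cross the boundary; all other data is private. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct request {
        int id;
        char payload[56];
    };

    int main(void) {
        int fds[2];
        int i;
        if (pipe(fds) != 0)
            return 1;

        if (fork() == 0) {                  /* worker: read side only */
            struct request req;
            close(fds[1]);
            while (read(fds[0], &req, sizeof req) == (ssize_t)sizeof req)
                printf("worker got #%d: %s\n", req.id, req.payload);
            _exit(0);
        }

        close(fds[0]);                      /* parent: write side only */
        for (i = 0; i < 3; i++) {
            struct request req;
            req.id = i;
            snprintf(req.payload, sizeof req.payload, "job %d", i);
            if (write(fds[1], &req, sizeof req) != (ssize_t)sizeof req)
                break;
        }
        close(fds[1]);                      /* EOF tells the worker to finish */
        wait(NULL);
        return 0;
    }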

---

Have a happy holiday, everyone.

People in the northern hemisphere might like to imagine me going for a swim in ~36C (~95F) dry heat.

Don't forget to get ready for linux.conf.au. It's going to rock all over the place. I think there are going to be some pretty cool surprise guests.

Heard around the office: "ClearCase is so good, I encourage all our competitors to buy it." (Oops, I guess they did! :-)

I started writing a macrobenchmark/test for distcc. Inspired by GAR and GARNOME, it downloads, configures, and tries to build various large packages, comparing local and distributed build times. It complements the test suite, which checks correctness on small interesting cases, by feeding through a large number of diverse valid cases.

It reveals that building across three machines is typically 2.0 to 2.9 times faster than building locally. For any given project the results are quite reproducible. Presumably the slow ones either have a lot of non-parallelizable or non-distributable work, or something about their Makefiles is not handled well.

Another way to look at this is efficiency: a 2.0x to 2.9x speedup is roughly 67% to 97% of the theoretical limit of 3.0x. Parallelization typically incurs some cost, and the top of that range is not bad at all. I wonder how much of the loss is unavoidable? distcc itself does not use many cycles, but the scheduler that decides where each file is compiled is not optimal.

Python is excellent for this -- so easy to write very concise and clear tests.

Testing is so fun once you get into the swing of it. There's really a lot of creativity in trying to work out how to exercise a particular aspect, either by improving the program's testability or by writing a harness or driver.

I'm reading an ACM anthology on automated testing. I forget the name. More on this later.

Seth Schoen makes a doubleplusgood point

"Trying to design a limited-purpose computer is like trying to design a limited-purpose spoken language. Imagine trying to design a language that can express only some thoughts but not others."

Seth replies with a worthy comparison of this approach to manipulation of language in Orwell's 1984.

Jem Berkes wrote a good essay about 1984 a while ago.

I entered the Shell / Economist essay competition earlier in the year. I didn't win, but the winning entries are so well written that I can't feel bad about it. I think the copyright on my entry now reverts to me, so I will put it up later. In particular, the gold prize winner, Milksop Nation, is just brilliant.

In the entire state of California there is no saloon with a clientele so reckless and depraved that the law will avert its eyes and permit them to take the insane risk of drinking a beer in a building occupied by a person who might smoke a cigarette.

(Good rhetoric is slightly exaggerated and simplistic.)

We went to the California Academy of Sciences to see the skull exhibit, fish roundabout, and Eames Powers of Ten exhibit. (Didn't you watch Powers of Ten at school? Don't you have nerdy nostalgia too?). Very cool. mjs says that the Academy of California Sciences ought to have reiki, dolphin telepathy and homeopathy.

I've been listening to the BBC Radio Play of The Lord of the Rings while driving around California. I like it as a story, but I find the underlying philosophy a bit strange. The bad guys are evil in their bones -- there is no possibility of even a single orc joining the other side, or any question that there might be fault on both sides. Whereas in the real world, given sufficient perspective (say, a thousand years), it often seems that there is fault on both sides, or at least that evil is not so easily apportioned by race.

A war without death, but not what you might think:

Armored Combat Earth Movers came behind the armored burial brigade, leveling the ground and smoothing away projecting Iraqi arms, legs and equipment.

(I expect enthusiastic praise for US Army landscape gardening from mglazer.)

13 Nov 2002 (updated 13 Nov 2002 at 02:36 UTC) »

update:

movement, here is at least one reference for malloc returning memory to the OS:

Doug Lea's malloc (If anyone wants to be a better programmer, I suggest they read stuff by Doug Lea.)

The "wilderness" (so named by Kiem-Phong Vo) chunk represents the space bordering the topmost address allocated from the system. Because it is at the border, it is the only chunk that can be arbitrarily extended (via sbrk in Unix) to be bigger than it is (unless of course sbrk fails because all memory has been exhausted).

"wilderness" is such an excellent, vivid, clear name.

I agree that it will often not be the case that there is contiguous memory at the top that can be returned to the OS. However (as dl says), for programs that allocate memory in phases, or in a stack-like pattern, it may well be that the memory allocated last is freed first.

Big, long-lived allocations should perhaps go in mmaps (perhaps containing arenas), so that they can be returned. For example, Samba now stores a lot of private data in .tdb files, which are mmapped. When they're not in use, the memory is returned.

However, I think being able to return memory is perhaps atypical. Most programs either run to completion, allocating memory all the way (e.g. gcc), or reach a steady state and then remain within it (e.g. servers or applications).

It would be nice if Linux let you find out how many pages were being used by a particular map, but I don't think there is any easy way at present. Perhaps with rmap...

Of course, the more common case of "returning memory" is just allowing pages to be discarded by not touching them. This also indicates why it can be worthwhile to have swap on boxes which have plenty of memory: data pages which are still allocated but never touched can be written out, allowing more ram to be used as a disk cache. Apparently swapfile support will be better in 2.6, reducing the problem of needing static allocation of swap partitions.

A Java implementation that used handles, and so did not rely on objects staying at fixed addresses, would have the option of defragmenting itself to allow wilderness to be returned to the OS, or even just to avoid paging. I don't know if this is ever considered worth the code complexity and CPU cycles that it would cost.

The "hotspot" effect would suggest that for most programs where memory usage is a problem, it will be a few routines or classes of allocation that use most of the memory. Changing them to use mmap, or less memory, or an external file might fix it.

Perhaps oprofile would let you find out what programs are "causing" paging? (Not that it's really any one process's fault...) I haven't tried it, but I really want to.

I checked quickly and Debian sid's libc malloc uses mmap by default for allocations of 200kB or more. (I'm too lazy to find the exact value.) They're unmapped when freed.
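A quick way to watch that happen (a hypothetical demo, assuming glibc; watch the process size from another terminal at each pause):

    /* Hypothetical demo, assuming glibc: an allocation above the mmap
       threshold gets its own mapping and is handed straight back to the
       kernel on free.  Check the size from another terminal with
       something like: ps -o vsz= -p <pid> */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void note(const char *msg) {
        printf("%s (pid %d); press Enter to continue\n", msg, (int)getpid());
        getchar();
    }

    int main(void) {
        char *big;

        note("baseline");

        big = malloc(8 * 1024 * 1024);      /* well above the threshold */
        if (!big)
            return 1;
        memset(big, 1, 8 * 1024 * 1024);    /* touch it so pages really exist */
        note("after the 8MB malloc (served by its own mmap)");

        free(big);                          /* munmapped: the size drops now */
        note("after free (returned straight to the kernel)");
        return 0;
    }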

12 Nov 2002 (updated 12 Nov 2002 at 20:38 UTC) »
fxn writes:

The solution came yesterday night, I read in Perl Debugged that Unix processes only grow! On page 176 it says: Note that the amount of memory allocated by a Unix process never decreases before it terminates; whenever it frees memory it makes it available only for the same process to reuse again later. (It is not returned to the free pool for other processes to use.)

That is mostly true and a good way to start understanding it, but it is not completely true.

You can think of Linux as having a two-level memory allocation system: the kernel gives memory to the C library (via sbrk, mmap, etc), and then the C library gives it to the application (via malloc etc).

There is a little bit of slack in the C library: sometimes it will ask the OS for more than it needs at the moment, and it will not necessarily return freed memory. Instead, freed memory is hoarded because it will probably be needed again soon.

Above a certain high water mark the C allocator may return memory to the OS. I think there are some parameters that you can tune to control this behaviour but in general the defaults are fine.
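On glibc, those parameters are set with mallopt(3). A sketch of the knobs (assuming glibc; the exact defaults vary between versions):

    /* Sketch of glibc's tunables.  M_TRIM_THRESHOLD is the high water
       mark: when the free space at the top of the heap (the wilderness)
       exceeds it, memory is given back via sbrk.  M_MMAP_THRESHOLD sets
       the size above which requests get their own mmap instead. */
    #include <malloc.h>
    #include <stdlib.h>

    int main(void) {
        mallopt(M_TRIM_THRESHOLD, 64 * 1024);   /* trim the heap top sooner */
        mallopt(M_MMAP_THRESHOLD, 64 * 1024);   /* mmap anything above 64kB */

        char *p = malloc(256 * 1024);           /* above threshold: own mmap */
        free(p);                                /* unmapped immediately */

        malloc_trim(0);                         /* or ask for a trim by hand */
        return 0;
    }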

And this explanation is a generalization too: some programs, particularly databases, request memory of their own using mmap, independent of the C allocator.

In addition, some programs map files into memory, and if they release that mapping then the memory will be returned to the OS straight away.

Of course all this is only at the level of virtual memory. Normally we're interested in physical memory because it's more scarce. Even if the C library never returns memory to the kernel, the kernel may eventually page it out to disk and free up the physical memory for other uses.

itamar

The fourth talk was about raising exceptions in signal handlers in Python, and the problem this causes.

What an interesting problem!

If I remember correctly (and it's been a long time), the Java specification says something sensible about asynchronous exceptions. I suppose the Python people have read that.
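For what it's worth, the standard C-level answer -- and, as I understand it, roughly what the interpreter itself does underneath -- is for the handler to do nothing but set a flag, with the exception raised later at a well-defined point. A minimal sketch:

    /* Minimal sketch of deferred signal handling: the handler only
       sets a flag (about the only safe thing it can do), and the main
       loop "raises" at a point where its own state is consistent. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t interrupted = 0;

    static void on_sigint(int sig) {
        (void)sig;
        interrupted = 1;        /* async-signal-safe; nothing else runs here */
    }

    int main(void) {
        signal(SIGINT, on_sigint);
        for (;;) {
            sleep(1);           /* ... one unit of real work ... */
            if (interrupted) {  /* safe point: act on the signal now */
                fprintf(stderr, "interrupted: cleaning up\n");
                break;
            }
        }
        return 0;
    }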

7 Nov 2002 (updated 19 Nov 2002 at 02:24 UTC) »
Interesting post by Linus explaining the patch acceptance process.

I've been running a little development weblog for distcc. I'm not sure if people like it, but I think it works well -- at any rate, it's something I would find interesting if I was looking at somebody else's project. It lets people know what features are coming up, or what's going wrong, or what bugs are being worked on, without requiring them to read the mailing lists. It can be interesting to know if a project's stable or not, or active or not. Perhaps it's like a poor man's (or small project's) Kernel Traffic. Some game developers do something similar with a .plan file visible on the web.

I wonder how it would work to let people see interleaved posts from samba developers commenting on what they're doing? (Not that I have time to write that.)

At work I am doing some performance tuning on our Python project. This is easier than it might sound: in any language, most of the time is spent in just a few routines. Rework them, or at worst rewrite them in C, and things get much better.

give war a chance

Well, hey, it's always worked so well in the past...

Based on past experience, a USA-Iraq war would cause a hundred thousand civilian deaths, or at the very least a few tens of thousands, and of course a greater amount of injury, homelessness, and ruin. This is only counting people who are not combatants and who just happened to be born in the wrong place at the wrong time.

One might make an argument that it is necessary for those people to die so as to prevent a greater war later, but I don't think it's right to do it without grave consideration, or to feel jubilation at the prospect.

ps So good

I've just been to a ClearCase training course by Island Training. It was really good: the instructor knew his stuff, made it not boring, and nicely catalyzed discussion about how it relates to our own build system. Tim says that HP's in-house training is always excellent and it seems to be true.

I have more respect for ClearCase now: in its favour, it is a nicely generalized database, and it's very consistently unixy (mkview, lsview, rmview, ...). It avoids some problems CVS has, like not handling directory restructuring, and not being able to mark tags or branches obsolete without removing them. I even worked out how to do merges in emacs rather than through the silly built-in tools.

On the other hand, I think putting it into the kernel is a perfect example of a design that looks good at first but really ought to be dropped on further consideration. (At the moment, our build machine keeps rebooting, probably because ClearCase somehow interferes with loopback devices.) And it's still the perfect antidote for distcc's speed.

I've now mostly moved into my new apartment. I feel really grown up to actually (fractionally) own it. I have taken advantage of my newly acquired right to bang and drill holes in the walls.

After much prodding earlier in the year from my brother-in-law, I finally read The Science of Discworld. It's really excellent. The title might make you think it will be a bit too silly, but it's not at all. As well as being a nice and entertaining overview of major areas like physics and evolution, it has probably the best explanation of the epistemology and social processes (Popper and Kuhn) of science that I've seen in a general-audience book. It is quite fair to compare it to Gould or Dennett, and it perhaps gives an even better understanding of the way science actually progresses, rather than presenting it as a body of immutable facts. And I like their suggestion that space elevators will have elevator music.

---

I was flabbergasted to read this story in the AFR, quoting John Moses speaking at a memorial for Bali bombing victims:

In Canberra, Prime Minister John Howard lit a candle in St Paul's Anglican Church and listened while priest John Moses sermonised on the inadequacy of "sentimental humanism". [...]

"Decency without doctrine ... spiritual laziness," he called it. [...]

Speaking to the Prime Minister, Professor Moses said it was necessary for Christ to become the conscience of the state and the role of the state was to be an instrument of God.

I don't know whether it's more distressing to me that somebody would espouse such an opinion in this century, or that it would apparently get a hearing from Howard. I suppose I'm not surprised by the second, but it still disappoints me to see it.

"Decency without doctrine" actually sounds to me like an superb practical approach to morality in a pluralist modern world. I cannot understand the imbalance of mind that makes one want to respond to supposed religious violence by re-establishing a state religion.

In any case, even if we wanted to go along with that, it seems to me that Jesus would have encouraged people to turn the other cheek. I don't suppose anyone holding that opinion would be invited to speak at official functions.

search-and-replace:

It is necessary for the Koran to become the conscience of the state, and the role of the state is to be an instrument of Allah.

Mm. Doesn't sound so reasonable (to western ears) now, does it?

