Older blog entries for raph (starting at number 346)

Fonts and hinting

What David Turner said, with a few additions.

First, I'm obviously concerned about displaying PostScript and PDF documents, for which the goals of high-fidelity, accurate rendering and high-contrast, legible text are often in tension. These document formats, for better or worse, are deeply rooted in scalable font technology. Trying to use bitmap fonts, no matter how pretty they are, is not going to work well.

Second, as the resolution of screens goes up, the tradeoff between accuracy and contrast shifts in favor of (unhinted) antialiasing. At 200 dpi, which will be standard in a few years, the contrast of unhinted aa text is plenty good enough for just about everybody. The challenge is how to get there from here. One of the obstacles is the large installed base of software which is incapable of scaling with display resolution. It's a Catch-22: there isn't the pressure to fix the broken software until the displays become cheap, and the motivation isn't there to do high volume manufacturing of the displays until there is software that works with them. Microsoft is in a position to break through that, and if they do, I'll be quite grateful.

By the way, a really good place to start would be to make double-clocked pixel rates on CRT's work. Commodity video cards typically support pixel clocks in the 360MHz range. That'll handily run 2560 x 1024 (in other words, the standard 1280 x 1024 res double-clocked in the X direction) at 95 Hz. Of course, because of the shadow mask or aperture grille, CRT's can't actually display the full resolution. However, you still get the advantages of improved contrast and glyph positioning (spacing) accuracy. It's very easy to play with this - just double all the horizontal numbers in your XFree86 modeline, then run Ghostscript with a resolution such as -r144x72 or -r192x96.
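To sanity-check those numbers, here's a quick back-of-the-envelope calculation. The blanking fractions are my own rough assumptions for a typical CRT modeline, not measured values:

```python
# Rough pixel-clock estimate for a horizontally double-clocked CRT mode.
# The blanking overheads (30% horizontal, 4% vertical) are assumed
# typical values, not taken from any particular modeline.
def pixel_clock_mhz(hdisp, vdisp, refresh_hz, h_blank=1.30, v_blank=1.04):
    htotal = round(hdisp * h_blank)   # total pixels per scanline, incl. blanking
    vtotal = round(vdisp * v_blank)   # total scanlines per frame, incl. blanking
    return htotal * vtotal * refresh_hz / 1e6

# 1280x1024 double-clocked in X, at 95 Hz: comes out around 337 MHz,
# comfortably inside the ~360 MHz range of commodity video cards.
clock = pixel_clock_mhz(2560, 1024, 95)
```

So the arithmetic works out: the doubled mode fits within the pixel clock that commodity hardware already supports.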

Worth reading

A conversation between Jim Gray and Dave Patterson, via Tim Bray. Linger for a while at Tim's blog; it's one of the best reads out there.

Bullshit continued

I have two quantitative questions about bullshit:

  • How does the bullshit level vary between types of communication fora?

  • How does the bullshit level vary between various topics of otherwise similar intellectual content?

I was thinking about the latter question, especially, when responding to a rant by jwz about gamma correction. Gamma is not all that complicated or difficult, but a lot of people get it wrong, a huge fraction of what you find on the Web is bullshit, and you even see your share of kooks (and see Poynton's refutation).

A quick experiment using Google searches shows that it's a lot easier to find bullshit about gamma correction than, say, the structure of rhodopsin. The query "rhodopsin structure" yielded 9 functioning links, all of which appeared to be high quality and free of bullshit. The same search for "gamma correction" yielded 7 independent links, of which one was an ad for a product, and all of the remaining 6 had problems. The first hit is typical - it suggests that the nonlinearity between voltage and luminance in CRT's is a "problem" that needs to be "corrected", rather than a sound engineering choice for video systems. Its sample images are poorly considered, and reinforce this faulty notion.
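One concrete example of the kind of thing the bullshit pages get wrong: pixel arithmetic (averaging, blending, resampling) has to happen in linear light, not on gamma-encoded values. A toy sketch, assuming a pure power-law gamma of 2.2:

```python
# Averaging gamma-encoded pixel values directly produces the wrong result;
# you must decode to linear light, average, then re-encode.
# A pure 2.2 power law is an assumption for illustration.
GAMMA = 2.2

def decode(v):   # encoded [0,1] -> linear light
    return v ** GAMMA

def encode(l):   # linear light -> encoded [0,1]
    return l ** (1 / GAMMA)

a, b = 0.0, 1.0                               # pure black, pure white
naive = (a + b) / 2                           # 0.5 encoded: too dark
correct = encode((decode(a) + decode(b)) / 2) # ~0.73 encoded
```

The naive average is visibly too dark; the linear-light average matches what a 50% checkerboard of black and white pixels actually looks like from a distance.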

Why is gamma correction so cursed? I think the main reason is that it doesn't belong to any discipline which is taught well in school, so there isn't a core of competent, respected people who know what they're talking about. Color science in general suffers from this problem. Even though color is a very basic part of everyday life, it intersects a wide range of academic disciplines, including physics, electrical engineering (particularly video), chemistry (less so these days now that digital cameras are replacing silver), psychology, computer science, and so on.

I use gamma correction as an example of a subject which needs good bullshit discrimination. How well does the web do this? Not very, at least measured by Google. There are some good resources on gamma out there, but they don't make Google's top 10, which presumably means that it's not popular to link to them. Do blogs do a good job? That's harder to answer because my own response skews things, but my sense is no.

Of course, I am thinking about a form of communication that seems to succeed in filtering out much bullshit: peer reviewed scientific publications. There are limitations, largely those of scope; for most important things that people care about, you can't find any scientific literature on the subject. Indeed, it would be very difficult to publish a paper about gamma correction in a prestigious journal, because it's a solved problem (in fact, television engineers got it right a long time ago, and it just took computer people to screw it up). The dollar cost of producing a peer-reviewed publication is also very high, but in many cases it's worth it.

PDF: Unfit for Human Consumption

Of course, it's possible that one of the big reasons that Poynton's Color FAQ is not a popular link target is the fact that it's in PDF format. Jakob Nielsen, in the essay linked above, argues that PDF has very serious usability problems as a format for Web pages. It is tempting because you have far more control over the aesthetics (and it works way better for printing), but overall I have to agree with Jakob.

The good news, I think, is that many of these usability problems are not inherent to the PDF file format, but can be fixed. Indeed, many of the complaints Jakob raises have to do with the awkward integration between the PDF viewer and the Web browser. Acrobat has its own UI, but in the free software world, there isn't any viewer whose UI is similarly entrenched. It shouldn't be hard to integrate a PDF engine into a Web browser, so that you can browse fluidly between HTML and PDF formats without caring all that much which is which.

13 Jul 2003 (updated 18 Jul 2003 at 20:58 UTC) »
LTNB

Why has it been such a long time since I last wrote a diary entry? I'm not totally sure. I guess I've just been more inwardly focused lately, especially on family issues (drop me an email if you're curious - I just don't want to write on the family's permanent Google record). But I've also been a bit of a hermit - I like it when there are no emails or phone calls.

Even so, I have stuff to write about.

Sleep apnea

I've been trying to build a home sleep study so I can determine which factors affect its severity (I'm especially interested in weight, even though my BMI is right in the middle of the curve). I'm about done, but it's taken more time and energy than I counted on.

Basically, the ingredients are:

  • A pulse oximeter (available from eBay for about $200-$300). I have the Ohmeda 3740, which I can recommend; it seems to be very popular for sleep studies.

  • Strain gauge belts for measuring "respiratory effort". I have two of the Grass Telefactor 6010, at $60 each.

  • A LabJack and the EI-1040 instrumentation amplifier for getting the signals into the laptop.

Basically, you plug everything into the LabJack. The Ohmeda has outputs in the right voltage range; just use 1/8" mono audio cords. The strain gauges need to be amplified - I use a gain of 1000 on the EI-1040. Since the input impedance of this amplifier is so high, you'll need some resistors to provide a return path for the input bias current.

The Linux driver for the LabJack is still very alpha, so for the time being I'm just using the Windows stuff. All I need is to log the data, and the LJlogger makes a very easy-to-use ASCII file (suitable for gnuplot).
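Once the data is logged, scanning it is straightforward. Here's a hedged sketch of the kind of analysis I have in mind - the column layout (time, SpO2, respiratory effort) is an assumption for illustration, not the actual LJlogger format:

```python
# Scan logged samples for oxygen desaturation events: count runs of
# consecutive samples where SpO2 dips below a threshold.
# The (time, spo2, effort) row layout is an assumed format, not LJlogger's.
def desat_events(rows, threshold=90.0):
    """Count distinct runs of SpO2 readings below the threshold."""
    events, in_event = 0, False
    for t, spo2, effort in rows:
        if spo2 < threshold and not in_event:
            events += 1          # a new desaturation run begins
            in_event = True
        elif spo2 >= threshold:
            in_event = False     # run ended; ready for the next one
    return events

# Synthetic data: two separate dips below 90% SpO2.
rows = [(0, 97, 1.0), (1, 88, 1.1), (2, 87, 0.9), (3, 95, 1.0), (4, 89, 1.0)]
```

A real scoring would also want minimum event durations and desaturation depth, but this is the basic shape of it.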

I'll probably make a Web page with a more detailed recipe and the results as I find them.

Font rendering

Really high quality text and font rendering is challenging. It's not just a question of there being a "right way" to follow; there seem to be many ways to improve font rendering. Also, what constitutes "good" text is highly subjective. I personally favor a high fidelity reproduction of print fonts, even with some loss of contrast, while others prefer their fonts highly hinted. If you're in the former camp, OS X pretty much nails it, and if you're in the latter camp, the RH desktop with Vera fonts and TrueType hinting enabled comes close.

But the ultimate goal, of course, is to combine fine typography aesthetics with high contrast rendering. This is a harder problem when the text metrics have to match the source exactly (as is the case for PostScript and PDF viewing), but is still challenging even when they don't. My favorite so far is Adobe Acrobat 5, but it's still not perfect. The big problem is that spacing errors are typically in the half-pixel range, which is not really pretty (the repeated letters emphasize the spacing errors; it's not as easy to see in body text). Also, in this sample you can see that the 'm' lacks symmetry, which bothers me.

Other attempts, in my opinion, don't work as well. In particular, the screenshots I've seen of Longhorn suggest that it'll distort the stroke weights to integers, but still suffer some loss of contrast in the case of subpixel positioning. Of course, they've still got some time to improve it before they ship.

Longhorn may have an even more significant consequence for us: it promises to support very high resolution displays. So far, there's been a bit of a catch-22 situation. High resolution displays are available but expensive, so very few have been shipped, and almost no software supports them, so there isn't the motivation to figure out how to manufacture and sell them cheaply. But if Microsoft puts their weight behind them, it could easily break this cycle.

I haven't seen any of the technology involved, but I'll take a guess. Since high resolution displays are around 200dpi, it makes sense for non-hires-aware apps that do bitmap drawing to just double the pixels. In most cases, text should be able to go at full res without any software changes - the requirements are rather similar to simple low-res antialiased text.

So this is what I think they'll do. The default graphics context will be set up to double all coordinates before drawing, and zoom bitmaps accordingly. Apps that expect to draw to a 96 dpi screen will look about the same as they do now. Then, there'll be a call to get a hi-res graphics context if available, with 1:1 pixel drawing, and correspondingly higher precision for positioning glyphs. It'll be important to take this path for the Web browser, the word processor, and graphics software, but for a lot of other stuff it won't be as important.
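A toy sketch of that guessed-at scheme (the names are illustrative; this is speculation about what Longhorn might do, not a real API):

```python
# Sketch: a default graphics context that doubles legacy 96-dpi coordinates
# onto a 200-dpi-class framebuffer, plus an opt-in hi-res context with
# 1:1 pixels. All names here are hypothetical.
class Context:
    def __init__(self, scale=2):
        self.scale = scale   # 2 = legacy doubling, 1 = hi-res-aware
        self.ops = []        # recorded device-space drawing operations

    def draw_rect(self, x, y, w, h):
        s = self.scale
        self.ops.append(('rect', x * s, y * s, w * s, h * s))

legacy = Context()           # old apps: coordinates doubled transparently
hires = Context(scale=1)     # hi-res-aware apps request 1:1 pixels
legacy.draw_rect(10, 10, 50, 20)   # lands at device coords (20, 20, 100, 40)
```

The point is that legacy apps look about the same as before, while apps that ask for the hi-res context get the full device resolution and finer glyph positioning.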

The consequences could be dramatic. For one, if Apple isn't working on something similar, they'll face a mass defection of graphic arts types to the MS platform - once high res displays are affordable and really work, people will not want to go back. It'll be like trying to sell a black-and-white only lineup when the rest of the world is moving to color.

Second, I'd expect high res displays to come down to commodity pricing. I'm not an expert on the economics of displays, but I'd expect that the actual cost of manufacturing a high res LCD isn't much higher than a low res. I think that's much less true for CRT's, but they're on their way out anyway.

If the Linux desktop folk have any real vision, they'll start working on support for high res displays now. A lot of what I'm talking about, with the 2x coords and bitmap zooming, could be done at the X level so that legacy apps would work without ridiculous tininess. Then, modern GUI toolkits could grab the higher resolution context and get crisp, accurately positioned text. I'm not holding my breath, though.

If high res displays become widespread, then the need for high-tech font hinting basically goes away, in much the same way that the need for fancy dithering algorithms went away when video cards went from 8 bits to 24. So, long term, I'm not sure it makes sense to invest a lot of work into hinting.

Bullshit

chalst: Your link to the essay on bullshit is most excellent. I've been toying with the idea of doing a blog essay on the theme of lies and lying, but I now see that "bullshit" is the superior concept. It applies so well to so many things I've been thinking about recently: the "evidence" justifying the war against Iraq, the SCO lawsuit, the way Time magazine flogs TW/AOL product on their front cover, and countless more. It's all bullshit.

Now here's a question: is the blog form more or less prone to bullshit than mainstream media? I'd say there's a lot more diversity in blogs, so if you're seeking to cut through the bullshit, you have a much greater chance of success in blogs. But blogs also seem to be pretty good at spreading bullshit.

Hopefully, in this age of high-tech communications gear, the study of bullshit can become both quantitative and prescriptive - to design, and choose, communications fora with the explicit goal of minimizing it. Sign me up!

P2P-Econ Workshop

I spent the day, and will spend tomorrow, at the Peer-to-peer Economics Workshop at UC Berkeley. Most importantly, I'm getting to meet friends such as Bram and Roger, as well as new people such as Tim Moreton and Andrew Twigg, who are taking my stamp-trading ideas forward into new and interesting territory.

I've always been skeptical of economics as an intellectual discipline. It was originally called "the dismal science" because of the pessimism of some of its predictions, but I'm sure the name has stuck because of the quality of the science. Indeed, some of the presentations fit the stereotype perfectly: taking simple questions and muddying them all up with hokey math and fancy sounding theories.

However, the presentation by Hal Varian was a wonder to behold. In his hands, economics feels much less like a science than an art - an art of explanation. He presented a very simple model of how sharing (including both traditional forms such as libraries and new forms such as P2P networks) affects the pricing of media works. In fact, it was a dramatically oversimplified model, but that was ok. The simplicity of the model means that it's easy to understand, but it still captures something about the real world. Impressive.

There's good news and bad news on "reputation systems". The good news is that academics are starting to study the topic seriously. The bad news is that the system they're studying is eBay. There's nothing especially wrong with eBay, but I still find it sad to see directions of academic study driven so obviously by commercial popularity.

Regarding trust metrics, it's obvious that a lot of people who should know about them, don't. That's easy to fix, though, and indeed a workshop such as this one is one of the best ways to do so. And, of course, it reminds me that I really need to write up my ideas in academically citable form, which finishing my thesis will do!

Long time no blog, again.

Ghostscript

We have new releases out, both GPL and AFPL. I recommend upgrading, if only to get the new security updates.

The 8.10 release has some seriously enhanced font rendering, thanks to Igor Melichev. Try experimenting with -dAlignToPixels=0, which enables subpixel rendering. I think we'll make this the default in a future release.
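A plausible invocation for experimenting with it (the device, resolution, and output filename here are illustrative, not from the release notes):

```shell
# Render a PDF to PNG with antialiased text; -dAlignToPixels=0 enables
# the new subpixel rendering, and the alpha-bits flags turn on AA.
gs -dSAFER -dBATCH -dNOPAUSE \
   -sDEVICE=png16m -r96 \
   -dTextAlphaBits=4 -dGraphicsAlphaBits=4 \
   -dAlignToPixels=0 \
   -sOutputFile=page-%03d.png input.pdf
```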

I spent two days last week at a customer site. I'm glad we have both paying customers and free users - addressing immediate needs is fun, and the commercial world is often better at expressing appreciation for work than the world of free software.

You probably saw that Ghostscript is leaving the GNU umbrella. This has been brewing for some time, but only last week got publicity. I think we weathered it pretty well, all things told.

Responses to threads on Advogato

nymia: It's interesting that you say DOM is the future, because I was very much of the same opinion four years ago. Then I tried to actually implement stuff with it, and became disillusioned.

The ideas behind DOM are sound, but the spec itself has a lot of bad engineering. Among the highlights: It's virtually impossible to implement DOM in a memory-efficient way. Character encoding is forced to be UTF-16 (a Java-ism). It's difficult to do DOM memory management without garbage collection. The event propagation model is broken, and doesn't support multiple Views in a Model-View pattern.

I hope somebody engineers a better DOM-like tree access API, and that's the future.

yeupou and dobey: yes, I'm also unhappy with the state of viewers based on Ghostscript. They don't have to suck, but they seem to.

At this point, I think the best bet may be GSView, which is scheduled to be released under the GPL around the same time as 8.0, in November. There's a good chance that it will become the viewer of choice.

Meanwhile, I note that xpdf 2.0 is out. The good news is that they've replaced their hand-rolled GUI toolkit with a real one. The bad news is that it's Motif. I've been meaning to contact Derek for a while - perhaps I will soon.

Worth reading

Here are some interesting things I've read recently:

Retroactive Moral Conundrum, a piece by Tim Bray on the Iraq war. I agree with his main point - while getting Saddam out was a good thing, having our administration lie about things constantly, and having nobody really call them on it, is a bad thing. Take the time to browse the rest of Tim's blog while you're there - it's now one of my favorites.

Philip Greenspun on Israel. I'm not sure I agree with what Philip says, but it's very thought-provoking.

Sleep

I just had my second sleep study this morning, and this time they did find some apnea. Next, we'll see what to do about it.

There are a few factors which have changed since the last time, one of which is that I'm 160 lbs now, about 20 more than when I had the first study done. I'm going to experiment with losing weight again, and this time hopefully I'll be able to track any improvements. So, again, I'm interested in putting together a home sleep study. The sound is fairly easy to record, but it doesn't give a clear indication of breath cessation. I think the least invasive technique for measuring that is a flow meter, but real sleep studies also add an EEG and a pulse oximeter.

Again, if anyone knows a good, inexpensive source for this equipment, or has experience doing something similar, suggestions are greatly appreciated.

Xr, fonts

I talked in depth with Keith Packard a few days ago. We spent a fair amount of time talking about font rendering, and also about the Xr project. Xr is interesting - it overlaps in goals somewhat with Fitz, but with a primary focus on interactive display applications.

Another thing Xr gets right is that it's cross-platform - earlier Xrender work seemed much more Unix-specific.

I think the question of how to do high quality text rendering on the desktop is still open. One local maximum is the current performance of Xft with the Vera fonts and with TrueType hinting enabled (screenshot). This configuration succeeds in rendering high-contrast, fairly visually uniform text (stroke weight and spacing are quite uniform, but curves and diagonals are softer than vertical and horizontal lines). I personally find the 1-pixel stems to be a bit light, especially on my 132 dpi LCD screen, and in general prefer text that looks a little more like the original, unhinted fonts. In particular, I like different sizes of the same font to be roughly consistent in darkness. With Vera, medium-big print is much, much lighter than small (as soon as the stroke weight goes to 2px, it's darker again).

This kind of rendering is perfectly reasonable for GUI elements and HTML rendering, but not for WYSIWYG viewing (such as PostScript and PDF). For this, I think the tradeoffs shift a bit. Even aside from matters of personal preference, to ensure even spacing you need subpixel positioning. That, in turn, basically forces lower stroke contrast (although not necessarily as soft as completely unhinted rendering, such as OS X). The Xft API doesn't support subpixel positioning, but no doubt a more sophisticated text API for XRender will.
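A quick illustration of why integer quantization of advance widths hurts spacing - the advance value here is an arbitrary illustrative number, not measured from any font:

```python
# Rounding a glyph's advance width to whole pixels makes errors accumulate
# across a line of text, which is why precise (WYSIWYG) layout needs
# subpixel positioning. The 6.4 px advance is an illustrative value.
advance = 6.4                     # ideal advance width in pixels
n = 10                            # glyphs in a row

exact = advance * n               # 64 px: where the text should end
quantized = round(advance) * n    # 60 px: integer-quantized advances
error = quantized - exact         # ~4 px of drift over just ten glyphs
```

Half a pixel of error per glyph doesn't sound like much, but over a line of body text it's enough to make the spacing visibly wonky, or to break the metrics a PDF viewer is obliged to match.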

In any case, with all this playing with fonts and font renderers, I've rekindled work on my own font, LeBe, a revival of a font from 1592. Some of the glyphs still need work, but overall I'm pleased with the way it's going.

Formalism

Formal proofs don't mean that mathematics is reduced to no more than the manipulation of strings. A proof reflects the personal style of the person devising it, whether or not the individual proof steps can be formally checked. The need to recount the proof steps in detail is a constraint, just as the moves of chess or Go are a constraint.

Mathematics seems to get along pretty well without strict adherence to formalism, but I think that using mathematical techniques for computer programming is a different story. The mathematical content of most relevant theorems is mind-numbingly tedious, so I think you need a computer to check them, and probably to help generate them, for realistic programs.

Languages

I have a confession to make. I've been designing programming languages since I was nine years old (the very first was syntactic sugar for 8085 assembler). Most of them have never been observed in the wild, but one seems to have escaped and taken on a life of its own. Now, I find that there's an implementation of "99 bottles of beer" in it, as well as an interpreter written in OCaml.

I am no longer excited by Io; continuations seemed a lot more interesting when I was a teenager than now, although it is of course useful to know how to program with inverted flow of control. It is, perhaps, a useful illustration that continuations, like a number of other language primitives, are powerful enough that you can build an entire programming language from them and nothing else.

Also, I enjoyed chromatic's essay What I Hate About Your Programming Language. I tried responding on the comment page, but it wouldn't take my login. I'm sure the login problem is due to the website being written in some dynamic language that encourages messing with stuff, rather than good ol' C, which requires you to think through what you want the code to do first.

Seriously, while chromatic's mention of mod_virgule as a website written in C is gratifying, if I had to do it from scratch, I probably would use Python. I like programming in C (especially deep algorithms), and I like programming in Python (especially prototypes, and gluing things together).

Hacking is like painting

Paul Graham has a fine new essay entitled Hackers and Painters. It's very dense with ideas, and does a good job of describing the flavor of creative programming, something difficult to get across to a mass audience.

Hacking is like painting in some ways, but the analogy breaks down in others. Here's my take on some of the points.

If you want to hack, or paint, get a day job. In hacking, as in painting, it is very difficult to find a situation that rewards artistic accomplishment directly. Business and academia reward something that's different enough to be frustrating. I think that many people had the hope that the explosion of "open source business" would solve this problem, but it hasn't. The "day job" works for painters, musicians, and other artists, and is a good model for hackers as well.

Studying works of art is vital to learn to do art. The great painters learned by studying and copying other great painters, and refining the ideas. One of the best criteria for being a good writer is reading voraciously. Yet, as far as I know, it's rare to teach computer programming by studying great programs. Historically, a big part of the problem was the dearth of great programs available for study. Free software fixes this problem. I considered myself a hotshot programmer before getting involved in free software, but grew enormously through reading other people's code. I'm not sure how much code people study in academia; the Lions book is certainly one early encouraging example.

But: paintings last a long time, code rots fast. In museums and reproductions, we enjoy a body of work going back hundreds of years. In many ways, these old paintings are better than the work of today. Yet, with few exceptions, code that hasn't been maintained for more than a year or so is dead. One such exception is code that performs a well-defined task, and does it well (such as libjpeg). Another such exception is "retrocomputing".

Getting the spec right is more important, and more interesting, than implementing the spec. Once you've got a good spec, any competent programmer can implement it. (this is a good working definition of competence :) But coming up with a good spec is very hard. Usually the best way to do it is to incrementally refine an implementation. It's the same for painting - X-ray analysis frequently reveals things painted over and reworked. Oil paints are good for this kind of reworking. Analogously, dynamic languages are better than static languages for this kind of process.

There's a lot of discussion in blog land about static vs dynamic languages, and I expect to return to the topic. I consider programming in C to be like sculpting in marble, and programming in Python to be like sculpting in clay. They're both worth doing, but for different kinds of work. Also, giving a chisel and block of marble to a newbie sculptor produces similar results to a newbie C programmer: a pile of marble chunks covered in marble dust. Paul uses the similar analogy of drawing in ink vs pencil.

Programmers envy mathematicians too much; they should aspire to be more like artists. I see Paul's point here, but in the long run I disagree. Paul uses the analogy of paint chemistry as a scientific theory underlying painting, in much the same way that computer science theory underlies programming. But I think the analogy breaks down. Paint chemistry is obviously useful in predicting the appearance of colors, the way they'll interact with each other, the brush and the canvas, and other important stuff such as glossiness. But the limitations of paint chemistry are equally clear - as a theory it speaks not at all to composition, emotion, or the infinitely subtle issues of aesthetics.

But where are the corresponding limits in the theory of computer programming? This is an open question; certainly many attempts to apply mathematics to real programming have been disappointing. But I think this is because mathematicians aren't good at doing the kind of mathematics needed for programming, the kind where the spec (or the "theorem to be proved") is extremely complex. Indeed, the theorems most highly valued in mathematics are those where the statement of the theorem is very simple, but the proof is deep (Fermat's Last Theorem is a shining example).

Fortunately, there are people exploring that edge, "proof hackers" if you will. Metamath, HOL, and proof-carrying code represent some of the most interesting work in that direction. If these people succeed in expanding the useful scope of mathematical technique, then it will dramatically change the way we do programming.

In this distant future, the use of mathematics will depend greatly on the type of programming being done. Aesthetics will still be important (without question in any area that deals with human interaction), but for well defined tasks and well defined properties (especially security properties), the best programs will be provably correct.

tree ISA vehicle

I see yet another suggestion that Max has the hacker mind. The other day, he discovered an undocumented feature in "Reckless Drivin'" - if you press the Alt key while clicking on Start, it pops up a dialog box that lets you choose a vehicle number.

So we started going through the numbers, and he enjoyed being able to drive cars, buses, go-karts, helicopters, and so on. Then, we found a vehicle code that corresponded to a tree. In a 2D driving game, it makes sense to consider a tree as a type of vehicle; it's rendered the same way, and even though it doesn't share all methods, it's probably easier to just leave methods for motion unimplemented than to branch out the class hierarchy.
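In code, the design I'm guessing at might look something like this (purely illustrative - I haven't seen the game's source, and the names are mine):

```python
# Hypothetical sketch of "tree ISA vehicle": a Tree modeled as a Vehicle so
# it shares the rendering path, with the motion methods left as no-ops
# rather than branching out the class hierarchy.
class Vehicle:
    def render(self):
        return f"drawing {type(self).__name__}"  # shared 2D sprite rendering

    def accelerate(self):
        return "vroom"

class Tree(Vehicle):
    def accelerate(self):
        return None   # trees don't drive; rendering is inherited anyway
```

From the engine's point of view this is perfectly sensible; it's only against the real-world taxonomy that it becomes funny.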

Anyway, when Max saw the tree, he thought it was very funny, and laughed out loud. Was this because he saw a glimmer of the design of the program, and found the incongruity between the internal "tree ISA vehicle" and the real world, or is he just a happy kid who likes to laugh? Either way makes me feel good.

Urgh, haven't updated in a while. Last weekend, we went to a Quaker retreat at the beautiful Ben Lomond Quaker Center, and then the next few days an Artifex staff meeting.

RH 9, fonts

My laptop's hard drive failed (another quality product from IBM :). This time around, I decided to install RH 9 from scratch on the new drive (it was Debian before).

So far, I like it. I miss apt-get, but more stuff seems to just work. Also, the antialiased fonts are a nice big jump. I am sad that support for subpixel positioning isn't there yet, though. In general, you can get away with integer quantization on the widths when you're doing imprecise text layout (GUI labels and HTML rendering, as opposed to, say, PDF), but there are still definitely cases where the spacing gets wonky.

As far as I know, there is only one text rendering engine that does antialiasing, hinting (specifically, contrast enhancement by subtly altering the position of stems), and subpixel positioning: Acrobat (Reader) 5. Mac OS X does AA and subpixel, while RH 9 (by means of FreeType 2) does AA and hinting. I'm looking forward to the first free software implementation of all three.

At the staff meeting, we decided not to move forward with our funded project to integrate FreeType as the font renderer for Ghostscript, concentrating instead on improving the existing font renderer. I'd still like to see the FT integration happen, though. The best outcome, I think, would be to recruit a volunteer from the free software community to take over this project.

Fansubs

I've discovered anime fansubs. These are basically Japanese anime shows, with English language subtitles added, then encoded (usually to MPEG-4) and distributed over the Internet. Their legal status is murky at best, but a sane code of ethics prevails: fansubbers release shows that have not been licensed to English-speaking markets. Under this code, everybody wins. Copyright owners of shows don't lose revenue directly, because there isn't any from those shows. Indeed, it's likely that popularity of the fansubs fuels interest in official licensing. And, of course, viewers win because of access to great shows like Hikaru no Go, which would otherwise not be available, or only at great difficulty. Alan has started watching Naruto (I still read the subtitles aloud to him, but I'm sure his reading speed will catch up soon), and enjoys the insight into Japanese culture as well as the Ninja-themed action-adventure storyline.

The best of the fansubbers do really good work on the translation, subtitling, and other stuff; arguably much better than many "official" versions. I think Hollywood could learn much from their example.

People leaving

I've been feeling a bit down the past few days. The departure of some good people from Advogato is probably a factor.

I want this to be a good site, and bring people together as part of a community. I know I can't please everybody all the time. But I am wondering if there are some basic things I can do to make this a more congenial place.

First, I deliberately chose a light touch for applying the trust metrics. They're basically opt-in, especially the more recent diary ratings. I had thought that users of this site would have a fairly thick skin, and that simply giving people the tools to filter out stuff they didn't want to see would be sufficient. But perhaps that assumption isn't right. Maybe the default should be to present the recentlog with a trust metric computed relative to a "seed", so that most people wouldn't see low-ranked entries unless they deliberately chose to seek them out.

I've been thinking of doing something like that for an RSS feed of the recentlog anyway, as there aren't good client-side tools for filtering those. So the question is: would stronger filtering bring back the people who left? Is this an important goal, in any case?
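For flavor, here's a toy seed-based filter. To be clear, this is not the actual Advogato trust metric (which computes a network flow over the certification graph); it's just the simplest possible hop-count version, to show the shape of the idea:

```python
# Toy seed-relative filter: accept accounts within a few certification hops
# of a trusted seed. The real Advogato metric uses network flow with
# per-level capacities; this is only a simplified illustration.
from collections import deque

def reachable(certs, seeds, max_hops=3):
    """Return the set of accounts within max_hops certifications of the seeds."""
    seen = {s: 0 for s in seeds}
    q = deque(seeds)
    while q:
        u = q.popleft()
        if seen[u] == max_hops:
            continue                    # don't expand past the hop budget
        for v in certs.get(u, ()):
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return set(seen)

# Hypothetical certification graph.
certs = {'seed': ['alice'], 'alice': ['bob'], 'bob': ['mallory']}
```

The recentlog would then show only entries from accounts in the reachable set, with everyone else hidden by default rather than opt-in.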

There's always been a tension between people hosting their own blogs elsewhere and having them hosted here. As a blog host, this site has been fairly minimalist, although I can definitely see adding in the really important features over time. But perhaps it's more distributed, more Web-like, for each person to be responsible for their own blog hosting, and use other tools to integrate blogs from disparate servers. In the meantime, I think our recentlog provides a useful and interesting mix of individual postings and communal discussions.

Inline functions

After some more thinking, I don't really like the DEF_INLINE macro I wrote about last time. The simplest approach, I think, is to define "inline" to the compiler's own keyword, or, if the compiler simply doesn't support inlining, to the empty string, so that each .c file that includes the .h with the inline functions gets its own static copy. An interesting question is: are there any compilers in widespread use today which do not support inlining? Certainly none of the ones I use.
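Here's roughly what that looks like in the preprocessor. The compiler checks are illustrative (a real build would probe via the configure script), but the fallback behavior is the point: with no inlining support, each translation unit simply gets its own static copy.

```c
/* Sketch of the simple approach: map INLINE to the compiler's inline
 * keyword where one exists, otherwise to plain "static" so every .c file
 * including this header gets its own static copy of the function.
 * The specific compiler tests below are illustrative, not exhaustive. */
#if defined(__GNUC__)
#  define INLINE static __inline__
#elif defined(_MSC_VER)
#  define INLINE static __inline
#else
#  define INLINE static   /* no inlining: just a per-file static function */
#endif

/* Example function that would live in the shared header. */
INLINE int clamp01(int x)
{
    return x < 0 ? 0 : (x > 1 ? 1 : x);
}
```

Either way the function's linkage stays internal, so there are no duplicate-symbol problems at link time; the only cost of the fallback is a little code duplication.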

