Older blog entries for raph (starting at number 395)

Almost ready to emerge from my cocoon

Wow, it's been a long time since my last blog entry. The big news is that I now have an apartment in Berkeley. I'm still not 100% moved (there are books and so on at the old place), but all the furniture is here, and it's already starting to feel like home.

The past couple of months have been fairly emotionally demanding, and I'm still feeling behind on a bunch of things (a lot of it mundane paperwork seemingly needed for living in modern times), but I sense this period is about to end, and I'm going to become a lot more social again. This apartment is only 4 blocks west of the North Berkeley BART station, and a nice hike to campus, so the logistics of getting together with friends is going to be a lot easier than it was way out in Benicia.

One place in particular I hope to meet up with friends is LinuxWorld in SF, the second week of August. I'll be helping Till Kamppeter man the linuxprinting booth, and will be showing off some of my work with high end graphic arts-quality printing using Ghostscript and my screening technology. I'm not expecting much from the conference itself, but it should be a reasonably good place to reconnect with friends.

Maybe I'll see you there!

Update

Looking over my last entry, there's progress on at least one front: I have a car now, so I've finally joined the petroleum-burning bandwagon. The thesis also continues to go very well. I have working code for solving sophisticated sets of constraints (especially those needed to make smooth corners and transitions between straight and curved segments), and have been steadily refining the numerical techniques. I can barely wait to release all this stuff publicly.

The primary focus of my work lately has been screening and image science. I'm currently doing drivers for the Konica Minolta 5430 and the HP DesignJet 110, with the goal of achieving even higher quality than the vendor-supplied drivers. All of this work will shortly get checked into AFPL Ghostscript, with the GPL release to follow not long after. I really like this kind of work, not least because of the immediacy of the results.

Metro

Microsoft has announced their Metro printer language, and their beta implementation is floating around. This probably won't make much difference in the free software community for some time, as there is precious little that Metro can do that PDF can't. However, PDF doesn't ship inside many consumer printers (and probably won't, because it demands so much RAM to run), so this might revitalize the dream of device-independent page description languages started by PostScript, but then largely abandoned by Adobe due to greediness and lack of focus (their desktop apps make real money; their printer business doesn't).

If there are other people out there in free software-land working on Metro, I'd really like to hear from you.

Firefox plugin

It's not too much of an oversimplification to say that page description languages are now used in two places: printers and the Web. Unfortunately for Adobe, they've dropped the ball on the latter as well. While the PDF file format is very nice for distributing statically formatted documents, most users hate to follow PDF links, mostly the fault of Adobe's wretched browser plugin.

So there's an opportunity to do better, and tor has been hacking on a Firefox plugin based on mupdf (soon to be renamed ghostpdf). It looks really promising.

Toys

For some reason, I find myself wanting a Nokia 770. Maybe it's just the 226 dpi screen, maybe it's that it looks really easy and pleasant to develop for. Nokia seems serious about being developer-friendly, as clearly evidenced by their no-nonsense website. If they really want to stir up some buzz, they should seed a few dozen devices to the free software community.

Checking in

The energy I have for writing in this diary ebbs and flows. The last few months, it's mostly been ebbing, which is not surprising considering the current texture of my life: too much emotional energy going into trying to get an amicable and fair divorce settlement, and wayyyyy too much time schlepping around the Bay Area on various forms of public transportation to meet with assorted counsellors, lawyers, and so on.

I expect things to improve in a good many of these areas. I plan to get a car soon, and also plan on moving to Berkeley some time around June. Overall, I'm feeling a lot better than I was, and look forward to having the energy to interact more with the free software community and others.

I know Advogato's trust metric is currently broken, and will fix it when I get a little time.

My thesis is going splendidly. I understand splines far more deeply than when I started. After I publish the thesis, I will find out whether the rest of the world gives a rat's ass. Either way, I'm having serious fun working on it now.

I feel brimming over with ideas for various code projects, considerably beyond my personal ability to implement them. What should I do with all these ideas? The answers are far from clear to me, and it will probably continue to be a challenge to figure out.

The weather is absolutely beautiful here, and I feel pretty good.

Thesis work

My thesis work is going great, although I'm now less in the mood to write about it publicly in detail, because I might seek patent protection on some of the ideas.

For one, I've started to seriously dig into the study of real splines (i.e. thin strips of wood and the like). I just got Love's Treatise on the Mathematical Theory of Elasticity and am trying to wrap my head around the math and physics concepts contained therein. I think it's possible to do as well as, or even better than, real splines, as long as you're not afraid of using advanced math to make the numerical problems tractable.

The math involved is mostly calculus of variations. I came across it a little when I was taking math classes over my head as a young teenager, so it feels like nice closure to actually be using it now.

Grayscale bitmaps

The response to my query on grayscale bitmap fonts was disappointing, to say the least. Hrant Papazian is working on a series of monospace grayscale fonts called Coda. He is offering to release them for free assuming the community gets its act in gear and is actually able to use them.

Why have grayscale bitmaps utterly failed to take hold? One theory is simply that they are too hard to design, but I don't think that holds water; there is now a small community of grayscale font designers producing very impressive work. A more disturbing theory is that, failing copyright or other "intellectual property" protection, the incentive to create the fonts has been missing. It's a strange quirk of US copyright law that typeface designs and bitmap images are free of copyright, while packaging a typeface design as "font software" (presumably, executable code rather than a static data structure) confers upon it the protection given to software in general. Thus, we have something of a testbed for the public domain utopia advocated by anti-copyright extremists.

And the empirical results of that testbed are devastating. The technological limitations of the early '80s created a golden era of monochrome bitmap design (most especially the work of Susan Kare for Apple and others). But as soon as technology made scalable fonts feasible (with the corresponding upgrade in copyright protection), bitmaps were abandoned as fast as possible.

Even in the scalable domain, we still have lots of effort poured into making hinted TrueType fonts as opposed to nurturing the development of unpatented font technology. It's the triumph of GIF and MP3 over PNG and Vorbis all over again.

I think there is an optimum level of copyright-style protection to serve the public interest. The current Mickey Mouse-dominated legal environment almost certainly errs on the side of too much protection. The experience with bitmap fonts provides another datapoint on the "too little" side. Who will make the case for the baby bear?

Just testing

Not sure what happened to the recent diaries. I tweaked the db by hand and hopefully it will work now.

More later.

Grayscale bitmap fonts

Hand-drawn grayscale fonts have been an interesting idea for some time, but have never really caught on. They certainly take a lot more skill and patience to do well than ordinary bitmap fonts.

That said, I think they can fill an interesting niche in the 100-130 dpi resolution range, which is of course where the vast majority of all users live today. At lower resolutions (in particular 72dpi, which was standard for a long time), the grayscaling is quite visible, which can be distracting. At much higher resolutions (say, 200dpi), hand-tuning is a waste of time because outline fonts can deliver somewhere around 95% of the optimal contrast levels even without sophisticated hinting techniques.

And, indeed, grayscale bitmap fonts are starting to come into existence. Check out this thread on typophiles about Firefox text rendering, and this one about monospaced fonts for programming. Also see the bitmap critique forum for some striking examples of grayscale (and monochrome bitmap) work.

Can we use grayscale bitmap fonts on our free software platforms? In theory, we should be able to, because Freetype2 supports the grayscale bdf format, as I was able to verify with the ftview demo program and a hacked-up font. However, if I put that bdf file in /usr/local/share/fonts (with a corresponding <dir> tag in .fonts.conf), then "xterm -fa Graytest" bombs quickly with a floating point exception. I don't really feel like debugging that right now.

So, I want to know a few things. Is it possible, by hook or by crook, to get fontconfig-enabled apps to load grayscale bitmap fonts out of the box? If there are stupid bugs keeping that from happening, how do we get those fixed?

I sense there is something of a chicken-and-egg problem here. If there are no good grayscale bitmap fonts out there, then there isn't a strong motivation to build support into infrastructure and apps. Conversely, if it's nearly impossible to deliver grayscale fonts, why spend the time drawing them?

But it may well be the case that the energy barrier is not that high, and that with a little directed force, we can break through it. Does the community have the collective will to make this happen, or is this just yet another of my crazy ideas that nobody cares enough to do anything about?

Coins and stamps

Thanks to everyone who sent Alan coins, stamps, and emails in response to my begging. He really appreciates it!

Spiro

I've tentatively chosen a name for my curve editor: Spiro. I posted a "code escape" here and on the Typophile forums. So far, I've gotten little or no response from my posting here, but lots of people checking it out and sending feedback on the Typophile board. Interestingly, many of the commenters are encouraging me to patent the idea and/or otherwise try to make money off it.

I'll have a new code snapshot soon. In the meantime I've been tracing the minuscules from 72pt metal Centaur (see screenshot). There's nothing like trying to use the thing for real to point out what needs improving :)

There's no rush, as I'm still tweaking the core algorithms, but obviously this is something that should go into other free software applications, most especially Inkscape. What I'm most inclined to do is release all the core code as GPL, package up nice Win32 and MacOS versions with refined GUI's (especially cut'n'paste with the dominant apps on those platforms) as commercial apps, and also try to license the code for inclusion in proprietary apps. We'll see how all that plays out.

Alan wants your stamps and coins

Alan has revived his blog after some period of inactivity. Lately, he's really been into collecting stamps and coins from various countries. He's gotten some from tor (Swedish), rillian (Canadian and euro), and Igor Melichev (Russian), but I figured the Advogato crew could help him out with lots more exotic countries.

He'd also like to get mail about stamps and coins, N64 games, or how he puts up with having such a crazy dad - send 'em to alan dot levien at gmail dot com.

Curves

I've been playing with my curve editor, tracing characters from old ATF fonts and so on. It's quite fun, and it seems to be a lot faster for getting good curves drawn than Bezier-based editors.

There's always just one more feature I want to add before doing a release, but at the same time I want people to play with it, so I've put up a new code escape. You'll just need a Linux/Unix/Fink system with Gtk+ and libart installed. I'm always pleased to get feedback on it.

The election

I'm still in a state of shock. I think things are going to get quite a bit worse before they get better again. I also think this outcome is likely to be bad for free software in many different ways, including strong anti-American sentiment around the world, shoring up of corporate power, and a general devaluation of the public interest as an important factor in political decision-making.

Curves: Ikarus

Ikarus is a near-legendary system for digitizing outline curves for fonts. I had a copy of Karow's 1987 book explaining the system (complete with FORTRAN source code, no less), but somehow managed to let it go. I've been meaning to get back to it for a while.

A month or so ago, I wrote that I was pretty sure that Ikarus used Hermite splines. In response, haruspex disagreed and said it was circular arcs. I was dubious because assigning one circular arc between each pair of adjacent knot points yields an overly rigid numerical system.

It turns out that we were both right. The actual Ikarus system uses a hybrid approach. The first step is to fit a cubic spline to the knot points (with weighting based on distance between knots, to improve cases when the knots are not evenly spaced). However, from this phase, only the tangents are preserved; the spline curves themselves are discarded.
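Karow's first pass solves a weighted cubic-spline system, but the essence of it (keep only a tangent per knot, throw the curve away) can be sketched with a simple chord-weighted difference. This is an illustrative stand-in, not the actual Ikarus formulation:

```python
import math

def knot_tangents(pts):
    """Estimate a unit tangent at each knot of an open polyline.

    Rough stand-in for Ikarus's first pass: instead of solving the full
    weighted cubic-spline system, use a chord-length-weighted central
    difference at interior knots and one-sided differences at the ends.
    Only the tangents are kept; no spline curve survives this phase.
    """
    n = len(pts)
    out = []
    for i in range(n):
        if i == 0:
            dx, dy = pts[1][0] - pts[0][0], pts[1][1] - pts[0][1]
        elif i == n - 1:
            dx, dy = pts[-1][0] - pts[-2][0], pts[-1][1] - pts[-2][1]
        else:
            ax, ay = pts[i][0] - pts[i - 1][0], pts[i][1] - pts[i - 1][1]
            bx, by = pts[i + 1][0] - pts[i][0], pts[i + 1][1] - pts[i][1]
            la, lb = math.hypot(ax, ay), math.hypot(bx, by)
            # Weight each chord direction by the other chord's length, so
            # closely spaced knots pull the tangent harder (the same goal
            # as Karow's distance weighting for unevenly spaced knots).
            dx = lb * ax / la + la * bx / lb
            dy = lb * ay / la + la * by / lb
        d = math.hypot(dx, dy)
        out.append((dx / d, dy / d))
    return out
```

The second pass then only needs the position and tangent at each knot, whatever primitive it draws between them.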

At this point, you have position (the original input) and tangent constraints for each knot. Those constraints uniquely determine a biarc for each segment between two knots. Since a biarc is made up of two circular arcs (tangent continuous at the join), you have an extra degree of flexibility that avoids the rigidity I was worried about.

So now I can state clearly the similarities and differences between Ikarus and my Cornu-based approach. They both try to fit a smooth curve through the input points, and both work in two passes, the first of which finds the tangents and the second of which draws instances of the primitive geometric curve. However, I use a Cornu spiral segment where Karow uses a biarc, and I choose the tangents so that the curvature of the final curve is continuous. Ikarus, by contrast, chooses the tangents so that the curvature of the intermediate cubic polynomial is continuous. By nature, you can't make biarcs continuous in curvature, but Karow's solution certainly counts as a rough approximation of that goal.

I need to make a visual illustration of the differences, but I expect the Cornu approach to be better in all respects. Cornu spiral segments, while resembling biarcs (and having precisely the same control parameters) have smoother variation of curvature. Similarly, I expect the tangent solution to be more accurate, not least because it guarantees curvature continuity of the final curve. When there is a large number of points (Ikarus recommends one knot for every 30 degrees of arc, and the illustrations in Karow's book seem to be even finer), the two solutions should be nearly identical. However, I think you can get away with about half as many points with the Cornu spline without sacrificing smoothness. That should make it easier to maintain high quality while editing and otherwise distorting, as well as the obvious savings in time because you don't need to enter as many points.
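For anyone who wants to play with the primitive itself, a point on the Cornu spiral can be computed by numerically integrating the Fresnel integrals. This is a minimal midpoint-rule sketch, nothing like the refined numerics a real fitter needs:

```python
import math

def euler_spiral_point(t, n=1000):
    """Point on the Cornu (Euler) spiral at arc length t.

    Numerically integrates the Fresnel integrals
        x(t) = integral of cos(s^2) ds,  y(t) = integral of sin(s^2) ds
    with the midpoint rule. In this parameterization the curvature at
    arc length s is 2s, i.e. it varies linearly along the segment --
    the smoothness property that distinguishes these segments from
    biarcs, whose curvature is piecewise constant.
    """
    x = y = 0.0
    ds = t / n
    for i in range(n):
        s = (i + 0.5) * ds
        x += math.cos(s * s) * ds
        y += math.sin(s * s) * ds
    return x, y
```

A fitter would then scale, rotate, and translate such a segment to satisfy the endpoint and tangent constraints at a pair of knots, exactly where Ikarus would place a biarc.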

Btw, the Ikarus book also clears up my hazy memories about previous spiral approaches to curves. I thought there was a historical spiral format due to Coueignoux, but the one I was thinking of is actually by Purdy and McIntosh. A survey paper by Lynn Ruggles has all the details for the curious, or read the patent.

Update

I spent this last week in the Boston area, visiting a customer site. The work went well, so I'm feeling pretty good about the trip.

While I was there, I had dinner with Norm Megill (of Metamath fame). I'm really glad I finally got to meet him in person. While we have had a great correspondence solely over the Internet, there's nothing like really getting to know a person. I want to make more room in my life for this kind of thing.

Cross-platform UI design

There's been quite a thread on the recentlog, with contributions by elanthis, haruspex, mathrick, and davidw.

I'm not claiming that my way is necessarily best, but for the time being I am building a separate UI for each platform, and trying to factor as much as possible into platform-independent code.

And so far it seems to be going well. It helps that (at least so far) my app is not very "UI-heavy", meaning that I don't have lots of buttons, menus, special widgets, and the like.

There's some duplication, but actually less than I expected. I deal with duplication all the time when I code. Excessive duplication is a sign that the code needs refactoring, and I generally find I can do that. On the other hand, if such refactoring would add more bloat (a new abstraction layer or whatever) than it saves by eliminating the duplication, it's maybe not such a great idea.

I also want to make a distinction between the problems of coding, which I don't mind, and problems of building and packaging, which I do. I expect that my lightweight C approach, without taking on lots of dependencies to libraries, scripting language implementations, and so on, will generate small, simple executables (or app bundles or whatever the hell platforms want) that should be generally pretty easy to build and package.

I've been thinking about these issues a bit, so...

A small rant about building and packaging

It's not hard to get a program written, compiled, and running on your own development system. It is generally harder to get that program packaged so that others can do development on it, and so that end users can get it installed correctly.

The problem gets a lot worse if you want your program to be cross-platform. Most IDE's and other development tools have a strong tendency to lock you into a specific platform. For example, I generally find using XCode pleasant, but it's not really helpful if you want to deploy those projects to other platforms. I'm sure the reasons for encouraging lock-in are more political than technical, but I also think there is a technical issue at the core.

An analogy comes to mind. Many, many programmers feel that they shouldn't have to deal with manual memory management. A good implementation of garbage collection in the runtime avoids the time investment needed to match up all the mallocs with corresponding frees, not to mention avoiding the problems that inevitably happen when the programmer gets it wrong: memory leaks, double-free and similar bugs, and, in many cases, security holes.

I actually don't feel this way strongly about memory management, at least for the type of programming I usually do. Typically, I spend so much time and effort working out the algorithms that I don't really mind the additional time needed to nail down the memory management. And, of course, since most of what I do is very performance-sensitive (2D graphics), I really appreciate the control over things like memory layout, with its strong implications on cache efficiency and so on. But if I were doing a different kind of programming (say, focussed on user interaction), I could totally understand the feeling. Why spend the time?

And I do feel that way about the build and packaging issues. Why should I have to spend a significant fraction of the time it takes me to write a program to write makefiles, config scripts, and so on? Further, why should I have to do so much more of that work if I want my program to be cross-platform, even though for the most part the algorithms should be equally valid on any platform?

To make matters worse, I consider most of the extra packaging work to be man-made, and quite ephemeral. Twenty years from now, I think there will still be runtimes with manual memory management, for which carefully crafted code is still useful (I grant that GC runtimes are likely to gain in popularity, though). However, I doubt that .pbxproj directories, MS Visual Studio Solution files, or the like will still be of much use.

Interpreted languages, or JIT bytecodes for that matter, have the potential to solve some of these packaging issues, but just being interpreted is nowhere near sufficient. I want to be able to take a source directory, press a button, and have ready-to-use packages for Linux, Win32, and Mac pop out. By "ready-to-use", I mean that the end user doesn't have to manually install prerequisite packages or libraries (as is almost invariably the case with, say, Java). In theory, it shouldn't be hard to do something like this; you just glue your source directory to a template that contains everything you need, and tar it up in whatever package format the target system expects.
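In the happy case, the glue-to-a-template idea really is only a few lines. Here's a naive sketch in Python; the directory layout and names are hypothetical, not any existing tool's convention:

```python
import os
import shutil
import tarfile
import tempfile

def build_package(src_dir, template_dir, out_tar):
    """Glue a source tree onto a platform template and emit one tarball.

    The template directory is assumed to hold whatever the target
    platform needs (launcher scripts, a bundled runtime, and so on),
    and the program's sources land under app/. A real tool would also
    pick the package format the target system expects.
    """
    with tempfile.TemporaryDirectory() as stage:
        root = os.path.join(stage, "pkg")
        shutil.copytree(template_dir, root)                   # platform boilerplate
        shutil.copytree(src_dir, os.path.join(root, "app"))   # the program itself
        with tarfile.open(out_tar, "w:gz") as tar:
            tar.add(root, arcname="pkg")
```

The hard part, of course, is not the tarring but maintaining correct templates for every target platform.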

When it comes down to it, though, even using an interpreted language doesn't buy you all that much. Cross-compiling C (or, for that matter, most any currently popular language) into x86 or PPC machine code is a solved problem. As far as I can see, the only thing that's really missing is the will to actually build an infrastructure for cross-platform package building.
