Older blog entries for raph (starting at number 227)

Chicago: You seem to be rediscovering coding theory, a subject pioneered by Claude Shannon in his landmark 1948 paper, "A Mathematical Theory of Communication." Unfortunately, I can't find a version of this paper online. Here's an up-to-date, freely available text. You'll find that page 4 of the introduction covers your recent postings.

http://www.mth.msu.edu/~jhall/classes/codenotes/coding-notes.html

A couple more rating FAQs

Where is the interface to rate another's diary? At the bottom of their personal page. Part of the reason it's there is to encourage you to rate based on several entries, not just one.

Why do different people see different ratings? That's the way the engine works. Within a subcommunity, you might expect to see ratings more relevant to that subset. If you want to see the raw ratings seen by foo, go to http://www.advogato.org/rating/report/foo

Learning TCP/IP

I didn't mean to be condescending to David McC. I mostly felt that learning TCP/IP at a lower level would be useful, and consistent with his approach to other technical issues. After all, "new" is a perfectly good abstraction for memory allocation, but that hasn't kept David from exploring allocation at a lower level, to solve quantitative problems.

Also, I feel that the knowledge encoded in the TCP/IP protocol is deep and enduring. I wouldn't be so eager to recommend learning a passing fad.

Some trust musings

Wes is doing a fine job reporting on the TCPA and related issues. Whit Diffie talked several years ago about the "marijuana-ization of crypto", and I feel this could actually come to pass with DRM-free media players. Of course, there will be massive civil disobedience, but there are massive numbers of illegal drug users (and dealers) now. The platform, and the laws mandating it, could have a very profound effect on free software.

The use of the word "trust" is interesting in this context. Is it the same trust I talk about with trust metrics and the like? In one way, yes. The real question is: who do you trust? No sane, thoughtful person would trust the likes of Microsoft and VeriSign. But I think people do trust their social network, and are right to do so. We need massive research to explore how to map social networks into the digital world. I often feel quite alone in thinking about this stuff; very few people seem to understand it. Oh well. I'll just keep pushing it patiently.

The financial events of late have been interesting too. I think there's been a widespread faith that large corporations are implicitly deserving of trust, in large part because of the laws, regulations, and processes designed to keep them that way. But WorldCom and friends have shown that these processes are capable of massive failure. People are going to start thinking about this more seriously, and that's a good thing.

A few rating FAQs

Can I rate my own diary? No. However, your own diary does show up with a rating. You can think of this as your friends' rating of you.

Why don't I see any ratings? You won't see ratings unless you have certified someone else. Ratings flow along trust edges. It might take 20 minutes or so.

Why the fascination with ratings? Primarily as a research project. In particular, I'm very interested to see whether these ratings are more accurate than the trust metric. So far, the results are encouraging.

Is the rating engine the same algorithm as PageRank? No, it's more powerful. For one, you can put in information which lowers rankings in addition to raising them. For two, it can evaluate general metadata assertions.

What types of assertions are implemented on Advogato? At this point, only diary ratings. As the total number of metadata assertions in the system grows, scaling problems set in.

Will you implement <pet idea>? Probably not.

Will you take a patch for it? Probably.

Ease of use

David McCusker is having lots of difficulty setting up his new TiBook and Airport. This shouldn't be. After all, he paid a significant premium for the Apple gear.

I think part of the problem is that he's playing partly the role of a hacker (home network configured with static IP addresses), and partly the role of a user (he admits to not understanding networking very well, and seems disinclined to learn it). I think things go more smoothly when you're one or the other.

In fact, if I were him, I'd just break down and learn the basics of TCP/IP and friends. It's actually pretty cool technology.

Networking is a somewhat hard problem. Attempts to hide the underlying complexity under a pretty GUI may or may not work.

Somehow, this connects to Bob Frankston's passionate argument against "special networks" (echoed by Dave Winer), but I'm not sure how.

Also, there's lots of food for thought in David's blogs about the importance of community for providing good tech support. I've observed that Mac users tend to be fairly staunch advocates of the platform, and often provide tech support as part of this advocacy. People who work in Mac stores (in my limited experience, and David's) are even more rabid in their advocacy, but the way tech support is provided has more to do with business than community. David is also running into the ubiquitous problem of low-talent professional tech support, and with a snotty attitude to boot.

As David points out, free software tech support is quite good if you are part of a community, but not so good if not. It's an interesting question: how can you scale it up?

Diary ratings

If you're logged in, then you'll see 1 to 10 rankings in the colored bars on the recentlog. That's your personal view; in other words, the rating computed with you as the seed.

If you want to look at someone else's view, the URL http://www.advogato.org/rating/report/theirname is publicly readable (at least for now). The presentation leaves much to be desired, but the fields are: metadata subject ("d/" means "diary of"), rating, variance, and confidence factor.
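If you wanted to script against that report, a scraper might look roughly like the sketch below. The record layout here is my guess (one whitespace-separated row per line, in the field order above); the actual output may well differ.

    # Sketch: fetch someone's raw rating report and parse it.
    # Assumes one whitespace-separated record per line, in the
    # order: subject, rating, variance, confidence (a guess).
    from urllib.request import urlopen

    def fetch_report(user):
        url = "http://www.advogato.org/rating/report/%s" % user
        rows = []
        for line in urlopen(url).read().decode().splitlines():
            fields = line.split()
            if len(fields) != 4:
                continue  # not a record line
            subject, rating, variance, confidence = fields
            # a subject like "d/foo" means "diary of foo"
            rows.append((subject, float(rating), float(variance),
                         float(confidence)))
        return rows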

So far, the experiment seems to be going very well. The ratings I'm seeing look very accurate, even though the raw inputs contain some dubious rankings. If you do see a rating you disagree with, then just follow the link to the person's page, and scroll to the bottom to enter your own rating. It will take a while to propagate.

For the most part, this is a straightforward implementation of the algorithm I proposed in my HOWTO a few months ago. tk: note that a rating and a certification are two different things. Are you saying you want three different things? If so, what is the advantage?

There are a lot of ways of presenting the information, and a lot of ways of doing filtering. I'm tempted to use color coding and other cool visualization techniques, but I think if it's going to fly, it has to be really simple. Here's what I'm most tempted to do now for filtering: above 7.5, render the full entry as now; between 5 and 7.5, render the first five lines; between 2.5 and 5, render the first line.
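As a sketch, that filtering rule is just a threshold ladder. Note that the below-2.5 case isn't decided above; this sketch assumes such entries are hidden entirely.

    def render_entry(entry_lines, rating):
        # Thresholds from above: >= 7.5 full entry; 5 to 7.5 first
        # five lines; 2.5 to 5 first line only. Below 2.5 isn't
        # specified; hiding the entry is my assumption.
        if rating >= 7.5:
            return entry_lines
        elif rating >= 5:
            return entry_lines[:5]
        elif rating >= 2.5:
            return entry_lines[:1]
        else:
            return []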

Healing

The scab from Max's burn is starting to fall off. The skin underneath is red, but otherwise appears healthy. It looks like it will heal well.

We've finally found a person we really like to help us with Alan's anxiety, and have a very sensible plan. I'm deliberately being a bit vague about the details, but ask me in person if it seems relevant.

Heather and I often feel that we're slacker parents; we don't enforce strict discipline, the house is usually messy, the kids get too much junk food, and so on. But when we express love for our children through helping them heal, screw all that. We rock. I mean no disrespect to my mom or the memory of my dad, but I wish I had parents like us when I was growing up.

Diary rankings

Yes, as tk noticed, you can now enter your ratings of others' diaries. There's eigenvector-based code to compute the ratings too, but it's not really ready for prime time. For one, it allocates all its memory in one pool per request, but it needs to be finer grained. For two, I haven't done anything about displaying the rankings. Eventually, I'll want to do a custom recentlog, where you can specify thresholds.
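To give a flavor of the eigenvector idea, here's a toy simplification (mine, not the actual mod_virgule code): each person's estimate of a diary's rating blends their own rating, if any, with their trusted neighbors' estimates, iterated until it settles. The weights and iteration count are made up.

    # Toy sketch of eigenvector-style rating propagation; my own
    # simplification, not the actual mod_virgule algorithm.
    # trust[a] = people a trusts; direct[a] = a's own rating of
    # the target diary, if a entered one.
    def propagate(trust, direct, rounds=20, self_weight=0.8):
        est = dict(direct)  # current estimates, seeded by raw ratings
        for _ in range(rounds):
            nxt = {}
            for a in set(trust) | set(direct):
                votes = [est[p] for p in trust.get(a, ()) if p in est]
                avg = sum(votes) / len(votes) if votes else None
                if a in direct and avg is not None:
                    # own rating dominates; neighbors fill in the rest
                    nxt[a] = self_weight * direct[a] + (1 - self_weight) * avg
                elif a in direct:
                    nxt[a] = direct[a]
                elif avg is not None:
                    nxt[a] = avg
            est = nxt
        return est  # est[you] is the rating with you as the seed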

I think this rating system will be very robust. Among other things, there's no bias toward generosity as there is for certs. Of course, merely being able to accurately evaluate the quality of diaries doesn't directly lead to a high quality site. One of the problems is coherence. If some people are reading low-ranked diaries and others aren't, it makes for some fragmented threads. But it ought to be a good start.

In any case, the mod_virgule code lives on casper now. Follow the directions there, but replace ghostscript with mod_virgule.

Stories

Alan was negotiating his allowance with Heather this evening, and I was reminded of the classic story about the doubling of grains of wheat as a reward for inventing chess. I told this to him, and he was quite captivated.

I just got done reading Max his bedtime stories. Tonight, it took him fifteen books to get to sleep. This reminds me of one of my favorite Alan stories, from five summers ago. The Rockridge Library in Oakland had tables out on College Avenue to promote their summer reading program. As it happens, Heather knew one of the volunteers, so she stopped to chat. I was holding Alan and probably looking a bit bored, so the other volunteer started trying to sell me on the reading program. I must have seemed skeptical, because she insisted that it was pretty easy. I just had to read him eight books. "Eight books?" I protested. "And I only have until the end of the summer to read them to him?" She pointed out that board books would do. At this point, Heather overheard our conversation and blew my cover.

We're also reading The Phantom Tollbooth to Alan. It's a pretty good book. There was a patch of a few months where he didn't seem very interested in books, but now we've gotten back on a reading track. That's very good, I think, as reading is one of the few things that will reliably calm his anxiety.

XCB

A few weeks ago I wrote a little piece called "File format, API, or protocol?". X is a well-known example of an interface implemented as a protocol, in an area where APIs are more common. Indeed, most clients of X use the Xlib library.

In any case, a couple of recent Usenix talks highlight one of the advantages of the protocol approach: it makes independent implementations feasible. The new implementation, XCB, has two advantages over Xlib. It's much smaller and lighter weight (27kB compared with 665kB). Also, it's designed to thread transparently, while threading with Xlib has always felt like a kluge.

Another thing that makes the work interesting is that the authors have actually tried to write down the specification (using the Z formal method). This turns out not to be quite practical yet, in large part because Posix threads don't have formal semantics, but the effort was worthwhile. This resonates for me; I often find that thinking "how would I prove this correct?" is useful, even though I hardly ever follow through and do a formal proof.

Metadata

I wrote a lot of code tonight, and have a reasonable first cut at the diary rating system. I'll probably apply it live some time over the next few days.

I just got back from a one-day Artifex staff meeting in Las Vegas. Tonight's entry will be light.

Thanks to everyone for the positive feedback on my diaries. I am having fun writing them. Similarly, I hope David's self-proclaimed thin patch is shallow and brief. I particularly enjoy the interplay between his blog and mine.

gary: your work on mod_virgule and Advogato is very much appreciated, but please don't feel that you have an obligation. If your hands still need to heal, give that priority. But of course if you do some work, I will be happy.

Psyco

Mamading Ceesay wrote me a nice email calling me on my inability to find references from Psyco. It turns out that Psyco is inspired by the Fabius compiler. I'm not directly familiar with this work, but I do know Peter Lee from my ML days. He's done some ML performance work, including a rather interesting Web server written in ML. It's good that Armin is reading the literature.

A couple more haiku

AFPL: free for use and change / but if you charge money / we get a fair cut
SCSL: we give you the source / but we control the process / for better or for worse

(I slightly prefer these versions over the strict 5/7/5 variants)

ER

Most of the evening was spent in the ER, because Max had burned himself with a pair of tongs. As far as we can tell, he stuck them in the flame on the stove, then touched them to his cheek. We spent over two hours waiting, then the doctor looked at him for about 10 seconds. In any case, it was only a partial thickness burn, so with some antibiotic cream it should heal up nicely.

Stamp idea from Bram

Bram gave me a really nice gift this afternoon: an optimization for the bandwidth and storage consumed by stamps. Basically, his idea starts with stamps of value 2^k, then lets them get split in half up to k times.

In my previous thinking, a stamp contains some random bytes, an issuer id, connection info for the issuer (IP address and port), and an expiration time. When a node creates a new stamp, it uses /dev/random or whatever to make the bytes. Then it stores the stamp in its own database, and "issues" it to a peer, who may then trade it around the net. Finally, it gets sent back to the issuer for "redeeming". At that point, the issuer checks to see that the bytes are in the database (and haven't been redeemed already by someone else).

Bram's idea is simple (one of those "why didn't I think of that myself" ones). Stamp entropy is the root of a k-high tree. For each node in the tree with entropy x, the left child is AES_x(0), and the right child is AES_x(1). Now, instead of an opaque random block, you have a tree id (can be random; is shared by all nodes in the tree), a path (sequence of 0 and 1 bits to choose left and right nodes, respectively), and the result of the hash.
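Here's a sketch of the derivation, with AES as the one-way step. The function names and the verification framing are mine, and it assumes the pycryptodome package for AES.

    # Sketch of Bram's stamp-splitting tree. Each node's 16 bytes
    # of entropy x yield children AES_x(0) and AES_x(1).
    import os
    from Crypto.Cipher import AES  # pycryptodome

    def child(x, bit):
        block = bit.to_bytes(16, 'big')  # the constant 0 or 1
        return AES.new(x, AES.MODE_ECB).encrypt(block)

    def derive(root, path):
        x = root
        for bit in path:  # 0 = left child, 1 = right child
            x = child(x, bit)
        return x

    # The issuer keeps only the root and recomputes at redeem time.
    root = os.urandom(16)
    stamp = derive(root, [0, 1, 1])          # presented by a peer
    assert stamp == derive(root, [0, 1, 1])  # issuer's check
    # A real redeemer would also track spent subtrees, so that a
    # stamp and its ancestors or descendants can't all be redeemed.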

It's a nice technique, and it'll almost certainly be in the prototype when I finally release it. But what makes it a nice gift is that Bram obviously understands what stamps are all about and has been thinking about them.

Haiku licensing

Aaron Swartz has beautiful haiku explaining free software licenses (there's another one linked, but it's not relevant to free software):

MIT: take my code with you / and do whatever you want / but please don't blame me
LGPL: you can copy this / but make modified versions / free in source code form
GPL: if you use this code / you and your children's children / must make your source free

Aaron has placed these in the public domain. In other words:

I could even lie / and say I wrote it myself / the author cares not

Big companies

Somebody (I don't remember who; they've scrolled off the recentlog) said "that large corporations and free software are interested in" me. Sure, I can't help the fact that my altruism benefits these kinds of entities in addition to real people, but the fact doesn't excite me. When dealing with business entities, I prefer a business relationship; I provide them with something of value, they pay me money.

Free software licenses don't promote this goal, but dual-licensed GPL libraries are consistent with it. I recommend the approach highly, and plan to do most of my future work under it.

Graphs are not a precise model for software food chains

David McCusker writes:

Another exception related to performance occurs with caching. A reverse proxy server depends on an origin web server for content, and yet can serve that content faster than the origin web site when either of two things occurs. First, the proxy might be on a better network than the origin web site. And second, the proxy might have content cached most directly in the form for serving.

Don't get me wrong, I'm not overly attached to my theory, but I'm not sure this counts as an exception. Yes, the proxy depends on the origin server for content, but it might also make sense to say that the origin server depends on the proxy for efficient distribution of that content. Here, I don't think a graph edge says enough about the relationship.

Fault-tolerant techniques like RAID are a clear example, though. Here, the dependency relationship is a clean edge. The disk is clearly at a lower level than the RAID cluster, yet disk failure is hidden. Disk bandwidth has a linear relationship with bandwidth at the higher level, but the cluster has better bandwidth than the individual disk. When it comes to latency, though, everything is hard-limited by the disk.

There's another aspect in which simple graphs aren't a powerful enough model. A simple Unix utility may have only two dependencies: a kernel and a C implementation. However, the kernel could be Linux or BSD, and the compiler could be (say) gcc or lcc. So we have four edges, but they're not all the same. The real structure is two groups of two.

An analogous situation occurs in food webs. To a first approximation, food sources are interchangeable. But again, there is structure. Humans need both starch and protein. It doesn't matter whether the starch is wheat or rice, but a diet of starch and meat is considerably more complete than one of two starches.

So a graph is an interesting starting place, I think, but it is an oversimplification. It would be very interesting to see whether people trying to apply network theory to other domains are developing more sophisticated models. I'm not aware of any.

Psyco vs Parrot

I feel that my last entry was a bit unfair. After all, Psyco and Parrot are both speculative projects with the potential to vastly improve the performance of dynamic scripting languages, but also with a significant risk of failure.

Even so, my gut feeling is that Parrot has a solid chance of success, while Psyco is much more speculative. Parrot is based on fairly conservative ideas. There's a lot of literature on virtual machines, and the Parrot people seem fluent in it. By contrast, the Psyco webpage and distribution contain absolutely no references to any related work (that I could find). Maybe it's just my academic background, but it strikes me as a red flag.

In any case, I wish both the Parrot and Psyco teams huge success. Having to choose between multiple high-performance dynamic languages would be a good problem to have.

Complexity

Thanks to David McCusker for carte blanche to beat him up. As he no doubt knows, it's not exactly my style, though.

Of course it's not useful to decide whether something is "necessary" on the basis of computational equivalence. S and K are necessary; the rest is a matter of notational convenience. (I is for weenies.)
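That parenthetical is easy to check. In Python, with curried combinators, I falls out as S K K:

    # S and K suffice; I = S K K, since S K K x = K x (K x) = x.
    S = lambda f: lambda g: lambda x: f(x)(g(x))
    K = lambda x: lambda y: x
    I = S(K)(K)
    assert I(42) == 42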

So let me try to state the question in more precise terms:

Is it possible to use a much simpler technique to accomplish nearly the same results, nearly as well?

Applied to the Mithril and SEDA/Java runtimes, the answer depends on how you count. The Java runtime is a horrifically complex beast. If you included it, then I would agree that Mithril is much simpler. But it's also more or less a solved problem (less, admittedly, if you restrict yourself to free software). If you count only the SEDA part, it's quite a bit simpler, because you have the GC daddy taking care of cleaning up all the memory.

I'm not arguing one way or another as to whether you should include the Java runtime. I think it's a good question.

The food chain

Now for some more general musings. The above emphasizes that software systems today are not isolated artifacts, but exist as part of a larger network. Dan Sugalski uses the term "food chain" to refer to this network in a recent interview. Coincidentally, Barabasi, in his book "Linked", finds that food chain networks and the Web have similar scale-free topology.

Like the food chain, the software project network has one-way links. Mozilla depends on C++, but C++ does not depend on Mozilla. (Of course, as in real food chains, all kinds of complex interactions can occur once you start looking at second-order effects). I think it's worth looking at the properties of these links in more detail. What follows is a rough cut. I'm sure I didn't think of everything.

In an "A depends on B" relationship, B is always more focussed on performance, robustness, and stability than A. If B is lacking in any of these, then the combined A+B system is also. (Fault tolerance would seem to be the exception that proves the rule.) At the lower levels of the food chain, you see an almost fanatical attention to these issues. CPU's come to mind, especially. The price of a 2.53GHz Pentium 4 is currently more than double that of a 2.2GHz one, for a 15% speed improvement at best. Yet, for many applications, performance is measured in terms of order of magnitude.

Compilers, runtimes, and OS kernels all similarly have an intense focus on performance and stability. Of course, the need for these things varies. Windows 98 is woefully inadequate to run, say, an Oracle database, yet is considered a perfectly fine platform for gaming.

The higher levels of the food chain start to be concerned with user needs, which are famously complex. If you had to meet these needs and optimize performance to the bone, the problem becomes intractable. That's why the food chain exists. Filesystems are a lot closer to user needs than raw disk. Thus, applications rely on the filesystem to map performance and robustness reasonably well, and save a huge amount of complexity. Once this relationship is established, then quantitative work at the lower level (improving caching algorithms, say) has a positive effect on the entire system. Note, however, the downward pressure when the next level up is a database.

Just as the food chain is critical for the health of an ecosystem, I believe the software project network is critical for the health of a software community. We are blessed with many good things at the lower levels: the Linux kernel, gcc, etc. They're not perfect, of course, but they don't prevent you from doing good work at the higher levels either.

A good example of dysfunction, I think, is X during the late '80s and early-to-mid '90s. Here, the lowest level (the X protocol and Xlib) was reasonably good, but the next levels up (Xt and Motif) were awful. As a result, X failed to get attention at the lower levels too, and lots of things that should have happened didn't (better font handling, more advanced imaging model). Now, there are numerous healthy projects situated at the next level up from X (Gtk+ and Qt are the most popular), and we see X getting good quantitative improvement now too.

Now I'll make some people mad. In language communities, these relationships are especially important. Everything written in a language depends critically on the performance and stability of that language's implementation. Thus, there is an important role for a project to do hardcore quantitative performance work, combined with a process that ensures stability. If that role isn't strongly filled, I think the language community is in trouble. And here is what I see: in Python, this role isn't really being filled, but in Perl 6 it looks like it will be, by Parrot, the subject of the Sugalski interview above.

That said, the dependence on the language implementation for performance may not be as critical in the Python community as it is in others, because the CPython+C amalgam is a reasonable approach in many cases. But if Perl 6 and Parrot succeed, it will be appealing to write high performance systems entirely in the high level language. And there are some things happening in Python performance land as well.

Food for thought, in any case.

20 Jun 2002 (updated 20 Jun 2002 at 06:32 UTC)
bgeiger: yes, expiring certs after a year was always part of the plan; I just haven't gotten around to implementing it yet.

Joel on Software has a nice piece on how large corporations are adopting free software. Perhaps esr's dream is being realized after all, just a bit more slowly than he predicted. In any case, I don't care much about big corps and free software. The lifeblood of the movement is still individual, human people, and will probably remain so for quite some time.

The subtext of yesterday's async musing was to express skepticism that all the city/fiber/dock/boat mechanism in Mithril was actually needed. But, of course, not having seen the design in detail, I can't really criticize it.
