Older blog entries for raph (starting at number 370)

Choice thread

I've posted the choice thread on ghilbert.org. For those of you who are slavishly following the development of Ghilbert, or fans of the Axiom of Choice, it should offer a glimmer of enlightenment.

The New York Times reaches about 1.5 million people. This posting is possibly of interest to two dozen. But the difference between my blog and the NYT is that my post will reach those two dozen :)

Version control

As NTK says, "No self-respecting Thinker Of Hard Thoughts these days is without their own Deep Theory Of How To Do Version Control." It's not surprising to see so much activity in this sphere now. CVS has been broken for a long time, and it's now clear that Subversion only solves some of the problems of CVS.

I haven't actually played with Codeville yet, but I look forward to it. When Bram puts his mind to something, it often turns out well. I was also very interested to see Ken Schalk's CodeCon presentation on Vesta, a project I've actually been following since its inception about a decade ago.

The bottom line is that I think Vesta gets a few things very right, but some of its design decisions are going to hold it back from hitting the big time. Vesta is a source repository, a configuration manager, and a build tool. If you buy into the Vesta way of doing things, all these pieces interact in a very nice way. For example, because you keep not only your source files but also the tools needed for building in the repository, you can always reproduce a specific build, bit for bit. It uses some neat tricks to make this work - the files in the repository are exported through NFS, and, not so coincidentally, that's how the build knows what the dependencies are: if a file is accessed during the build, it's a dependency; otherwise it isn't.
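
The access-equals-dependency trick is simple enough to caricature in a few lines of Python (a user-space sketch only; Vesta does the interception at the NFS server, and none of these names are Vesta's):

    import builtins

    def run_and_record(build_fn):
        # Run one build step and record every file it opens; the set of
        # accessed files is exactly the dependency set, no Makefile needed.
        accessed = set()
        real_open = builtins.open

        def tracing_open(path, *args, **kwargs):
            accessed.add(path)
            return real_open(path, *args, **kwargs)

        builtins.open = tracing_open
        try:
            result = build_fn()
        finally:
            builtins.open = real_open
        return result, accessed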

The biggest downside, I think, is that it's quite Unix-specific. It's not impossible to run NFS on Windows or Mac, but it's not exactly convenient either.

I think it is possible to take the best ideas from Vesta and put them in a portable framework. Rather than a build being a script which runs random commands and litters directories with temporary and result files, it should be a functional program from input to output, with all intermediate results treated as a cache. Indeed, I see no reason why you shouldn't be able to take a source package, run a simple command, and have it spit out a .deb for Debian, .rpm's of the various flavors for the Red Hat-based distros (including some intelligent analysis of how many variants are actually needed), a .pkg or .dmg or whatever the Mac people have decided is the preferred way to distribute OS X apps, an InstallShield-like installer for Windows, and a .pdb for Palms. Throw in a couple of flags, and the Unix build is instrumented to support debugging and profiling, or maybe gcc bounds checking. Better yet, have it run in an interpreter such as EiC, so that you can debug runtime violations at a source level.
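
Concretely, the "intermediate results are a cache" discipline might look something like this (an illustrative sketch; the names are mine, not any existing tool's API):

    import hashlib, json

    cache = {}   # hash of (step name, inputs) -> previously built output

    def build_step(name, fn, *inputs):
        # A build step is a pure function of its inputs, so its output can
        # be keyed by a content hash and treated purely as a cache entry.
        key = hashlib.sha256(
            json.dumps([name, inputs], sort_keys=True, default=str).encode()
        ).hexdigest()
        if key not in cache:
            cache[key] = fn(*inputs)   # only recomputed on a cache miss
        return cache[key]

    # The .deb, the .rpm variants and the Windows installer would then all
    # be different pure functions over the same source and toolchain inputs.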

What exactly is standing in the way of such a thing? My guess is that the main thing is inertia.

6 Mar 2004 (updated 8 Mar 2004 at 01:48 UTC) »
I voted touchscreen! I think

At least I think I voted. There's no way of knowing for sure, because it was on one of those fancy new Diebold machines.

The problem is, of course, familiar to those experienced in computer security. Because it's impossible to see security flaws, and very difficult, even for experts, to understand them, people get very complacent. Judges, for example, are wont to dismiss knowledge of such vulnerabilities as "speculative", rather than an "actual threat".

What changes the status quo is invariably an actual exploit. If people can see a voting machine being hacked, then they'll believe it. Fortunately, in this case it ought to be relatively easy.

I don't think it's time yet for large-scale civil disobedience over hackable voting. The most important thing, I think, is to spread the word about the dangers. As Avi Rubin points out, many election judges are elderly folk without a deep understanding of security flaws. I spoke with my election judges briefly. I told them that I was a computer scientist, and that for all they knew I might have the secret to cracking the smart card they gave me. When I gave it back to them and informed them that I hadn't, in fact, messed with it, they were visibly relieved.

I also liked Avi's idea of volunteering as a judge. That's probably the single best way to get the word out in a positive way.

In any case, after I voted I got a little sticker that says "I voted - touchscreen" with an American flag on it. Maybe for the election this November, we should hand out stickers with the slogan "I think I voted - touchscreen", and a flag with the 50 stars replaced by a BSOD.

Letter-quality LCDs

I think this is going to be the year they go mainstream. They're showing up in more and more devices. It's also interesting that Tiger Direct has the IBM T221 for four grand, "while supplies last". It's possible, I think, that a newer model is in the pipeline.

Logic update

I got a very gratifying response from my last post, resulting in a very enlightening email correspondence with Michael Norrish, Bob Solovay, and Norm Megill. I'll post it soon, for those of you out there just dying to find out whether there really is a closed form expression for HOL's epsilon in ZFC.

CodeCon

CodeCon was great fun, and I'm really glad I went. The highlight was the Google party. How good was that? Let's put it this way: I was so absorbed with intense conversations that I completely missed Knuth crashing it.

Patents

elanthis: yes, IMHO of course. I've got one more in the pipeline, and very possibly some more coming.

Logic

Ghilbert was cited in a recent preprint by Carlos T. Simpson. One of the central themes of this paper is a preference for untyped, set-theory flavored math as opposed to the more type-flavored variety you see in systems such as HOL, NuPRL, and Coq. He's also not a big fan of the constructive flavor of the latter two systems (meaning that the law of the excluded middle, x || !x, is not an axiom).

The main thing I'm doing in Ghilbert now is constructing HOL in set theory. It's slow and steady progress, and very satisfying. One issue I'm running into, though, is HOL's epsilon operator. Essentially, if f is a function from some type T to bool, then epsilon(f) picks out some value (of type T) for which f is true, assuming one exists. This construction is very useful, but does pose some challenges.

In the special case where there is only one value for which f is true, there's no problem (in this case, the similar "iota" operator suffices). But it's not hard to come up with situations in which f might hold for infinitely many values, and choosing one out of the many is a problem. If you accept the Axiom of Choice, it tells you that it can be done, but it doesn't tell you which value to choose. So it doesn't provide much help for actually constructing HOL in set theory.
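
To make the contrast concrete, one common set-theoretic rendering of the definite-description operator is (my sketch; not necessarily the definition Ghilbert will end up with):

    \iota x.\, \varphi(x) \;=\; \bigcup \{\, x \mid \varphi(x) \,\}

When exactly one x satisfies phi, the class on the right is a singleton and the union hands back its sole element; with zero or many witnesses it returns junk (the union of all the witnesses), which is fine for iota but is exactly the gap epsilon has to fill. Epsilon needs a function c with c(S) in S for every nonempty S of the relevant universe, and that's where choice sneaks in.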

The HOL documentation adds further twists to the story. The HOL description document contains a paragraph (at the end of section 1.1) suggesting that the universe of HOL terms is actually a pretty small set (by the standards of set theory, anyway - it's still quite infinite), and that it should be possible to choose an element from any subset quite straightforwardly, even without assuming the axiom of choice. I am beginning to get a vague sense of what they're talking about, but don't really understand it. If anyone out there reading this could explain it to me, I'd really appreciate it. If not, I think I know who to ask.

CodeCon

I'll be at CodeCon this Friday and Saturday. I'm really looking forward to meeting up with my tribe. As an extra bonus, Heather and the kids are coming to pick me up Saturday afternoon, so it'll be a good opportunity for people in my tribe to meet my family.

Advogato maintenance

lkcl is absolutely right that I need to hand over the regular maintenance of the site. I'll start up a recruitment thread on virgule-dev after I get back from CodeCon.

The spammers are winning

When Paul Graham first published his Plan for Spam, I thought it fairly likely that his basic idea was sound, and that Bayesian-style classification would, if not eliminate spam, then at least make it manageable. Now I'm sure it won't work.

The problem is that there are two attacks that spammers can do, both of which are damned hard to do anything about. First, they can include bits of legitimate, high-quality text in their messages [for example, I now see Wikipedia text in Google-spam sites]. Second, by running as a virus-zombie, they can take over whatever authentication tokens are available for real mail. Note that this includes hashcash.

I still believe that a trust metric can be a part of a healthy balanced breakfast to end spam. But Google, the highest-profile deployment of a trust metric, seems to be experiencing a marked decline in quality, to the point where some people are questioning whether PageRank is such a good idea after all.

I don't know how to fix PageRank, but my current thinking, reinforced by my experiences here, is that negative recommendations are needed too. I feel quite empowered by the trust that Google places in me to rank a page up, but at the same time helpless to tell Google, "this site is pure spam".

Negative recs are not easy. For one, they don't really fit well in the PageRank model (the biggest difference between PageRank and the eigenvector-based tmetric used to power the diary ratings). Second, any simplistic approach (such as having an unauthenticated (or underauthenticated) "report this as spam" link) would be very vulnerable to DoS-type attacks, likely rendering the whole negative rec process unusable. There's a good technical reason to prefer monotonic trust metrics, and when I started my thesis project I concentrated on those, but I now think that their inability to use negative recs is crippling.
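
To see why negative edges don't fit, here's a bare-bones power iteration (my sketch of the textbook algorithm, certainly not Google's code):

    import numpy as np

    def pagerank(adj, d=0.85, iters=50):
        # adj[i][j] = 1 if page i links to page j. The whole construction
        # leans on the weights being nonnegative: each row becomes a
        # probability distribution, and the Perron-Frobenius guarantee of a
        # unique, nonnegative ranking vector goes with it. Drop a negative
        # "this is spam" edge in and that interpretation falls apart.
        adj = np.asarray(adj, dtype=float)
        n = adj.shape[0]
        row_sums = adj.sum(axis=1, keepdims=True)
        M = np.divide(adj, row_sums, out=np.full_like(adj, 1.0 / n),
                      where=row_sums > 0)
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = (1 - d) / n + d * (M.T @ r)
        return r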

And, Zaitcev, I did notice that Orkut's trust model seemed quite primitive. I didn't kick its tires carefully, but I didn't see any obvious differences from Friendster's, which is simply based on the existence of a path of length no more than four. If Orkut is at CodeCon, I'm going to pin him down and see what he has to say for himself.

See you all at CodeCon!

ltnb

Wow, it's been a long time since my last post. I've been kinda in hermit-mode, spending more time with the family, and not feeling as productive as I should be with work. I think I'm emerging now. Things are going fairly well, and I have quite a few things I want to write about here.

OT

I also have a few things I want to write about which are more political and less related to free software. I finally did a bit of a brain-dump on my other blog. If you've been following the updates on my boys, then by all means don't miss the bit of improv radio theater we recorded a few days ago.

Life

It's been a busy few weeks - my mom, then my brother and his girlfriend came to visit, and somewhere in the middle of all that we had an Artifex staff meeting.

Things are quieting down now. We had a nice family evening, playing video games and doing a little papercraft. I tried the tiger, just on cheap paper and b/w laser printing, and it came out ok. Of course, Max then wanted to do one of the motorcycles, but I convinced him that we would do one some other day.

BitTorrent and RSS

There's a thread going around the net on the benefits of combining RSS with BitTorrent. I agree there's something there, but I want to make a distinction between the "easy" combination, which is quite feasible right now, and one that requires a bit more rocket science (actually, Internet protocol design, but from what I know of both, the latter is more difficult to do well). In the "easy" combination, you have your whole RSS infrastructure exactly as it is now, but use BitTorrent to distribute the "attachments". People have been experimenting with RSS enclosures (for speech, music, video, and whatnot) for a while, but they're not hugely popular yet. One of the reasons is the difficulty and expense of providing download bandwidth for the large files that people will typically want to enclose. BitTorrent can solve that.

In fact, BitTorrent's strengths seem to mesh well with RSS. BT shines when lots of people want to download the same largish file at the same time - it's weaker at providing access to diverse archives with more random patterns of temporal access. Also, BT scales nicely with the number of concurrent downloaders - you get about equally good performance with a dozen or ten thousand. So if someone shoots a really cool digital video, posts it to their blog, then gets Slashdotted, it all still flows.

Integrating BT with a daemon that retrieves RSS feeds in the background has other advantages, as well. If the person opens the file a while after the download begins (which might be as soon as the RSS is updated), most or all of the latency of downloading that file can be hidden. Further, since the BT implementation is released under a near-public domain license, it should be relatively easy for people to integrate it into their blog-browsing applications.

An example of a blog that would work superbly with BT is Chris Lydon's series of interviews.

But Steve Gillmor's article isn't primarily about enclosures - it suggests that we can use BT to manage the RSS feed itself. I think there's something to the idea, but the existing protocol and implementation aren't exactly what's needed. BT is best at downloading large static files. You start with a "torrent" file, which is essentially a list of hashes of the file's blocks, packaged up with a URL where the "tracker" can be reached. All peers uploading and downloading the file register with the tracker, and get a list of other peers to connect with. Then, peers exchange blocks of the file with each other, using very clever techniques to optimize the overall throughput. After each block is transferred, its hash is checked against what's in the torrent file, and the block is discarded if it doesn't match.
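
For reference, the per-block check at the end of that pipeline is about this simple (a minimal sketch assuming the standard 20-byte SHA-1 piece hashes; not BitTorrent's actual source):

    import hashlib

    def piece_hash(pieces_blob, index):
        # The torrent's "pieces" field is a concatenation of 20-byte SHA-1
        # digests, one per block, in order.
        return pieces_blob[20 * index : 20 * (index + 1)]

    def verify_piece(pieces_blob, index, data):
        # Keep a downloaded block only if its digest matches the torrent
        # file; otherwise throw it away and fetch it again.
        return hashlib.sha1(data).digest() == piece_hash(pieces_blob, index)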

But RSS files themselves are relatively small, so it's unlikely that all that much bandwidth would be saved sending torrent files and running a tracker, as opposed to simply sending the RSS file itself. Further, the big performance problem with RSS is the tradeoff between polling the RSS feed infrequently, resulting in large latencies between the time the feed is updated and viewers get to see it, or polling it frequently and chewing up tons of bandwidth from the server. BT doesn't do much to help with this - you'd be polling the torrent file exactly as frequently as you're polling the RSS file now.

I believe, however, that the BitTorrent protocol could be adapted into one that solves the problem of change notification. The protocol is very smart, and already has much of the infrastructure that's needed. In particular, peers already do notify each other when they receive new blocks. That's not change notification because the contents of the blocks are immutable (and that's enforced by checking the hash), but it's not too hard to see how it could be adapted. At heart, you'd replace the static hashes of the existing torrent file format with a digital signature. The "publisher" node would then send new digitally signed blocks into the network, where they'd be propagated by the peers. There'd be essentially no network activity in between updates, and, as in the existing BitTorrent protocol, the load on the publisher node would be about the same whether it was feeding a dozen or ten thousand listeners. I'd also expect latency to scale very nicely as well (probably as the log of the number of peers, and with fast propagation along the low latency "backbone" of the peer network).
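
Here's a rough sketch of that adaptation (the FeedPeer class is mine, and an HMAC over a shared key stands in for the real public-key signature a deployed version would need; nothing here is from the BitTorrent codebase):

    import hashlib, hmac

    def sign(key, seq, payload):
        # Stand-in for a public-key signature over (sequence number, payload).
        return hmac.new(key, b"%d:" % seq + payload, hashlib.sha256).digest()

    class FeedPeer:
        def __init__(self, publisher_key):
            self.publisher_key = publisher_key
            self.latest_seq = -1
            self.latest_payload = None
            self.neighbors = []

        def receive_update(self, seq, payload, signature):
            # Drop stale or replayed updates, and anything the publisher
            # didn't actually sign.
            if seq <= self.latest_seq:
                return
            if not hmac.compare_digest(
                    signature, sign(self.publisher_key, seq, payload)):
                return
            self.latest_seq, self.latest_payload = seq, payload
            # Gossip the update onward; the publisher's own load stays
            # roughly constant however many listeners there are.
            for peer in self.neighbors:
                peer.receive_update(seq, payload, signature)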

I'd hate to see such a beautiful work of engineering restricted to just providing RSS feeds - ideally, it would be general enough to handle all sorts of different applications which require change notification. One such is the propagation of RPM or Debian package updates, which obviously has strong requirements for both scaling and robustness. The main thing that's keeping it from happening, I think, is the dearth of people who really understand the BitTorrent protocol.

Proof systems

I've been hacking a bit on my toy proof language. Aside from slowly bringing the verifier up to the point where it checks everything that should be checked, I'm also hacking up an implementation of the HOL inference rules constructed in ZF set theory.

It's immensely satisfying to construct proofs that are correct with high assurance, which is such a contrast from hacking code - any time you write nontrivial code, you know it's got lots of bugs in it, many of which no doubt can be exploited to create security vulnerabilities.

Fat chance

Oooh. Turns out the FAT file system is patented, and Microsoft has a commercially reasonable licensing plan for it. Of course, given the way the patent system is structured, this kind of thing is absolutely inevitable.

As I've written before, I think Paul Krugman got it wrong when he argued that Microsoft's monopoly position is not much of a threat. On the other hand, kudos to him for publicizing the rotten state of the voting machine industry. Those in the know have been aware of serious problems for some time, but the media so far has been doing a good job of keeping the unwashed masses blissfully unaware. Love Krugman or hate him, it'll be harder to keep this issue quiet now.

Color profiling

A followup to my recent post calling into question Argyll's multidimensional interpolation algorithm. I had some email back and forth with Graeme, and it's now clear that the multidimensional interpolation itself is quite good. The "wiggliness" in my profiles seems to have been caused by an overzealous attempt to optimize the per-channel curves. With those disabled, I'm now getting some really, really good profiles.

I still think that my idea of a diffusion equation based interpolation algorithm might have merit, but considering the good results I'm seeing now from Argyll, it pretty much goes on the back burner along with two or three dozen others. It might become interesting if reasonably priced handheld spectrocolorimeters with a USB port ever make it to market - typical professional profiles are made from 3000 or more patches, but it might be possible to get good results from a much smaller sample, if done right.

More proof links

Claus Dahl sent in a link to yet another proof system/language. In the meantime, I've been corresponding with Norm Megill of Metamath fame, Freek Wiedijk, and Michael Meyling of Hilbert II. There's something addictive about formal proofs. How else to explain why there are so many projects to work on them?

Turkey day

We had a good family Thanksgiving at Heather's father's house, with her youngest sister also. We had a roast turkey and all the traditional trimmings. Sometimes I'm pretty cynical, but this year all the tradition, and being close to family, felt good.

Yet another letter-quality display

Sharp has introduced its latest Zaurus in the US. It's the successor of the C7x0 series, which had a nice 216 dpi screen, but was only available in Japan. This one looks pretty good. It's got impressive wireless connectivity, a good CPU (400MHz XScale), and, unlike the first widely available letter-quality handheld, runs a decent operating system (it uses a Linux kernel).

It's pretty clear to me by now that handhelds are quickly moving to high resolution displays. The maxed-out handheld and ultralight laptop categories are blending into each other, so don't be surprised to see these displays become even more widespread in the next couple of years or so.

Semantic Web

I've long been skeptical of the Semantic Web. It's always had the feel of academic research to me. Standards organizations are best at codifying existing practice, not at doing basic research, and I haven't been all that impressed by the W3C in other domains either, so I've had low expectations.

Of course, I'm not the only one to engage in W3C-bashing. Lately, Clay Shirky wrote an essay which pretty much shreds the idea of the Semantic Web to ribbons. I've been reading some of the ensuing discussion, and much of it is quite interesting.

If you find Shirky's anti-Semantic Web arguments compelling, please read what Tim Bray has to say on the matter. Tim's been involved with the Semantic Web and the W3C for a long time (he's in their Technical Architecture Group now), so you could say he's not an impartial observer. But his essay is a skeptical, hardheaded look at what the first step toward a real semantic web is likely to look like.

I want to make one observation. The metadata Tim talks about (roughly, the kind of data you find in corporate annual reports, only fresher and potentially more detailed) does not have a serious trust issue. In general, corporations will post their own data, and they will be accountable for any errors. Nothing on the Web is really free of authenticity problems, but this kind of data is as close as it gets.

I think this property is important, and stands in stark contrast to most of the metadata you might be interested in. For example, a really nice piece of metadata would be "e-commerce store X has excellent customer service / an alarmingly high rate of complaints." Since this particular bit of metadata is likely to have a significant effect on the flow of revenue, the incentive to manipulate it is simply too great for amoral businesses to resist. People manipulate Google for much the same reasons, and with pretty good success, even though Google has sophisticated mechanisms in place for resisting such attacks.

If the Semantic Web is to become a general tool for automatically generating useful inferences, rather than merely a research toy, the questions of how to make sure the metadata is trustworthy will have to be resolved. I think that's especially true for the more AI-like goals of the project. In the mean time, I think there is much lower-hanging fruit, as Tim Bray describes; data which inherently has a lot of structure, and clear criteria for authenticity and accuracy, but which is now published on the Web in a hodge-podge of flashy web-page formats.

Proof systems

I stumbled across Yet Another Proof System. This one still looks somewhat embryonic, but possibly interesting. I'm definitely not convinced that the GFDL is the right license for mathematical proofs. It's gone through a few name changes recently, but it was interesting to see the similarity between one of its recent names, "Hilbert II", and Ghilbert, the name of my toy proof system.

Meanwhile, my system is steadily gaining capabilities. It now accepts the entire set.mm of Metamath (translated into Lisp-like syntax). I've also got a Python program which can prove statements of the form "x + y = z", where x and y are arbitrary natural numbers (currently expressed in binary). I'm also playing with axiomatizations of Z2 and HOL, and indeed have been able to prove the first dozen or so derived inference rules in Chapter 8 of the HOL System description (Kananaskis version). If I can get two major axiom systems to connect in some meaningful way (so that proofs in one can make use of theorems in the other), that would be cooool.
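
To give a flavor of the binary-addition prover, the recursion such a generator can follow looks roughly like this (the lemma names in the comments are invented for illustration; they are not Ghilbert's actual ones):

    def prove_add(x, y, steps):
        # Return x + y while recording the lemma applications a formal
        # proof of "x + y = z" would chain together.
        if x == 0:
            steps.append(("add-zero-left", y))        # 0 + y = y
            return y
        if y == 0:
            steps.append(("add-zero-right", x))       # x + 0 = x
            return x
        # Peel off the low bits and recurse on the high parts, exactly as a
        # binary adder (and the proof of its correctness) does.
        low = (x & 1) + (y & 1)
        high = prove_add(x >> 1, (y >> 1) + (low >> 1), steps)
        result = (high << 1) | (low & 1)
        steps.append(("combine-bits", x, y, result))
        return result

    steps = []
    print(prove_add(0b1011, 0b0110, steps))   # prints 17, i.e. 11 + 6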

Video jukebox

I want to get one of those little low-noise 1GHz Via C3 systems for the family this Christmas. I'm not so concerned right now about the software it'll run (in particular, it's not essential to record video off-air, nor to provide splashy on-screen graphics). On the other hand, it must run MPlayer, output to NTSC video, and have 802.11b. I'd really like it to run Linux, but I understand that certain things like video-out can still be a bit painful. An outboard VGA-to-NTSC converter would be the easiest way to solve that problem, but I worry that it might not deliver quite the quality of a direct NTSC output, especially when it comes to smoothness of motion.

Has anyone here successfully put together such a system? I'd especially like recommendations for things that work smoothly with a Linux kernel, and/or recipes for configuring the pieces to work together. I'll happily post the winning recipes when I get the stuff going.

chema

It's hard to believe that Chema Celorio has indeed passed away. We spent quite a bit of time together at the first Linux Printing Summit. I was really impressed by his passion, and have enjoyed working with him since. He will be sorely missed.

Multidimensional interpolation

The Argyll color management system uses a very sophisticated technique based on thin-plate splines for its profile creation. The core operation is to take the colors measured for a few hundred patches and fit them to a smooth multidimensional function, typically from R^4 to R^3 (CMYK tint values to a 3-dimensional colorspace such as Lab).

I'm seeing some wiggliness in the profiles I'm generating. I'm not sure why; it might be measurement error on my part. But I'm starting to suspect it may be the interpolation algorithm itself. I've read a recent survey, but it doesn't provide clear guidance on which technique provides the smoothest results.

My own personal inclination would be to create the interpolation function by solving the heat-diffusion equation with a source at each patch point, so as to fit the measurements. I'm convinced that this approach would give an optimum level of smoothness, but I see problems at the boundaries, even when using the free boundary assumption.

To simplify the discussion, assume that we have to extrapolate beyond the range of colors measured. Even in the simple case of two points, one at (0, 0) and another at (1, 1), the only solution I'd consider optimally smooth is the straight line f(x)=x. Yet, a straightforward solution of the heat diffusion equation with sources at 0 and 1 is going to have a slope of zero at the asymptotes.

I'm thinking of two possibilities: first, there might be a different boundary assumption that converges to a slope of 1 at both asymptotes. It's not too hard to imagine that somebody else might have come across such a thing; you'd need it to minimize the global curvature (magnitude of the second derivative) of the solution.

Another possibility is to solve the heat diffusion equation for the derivative of the desired solution. In the 1-D case, it's pretty easy to see how it would work. For your sources, you use a derivative-of-Gaussian shape. For the example above, the derivative would obviously converge to a constant 1 (the free boundary assumption for the derivative seems like the right thing here). However, it's not immediately obvious to me how to generalize this to higher dimensions (in fairness, rillian tried to explain the usual finite-difference multidimensional integration technique, but I didn't quite grasp it at the time; I'll ask again).
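
To make the 1-D example concrete, here's a tiny numerical sketch, with Gaussian kernels standing in for the diffusion sources (this is not Argyll's code):

    import numpy as np

    centers = np.array([0.0, 1.0])   # the two measured points
    values  = np.array([0.0, 1.0])
    sigma = 0.5

    def kernel(x, c):
        return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

    # Solve for weights so the sum of kernels interpolates both points.
    A = kernel(centers[:, None], centers[None, :])
    w = np.linalg.solve(A, values)

    def f(x):
        return kernel(np.asarray(x, dtype=float)[:, None], centers[None, :]) @ w

    # Near the data the fit is fine, but far outside it decays back toward
    # zero instead of continuing with slope 1 like the "ideal" f(x) = x.
    for x in (0.0, 1.0, 3.0, 10.0):
        print(x, float(f([x])[0]))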

But it's possible I'm going over well-known territory, and a reader better versed in diffusion problems can just tell me the answer. I'll find out soon, and even if not I'll report again on what I find.

Politics

My new favorite blog is Talking Points Memo. That's where I found a link to this video clip in which Wesley Clark kicks the ass of some lame Fox News reporter. One of the most satisfying bits of video I've seen in a while.

8 Nov 2003 (updated 8 Nov 2003 at 08:26 UTC) »
How much resolution is enough?

Faithful readers of this blog will have already noted my fascination with high resolution displays. Desktops seem stuck at 100dpi dot matrix quality for the moment, but the portable space is heating up nicely. For one, it looks like the recent release of the Toshiba e805 is a harbinger of the next major rev of the Pocket PC platform. And if 200dpi isn't good enough, Sharp has started showing a prototype of a 300 dpi model.

Just how much resolution is enough? Are pixels going to keep getting smaller, or is there some natural resolution beyond which there isn't a noticeable improvement? I've just submitted the manuscript for my paper on this topic, which I'll present at the EI04 conference in San Jose next January. The quick answer is that, for desktops and laptops, this natural resolution is around 170 dpi, and about 240 for handhelds. Devices within striking range of these targets are available now, but not yet mainstream.
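
For what it's worth, a back-of-envelope version of the argument, assuming the eye resolves about one arcminute (the paper's model is more careful than this), is

    \mathrm{dpi} \approx \frac{25.4\ \mathrm{mm/in}}{D \tan(1')}

where D is the viewing distance in millimeters. That works out to roughly 175 dpi at a 50 cm desktop viewing distance and roughly 250 dpi at the 35 cm or so typical of a handheld - the same ballpark as the numbers above.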

Attacks on the GPL

I'm not sure people realize how seriously the GPL is under attack these days. Yeah, SCO is trash-talking it, but I think this could ultimately be good for us, as they're on the verge of becoming thoroughly discredited, even to the point where shill reporters won't be able to repeat their baloney with a straight face.

I find the official policy of the BSA more disturbing. Of course one expects the BSA to lobby for the benefit of the proprietary software industry, but don't forget that IBM and Novell (the proud new owner of SuSE) are also members in good standing. (Thanks to this Groklaw article for the heads-up)

Take a look at this paragraph in particular:

2. Ensure that government funded research is available to all. Government plays an important role in financing basic software research. When public funds are used to support software research and development, such funding should be made equally available to all developers, regardless of the development model chosen to do the research. The innovations that result from this work should be licensed in ways that take into account both the desirability of building a commons of knowledge and the desirability of applying basic research advances to commercialized products. The dissemination of results broadly in this manner has contributed to a virtuous cycle of innovation in which government funding for basic research advances the set of knowledge available to the public by spurring advances in commercial products. These commercial products in turn generate growth and tax revenue necessary to fund future rounds of public research.

Unless I'm totally misreading it, it's saying that government-funded computer science research projects should not release GPL software, preferring BSD-like licensing if it's going to be released freely. I'm not sure how close this kind of trade-association policy is to becoming the law of the land (not that such things ever happen here in America), but I consider it highly alarming. At the very least, I consider it plausible that this lobbying is having a chilling effect on American government funding of GPL'ed projects. At least other countries seem to be starting to see the light.

Is ZF set theory a hack?

It's hard to believe, but apparently some offhand comments made here have inspired a serious paper comparing various axiomatizations for the foundations of mathematics, primarily in terms of the complexity needed to represent these axioms. Check out "Is ZF a hack?" on Freek Wiedijk's notes page.

I don't think there is a One True Axiomatization for mathematics. In fact, I think the lesson we should take away from Gödel's incompleteness theorems is that, in a very real formal sense, such a thing is impossible.

But, if we can't have that, what can we use? As I've written before, I think the answer is a variant of Postel's Law: be conservative in what axioms you require, but liberal in what axioms you're compatible with.

In that case, you'll use ZFC if you want to prove things about hairy infinite objects, but probably a weaker axiomatization such as Z2 or HOL for most everyday work.

Actually, what I think is needed is a system of polymorphic axioms that express basic concepts such as finite data structures and functions, that can be linked with any of the popular primitive axiom-sets. Finite data structures are fairly straightforward (although I did manage to make an embarrassing typo in the formulation for pair in ZFC - it should be (a + b) * (a + b) + b). Functions are going to be harder. Among other things, I'll need to deal with partiality in some reasonable way. But I think that good engineering solutions are possible.
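
For the curious, here's how that pairing formula decodes, read as the usual arithmetic pairing on natural numbers (my reading and my code, not the Ghilbert formulation itself):

    import math

    def pair(a, b):
        # pair(a, b) = (a + b)^2 + b. Injective: with s = a + b the value
        # lies in [s^2, s^2 + s], safely below (s + 1)^2, so s is recoverable.
        return (a + b) * (a + b) + b

    def unpair(n):
        s = math.isqrt(n)      # recovers a + b
        b = n - s * s
        return s - b, b

    assert all(unpair(pair(a, b)) == (a, b) for a in range(50) for b in range(50))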

Such an axiom system wouldn't be any better justified from a philosophical perspective, but I do think it would be a serious improvement in one practical sense: making formal theorems as portable as possible. I think ordinary mathematicians don't worry about this too much, because it usually goes without saying that ordinary, everyday mathematics can be ported (at least in principle) to any appropriate formal framework, so it isn't actually necessary to do it. Things are different, though, if you're trying to stitch together a world-wide repository of formal proofs.

Paul Krugman

Oh, and berend, if I had to pick an entry for the Krugman Gotcha Contest, it would probably be this piece saying that Microsoft's monopoly is not so dangerous after all. Does it surprise you that I don't agree with everything the man has ever written? And yes, I've also been disappointed at the level of sloppiness in some of his columns, if for no other reason than the ammunition it gives to his detractors.

But I find that he has a lot to say. Even when I don't agree with him, I find that he writes thoughtfully, and in such a way as to provoke thoughtful discussion. I'm not sure I see the point of the more vicious Krugman detractors (of whom Luskin seems to be staking out a leadership role). It seems to me that their central point is that we should disagree with everything that Krugman has ever written because he's been wrong a few times. That's one big giant ad hominem attack, and I have little patience for such nonsense.
