Older blog entries for raph (starting at number 315)

How blogs are better than mainstream media

The Washington Post recently ran a "journalist checks out blogs, doesn't quite see what the big deal is all about" story. A lot of these have been appearing lately; this one seems entirely typical. I've been thinking about the differences between blogs and mainstream journalism for some time, so the appearance of this story in a highly regarded newspaper, and Dave Winer's criticism of the piece, inspired me to speak to the issue.

The main theme of the piece, as usual, is that blogs are an interesting phenomenon, but cannot take the place of professional news organizations. The typical blogger, according to the piece, posts mostly opinion and links to news stories from the mainstream media, as opposed to real reporting.

This is basically true, I think, but rather misses the point. Blogs are incredibly diverse, with a wide distribution of things like writing quality, fairness, objectivity, originality, passion, and so on. The average blog, frankly, scores pretty low on all these scales. But I tend not to read too many of those. I seek out the exceptional blogs, the ones that inform and delight me, move me with their words, bring stories to life, make me think. Even though these are a small fraction of all blogs written, I'm able to find quite a few of them.

By contrast, mainstream media tends to be uniformly mediocre. The actual difference in quality between a top newspaper and an average one is small. In fact, thanks to wire services, they tend to run most of the same content. In coverage of computers and software, aside from a handful of good technology reporters such as John Markoff and Dan Gillmor, there is almost no good reporting.

I don't read blogs the same way I read the paper, and that difference, I think, captures how blogs can be so much better. My "toolkit" consists of three essential elements: blogs, critical reading, and Google. In combination, they give me a reading diet that is, on most topics, vastly superior to what I'd get from reading the mainstream media.

To me, critical reading has two major pieces. First, trying to separate the wheat from the chaff. This is especially hard on the Internet (and in blogspace), because there is a lot of chaff out there. Second, reading multiple different views on a story, and trying to determine the truth from the bits for which there is consensus, and also to understand the real disagreements at the root of the differing views.

Synthesizing an understanding from multiple views is important because I don't have to depend on the objectivity of the writer. It is, of course, very important to judge how credible the writer is, what their biases are, and to what extent they let that distort the story. This isn't easy, and it's possible to get wrong. Even so, I find that I get a much clearer picture after reading two or more passionate stories from different sides, than one objective, dispassionate story.

Objectivity, while a noble goal, comes at a price. In the context of the media business, it usually guarantees that the reporter doesn't know much about the subject at hand. This, in turn, is most clearly detectable as a high rate of technical errors (Dave Winer points out some in the article under discussion), and the more worrisome, but less quantifiable, lack of insight. Ignorance about a topic also makes journalists more vulnerable to manipulation, at worst simply parroting press releases and "backgrounders". More typical is the way the mainstream papers accepted the SF police's estimate of 55,000 at the Jan 18 marches, even though the actual number was about triple that.

And on a lot of topics, learning about an issue leads one almost inevitably to take a side. Take the management of DNS for example. Of the people who know what's going on, those who do not have an interest in the status quo are almost all outraged. It's hard to find somebody who's both knowledgeable and objective, so insisting on the latter serves the story poorly.

The importance of Google

If you are going to read critically and sift through various viewpoints, the key questions are "what are other people saying about this?" and "how do these viewpoints differ?". As mentioned above, it's not trivial to find good alternate sources. But it's a skill one can learn, and there are tools that can help. Among the most important of these is Google. On any given topic, construct a nice search query, pass it to Google, and in a hundred milliseconds or so you'll be presented with lots of good links to choose from. Not all will be relevant or well-written, but you only have to sift through a dozen or two before coming up with a winner, and you can tell quite a bit from the results page, without even having to visit the link.

I'll give a couple of examples of how Google can provide more information than mainstream press articles. First was Nicholas Kristof's July 2, 2002 editorial in the New York Times entitled "Anthrax? The F.B.I. yawns". This editorial referred to a mysterious "Mr. Z". For whatever reason (fear of libel suits, perhaps?), the New York Times did not see fit to print the name of this individual, so people reading the papers were in the dark for a while. Googling, of course, revealed the name readily.

A more mundane example is this Washington Post story on a terrorist scare. A kid on the plane asked a flight attendant to pass to the pilot a napkin inscribed "Fast, Neat, Average". This is an Air Force Academy catchphrase, the standard response to an "O-96" dining hall feedback form, and, according to USAF folklore, also used in Vietnam as a challenge-response. Cadets and graduates sometimes write the phrase on a napkin in the hope that the pilot is USAF-trained. In this case, the kid turned out to be a neighbor of an AFA cadet, without much good sense about how cryptic notes might get interpreted. In any case, the Washington Post article carefully omits the response ("Friendly, Good, Good"), even though it's easy enough to find through Google, among other places in a speech by President George H. W. Bush.

Other newspapers do worse. The Washington Times manages to misquote the three-word phrase. The AP wire story, as published by the Seattle Post-Intelligencer, CNN, and other outlets, doesn't even bother with the O-96 dining hall part of the story.

Why isn't there any coverage of Teletruth?

The systemic corruption of the telecom industry is one of the most important stories since Enron, but you won't find it in your newspaper. Why not? Bruce Kushnick has written a book detailing the crimes of the telecom corporations, but nobody in the mainstream press is following up on it. A Google News search returns exactly one result for either "Teletruth" or "Bruce Kushnick", and that appears to be a press release.

I'm having real trouble understanding why this story isn't getting any coverage in the mainstream press. I'm having even more trouble reconciling this fact with the ideals of objectivity as professed by journalists. If you're a working editor or journalist, especially in the tech sector, did your publication make a decision not to run the story? Why? I'd really appreciate more insight. Even if Bruce Kushnick is a complete nut (which I doubt), it seems as relevant as the Raelians.

I consider it quite plausible, even likely, that this is a huge story, but for whatever reason, readers of newspapers are completely in the dark about it. Critical readers of blogs, though, aren't.

Conclusion

Just about every time I've had the opportunity to check a mainstream news story, I've found it riddled with errors. Every time I've been interviewed by the mainstream press, the resulting story significantly distorted what I was trying to say, and from what I read in other blogs, this experience is very common. Even on the off chance that a tech story is factually correct, I don't learn much from it. There are important voices missing from mainstream media, especially those critical of big companies, or, more importantly, providing a credible alternative.

By contrast, the best of the blogs I read are passionate, well-informed, topical, and insightful. They don't make a lot of stupid factual errors, but those that slip through are corrected quickly. The best blogs are partial but fair, and up-front about their biases, as opposed to pretending to be totally objective.

It's not just technology reporting, either, although that's obviously close to the hearts of the early blogging community. I think the flaws of mainstream reporting, and the potential of blogging to address those flaws, generalize to many other areas of interest. I'm sure, though, that newspapers are a very important information source for sports gamblers, and will continue to be important in that role for quite some time.

It takes more time and effort to get one's information through critical reading of blogs than it does to read the paper, but the results are well worth it. To paraphrase Thomas Jefferson, were it left to me to decide whether we should have newspapers without blogs, or blogs without newspapers, I should not hesitate a moment to prefer the latter.

Enfilade theory

I had a truly great IRC discussion with tor today. Among other things, he brought up Enfilade theory, which is one of the things the Xanadu folk came up with. It's easy to be put off by the presentation, including the strange terminology and all the boasting about how secret it all used to be, but I think at heart it's a useful tool for talking and reasoning about computations over trees.

As far as I can tell, the "wid" and "disp" properties are very similar to synthesized and inherited attributes in attribute grammars. The canonical example of "wid" is the number of bytes contained in the node and its descendants. Given two subtrees and their byte counts, it's easy to compute the byte count of their combination - just add them. Byte counting is simple, but there are lots more properties that can be computed in this manner. "Widiativeness" describes the generalization. One key insight is that all widiative properties are also associative.

Whatever properties are chosen, an enfilade is basically just a tree with the enfilade properties stored in the interior nodes. The tree engine has to update these values whenever the tree gets updated, but the cost is only proportional to the depth of the tree - O(log n) when the tree is balanced.
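To make this concrete, here's a minimal sketch in Python, using byte count as the "wid" property. The class and method names are mine, purely for illustration; they're not Xanadu terminology or Fitz code.

    class Leaf:
        def __init__(self, data):
            self.data = data
            self.wid = len(data)  # byte count of this leaf

    class Node:
        def __init__(self, children):
            self.children = children
            self.recompute()

        def recompute(self):
            # The combining rule for byte counts is just addition.
            self.wid = sum(c.wid for c in self.children)

        def replace_child(self, i, new_child):
            # A real tree engine would also recompute each ancestor on the
            # path to the root: O(depth), so O(log n) when balanced.
            self.children[i] = new_child
            self.recompute()

    tree = Node([Node([Leaf("hello"), Leaf(" ")]), Leaf("world")])
    assert tree.wid == 11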

While "wid" properties propagate up, "disp" properties propagate down. In Fitz, the graphics state is essentially a disp property. You have interior nodes that set graphics state parameters, and affect all child nodes. When there are multiple nodes on a path from the root to a leaf node that set the same parameter, the deepest takes precedence. Similarly, clip paths intersect (which happens to be commutative), and coordinate transform matrices multiply.

Bounding boxes are a "wid" property, with the combining rule being rectangle union. In the PDF 1.4 data model, RGBA renderings of groups (whether nested as a Form XObject or q/Q pair) are almost wid, the exception being non-Normal blending modes. For these, the RGBA rendering of an object may depend on the colors that were rendered beneath it. This has implications for caching (I won't go into details here, because they're probably only interesting to one or two other people).

Another "wid" property is the tree representation for Athshe I've been thinking about for a while. You want to efficiently represent a tree with arbitrary structure, meaning it may or may not be balanced, and nodes may have ridiculously small or large fanout. The solution is to store a serialization of the tree in a B-tree, using open and close parentheses in addition to leaf symbols. There is not necessarily any relationship between the tree structure and the b-tree structure; the latter is always balanced with a tight distribution of fanouts. I wrote about storing some summary information about parenthesis balancing in B-tree nodes.

It makes sense to describe this summary info in enfilade terminology. My proposal is an enfilade that stores a "wid" property at each B-tree node. The "wid" property is a tuple of two integers. For open paren, it's (1, 0), for close paren it's (-1, -1), and for leaf nodes, it's (0, 0). The combining operator is defined as combine((a, b), (c, d)) = (a + c, min(b, a + d)). When this info is stored at each B-tree node, something magic happens: it's possible to do all the standard node navigation methods (parent, first child, prev and next sibling) using only O(log n) accesses to B-tree nodes (proof of this claim is left as an exercise to the reader).
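Since the combining operator is the whole trick, here it is in executable form. This is a direct transcription of the definition above: the first component is the net paren balance, and the second is the minimum balance over all prefixes (including the empty prefix, which is why (0, 0) summarizes the empty sequence).

    OPEN, CLOSE, LEAF = (1, 0), (-1, -1), (0, 0)

    def combine(x, y):
        (a, b), (c, d) = x, y
        return (a + c, min(b, a + d))

    def summarize(tokens):
        acc = (0, 0)  # summary of the empty sequence
        for t in tokens:
            acc = combine(acc, t)
        return acc

    # "( leaf ( ) )" is well-formed: net balance 0, never dips negative.
    assert summarize([OPEN, LEAF, OPEN, CLOSE, CLOSE]) == (0, 0)
    # ") (" also has net balance 0, but the min-prefix component
    # exposes the dip below zero.
    assert summarize([CLOSE, OPEN]) == (0, -1)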

I hope I've presented a wide range of issues for which enfilade theory is a useful reasoning tool. I think it's also useful to criticize bad designs. My favorite example is (of course) from the W3C.

The application of CSS stylesheets to XML trees is almost a disp property. However, the adjacent sibling selector in CSS2 breaks disp-ness. When I was working on implementing CSS, I looked through a lot of stylesheets, including searching all the XUL glorp in Mozilla at the time. I found a couple of uses of the adjacent sibling selector, but all could be factored out. In other words, the actual stylesheet had the disp property even though the adjacent sibling primitive lacked it. Breaking disp-ness hurts performance and makes implementations far more complex than they should be. Perhaps if the designers of CSS2 had been aware of enfilade theory, they could have avoided this mistake.

(at dinner tonight, Heather asked me what I was thinking about. I responded, "enfilade theory". she asked me what that meant, and I promised her that after reading tonight's blog, she'd know more about it than she ever wanted to. hopefully, i've made good on my promise :)

librsvg and batch formatting

It warms my heart to see librsvg thriving under the care of Dominic Lachowicz. I'm more of a starter than a finisher, so for me it's a sign of success when a project of mine makes the transition to a new maintainer.

There's one lesson from Gill and librsvg that I think generalizes to other projects. Looking back, I really wish I had done librsvg (a batch renderer) first, then tackled Gill (a full interactive editor). It would have been possible to attain useful results and a good quality codebase much more quickly if I had done that.

In particular, I think there's a huge role for a batch Microsoft Word formatting engine. There are a bunch of free word processors that can read Word documents, but given their broad focus and the difficulty of interactive GUI apps in general, I think it'll be a long time before any of them can open tough Word documents and format them perfectly. But for a batch formatter (with, say, PDF output), I think it's a very reachable goal. Aside from its obvious practical utility and value as a code resource, it would also be a great tool to test GUI word processors against.

Crowd estimation

Since I wrote about estimating the Jan 18 peace march in San Francisco, I was interested to see Lisa Rein's link to a Forum program on the topic. Among the guests was Farouk El-Baz, who is the author of the paper on the Million Man March estimates. In any case, the consensus now seems to be 150,000 to 200,000 people. The original SF police estimate, widely reported, of 55,000 was a gross underestimate. I'm gratified to learn that my own numbers are closer to the mark.

Gems

I ordered an assortment of gems from Pehnec Gems a couple of weeks ago, and have been enjoying them with Alan and Max since they came. Small very sparkly objects give rise to a surprisingly strong emotional response. The cubic zirconia are quite beautiful, very similar to diamonds but a bit more colorful, and I'm also fond of the sapphire and emerald (real but lab-grown).

Batteries

Slashdot linked my ThinkPad battery page a few days ago. Still no response from IBM, which is really not the behavior one expects from a reputable company.

A grab bag of responses today, plus some actual technical content.

Venezuela

Thanks to guerby for his response to my entry on Venezuela. The situation there is clearly very complex, and I absolutely agree that trying to become informed by reading one blog is unwise. It's cool that he's delved into the issues without being partisan to one side or the other; that seems to be rare.

Indeed, one of the great strengths of the blog format is the ability to respond; to provide even more context for readers. The newspaper I get sucks, but there's precious little I can do about it.

Trolls

I agree with djm that a "don't feed the trolls" policy is probably the wisest. I usually read the recentlog with a threshold of 3, so I don't tend to even notice troll posts unless someone else points to them.

Crowd estimation

jfleck's story about estimating the Rose Parade crowd sounds quite a bit like this one. One clarification: my Market Street numbers are based on a per-person area of four feet by four feet, or 16 square feet. Based on this, I am quite confident that my figure of 80,000 is a lower bound on the total who participated in the march. Now that I have some idea how the police come up with their crowd estimates (basically, guess), I see no reason to prefer their numbers over any others.

The war

I'd like to thank Zaitcev for his thoughtful criticism of my opposition to the war. He's made me think, which is a good thing no matter what you believe.

I agree with his point that Islamic fundamentalism is a powerful and destructive force, especially when used as the justification for dictatorships. Coexistence between the Muslim sector of the world and the West is clearly going to be one of the biggest challenges in the coming decades.

But I think that even if one agrees with the fundamental premise that military action is the best way to respond, there is plenty to criticize in the US administration's war plans. For one, from everything that I see, Iraq isn't the most virulent source of Islamic fundamentalism, not even close (it doesn't even show up on this map). Second, a pre-emptive attack based on no hard evidence, or possibly on lack of compliance with UN resolutions, is virtually guaranteed to fuel hatred of the US in the Muslim world, not to mention strong anti-American feelings throughout the world. No need to speculate; it's starting now, based just on the rhetoric of war, not (yet) thousands of people dying.

Finally, even if the warmongers are dead right, starting a war with potential consequences of this magnitude demands very careful debate and deliberation, at least in a free society.

The Onion's take would be funny if it weren't so darned close to the truth.

Free(?) fonts

Bitstream has announced that they're donating freely redistributable fonts. It's always nice to see more font choices. Now seems to be a good time to remind people, though, that the URW fonts that ship with every Linux distribution were purchased by Artifex and released under the GPL. I'm not sure whether the license Bitstream chooses will be Debian-free or not, especially given that they haven't released its text yet.

Distributed, web-based trust metric

Inspired by discussions with Kevin Burton, I've been thinking a bit recently about using Web infrastructure to make a distributed trust metric. I think it's reasonable, if suboptimal.

The basic idea is that each node in the trust graph maps to a URL, typically a blog. From that base URL, there are two important files that get served: a list of outedges (most simply, a text file containing URLs, one per line); and a list of ratings. In the blog context specifically, this could be as simple as a 1..10 number and URL for each rating. But, while the outedges are other nodes in the trust graph, the ratings could be anything: blogs, individual postings, books, songs, movies, whatever.

So, assuming that these files are up on the Web, you make the trust metric client crawl the Web, starting from the node belonging to the client (every client is its own seed), then evaluate the trust metric on the subgraph retrieved by crawling. The results are merely an approximation to the trust metric results that you'd get by evaluating the global graph, but the more widely you crawl, the better the approximation gets.

A simple heuristic for deciding which nodes to crawl is to breadth-first search up to a certain distance, say 4 hops away (from what I can tell, this is exactly what Friendster uses to evaluate one's "personal network"). But a little thought reveals a better approach: choose sites that stand to contribute the largest confidence value to the trust metric. This biases towards nodes that might be more distant, but are highly trusted, and against successors of nodes with huge outdegree. Both seem like good moves.
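Here's a rough sketch of what such a client might look like in Python. The outedges.txt filename is my invention (following the file layout suggested above), and the confidence bookkeeping is a crude outdegree-damping stand-in; a real client would plug in the confidence values from the actual trust metric evaluation.

    import heapq
    import urllib.request

    def fetch_outedges(url):
        # Hypothetical convention: outedges served as text, one URL per line.
        with urllib.request.urlopen(url + "/outedges.txt") as f:
            return [ln.strip() for ln in f.read().decode().splitlines()
                    if ln.strip()]

    def crawl(seed, max_fetches=100):
        # Min-heap, so store negated confidence: highest confidence pops first.
        frontier = [(-1.0, seed)]
        graph, seen = {}, {seed}
        while frontier and len(graph) < max_fetches:
            neg_conf, url = heapq.heappop(frontier)
            outedges = fetch_outedges(url)
            graph[url] = outedges
            for succ in outedges:
                if succ not in seen:
                    seen.add(succ)
                    # Damp by outdegree: successors of huge-fanout nodes get
                    # a smaller share of confidence, as described above.
                    heapq.heappush(frontier,
                                   (neg_conf / max(len(outedges), 1), succ))
        return graph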

What are the applications? The most obvious is to evaluate "which blogs are worth reading", which is similar to the diary rankings on Advogato. Perhaps more interesting is using it to authenticate backlinks. If B comments on something in A's blog, then A's blog engine goes out and crawls B's blog, determines that B's rating is good enough, then posts the link to B's entry on the publicly rendered version of the page.

I see some significant drawbacks to using the Web infrastructure, largely because of the lack of any really good mechanism for doing change notification. There's a particularly unpleasant tradeoff between bandwidth usage, latency, and size of the horizon. However, with enough spare bandwidth sloshing around it might just work. Further, in the Web Way of doing things, you solve those problems at the lower level. It may well be that this is a more reliable path to a good design than trying to design a monolithic P2P protocol that gets all the engineering details right.

I'm not planning on implementing this crawling/tmetric client any time soon, but would be happy to help out someone else. For one, it would be very, very easy to make Advogato export the relevant graph data.

Mainstream media sucks

You'd think that Saturday's peace marches would be considered fairly important, given that we are on the brink of war. But the Contra Costa Times (the Sunday paper we get) saw fit to run a one-column story below the fold, with the headline "Thousands in S.F. rally against war", as opposed to a five-column, above-the-fold report on a ball game, headlined "Ground Zero".

Most media reports of the march accept the police estimate of 55,000, while the organizers estimate 200,000. In this age of helicopters and instant high-powered image processing, the magnitude of this discrepancy is surprising.

It turns out that crowd estimation is quite a tricky business. There's an excellent paper on the efforts to count the number of marchers at the Million Man March in Washington in October 1995. I ran a few sets of numbers myself, and came up with estimates ranging from a minimum of 80,000 to a maximum of 180,000 (the number of people who can fit in Civic Center Plaza). The estimate I have the most confidence in is the number who filled Market Street from Embarcadero to Civic Center. Given four feet square per person (which I consider very conservative), 2 miles by 120 feet, that works out to about 80,000.
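For the record, the arithmetic (easily checked in Python):

    length_ft = 2 * 5280          # 2 miles of Market Street
    width_ft = 120
    area_per_person = 4 * 4       # one person per 4 ft x 4 ft square
    print(length_ft * width_ft // area_per_person)  # 79200, about 80,000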

The energy at the march was great. Heather and I are happy that we could be a part of it. What really struck me was the diversity of the crowd. While there were a few lefty radicals in evidence, mostly we saw ordinary folks of all ages, largely white but with plenty of other races represented. The message is clear. Lots of Americans don't want us to go to war. One of my favorite signs read simply, "Why?"

I'm sure the story about the Raiders was really good.

Mainstream media sucks more

From Dave Winer, I came across a blog on the situation in Venezuela. It's beautifully written, and conveys events with clarity, compelling narrative, and passion. These qualities are not highly respected in mainstream journalism, particularly the last. The presumption is that journalists should be impartial and objective, so actually having an opinion on something is frowned upon, much less expressing it publicly. Thus, the New York Times, for which Francisco was a beat reporter, brought up "conflict of interest" concerns, which Francisco chose to resolve by resigning.

Meanwhile, wire service reports are, as usual, bland, if factually correct. There's no way anyone without a personal interest in Venezuela is going to learn what's really going on from reading about it in the papers. They'd probably get a pretty good idea, though, that the instability there is responsible for an increase in gas prices here in the US.

So here's a clearcut case, I think, where blogs are simply better than mainstream journalism. I wish Francisco and all the people of Venezuela the best through the present crisis.

New laptop

Heather needs a laptop - the Thinkpad 600 is starting to go flaky. We settled on a 14" iBook. A big reason is the battery. Heather often goes to cafes to write, and often finds it hard to plug in, so reasonable battery life is very important. Even going by the specs, the iBook is about twice as good as IBM's R series, and given IBM's past performance, the latter could well be a lot worse after a few months of real use.

It's not a perfect machine. To me, the biggest drawback is the low resolution of the screen (91 dpi). The CPU is slow by modern standards (800 MHz), but not having to burn battery to fuel a multi-GHz chip is a Good Thing. Running Safari, it ought to be plenty zippy, certainly a dramatic upgrade over RH 8 Mozilla on the TP600.

With Jaguar pre-loaded, it'll take considerably less time to get up and running than a PC-class notebook, and this way we won't be forced to buy a Windows license.

Rule of law

I've come across quite a number of stories recently that struck me. A few days ago, I found a thread which I think binds them: the concept of "rule of law" seems very much under attack.

Dumpster diving: is it legal or not? The police in Portland felt perfectly justified in doing so, with the blessing of the Multnomah County DA's office. However, when the local weekly did the same back to the powers that be, they weren't happy.

The main issue here is whether the same law applies to those in power and those who aren't. As Anatole France observed, ideally "The Law, in its majestic equality, forbids the rich, as well as the poor, to sleep under the bridges, to beg in the streets, and to steal bread."

Ambiguity in the law is a big part of the problem here, and a good way to fight against it is to force the powers that be to pin down exactly where the line is drawn. That's the basis behind Dan Bernstein's Dear Ms. Tarzian campaign, and also my request for export review of the perl-RSA t-shirt.

Do the laws on assault apply to airport security agents, or just regular citizens? Penn didn't press charges, but his experience is a good role model nonetheless.

Can the phone companies get away with corrupt and illegal business practices, or are they somehow above the relevant laws? If I had more free time, I would definitely read this book and verify some of its claims.

Incidentally, for those who are into the "blogger vs traditional journalism" debate, I consider the mainstream media seriously negligent in not following up on Bruce Kushnick's work. I really hope the blog community takes hold of this, reads it, checks it, and gets the word out. If so, it would be an impressive demonstration of how we can delve into deep, complex, and important issues. The corporate-owned newspaper hegemony could use a little ass-kicking.

Myers Carpenter has a blog entry on his experience trying to get DSL and what he plans to do about it.

The Bush administration. Don't get me started. The problem here is too many examples to choose from. A particularly egregious example is the appointment of Hill and Pfotenhauer to the National Advisory Committee on Violence Against Women, which will oversee the implementation of the Violence Against Women Act. These two can be counted on to completely thwart the spirit of the law, and probably the letter too. After all, who the hell will stop them?

ICANN is out of control. Again, a big part of the problem is that ICANN's rulers have consistently refused to be bound by their original bylaws, and have conveniently changed them to be less accountable to the public. I swear, if I collected all seven dragonballs, I'd bring back Jon Postel.

Safari

I like Safari. I also think they did the right thing choosing KHTML over Gecko. Size does matter, you know.

Of course, if they had wanted a really fast browser engine, they should have picked Gzilla (dillo) :) . But of course, the goals are quite different. They're trying to support the rich Web standards of today, including CSS, DOM, and JavaScript, where Gzilla is really only for displaying HTML.

I think this move is also really good for standards. Web developers are used to just testing on Explorer and maybe Netscape. With the inevitable popularity of Safari, it will make more sense to just make standards-compliant web pages instead.

I was going to try to tie this in to the "rule of law" thread above, but will leave it as an exercise for the reader.

March

The whole family will be at the Jan 18th peace march. I know a lot of my friends will be going too. Maybe we can get together. Probably cell phones are the best way to coordinate, so email me for my number. It's likely, though, that the crowds will make logistics difficult. Oh well, that's a good problem to have.

Conversations

I met with Henri Poole on Friday, and Peter Deutsch on Tuesday. Both times, we had deep, far-ranging, stimulating conversations.

Henri is interested in implementing the web services version of the trust metric for Affero (see Bram's diary entry and links for more background). This could be fun and exciting. If it happens, we'll probably make Advogato an early client.

Audiobook

I love reading stories to Alan and Max, and have lately become somewhat taken by the idea of recording audiotexts and releasing them under a Creative Commons license. So far, I've just been playing around, but you might be interested in listening to what I've recorded anyway.

In particular, a few days ago I read the prologue of Cory Doctorow's Down and Out in the Magic Kingdom. It's a good story, and of course the whole "free Internet distribution isn't bad for sales" meme is a brilliant marketing ploy.

If 10 other people each record a chapter, it's a distributed audiobook. That would be kinda fun.

IJS

I just sent out notes towards a 1.0 rev of the IJS spec. As a protocol, it's been pleasantly stable, and I get the impression that there are quite a few people using it without a lot of fuss and bother. This is good; I don't consider protocol design a shortcut on the road towards riches and fame.

MPEG2 telecine

We got an Apex 1100W DVD player for Christmas, which is a pretty cool toy. One of the things I've been playing with is transcoding DVDs to SVCD format. When done right, the quality is quite good, and I'm a fan of the CD-R format. It's quick, cheap, and easy to duplicate; readers are nearly universal; and it's good for archival as well.

Unfortunately, some DVDs are difficult to transcode with existing free software. I've had the best success with transcode and mpeg2enc (from the mjpegtools suite). But sometimes, particularly for cartoon sources, there's jerky motion, visible interleaving artifacts, and A/V sync problems. I did a little digging and think I know what the problem is.

Most films are encoded into MPEG2 using "3:2 pulldown" telecine. Each 1/24th second film frame is converted into either 3 or 2 1/60th second video fields. By alternating between the two, the frame rates come out about right. (In fact, in the NTSC standard, the field period is exactly 1001/60000 second, which, if ignored, can lead to fun drifting of sync).

And indeed, most of the time, decoding of the MPEG2 stream will yield the original frames at (roughly) 24fps. Just scale to SVCD res, re-encode with the 3:2 pulldown flag set (-p option to mpeg2enc), and you're golden.

Only problem is, 3:2 pulldown is not exactly one flag in MPEG2 streams. In fact, it's implemented as two flags per "picture": "repeat first field" and "top field first" (described in more detail here). And, as one might expect, some DVD's use a fancier mix of these flags than vanilla 3:2 pulldown. For example, "Elmo in Grouchland" starts with 60 black frames with no pulldown, followed by the movie proper with 3:2 pulldown.
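To make the flag business concrete, here's my understanding of the vanilla 3:2 pulldown cadence, sketched in Python. The exact TFF phase can vary between streams; the point is that the RFF pattern yields 60 fields for every 24 film frames.

    # One (top_field_first, repeat_first_field) pair per coded frame,
    # repeating every 4 frames. My reading of the cadence, for illustration.
    PULLDOWN = [(1, 1), (0, 0), (0, 1), (1, 0)]

    def fields_per_frame(rff):
        return 3 if rff else 2

    # 24 film frames -> 60 video fields (one NTSC second, ignoring the
    # 1001/60000 correction mentioned above).
    fields = sum(fields_per_frame(rff)
                 for _ in range(6) for (tff, rff) in PULLDOWN)
    assert fields == 60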

And the problem is that, as far as I can tell, there's no way to preserve the RFF and TFF field pattern from input (DVD) to output (SVCD) MPEG2 streams using free tools. In particular, mpeg2enc hardwires these flags on output to either all-0's (no pulldown) or a 3:2 pulldown pattern with a repetition period of 4 frames.

Modifying mpeg2enc to support arbitrary RFF and TFF, and transcode to provide these fields from MPEG2 input, would seem to be a very good idea. I'm sure whoever implements this will get love and appreciation from legions of users.

It's also possible that I'm missing a good way to do this with existing tools, in which case I'm sure someone will point it out.

Kids

Alan's school resumed today, and I start up with the homeschooling. I'm eager to try electrolysis of water as a fun chemistry experiment. Since my goal is not (at present, anyway) to make a real table of elements, but to teach the basic concept, I think I'll go for the variation where the mixture of O2 and H2 gases fizzes into soap bubbles, which are then ignited for (hopefully) impressive yet safe effect. I'll let you know how it goes.

Max continues to astonish us with his language development. Over the past couple of weeks, he's pretty much mastered the alphabet song, and is just starting to be able to learn his individual letters (he knows "M", "A", and "X" pretty well now :). But he still managed to impress me with his facility at Alchemy, in which you have to match symbols and colors. Some of our friends have been impressed merely that he's able to use the mouse. They should watch him clicking away at the thing like it's perfectly natural for a two-and-a-half year old. He's going to be a computer geek like his dad, I just know it.

Fitz

I just got my first PPM file rendered from the new Fitz prototype. One of the nice things about this go-round is that I'm doing Python bindings for everything. I'm finding Python to be a very nice environment, especially for things like testing.

It's feeling very close to ready to put into the CVS repo. I'm also starting to feel ready to ramp up the public process. Especially with the clean Python bindings, I expect that there may be lots of involvement.

I'm definitely feeling daunted, though. There are lots and lots of things that need doing, over and above getting good rendering going. Documentation, language bindings, Ghostscript plumbing, porting, that sort of thing. Even within rendering, there are many different priorities. Should it be complete first, or very good at the common case first? My inclination is the former, but the desires of the free software community probably lie toward the latter.

In any case, I'll be reaching out to people who have expressed interest in Fitz over the next few days. I want the process to be very open, but there is one requirement: you must sign a copyright assignment form. This seems to me like a reasonable balance between commercial viability and getting cool code released as free software (Fitz is GPL plus dual commercial licensing), but hardcore purists might be dissuaded by it. Ah well.

Life

The kids had a nice Christmas, with plenty of toys, and then a big family dinner at Grampa's.

I'm definitely struggling with time management. Part of the problem is that my sleep schedule is out of whack again. I seem to need at least 10 hours of sleep or so, which really cuts into my productive time. I have a lot of things I'd like to do, so I'm becoming determined to get a little more discipline, and use the time I have most effectively.

That means a little less time to play with proofs and other blue-sky things, but I'm fine with that. I do plan on keeping on going with this diary, though. Among other things, I find that the times that I'm most prolific here tend to coincide with times I'm most productive with work. So deliberately holding back my urge to write doesn't seem like a good idea.

I'm very happy that the days are getting longer again, at least in theory. You can't really see it yet, though. Right now, it's rainstorming pretty heavily. Even so, I know it will get brighter, and that's something.

Trust metrics for blogs

I had a nice long phone chat with John Hiler of Xanga and Microcontent News. He's just recently read my thesis draft, and is intrigued by the idea of using trust metrics in larger-scale blog services. The main application will probably be to suppress spam and abuse, and perhaps to ease support.

The prospect of having the trust metric ideas implemented for a mass audience site is exciting. I'm sure it will happen sooner or later, and this might just be the sooner.

Teletruth

Bruce Kushnick has written an "Unauthorized Bio of the Baby Bells", available for free download. I don't really have time to read through it right now, but I'm intrigued by his thesis that the telcos in this country are basically a scam. He comes across as a little bit of a crank, but my gut feeling is that he's substantially right.

While my experiments so far with VoIP have shown it to be not quite ready for prime time, I strongly suspect that it will become so in a couple of years or so, at which time the telco business will find itself quite disrupted. For example, imagine a cell phone that has an onboard 802.11 radio, and logs on to the Internet (instead of GSM or whatever) whenever it finds a base station within range. That would be cooool.

I'd be interested to hear what any Advogatan (or critical thinker from any corner) has to say about this book.

(via JOHO the blog via boingboing)

DNS, phone numbers, and ENUM

One of the many problems that needs to be solved before Internet phones become usable is directory services. What kind of name space do we want for IP phones? One obvious choice is phone numbers, which have the advantage of smoothing migration from POTS. Other choices include DNS-based email-like human readable addresses, or entropy-rich tokens such as hashes of public keys. As a stopgap, people might use IP addresses (these seem to be standard for NetMeeting and friends), but those have serious usability problems.

One proposal to map phone numbers to IP addresses is ENUM, which hacks the phone number namespace on top of DNS. The basic idea seems sound, but in practice it's almost certain to inherit the problems of DNS, not least of which is corrupt governance.
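The mapping itself is simple: strip the E.164 number to digits, reverse them, and hang them off e164.arpa, where NAPTR records then point to the actual service (this is the RFC 2916 scheme; the phone number below is made up):

    def enum_domain(e164_number):
        digits = [c for c in e164_number if c.isdigit()]
        return ".".join(reversed(digits)) + ".e164.arpa"

    assert enum_domain("+1-555-123-4567") == "7.6.5.4.3.2.1.5.5.5.1.e164.arpa"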

In particular, it sure looks like the telephone companies are going to be the primary owners of namespace chunks. This is the worst choice possible, because they have a powerful business interest to fuck up deployment of IP phones as long as possible, so as to preserve their price-gouging in the POTS business.

I want to be able to map my phone number to my Internet address. I'm skeptical that the ENUM implementation will actually be able to provide this service for any reasonable price and terms, but of course I'd love to be proved wrong. In the meantime, people interested in making IP phones happen should be thinking about other forms of namespace. Or, perhaps the right answer is an independent service that also uses the phone number namespace, but is actually trustworthy, as opposed to the telecom industry. Worth thinking about, in any case.
