Older blog entries for Grit (starting at number 11)

I was already annoyed by the SBC ads claiming "we built the network and understand it; don't force us to let other people use it." But now that they're offering long distance, the "one bill" commercials make absolutely no sense.

I already get one telephone bill with both local (SBC) phone service and (Sprint) long distance. What are they trying to convince me of? Do they think I'm too stupid to know how many bills I'm paying? (I may be paying extra to have SBC bill me, but I could save $1/month by having Sprint bill directly to my credit card if I cared.)

9 Dec 2002 (updated 3 Feb 2003 at 04:43 UTC) »

The Star-Tribune hit the nail right on the head in their editorial about "kids.us." The question to ask is: who will use this? Who will it benefit? Certainly anybody doing Internet filtering today is not going to add "*.kids.us" to their whitelist. (Although if somebody announces such a plan I'd be interested in hearing about it.)

I'm annoyed that from a government perspective, 12-year-olds and 7-year-olds are equivalent in what they should be allowed to see.

I'm also worried that the failure of ".kids.us" would increase the pressure for a worse "solution", like a ".xxx" or ".prn" domain. Not that they would work, either.

9 Jul 2002 (updated 9 Jul 2002 at 03:33 UTC) »
lukeg: I appreciated your comments on TCP. One of the members of my research group (Sam Liang) is doing his thesis on "using TCP for everything". One of his papers, on extending TCP for multicast support, was accepted at Infocom 2002.

The work is in some ways a reaction to all the numerous weirdo protocols people build--- either explicitly or in an ad-hoc manner on top of UDP. A lot of times people set out to roll their own and end up duplicating a lot of TCP's features that they initially thought they didn't need. There is also a very real sense that TCP might be the only protocol we get to use; anything else is going to be even harder to deploy, harder to get through firewalls, might not be allowed by router ACLs, doesn't work with NAT, etc.

Real-time delivery is the biggest problem people seem to have with TCP. (Unless they're trying to load up their entire multi-gigabit link with a single TCP flow...) Sam's design provides a way around this that is probably good enough for most applications. Check out his TCP-RTM paper if interested.

Framing is a big issue; the approach we've been thinking about lately is to use application-layer framing (which is necessary for the reasons you mention), but to change TCP's behavior to respect write() boundaries. Thus, an individual TCP segment (and a read()) contains bytes from no more than one application-layer frame.

I think you've hit the nail on the head about "Worse is Better": TCP may often be not quite the right thing from an end-to-end perspective, but it's so much easier (and better engineering) to use what's already there than try to come up with something better from scratch. I'd argue (donning my asbestos suit) that the same thing can be said about IPv6 and NAT: one looks better from an e2e perspective, but a solution which just fixes NAT's problems will be a lot easier to deploy (and not necessarily violate the e2e argument, either.)

2 Jul 2002 (updated 2 Jul 2002 at 06:55 UTC) »

*sigh* So much verbiage, so little code.

I finished editing my papers on domain name policy for the class I took this spring. They're up here in a variety of formats. One still to go.

I also wrote a medium-cool short paper on denial-of-service attacks against the domain name system, which I submitted to HotNets. Not going to make that one publicly available for now, but if you're interested, drop me an email.

Congratulations to Marissa, whose short story "The Handmade's Tale" is an honorable mention in the 2001 Year's Best Science Fiction anthology!

The president of the Business Software Alliance wrote this spirited defense of the Digital Millennium Copyright Act in today's San Jose Mercury News. Not too surprising given the BSA's support of the bill when it was in Congress. (Why the Merc's "My View" space is being used for the opinions of a professional mouthpiece from Washington, D.C. is an entirely different issue.)

I sent in the following response, which probably won't see the light of day, given that I expect a hailstorm of righteous indignation from other Bay Area geeks:

    Robert Holleyman's article on the Digital Millennium Copyright Act tries to justify the law's existence by claiming its necessity for the protection of online content. But the DMCA does not "[extend] traditional copyright protections to the digital world", as Mr. Holleyman claims; it creates an entirely new protection. Online content was already legally protected by existing copyright law.

    The so-called "anti-circumvention" provisions of the DMCA instead seek to provide legal protection to copy-protection and access-prevention devices and technologies. These technologies are not about "piracy"; they are about control, as aptly demonstrated by the example of the protection scheme used for DVDs. There are plenty of ways to copy a DVD: you can make a copy of the entire disk, protection and all, or use a digital video camera to record the movie off a screen. The encryption on a DVD disk does not prevent this copying; it prevents playing the disk on unapproved devices. Such control must necessarily interfere with "fair use" of copyrighted content, since the circumstances in which use of a copyrighted work is legal are far broader than can be allowed by a fixed technology.

    Judge Whyte's opinion does in fact state that the DMCA does not prevent circumventing use restrictions. But I find it incomprehensible that the tools to do so are illegal even while the act is not. Does "fair use" apply only to those technologically savvy enough to build their own tools? Congress's disclaimer of any attempt to impair fair use rings hollow. It is as if owning VCRs were legal but selling them were not. (Remember VCRs? The technology that was supposed to destroy the movie industry by making copying easy?)

    Mr. Holleyman's claim that the DMCA is responsible for the growth of Internet usage is nothing short of ridiculous, given that most Internet content is not protected by any technological protection measures. Employment services, bookstores, news, games, magazines, media sites, and auctions all offer value on the Internet without the need for anything beyond simple password authentication. In fact, wasn't the movie industry just testifying before Congress that they needed yet more protection before releasing online versions of their movies?

    The nightmare world supporters of the DMCA paint for us already exists. Movies such as "Attack of the Clones" are available for download on the Internet before they are released in the theater. The DMCA is neither effective at preventing copyright violations, nor necessary in order to combat them. Instead, it shifts more power into the hands of content producers--- taking it away from those who write software, manufacture consumer electronics, or want to make fair use of legally purchased media.

Chalst: I agree that having persistent URLs and domain names is usually superior to ones that might change without warning. It's possible (and common) to have both coexisting, though.

The TLD issue is one I'm still trying to resolve in my head. I guess if I have a point to contribute, it's that there is no reason for a new TLD unless you are unhappy with the allocation policies of the existing ones. ".museum", for example, has a set of requirements you have to meet to register in it. But I'm not convinced that this sort of extra hurdle actually adds value to the name.

If we have enough such "set-asides" to keep everybody happy (although whether that's possible is another issue), then there are probably going to be enough so that their mnemonic value is too small to be useful. If I'm looking for the San Francisco Museum of Modern Art, do I look at sfmoma.com, sfmoma.org, sfmoma.museum, moma.sanfrancisco.museum, sfmoma.art, sfmoma.pictures, sfmoma.sculpture, sfmoma.photo, sfmoma.mus, moma.sf.ca.us, etc.? (Although I do like what .museum does--- if you guess wrong, it dumps you into a list you can search through.)

So, if we can't make people who want to try random URLs (and those that want to catch their attention) happy, does "sfmoma.museum" give more confidence in the result of a search engine than "sfmoma-museum.org" does? Perhaps.

Anyway, I hope to have an argument which makes sense in the paper I'm writing.

I just had an interesting conversation with Kathryn Kleiman, who's involved in technology law and co-founded the "Domain Name Rights Coalition" in addition to a bunch of other cool stuff.

The main area we don't see eye-to-eye on is the need for new top-level domains. As I understand her argument, we need to break the ".com" monopoly up and give everybody a chance to have a "short" name. In the current system, "mcdonalds.com" goes to the company with the golden arches and everybody else loses out.

I understand the concern; I just don't see how TLDs solve the problem. Suppose I get "mcdonalds.consulting" in a brand new TLD. Even if McDonald's doesn't successfully sue me to take it away, in what sense am I better off than registering "mcdonalds-consulting.com"? If my web browser tries ".com" by default, then the difference is just a period instead of a dash.

Introducing new TLDs is not going to diminish the value of ".com" as long as it is seen as the default. The ugly truth, as I see it, is that there really is only one namespace. Somebody is going to win and get the preferred name as long as it is available; chopping up the bits differently just rearranges where the money goes. All we can really do is ensure there are still (less-desirable) alternatives for everybody, and we don't need TLDs to do so.

(One idea might be to auction off names every couple of years--- no automatic renewals. But that needs more thought.)

It's also worth pointing out that even with 1000 TLDs, the cost of McDonald's registering in all of them is minuscule. The only solution seems to be setting up yet more rules for what you're allowed to register in each TLD, but I don't see that as a winning solution.

Congratulations to my wife, Marissa, whose story is appearing in the June issue of Analog--- available at bookstores any day now. (We got her contributor's copies a couple days ago.)

I'm taking a very cool class this quarter, taught by Barbara Simons and Ed Felten. It's the Computer Science Policy Research Seminar. Lots of interesting stuff--- copyright, copy control, privacy, Internet governance, etc.

I've decided on a project about naming, since it's also a big chunk of my thesis topic. I've copied the abstract below:

Project: Using the Domain Name System for Content Segregation

World Wide Web content is referred to by Uniform Resource Locators (URLs). One portion of the URL is the server (or "host") identifier, which is looked up in the Domain Name System (DNS). DNS has a hierarchical tree structure, consisting of:

  • a single "root zone"
  • subsidiary "generic top level domains" (gTLDs, such as .com) and "country code top level domains" (ccTLDs, such as .us)
  • a vast number of "second-level domains" (such as stanford.edu) with their own policies and sub-delegations (e.g., dsg.stanford.edu)
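As a toy illustration (not real resolver code), the delegation chain implied by this tree can be derived by peeling labels off a host name from the left; the empty string here stands for the root zone:

```python
def delegation_chain(name: str) -> list:
    # Zones consulted when resolving a host name, from the root zone
    # down through each delegation to the full name.
    labels = name.split(".")
    return [".".join(labels[i:]) for i in range(len(labels), -1, -1)]
```

For "dsg.stanford.edu" this yields the root, then "edu", then "stanford.edu", then "dsg.stanford.edu", mirroring the delegation structure described above.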

Originally DNS had a limited set of generic top level domains, each with a specified use: ".mil" for U.S. military sites, ".com" for corporations, ".org" for non-profit organizations, and so on. However, with the increasing popularity of the Web, the meaning of these gTLDs has become less distinct. Personal sites are registered in ".com", and businesses register their trademarks in all available top-level domains. Countries with appealing ccTLDs, such as ".tv" and ".to", offer domain name registration to the world at large. ".com" by itself dwarfs the rest of the DNS tree, containing nearly all of the second-level delegations.

Recent attempts have been made to reverse this flattening trend by restricting the use of portions of the DNS tree, for a variety of reasons.

  • ICANN, the governing body responsible for allocating top-level domains, has approved the creation of several content-specific names, such as ".aero" (air-transport industry), ".name" (individual use), and ".museum". Typically membership in some group is necessary for registering in such domains, but there are only loose constraints on what content may appear within sites bearing these names.
  • H.R. 3833 (introduced March 4, 2002) directs the administrator of the ".us" domain to create a ".kids.us" delegation. Registering within this domain would be contingent on agreeing to a set of guidelines for what content is appropriate.
  • S. 2137 (introduced April 6, 2002) directs ICANN to create a ".prn" domain, and mandates that any commercial web site which is in the business of "making available ... material that is harmful to minors" shall operate their service only under the new domain.
  • Various suggestions have been made to reserve top-level domains for non-ASCII domain names, such as domain names encoded in a Chinese character set.

These attempts bring up a number of interesting technical and policy questions, which my project will try to address through position papers on ICANN and on the bills mentioned above.

  • Is the creation of content-specific domain names actually useful? Would the goals of those advocating such domain names be better served by a different allocation policy within existing domains? Are there any technical reasons to favor broadening the DNS tree?
  • Is content segregation by domain name effective? Can children be shielded from inappropriate content using such mechanisms without infringing on the rights of adults?
  • Are there First Amendment issues involved in naming? (For example, would the government be able to restrict the titles which books are given?)
  • Does the U.S. government retain the right to administer the entire domain name system? Or must any proposal such as ".prn" be subject to the same process as other new gTLDs?
  • What are the implications of content segregation policies on other protocols which use DNS, such as e-mail, and on other naming technologies, such as LDAP or Freenet?

No programming rant today. The DSL modem mysteriously started working this afternoon, so I've been migrating the home network. DirecTV (how I hate the missing 't') doesn't care if you run servers, so we're back to running our own mail server. But this time it's Postfix, not Sendmail.

Postfix is such a lovely program. Its configuration file was meant to be read by humans. Rather than being one huge monolithic ogre of a program, it's composed of a bunch of specialized gnomes who work together... and best of all, it's written by a nice Dutch guy. I may convert the Stanford systems I administer over to it. (This is a big, big win over the previous system, where mail destined for gritter.dsg.stanford.edu travelled through at least three machines, two protocols, and four software packages.)
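To give a flavor of that readability, here's a minimal main.cf sketch (hypothetical host names; a real deployment needs more than this, but each line means what it says):

```
# /etc/postfix/main.cf --- minimal example, hypothetical names
myhostname = mail.example.org
mydomain = example.org
myorigin = $mydomain
# Domains this machine accepts mail for:
mydestination = $myhostname, localhost.$mydomain, $mydomain
# Clients allowed to relay through us:
mynetworks = 127.0.0.0/8
```

Compare that with deciphering a sendmail.cf rewriting ruleset.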

Best of all, the switch away from AT&T's cable modem (and their impending sale of our cable to Comcast) means that we're free at last from the Death Star!
