Older blog entries for kr (starting at number 4)

Probability and “Anthropic” Arguments

At Accelerating Future, Michael Anissimov has a post aptly titled “More Anthropic Nonsense?”. In it, he asks why this argument is wrong:

The joke about this is that the delays keep happening because the LHC would kill us all if it worked, and that it’s anthropically likely that we’d be born into a universe with a high population, one where human extinction keeps not happening for “mysterious” reasons.

Here’s a selection from The Meaning of It All, an excellent book of a set of three lectures by Richard Feynman:

I now turn to another kind of principle or idea, and that is that there is no sense in calculating the probability or the chance that something happens after it happens. A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the Psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out. This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. I can’t remember exactly. He had to do a great number of tests, because, of course, they could go right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it’s hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.” He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

For example, I had the most remarkable experience this evening. While coming in here, I saw license plate ANZ 912. Calculate for me, please, the odds that of all the license plates in the state of Washington I should happen to see ANZ 912. Well, it’s a ridiculous thing. And, in the same way, what he must do is this: The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.

The application of this principle to anthropic arguments is left as an exercise for the reader.

Syndicated 2009-07-23 07:00:00 from Keith Rarick

Two Resizing Features I’d Like to See

Here are two features I’d like to see implemented in a Firefox extension. The goal of each of them is to eliminate horizontal scrolling.

Shrink-Wrap

Provide a button, labelled “shrink-wrap”, that resizes the browser window so it’s just big enough to display the page with no horizontal scrollbar. It should probably have a reasonable minimum size (such as 640px) and a maximum size (the width of the screen).

Fit Width

This is the same feature, but in the other direction. Rather than resizing the window to fit the content, resize the content to fit the window!

Probably works best when “Zoom Text Only” is unchecked.

I bet this sort of thing already exists in mobile phone browsers, but I want it on my desktop.

Syndicated 2009-07-15 07:00:00 from Keith Rarick

Polymorphism

I’ll never forget when I first discovered that Smalltalk has no special forms for conditionals. Smalltalk has no if statement. That moment changed the way I think about programming (as happens with any worthwhile language, according to Alan Perlis).

A conditional expression in Smalltalk is just a simple method call on a boolean object.

  (x > 3) ifTrue: [y process]

The square brackets form a lambda expression; ifTrue: is the name of a method. Here is the same structure translated into Python syntax.

  (x > 3).ifTrue(lambda: y.process())

That all looks fine, if not particularly inspiring. The real “aha!” comes when you start to ask how such a method could possibly be implemented. If there’s no if statement, how can a boolean object decide whether or not to run the callback function? It can’t, but it doesn’t need to. Since there are two boolean objects, true and false (each of which is the only instance of a distinct class, True or False), each one can have its own behavior. So true’s ifTrue: method simply runs the callback, and false’s is even simpler.

  True>>ifTrue: consequent
    ^consequent value

  False>>ifTrue: consequent
    ^nil

Again, here is a (fictitious) Python version of the same structure.

  class true(bool):
    def ifTrue(self, consequent):
      return consequent()

  class false(bool):
    def ifTrue(self, consequent):
      return None

This is the definition of polymorphism – two objects can provide the same interface with different behavior.
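The same dispatch can be seen in runnable form in Ruby, where true and false are likewise the sole instances of two distinct classes, TrueClass and FalseClass. The if_true method below is hypothetical, added purely for illustration; it is not part of Ruby’s core.

```ruby
class TrueClass
  # true runs the block it is given...
  def if_true
    yield
  end
end

class FalseClass
  # ...and false ignores it. No `if` statement required.
  def if_true
    nil
  end
end

x = 5
(x > 3).if_true { puts "big" }   # prints "big"
(x > 9).if_true { puts "huge" }  # prints nothing
```

Each boolean class carries its own behavior, so the decision is made by method dispatch rather than by a special form.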

If you understand this concept and apply it in your every-day programming, your code will turn out simpler, more concise, and easier to modify or extend. How? Let’s look at a practical example.

Example

Here’s a piece of Ruby code I found recently while browsing the Jekyll source code.

  class Hash
    # Merges self with another hash, recursively.
    #
    # This code was lovingly stolen from some random gem:
    # http://gemjack.com/gems/tartan-0.1.1/classes/Hash.html
    #
    # Thanks to whoever made it.
    def deep_merge(hash)
      target = dup

      hash.keys.each do |key|
        if hash[key].is_a? Hash and self[key].is_a? Hash
          target[key] = target[key].deep_merge(hash[key])
          next
        end

        target[key] = hash[key]
      end

      target
    end
  end

This code defines a new binary operation, deep_merge, on Hash instances. It is like the existing Hash#merge except that it is defined recursively, so that when the left- and right-hand entries are both Hash instances, they are “deeply merged” instead of one replacing the other.
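To make the behavior concrete, here is a small usage sketch. It repeats the definition above so it runs on its own; the sample hashes are invented for illustration.

```ruby
class Hash
  def deep_merge(hash)
    target = dup

    hash.keys.each do |key|
      if hash[key].is_a? Hash and self[key].is_a? Hash
        target[key] = target[key].deep_merge(hash[key])
        next
      end

      target[key] = hash[key]
    end

    target
  end
end

a = { x: 1, nested: { p: 1, q: 2 } }
b = { y: 2, nested: { q: 20, r: 30 } }

merged = a.deep_merge(b)
# merged is { x: 1, y: 2, nested: { p: 1, q: 20, r: 30 } }:
# the :nested entries were combined rather than one replacing the other.
```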

This code is quite useful and simple enough, but it can be improved. I tend to identify things to improve by the smell; though I don’t usually manage to identify smells consciously, that’s how my mind likes to work.

The first smell I notice here is a “copy, modify, replace” pattern common in imperative-style writing. Rather than copying and updating structures in-place, it’s usually clearer and more concise to generate the new structure directly, so let’s try to make this code more stylistically functional. In this case, we can simplify things a lot by writing deep_merge in terms of Hash#merge.

  class Hash
    def deep_merge(hash)
      merge(hash) do |key, lhs, rhs|
        if lhs.is_a? Hash and rhs.is_a? Hash
          lhs.deep_merge(rhs)
        else
          rhs
        end
      end
    end
  end

The second (and more important) smell is use of the is_a? method. Often, is_a? can be replaced by polymorphism.

Because deep_merge is recursive, we can usefully revise our definition of deep_merge to apply to all objects, not just Hash instances. When we do so, most applications of deep_merge are degenerate – if the left-hand side is not a Hash, the left-hand object is simply replaced by the right-hand object.

  class Object
    def deep_merge(other)
      other
    end
  end

The remaining cases can now be handled even more simply.

  class Hash
    def deep_merge(other)
      merge(other) do |key, lhs, rhs|
        lhs.deep_merge(rhs)
      end
    rescue TypeError
      super(other)
    end
  end
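Here is the whole polymorphic version assembled into one runnable sketch. The sample values are mine, chosen to exercise each path.

```ruby
class Object
  def deep_merge(other)
    other  # degenerate case: the right-hand side wins
  end
end

class Hash
  def deep_merge(other)
    merge(other) do |key, lhs, rhs|
      lhs.deep_merge(rhs)  # recurse; non-Hash values fall back to Object's version
    end
  rescue TypeError
    super(other)  # other isn't a Hash, so it simply replaces self
  end
end

1.deep_merge(2)         # => 2 (Object's degenerate case)
{ a: 1 }.deep_merge(5)  # => 5 (merge raises TypeError; super takes over)
{ a: { b: 1 }, c: 3 }.deep_merge({ a: { b: 9, d: 4 } })
# => { a: { b: 9, d: 4 }, c: 3 }
```

Note that no is_a? check remains: each receiver decides for itself, via dispatch, how it merges.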

(I’ve changed the parameter name here from hash to other because we are no longer assuming it is a Hash instance.)

Organizing things this way has benefits beyond clarity. Suppose we want to alter the meaning of deep_merge so that it concatenates arrays. All we need to do now is override deep_merge in the Array class.

  class Array
    def deep_merge(other)
      self + other
    rescue TypeError
      super(other)
    end
  end
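With the Object and Hash definitions from earlier in place, the override takes effect wherever two arrays meet. A self-contained sketch (sample data invented):

```ruby
class Object
  def deep_merge(other)
    other
  end
end

class Hash
  def deep_merge(other)
    merge(other) { |key, lhs, rhs| lhs.deep_merge(rhs) }
  rescue TypeError
    super(other)
  end
end

class Array
  def deep_merge(other)
    self + other  # concatenate when both sides are arrays
  rescue TypeError
    super(other)  # otherwise the right-hand side wins, via Object
  end
end

{ tags: [1, 2] }.deep_merge({ tags: [3] })  # => { tags: [1, 2, 3] }
[1, 2].deep_merge("x")                      # => "x"
```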

Or suppose instead that we want the elements of the arrays to be deeply merged.

  class Array
    def deep_merge(other)
      (0...[size, other.size].max).map do |i|
        if i < other.size
          self[i].deep_merge(other[i])
        else
          self[i]
        end
      end
    end
  end

Though there’s some noise in this code to deal with arrays of uneven lengths, its structure is still straightforward. Further, we didn’t need to touch Hash#deep_merge to add this feature.
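Again assuming the Object and Hash definitions from earlier, the element-wise variant behaves like this (self-contained sketch, invented data):

```ruby
class Object
  def deep_merge(other)
    other
  end
end

class Hash
  def deep_merge(other)
    merge(other) { |key, lhs, rhs| lhs.deep_merge(rhs) }
  rescue TypeError
    super(other)
  end
end

class Array
  def deep_merge(other)
    (0...[size, other.size].max).map do |i|
      if i < other.size
        self[i].deep_merge(other[i])
      else
        self[i]  # other has no entry at this index; keep ours
      end
    end
  end
end

# Elements are merged pairwise; the unmatched trailing element survives.
[{ a: 1 }, 5].deep_merge([{ b: 2 }])  # => [{ a: 1, b: 2 }, 5]
```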

Syndicated 2009-02-13 08:00:00 from Keith Rarick

Implementing the CSS3 “rem” unit in Firefox

For the impatient, here’s the punchline: nightly builds of Firefox (aka Minefield) now support the CSS3 “rem” unit. It’ll also be in Firefox 3.2 (but not 3.1 – it’s too late for that).

The Story

Recently I found myself writing CSS for a grid-system layout. I learned that, to keep a consistent vertical rhythm, you want the actual value of line-height (in device units) to be consistent. If you have elements with different font sizes, they must also specify different values for line-height. (See the link above for a more thorough explanation.) Here’s an example. For reference, I’ve included the size in pixels for most browsers in the default configuration.

  :root {
    font-size: 1em; /* 16px */
    line-height: 1.5em; /* 24px */
  }

  h1 {
    font-size: 1.5em; /* 24px */
    line-height: 1em; /* 24px */
  }

  h2 {
    font-size: 1.16667em; /* 18.6667px */
    line-height: 1.286em; /* 24px */
  }

  em, strong {
    font-size: 1.16667em; /* dangerous! */
    line-height: 1.286em;
  }

This is unfortunate because it obscures the relationship between the line heights of the various elements. (They are, in fact, equal.)

Things are further complicated by the behavior of font-size, which serves as the basis for the em unit. The font size of a nested element is inherited from its parent element. A careless font-size declaration will compound upon itself, and can easily result in text (and line height) of unintended size.
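The compounding is easy to quantify with a back-of-the-envelope calculation (a Ruby sketch using the pixel values from the example above):

```ruby
root = 16.0      # default root font size in px
factor = 1.16667 # the font-size factor declared on both h2 and strong

# With em, a 1.16667em <strong> nested inside a 1.16667em <h2>
# compounds against the h2's computed size:
nested_em = root * factor * factor
puts nested_em.round(2)   # 21.78 -- larger than intended

# With rem, the same declaration always resolves against the root:
nested_rem = root * factor
puts nested_rem.round(2)  # 18.67 -- the intended size, at any nesting depth
```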

Fortunately, CSS3 provides a neat solution to all this, in the form of “rem” units. Just as em is defined relative to the font size of the current element, rem is defined relative to the font size of the root element. This allows you to use an accessible, user-configurable, zoomable unit, while gaining clean, unconfusing, nice round numbers in your CSS rules.

  :root {
    font-size: 1rem; /* 16px */
    line-height: 1.5rem; /* 24px */
  }

  h1 {
    font-size: 1.5rem; /* 24px */
    line-height: 1.5rem; /* 24px */
  }

  h2 {
    font-size: 1.16667rem; /* 18.6667px */
    line-height: 1.5rem; /* 24px */
  }

  em, strong {
    font-size: 1.16667rem; /* safe; 18.6667px */
    line-height: 1.5rem; /* 24px */
  }

Unfortunately, no major browser has implemented this unit.

But implementing it isn’t a huge task, even for someone like me with no familiarity with the Mozilla codebase. Plus, now that Mozilla is using a decent version control system, I had no excuse. So, rather than just complaining about the situation (as I usually do), I decided to do something about it.

Write a patch

I cloned the Mozilla repository and set about hacking. First I made myself some test cases – small HTML files that used rem in ways that demonstrated its existence and behavior as distinct from em.

Despite being a vast C++ codebase, Mozilla is reasonably well-organized and not too hard to follow, especially once you learn the conventions. Still, getting things to work took a couple of days of sporadic hacking, an hour here and there. Almost all of this time was spent hunting around the code and documentation, looking for the right method to call or variable to pass.

When I was satisfied, I created bug 472195 as a place to post my patch and get feedback. And boy did I. In less than 24 hours, David Baron gave me a very helpful reply listing a bunch of problems with my work, some of which were stylistic and some of which were more serious. He also introduced me to Mozilla’s automated testing frameworks (yes, there are more than one), which I was very happy to see.

After a couple rounds of fixes, the patch was satisfactory and David committed it. You can try it out for yourself in a nightly build. I doubt this will excite many people other than web developers obsessed with both careful typography and scrupulous accessibility (probably a small intersection indeed), but I’m happy to finally be able to use this unit, even if it’ll be some time before I can rely on its availability for a consumer-facing web site.

I’m most curious to see if this spurs any other browsers to implement rem to maintain feature parity.

Syndicated 2009-02-02 08:00:00 from Keith Rarick

Hello, world

I finally have a blog.

I’ll most likely write about programming language design, language runtime implementation, beanstalkd, web design, typography, food, and I guess anything else that I feel like sharing.

For the curious, I’m using Jekyll, which is almost exactly what I’ve been looking for in a blogging platform. It’s sort of like Blosxom plus sanity.

Syndicated 2009-02-01 08:00:00 from Keith Rarick
