Fonts and hinting
What David Turner said, with a few additions.
First, I'm obviously concerned about displaying PostScript and PDF
documents, for which the goals of high-fidelity, accurate rendering
and high-contrast, legible text are often in tension. These document
formats, for better or worse, are deeply rooted in scalable font
technology. Trying to use bitmap fonts, no matter how pretty they are,
is not going to work well.
Second, as the resolution of screens goes up, the tradeoff between
accuracy and contrast shifts in favor of (unhinted) antialiasing. At
200 dpi, which will be standard in a few years, the contrast of
unhinted aa text is plenty good enough for just about everybody. The
challenge is how to get there from here. One of the obstacles is the
large installed base of software which is incapable of scaling with
display resolution. It's a Catch-22: there isn't the pressure to fix
the broken software until the displays become cheap, and the
motivation isn't there to do high volume manufacturing of the displays
until there is software that works with them. Microsoft is in a
position to break through that, and if they do, I'll be quite
grateful.
By the way, a really good place to start would be to make
double-clocked pixel rates on CRT's work. Commodity video cards
typically support pixel clocks in the 360MHz range. That'll handily
run 2560 x 1024 (in other words, the standard 1280 x 1024 res
double-clocked in the X direction) at 95 Hz. Of course, because of the
shadow mask or aperture grille, CRT's can't actually display the full
resolution. However, you still get the advantages of improved contrast
and glyph positioning (spacing) accuracy. It's very easy to play with
this - just double all the horizontal numbers in your XFree86
modeline, then run Ghostscript with a resolution such as -r144x72 or
-r192x96.
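The modeline doubling is easy to sketch. The timings below are illustrative, not taken from any particular monitor's specs; the point is simply which fields get doubled:

```
# A standard 1280x1024 modeline (timings are illustrative):
Modeline "1280x1024" 135.0  1280 1328 1440 1688  1024 1025 1028 1066
# The same mode with the pixel clock and all the horizontal
# numbers doubled, giving 2560x1024 at the same refresh rate:
Modeline "2560x1024" 270.0  2560 2656 2880 3376  1024 1025 1028 1066
```

Doubling the clock along with the horizontal timings keeps the refresh rate unchanged, since refresh is the clock divided by the product of the horizontal and vertical totals.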
Worth reading
A conversation between Jim Gray and Dave Patterson, via Tim Bray.
Linger for a while at Tim's blog; it's one of the best reads out there.
Bullshit continued
I have two quantitative questions about bullshit:
- How does the bullshit level vary between types of communication
fora?
- How does the bullshit level vary between various topics of
otherwise similar intellectual content?
I was thinking about the latter question, especially, when responding
to a rant by jwz
about gamma correction. Gamma is not all that complicated or
difficult, but a lot of people get it wrong: a huge fraction of what
you find on the Web is bullshit, and you even see your share of kooks
(see Poynton's refutation).
A quick experiment using Google searches shows that it's a lot easier
to find bullshit about gamma correction than, say, the structure of
rhodopsin. The query "rhodopsin
structure" yielded 9 functioning links, all of which appeared to be
high quality and free of bullshit. The same search for "gamma
correction" yielded 7 independent links, of which one was an ad for
a product, and all of the remaining 6 had problems. The first hit is
typical - it suggests that the nonlinearity between voltage and
luminance in CRT's is a "problem" that needs to be "corrected", rather
than a sound engineering choice for video systems. Their sample images
are poorly considered, and reinforce this faulty notion.
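The engineering point is easy to see in code. Here's a minimal sketch assuming a pure power law with gamma = 2.2; real video standards such as Rec. 709 and sRGB use a piecewise curve with a linear segment near black, but the shape is the same:

```python
GAMMA = 2.2  # illustrative value; the exact curve varies by standard

def gamma_encode(linear):
    """Linear light (0..1) -> gamma-encoded signal (0..1)."""
    return linear ** (1.0 / GAMMA)

def gamma_decode(signal):
    """Gamma-encoded signal (0..1) -> linear light (0..1)."""
    return signal ** GAMMA

# The encode curve expands the dark end: quantizing the encoded
# signal spends more code values on dark shades, which roughly
# matches human lightness perception. That's why encoding for a
# CRT-like nonlinearity is a sound engineering choice, not a
# "problem" to be corrected away.
if __name__ == "__main__":
    for linear in (0.01, 0.1, 0.5, 1.0):
        print(linear, "->", round(gamma_encode(linear), 3))
```

Note how a linear value of 0.01 encodes to well over a tenth of the signal range, so dark detail survives quantization that a linear encoding would crush.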
Why is gamma correction so cursed? I think the main reason is that it
doesn't belong to any discipline which is taught well in school, so
there isn't a core of competent, respected people who know what
they're talking about. Color science in general suffers from this
problem. Even though color is a very basic part of everyday life, it
intersects a wide range of academic disciplines, including physics,
electrical engineering (particularly video), chemistry (less so these
days now that digital cameras are replacing silver), psychology,
computer science, and so on.
I use gamma correction as an example of a subject which needs good
bullshit discrimination. How well does the web do this? Not very well,
at least as measured by Google. There are some good resources
on gamma out there, but they don't make Google's top 10, which
presumably means that it's not popular to link to them. Do blogs do a
good job? That's harder to answer because my own response skews
things, but my sense is no.
Of course, I am thinking about a form of communication that seems to
succeed in filtering out much bullshit: peer reviewed scientific
publications. There are limitations, largely those of scope; for most
important things that people care about, you can't find any scientific
literature on the subject. Indeed, it would be very difficult to
publish a paper about gamma correction in a prestigious journal,
because it's a solved problem (in fact, television engineers got it
right a long time ago, and it just took computer people to screw it
up). The dollar cost of producing a peer-reviewed publication is also
very high, but in many cases could be considered worth it.
PDF: Unfit for Human Consumption
Of course, it's possible that one of the big reasons that Poynton's
Color FAQ is not a popular link target is the fact that it's in PDF
format. Jakob Nielsen, in the above linked essay, argues that PDF has
very serious usability problems as a format for Web pages. It is
tempting because you have far more control over the aesthetics (and it
works way better for printing), but overall I have to agree with
Jakob.
The good news, I think, is that many of these usability problems are
not inherent to the PDF file format, but can be fixed. Indeed, many of
the complaints Jakob raises have to do with the awkward integration
between the PDF viewer and the Web browser. Acrobat has its own UI,
but in the free software world, there isn't any viewer whose UI is
similarly entrenched. It shouldn't be hard to integrate a PDF engine
into a Web browser, so that you can browse fluidly between HTML and
PDF formats without caring all that much which is which.