
Older blog entries for rillian (starting at number 94)

Careful with that --aspect

I've talked to several people on IRC recently about getting the proper aspect ratio when encoding theora video, and wanted to summarize here.

The short answer is: when you're encoding from DV (or most other formats) with ffmpeg2theora, don't use the --aspect switch. The program will calculate the correct aspect ratio based on the source and whatever target resolution you give it, either through a -p profile or explicit -x and -y switches.

The --aspect switch is for overriding the default calculation, usually because the input source video is incorrectly marked. But video aspect ratios are confusing, and it's easy to mess up. For example, if you take a DV source video and encode with -x 320 -y 240 --aspect 4:3, you will get a video that says its pixels are square, but in fact they are not, so playback will be distorted. Without the --aspect switch, ffmpeg2theora will mark the file with the correct (non-4:3) aspect ratio.

Huh? 320x240 is 4:3! Yes. DV, whose native resolution is 720x480 (NTSCish) or 720x576 (PALish), contains some overscan area, so the full frame is not a 4:3 image. It roughly contains one, and the theory is that the extra bits get masked off by the edges of your CRT. Computers have nice, square pixels, so everything is much easier, but digital video imports a lot of the complexity of the analog technology it developed from, and had to interoperate with.

Since the full DV frame isn't actually 4:3--or 16:9 if you're shooting widescreen--you have to crop if you want to make an actual standard-ratio, square-pixel file.
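To see where the crop numbers below come from, here's the arithmetic for NTSC, assuming DV's nominal 10:11 pixel aspect ratio for 4:3 material:

720 - 8 - 8 = 704 active pixels per line
704 x 10/11 = 640 square pixels wide
640:480 = 4:3

So trimming 8 columns from each side leaves exactly a standard 4:3 active picture, which scales cleanly to 320x240.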

For example:

ffmpeg2theora -x 320 -y 240 --cropleft 8 --cropright 8 -o output.ogg input.dv
will give you square pixels with a 4:3 frame aspect ratio.

For 16:9, use something like:

ffmpeg2theora -x 640 -y 360 --cropleft 8 --cropright 8 -o widescreen.ogg widescreen.dv

The same goes for the --deinterlace flag. It forces use of the deinterlace filter regardless of whether the input is marked as interlaced, and so can degrade quality on progressive material. DV is a very sane format and all of these things are reliably marked. In general, trust the defaults.

GPL Ghostscript 8.57

We've released Ghostscript 8.57. This one came much sooner after the previous release than usual, but still not as soon as we wanted.

The good news is that having drawn a line under recent work, we're now merging the CUPS support and various distro patches from the ESP Ghostscript fork. Thanks to Till Kamppeter, who has been sorting the patches. This is the next step, after moving to a GPL development tree, toward shipping a single, up-to-date version of Ghostscript in linux distributions.

If you're interested in helping out, you can get the work-in-progress from

svn co http://svn.ghostscript.com/ghostscript/branches/gs-esp-gpl-merger/

Helvetica

While in Montréal for LGM 2007 we went to see a screening of the new documentary film Helvetica, about the typeface. It was entertaining, and I learned a lot, so definitely check it out if you have a chance. The director said DVDs should be for sale in September.

There are a number of famous designers in the film. In the Q&A afterward, the director Gary Hustwit said it was surprisingly easy to get interviews, perhaps because no one had really made a documentary about type, or even much about graphic design, before. It made me think more of us should be documenting the history of the open source community.

Also (unintentionally) amusing was the director's answer to the question "How do you feel about people file-sharing your film?" He acknowledged that it would probably happen, but had to say (he works for a company that makes DVDs) that he's not ok with it. He then went on to say that he could see it being ok for music, because the experience of listening to music on your computer is about the same as listening to a CD, but he wants us to see the film on a big screen, with other people, like we were doing that night. To help frame this, he had also said earlier that the soundtrack of the film was essentially his iPod playlist from the time he conceived the documentary.

So, like the font, the documentary is non-free.

LGM 2007

Went to the Libre Graphics Meeting in Montreal this past weekend. It was a good meeting, with lots of good developer interactions. Sizeable portions of the Scribus, Inkscape, GIMP and Blender communities were there. It was especially nice to meet some of the Scribus people, who I've been talking to on IRC for years, and Bassam Kurdali, the director of Elephants Dream, the lossless version of which we're hosting over at xiph.

Somehow I volunteered to digitize the video from the conference, taking over from macslow, who graciously did the filming. That will have to wait until next week though.

I'm now in Kingston visiting family before heading back to Vancouver next week.

GPL Ghostscript 8.56

I've pushed out a new stable release of Ghostscript, the free software PS and PDF interpreter and conversion engine.

This is the second release we've made directly under the GPL after years of a one-version delay. The previous was 8.54. We skipped 8.55 to avoid confusion with the GNU fork's follow-on release to 8.54 under that number.

GPL GhostPCL and GhostXPS

I'm also very pleased to announce that we're now doing our development of the GhostPCL and GhostXPS interpreters under the GPL. Previously, we only made occasional releases available under the AFPL.

We don't currently have a release under the GPL, but a developer snapshot is available. Hopefully this will make the other half of Artifex's work more useful to the open source community, and raise awareness of what we've been doing with the Ghostscript codebase.

While GhostPCL and GhostXPS have separate interpreter codebases, both are just front ends for Ghostscript, calling into the same graphics library and output device backends as the PS interpreter.

This is the first time we've released our implementation of Microsoft's XPS. It's very alpha, but works on some documents. I know there are a couple of other free software implementations out there, so it's good to be able to show people ours.

But not HD Photo

We have not, however, implemented the HD Photo spec included in the XPS format. While Microsoft has been making bold claims about how open and free their new PDF clone is, it's not at all clear whether they're going to allow free software implementations of their new image format.

People have mostly been letting them get away with this bait-and-switch. We in the FLOSS community need to ask them some hard questions about when they're going to publish the spec without a dodgy EULA and what exactly they're granting with respect to implementation and distribution rights.

mikal, try:

gst-launch v4l2src num-buffers=1 ! ffmpegcolorspace ! jpegenc ! filesink location=shot.jpg

However, my webcam takes a few frames to stabilize, so this doesn't really work. You can use multifilesink location=shot-%05d.jpg and ask for 10 or so buffers, discarding the rest, but that requires a shell wrapper. I don't know how to ask gstreamer to do it directly.
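A minimal sketch of such a wrapper, assuming the camera has settled by the tenth frame (the filenames here are just illustrative):

# grab ten frames, keep only the last, and discard the rest
gst-launch v4l2src num-buffers=10 ! ffmpegcolorspace ! jpegenc ! multifilesink location=shot-%05d.jpg
mv shot-00009.jpg shot.jpg
rm -f shot-0000[0-8].jpg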

FWIW

30 Dec 2006 (updated 30 Dec 2006 at 00:50 UTC)

parallel computing

We do nightly regression tests on the Ghostscript codebase to try and detect inadvertent changes. It's a combination of established test suites and our own collection of problem files from the wild.

The problem is that a complete run takes hours. Before we bought our current server, it was impossible to do a check on every commit, and even now we'd need a queuing system no one's been annoyed enough to write. So instead we run once a day, and then someone has to check the results and work out which change caused the differences.

So we've been looking at using a cluster to speed up the runs, hopefully to a few minutes, so we can easily test things, and get automatic feedback right away after a commit. My partner does scientific parallel programming and has been helping set something up.

For the moment, we're renting time. Our usage pattern is unusual. Most cluster users have algorithms that are limited by communication between the nodes, so they tend to run smaller jobs, but run a simulation for hours, days, even weeks. We want a lot of nodes, but not for very long, so it's the sort of thing where renting part of a shared resource makes sense.

Of course, it works better to be sharing a resource much larger than the average job size, or sharing with other people whose usage patterns are similar, so you avoid being blocked in the queue. But we'll see how it goes. For the moment we're using Tsunamic Technologies' cluster-on-demand service. They've provided good support so far, and offer a familiar linux environment using the PBS job queue system (the venerable qsub et al.) to schedule access to the nodes. So far it's going pretty well, with the full run scaling down to about 5 minutes.
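For flavor, a PBS submission looks something like this (the script name and resource request are placeholders, not our actual job):

qsub -l nodes=16,walltime=00:10:00 run-regression.sh

qstat then shows the job working its way through the queue.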

wherefore the grid?

People have been talking about Grid computing for 17 years now, but not much has appeared to fulfill the promise. Right now, most parallel machine users are doing research simulations, and there the overhead of dealing with a heterogeneous environment and dynamic node allocation isn't especially worthwhile. But once the infrastructure is available to rent time easily, and especially to sell time, I think we'll see a lot more of our sort of use.

Ironically, it's the overhead of virtualization that's finally making that possible. The problem with a market in cpu time is that you have to be able to run untrusted code. An entirely automatic reputation system isn't really good enough. You need recourse if your provider is messing with your data, and providers need to be able to protect jobs from each other. And because you can move machine images around, it also fosters the sort of dynamic infrastructure we need to really have scalable computing available as a utility.

I was therefore excited to see that Amazon is doing exactly that with their Elastic Compute Cloud beta. To use the service you upload an OS image to their storage farm, and then launch as many instances as you want, for as long as you want. It's a really cool setup. Apparently the story is that they have this enormous server farm for dealing with their peak loads (like Christmas), but of course that means it's idle much of the time. The same issue we have, really. They already sell almost everything else online, so they decided to try renting out time on their infrastructure as a new business.
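With Amazon's command-line API tools, the cycle looks roughly like this (the AMI and instance IDs below are placeholders, not real ones):

ec2-run-instances ami-12345678 -n 4
ec2-describe-instances
ec2-terminate-instances i-12345678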

They have some other cool things too, like an RPC interface to human labor.

The best thing about it is that they have a web protocol for doing all this. So while someone has to provide a credit card and pay the bills, you can now write code that can allocate and occupy its own server resources. We're one step closer to AIs living free on the net. :)

everything rots eventually

wingo, there's also a bridge in Paris named Pont Neuf.

bad phone karma

So we moved recently. To a bigger, nicer place. Which is great. But along with our address, we changed our phone number.

You see, our previous phone number was previously (previously) the fax number of a modelling agency. When we first got the number three years ago after moving back from London, we got about 30 faxes a week. We figured it couldn't last, so we didn't immediately complain. However, as of two months ago we were still getting 5-10 a week, often in the middle of the night. Certain websites' disinterest in removing our number from their agency listings probably didn't help.

We therefore asked for a new number when moving. That was fine, and while not quite as memorable as the old number, it was still pretty good. We got a few odd wrong numbers the first week, but didn't think much of it.

Well, it's been a month now, and we're still getting a consistent few wrong numbers a week, and S finally figured out what was going on, from the tone of voice one of the callers used. It turns out our number is listed in the new yellow pages, which just came out last month, as...an escort agency!

Yup, someone just called. "Hi, I'd like to hire an escort."

Well, that explains a few things. You'd think they'd let these numbers lie fallow for a few months! Sigh. OTOH, if this was their normal call volume, I can see why they went out of business. And at least we know how to answer the phone now. S has been practicing her derisive laugh.

robogato, thanks for enabling multiple posts. It makes it a lot easier to have conversations through the recentlog.

Zaitcev, sorting topicality for different syndication points is what tags are for. Most blog software can generate tag-specific feeds, but it's not clear LiveJournal is among them. I'm not aware of a standard for including the tags in RSS items themselves, but there's an atom:category element that looks like it's for this. So maybe some combination of using a tag-specific feed from a blog and filtering by atom:category on the advogato side would work?
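For reference, the element sits inside an Atom entry, something like this (the term value is just an example):

<entry>
  <title>Careful with that --aspect</title>
  <category term="video"/>
</entry>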

