The Wayback Machine - https://web.archive.org/web/20170629094609/http://www.advogato.org/person/wingo/diary.html?start=166

Older blog entries for wingo (starting at number 166)

21 Jul 2006 (updated 21 Jul 2006 at 20:22 UTC) »

advogato roundup

Advogato isn’t as regular a haunt for me as it used to be, but I like to browse back occasionally, to see the comfortingly familiar and the fresh thoughts, without the pedestal effect of other aggregators.

Today was delightful. I was first greeted by Andy Tai’s note that GNU arch 1.3.5 is out. I didn’t know they were still kicking. I’ve been dealing a lot with subversion these days, and it’s also comforting — the same commands I always used work better now. Not exactly exciting, but my eagerness for exciting tools has waned, temporarily perhaps.

After the ever-abstruse nymia and the perceptive-of-corporate-machinations robilad, we have the enticing dangling pointer that is apenwarr, ruminating on process and software. I’m not always able to dereference his allusions, but he’s always a pleasant read, and has a corpus behind him.

There are more, old hands and new, but what is remarkable is that the advo community has maintained its quality over so long. So here’s to advogato. May your community stay rich and accessible as the years roll on. Also, may your software see a series of pleasant, non-obtrusive upgrades.

#

happening

Someone at aikido this morning found out I was a programmer. He proceeded to ask me where he could get a free copy of Fortran. 77. Chuckles, yea even belly laughs spilled forth.

work

On the death march!

open letter

From: Self Sufficient Individuals of Planet Earth (S.S.I.P.E.)
To: The bourgeoisie
Subject: Air conditioning.

HA! How we scoff you. Scoff! Scoff, even as you sit with your illness-inducing artificially dry recirculated air product, scoff as you erect glass barriers defining inside and outside, scoff. We the self-sufficient know that all that might be needed are the rotary blades of a fan for air circulation to activate our God-given sweat glands, cooling us via the traditional and natural method of evaporation. Your pollution-spewing vehicles spew all the more, carcinogens and oil wars just to maintain your comfort. Those of us with need of car drive them with our arms out the window.

Also, anytime you want to have us over to your house to hang out that’s cool.

#

Hey world what up.

Me chilling. So I’ve been getting some questions lately, one might go so far as to say frequently, and well, I figure this podium is as good as any. I didn’t not not not make up any of these questions.

I heard you went to a conference. How was it?

Oh it was quite nice. Met a lot of good folks with wonderful minds. Folks like John Hwang, who is going to bend people’s ideas of what GNOME is; the enigmatic Toni, who wears sunglasses when he is on the interweb; the ubiquitous Alex Gravely; and the unlinkable Pachi (Rafael Villar Burke), for whom the internet is not good enough. Oh and many more also, lots of great people.

But man, it was a lot of work. The work wasn’t easily parallelizable either. I wrote about it a bit in my last posting. Took me until this weekend to fully relax, although a bit of mid-week funkadelic did assist.

Um, speaking of which, your last writing product was rather long.

Yeah dude. I didn’t even finish either. Thanks to the folks who took the time out to comment on it; there were some good tips there. I’ll finish up with an article about conference archives, once I figure out all the details.

So what is up with the GUADEC archives then?

I’ve been pretty busy is basically how it boils down. There’s probably about 8 GB to cut, all in all, which pretty much means we have to do it with people at Fluendo, where the archives are. Which is fine, they’re all good folks, only that I have to make sure the tools are working properly. Still get core dumps occasionally, but hey, it’s better than working with proprietary software. Anyway as I said I’ll write up more about how this is done later.

But from a pragmatic point of view, expect it to be a few more days. This is not a work activity so I don’t have oodles of time to devote to it. But once the process works we’ll start churning out videos and audio clips.

#

Ah, conferences. Those ephemeral events that make virtual communities into sweaty reality, if only for a week.

I’ve been involved with the streaming of three conferences now, and have a few things to say about it. A lot of things have to go right before the first image ever sees the internet. Here’s what I’ve learned.

A note on cooking

Picture in your mind the perfect stream: hundreds or thousands of happy viewers and listeners; clear, readable slides; clear sound; good view of the presenter. All of these are reflections of quality inputs to the streaming process. A good pie is made of good ingredients. Everyone likes good pie.

Good ingredients take you most of the way, and free your time to focus on polish and details. Make sure that you have the support you need before taking on this responsibility.

Pre-conference preparation

Each room that will need streaming will need the following things:

  1. A computer to encode the video, save it, and send it over the network.
  2. A video camera with a tripod, line-level audio input, and firewire video output.
  3. An ethernet network connection for the encoding machine, with a fixed address and dedicated bandwidth.
  4. Power (two plugs, one for the camera and one for the computer).
  5. A sound cable coming from the mixing desk, at line level.

All of these things need to be in the same place in the room, at an appropriate location to record the speaker and the screen. This is very tricky; typically 3, 4, and 5 depend on other people, and separate people at that. These people need to cooperate with each other before you get to the site, and with you once you are there. Tricky!

So, well before the conference starts, it must be absolutely clear both to the organization and to the respective people who is responsible for each of these items. Furthermore, it is your responsibility to communicate to these people exactly what it is you need from them, and to ensure that they are providing this to you.

Note that streaming that involves moving the camera is too hard. Don’t commit yourself to doing it. No one that is responsible for streaming talks should have a technical burden like that — moving sound, moving power, moving video, moving network, moving computers. Too tough.

The computers

The actual type of the computer is not terribly important, other than the fact that it should be based on a more recent x86 processor, to take advantage of optimizations in the theora encoder. It needs an ethernet interface, a moderate amount of memory, and a moderate amount of disk.

For example, these are the specs of the machines I bought for GUADEC 2006: AMD Sempron 3000+, 512 MB ram, 80 GB hard drive, integrated NIC and USB. USB is useful to connect to an external hard drive for copying off the archives. I then had to buy firewire cards (any brand should work). The total cost for four machines was about 1150 euros. Hyperthreading or SMP will offer a significant performance boost.

The machines worked pretty well. The one problem with the process is that theora encoding is extremely expensive in terms of CPU, and its cost depends on the amount of noise and motion in the video. We were recording 360×288 at 12.5 frames a second, which was about the maximum the machines could do, and probably a bit too high — there were a few times in which the encoder couldn’t do it in real time and so we missed some frames. You really want to avoid this. Once encoding starts lagging behind it’s difficult to catch up to steady-state again.
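For a feel of how close to the edge those settings were, a quick pixel-rate comparison helps (a sketch; the 360×288 at 12.5 fps figure is from above, the full-PAL comparison is mine for illustration):

```shell
# Pixel throughput the theora encoder had to keep up with at GUADEC 2006.
# 12.5 frames/second is written as 25/2 to stay in integer arithmetic.
echo $((360 * 288 * 25 / 2))   # our settings: 1296000 pixels per second
echo $((720 * 576 * 25))       # full PAL at 25 fps: 10368000, eight times the work
```

Even at one eighth of full PAL resolution the encoder was occasionally dropping frames, which gives a sense of how expensive theora encoding was on that hardware.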

Note that for best optimization on the theora encoder, you should install the 32-bit x86 version of your favorite up-to-date linux distro. While I like Ubuntu on the desktop, Fedora Core 5 has better packaging of the software I use to stream (flumotion), so I use it.

Also note that preparing a machine that does not already have a suitable, up-to-date operating system installed will take a fair amount of time. If you can get this done before arriving at the conference, your life will be much easier.

Cameras and sound

For best quality, you want a good camera. A good lens means less noise in the raw video, leading to more efficient encoding and a higher quality result. Decent zoom allows you to place the camera in the back of the room, which both gives a better angle and is out of the participants’ way. Of course the camera needs to sit on a tripod; there’s not much to say there, other than the fact that better cameras are heavier and need sturdier tripods.

All cameras that I have seen, from handicams to professional cameras, have DV output over Firewire (IEEE 1394). DV Firewire cameras are all accessible via Linux, via the same interfaces, so you should not have a problem there.

Firewire cameras are very nice because they guarantee proper synchronization of audio and video. The audio plugs directly into the camera, and the camera hardware ensures that the audio and video have proper timestamps. While recording audio from a sound card is also a possibility, it’s much preferable to get the audio into the camera directly.

For the audio to enter the camera, you have a number of options. I will enumerate them from worst to best. Worst is to use the internal microphone of the camera. You will catch all of the background noise, very little of the presenter (unless the camera is close to a speaker), and lots of chit-chat and noise from people around the camera. Also you will get the fan noise from the computer. Very bad, this.

Hooking into the room’s sound system is the other, better option. However the cheapest handicams only have an unbalanced mic-level minijack (1/8″ jack plug) input, intended to be used with a simple microphone. This sucks. Normally the room’s sound system can only give you a balanced line-level signal, which is too hot for a mic-level input. You end up having to string a bunch of plug converters together, precariously sticking out of the side of the camera, and the quality is still terrible. Very bad. This is also to be avoided.

If you are in this situation, but you want good sound, the best option is to get an attenuator, something like the BeachTek DXA-2s, about 150 euros or so. You could have the mixing desk lower the level and get a converter cable (e.g. female XLR to male minijack), but it could be noisy. Depends on cost. Planning is key, though.

Line-level minijack inputs to handicams exist, but are less common. Your dynamic range will be better, but you still have to get a plug converter.

The best option by far is to get a camera that can hook directly to what the mixing desk gives you, which is likely to be a line-level XLR male cable. XLR connectors, in addition to being structurally much stronger than a minijack, are balanced, which reduces the noise in the signal. Better cameras allow direct connections from XLR cables, both for use with microphones and with inputs from mixing desks.

The cameras we used at GUADEC 2006 were Panasonic AG-DVX100AEs, and one handicam. Renting the three cameras and four tripods cost 1600 euros for a week. The Panasonics were pretty good, although we had some problems obtaining proper sound in a number of rooms, due to cable problems and people playing with the mixing desks. Also, the camera would take the input and record it to one channel only, which is a bit of a pain. The handicam was missing some kind of breakout cable for the audio input, so we had to use the handicam internal mic.

As you can see, the issues involved in getting sound into the camera are tricky. It is best to think about it all beforehand; analyze each room, determine what you will need, and make sure that the event organizers provide it to you, or obtain it yourself.

If you do not know what will be there, you have to be prepared. If the room is wired and has mic jack outlets on the walls, assume there is a female XLR jack you can connect to. If you will be connecting directly to the mixing desk, you will probably need a male 1/4″ jack connector. In both cases, make sure you have the proper length of cable.

Note that there is a second kind of digital video camera that uses something called IIDC instead of DV for storing and transmitting the video. The difference is that DV is compressed (slightly; it’s similar to JPEG), while IIDC transfers raw video, which can offer higher quality. You don’t have to worry about it too much, though — IIDC is mainly found in industrial and scientific cameras. I mention it because of the one exception, the iSight from Apple. You will not be able to use the iSight as a DV camera.

Network and power

In the conferences I’ve worked with, it’s always been like this: some facility is rented out, the event organizers work with that facility and with a sound company to do the sound, and then run the network and the power themselves. So my comments are colored by those experiences, and by the fact that you normally have to talk to the same people for network and power.

On the topic of power, there’s not much to say. You need one plug for the camera and one for the computer. It’s useful to have a couple extra, for example for your laptop if you need to plug in, and an external hard drive. But you can get by with two outlets.

Actually, there is something worth saying: many facilities completely power off at night. As in, the security guard comes in and throws the breaker. Whine and complain, try to make sure the power will be there for you, but in the end it might happen during the conference. Cameras handle this gracefully, turning back on and functioning when the power comes back. Computers do not, in my experience. If you can figure out a way to make computers turn back on when the power returns you will be a happier person.

The network requirements of live streaming are the most obvious ones. You need an ethernet (cable) connection, of about the bandwidth of your stream (400-700 kilobit), with a fixed IP address, dedicated bandwidth, and no blocked ports. The encoder computers should normally connect to another machine, internal to the conference, that will aggregate the streams for internal streaming and for relaying to some external server with more bandwidth. Of course in the case that the conference has lots of bandwidth itself, you can stream from that machine instead.
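As a back-of-the-envelope check on relay capacity (a sketch; the 100 Mbit uplink is a hypothetical figure, the 600 kbit rate is the middle of the range above):

```shell
# How many simultaneous viewers a relay can serve: uplink divided by stream rate.
uplink_kbit=100000   # hypothetical dedicated 100 Mbit uplink
stream_kbit=600      # per-viewer stream rate, mid-range of 400-700 kbit
echo $((uplink_kbit / stream_kbit))   # roughly 166 viewers
```

The same division tells you whether the conference's own uplink is enough, or whether you need the external relay.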

The reason you want that extra machine is to make the network peoples’ job easier. Normally you’re on a private network with NAT access to the internet, and you want to limit the holes in the firewall to one machine, located physically close to the firewall. If your server can have two IPs, one public with no ports filtered and one internal, that is the ideal solution.

As GUADEC 2006 showed, it is possible to do streaming over a wireless connection. However it has to be your wireless connection, not shared by anyone else, and with sufficient bandwidth. Even then, the routing issues can get tricky, and GUADEC 2006 was not perfect in that regard.

In GUADEC 2006 we also ran into a strange issue, where the NICs on the machine would not link over a long cable, whereas it would link on our laptop NICs and with the switches. The solution was to install a switch right next to the machine. That was an odd problem.

Note that in the end, once you get to the conference, you will probably have to run around to each of the machines, setting the network addresses and verifying that things are working correctly. You can do this with a laptop if the machine has two NICs, plugging into the alternate interface with a crossover cable (or a normal cable if your laptop detects and switches, as in some modern laptops). Otherwise you will be dragging a monitor and keyboard around everywhere for the first few hours.

At-conference preparation

You arrive at the conference with just your laptop. All of the computers are there, installed with Fedora Core 5, updated, and hooked to ethernet cables. You have a keyboard and a mouse. There is also a wifi network available so that you can access these machines. You boot them all, check their IPs with the keyboard and mouse, write them down, then go and sit in a nice comfortable chair with a desk.

Of course, if any of this is missing, you will spend a lot of time getting it together. Carrying computers, cameras, tripods, cables. Marshalling the network and working space that the organizers forgot about in their last-minute rush. Et cetera.

Anyway, back to fantasy land! From your working space, you ssh into all of the servers as root, adding your ssh key to ~root/.ssh/authorized_keys. The servers need to be updated to the latest errata and to have the streaming software installed. We will use FC5 with Flumotion. Flumotion is distributed in GStreamer’s repository, so add it to your repos list and upgrade:

wget http://thomas.apestaart.org/pkg/thomas.pubkey
rpm --import thomas.pubkey
cd /etc/yum.repos.d
wget http://gstreamer.freedesktop.org/download/gstreamer-0.10-deps.repo
wget http://gstreamer.freedesktop.org/download/gstreamer-0.10-gst.repo
yum -y upgrade && yum -y install flumotion
# make sure all users have access to firewire
echo 'KERNEL=="raw1394", MODE="0666"' > /etc/udev/rules.d/60-flumotion.rules

Do this on all your machines. It will take quite a while to complete depending on your network speed; an alternate option is to somehow mirror the updates you need to DVD or portable hard drive and take that with you. I’m not that good with Fedora, however, and so I just rely on the network being there. The consequence is that you really need the network to work.

While this is installing, go find your network people and get them to give you fixed IPs for the encoding machines, and check to see that you have network, power, and sound at an appropriate place in each room. Set up the cameras, and do sound checks. A decent camera will show you the sound levels on the video screen. Adjust the levels (either on the camera or from the mixing desk) so that you hit red occasionally in normal speech, with plosives like p and b.

Back at your computer, everything has updated. Now you just need to get Flumotion running on the system. You might find the official manual useful. I find it a bit hard to read, so I’m not going to refer to it.

Flumotion setup

I will describe use of Flumotion to stream the conference. It’s a pretty good streaming server, with several desirable characteristics.

First, Flumotion is completely Free Software. It emphasizes free formats, like Ogg Theora and Vorbis. When people fly halfway across the world to talk about freedom, it is important that the tools they use reflect and support the freedom they seek.

Secondly, Flumotion is complete. It handles the entire streaming process from capture device access to serving http to clients. It has a graphical administration tool with a wizard to help you configure the streaming process, with good error handling. Flumotion is integrated with the standard Unix services architecture as well.

Finally, Flumotion is distributed. The streaming process can span multiple machines, which is good as the resource constraints are topologically different as well; typically you have CPU power at the video source, and bandwidth in one or more other locations. Flumotion allows many “workers” to run Flumotion processes on behalf of a manager.

I don’t know of any other end-to-end solutions like Flumotion. You can probably get dvgrab | ffmpeg2theora to feed an Icecast server if you are sufficiently wizardly. But as far as something that’s easy to install and run, Flumotion’s probably the only game worth playing.

Architecturally, a Flumotion system is defined by one manager running somewhere, and one or more workers running on the machines of interest. All workers and admin clients will have to log into the manager, so if you are relaying streams to an external server, your manager should be placed on a public machine, like the server machine. I’ll assume it’s there.

What we did at GUADEC was to have one manager for every room we streamed, which meant that there were four managers. Since we were streaming through the server machine, there were four workers running on the manager, one on each of the encoding machines, and four workers running on the streaming platform. (A worker can only connect to one manager.) It’s a bit complicated, which is mostly due to the number of machines in the system (6), and the need to authenticate on the various connections. We could have pushed everything through one manager, but the admin client is better optimized for one “flow” (the streaming process, from capture to encoding to streaming). Also one manager is one point of failure for the whole system, which is bad considering the various networking problems you might run into.

To configure one room, for instance the Sala de Juntes, you have to (1) make a worker on the encoding machine in the Sala de Juntes; (2) make a worker on the server; (3) make a manager; and (4) start a worker on the streaming platform. This is actually pretty easy, given Flumotion’s service script integration.

Given that you will need four managers, first decide which ports on the server they will listen on. The default port is 7531, so I chose 7531-7534. The Sala de Juntes was the last one I configured, so it was 7534.

For (1), drop the following XML snippet in /etc/flumotion/workers as juntes.xml:

<worker>
  <debug>4</debug>
  <manager>
    <host>10.0.0.120</host>
    <port>7534</port>
  </manager>
  <authentication type="plaintext">
    <username>flumotion</username>
    <password>Pb75qla</password>
  </authentication>
</worker>

The password will need to be the same as the one configured in the manager. Delete /etc/flumotion/workers/default.xml and /etc/flumotion/managers/default/, because we are not running the default setup, and say service flumotion status on the command line. It should give you something like the following:

worker juntes not running

That’s sweet. Go ahead and service flumotion start, and chkconfig flumotion on to ensure flumotion will be started on boot.

On the manager, remove /etc/flumotion/workers/default.xml and /etc/flumotion/managers/default/ as before. Drop the same XML snippet as before into /etc/flumotion/workers/juntes-server.xml. Note that the name of the file by default determines the name of the worker, and two workers with the same name can’t be logged in at the same time. It’s important to make sure the files are named differently.

Now, you need to set up the manager. Make a directory for it, /etc/flumotion/managers/juntes/, and a directory for your flows, /etc/flumotion/managers/juntes/flows/. Drop the following snippet in /etc/flumotion/managers/juntes/planet.xml:

<planet>
  <manager name="planet">
    <!-- no <host> element means listen on all interfaces -->
    <port>7534</port>
    <debug>5</debug>
    <component name="manager-bouncer" type="htpasswdcrypt">
      <property name="data"><![CDATA[
flumotion:dF4wh3SMa4q/2
]]></property>
    </component>
  </manager>
</planet>

The bit about the CDATA is the user that’s allowed to log in, in htpasswd format. You can generate this via htpasswd -nb flumotion Pb75qla.

Now, when on the server you do service flumotion status, you should get:

manager juntes not running
worker juntes-server not running

At that point you’re good to go. service flumotion start and chkconfig flumotion on. Sweet. Your Flumotion is ready to go. You’ll have to wait until the computers are connected to the cameras and have their proper IPs to set up the flows though, because the setup wizard actually checks that all of the hardware is working, including the DV connection to the camera.

Final setup

Having obtained the IP addresses for your boxen, configure them (on Fedora, that involves mucking about in /etc/sysconfig/network-scripts/), and add their IP addresses to your own /etc/hosts. Shut them down and move them to their locations. Verify with your laptop that the network cables that you have actually have the right IP addresses, and that they can reach the gateway and the internet. While they don’t have to have internet, it is good for last-minute package installs. Turn on the machines at their locations and plug the Firewire cables you brought (2 meters minimum) into the cameras.
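For reference, a static address on Fedora Core of that era was set with an ifcfg file along these lines (a sketch; the device name and addresses are hypothetical, use whatever your network people assign):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.0.0.121
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
ONBOOT=yes
```

After restarting the network service, the box should answer on its fixed address.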

On your local machine, run flumotion-admin. Choose “Connect to running manager”, and enter the host and port of the flumotion manager. Enter your authentication information. At this point flumotion-admin will pop up a wizard to help you configure the flow. Choose Firewire as the audio and video input and choose 360×288 as the video size with 12.5 frames a second. When you go on to the overlay, note that turning off all overlay options is a big performance improvement, so you might want to do that if you are seeing loads that are too high. You can choose to stream or save any combination of audio and video. Remember to choose the right worker on each page, so that you are running the encoding on the capture machine, for example.

After finishing the wizard, you should see lots of smiling faces in your flumotion-admin window. At that point, check to see if you can watch the stream. Dump the configuration to a file via File->Export configuration, and drop that file into /etc/flumotion/managers/juntes/flows/default.xml on the server machine. Once you’ve set up one stream you can just copy/paste for the rest.

For the record, the flow that we used for the GUADEC carpa is available as carpa.xml.

In-conference activities

Ideally everything is ready to go the day before the conference. If you haven’t done anything, and have to install from barebones machines, you’ll need three days to prepare, provided that things go well. If you have arranged for the machines to be installed already with up-to-date FC5, and the sound, network, and power, and cameras are installed in a decent spot in all rooms, you might be able to do it in a few hours. I haven’t seen that happen before though, so do give yourself some time.

Potential problems during the conference include power outages, network outages, bad sound connections, and poor framing of the speaker and slides. The actual circumstance of getting the speaker and slides to come out nicely is tricky, and probably merits another discussion. People working the cameras like to zoom in on the presenter, while remote viewers place more importance on seeing the slides, especially if the slides are not posted for public download. Decide beforehand which you want to see, and consider printing out guidelines and taping them to the encoding computers.

By this time, you will need some autonomous help. You’ll be a bit tired, and other people will be more able to help out. The quality of the streams will need to be monitored, and the inputs corrected if they go bad (bad sound, etc). Other people can do this for you, and fix problems if possible.

Archives

In our next installment, hopefully coming tomorrow, I’ll talk about how to get decent archives. Until then, corrections, questions and comments are most welcome, preferably via comments on this weblog entry.

#

22 Jun 2006 (updated 23 Jun 2006 at 08:08 UTC) »

The oral thermometer I nicked from the Peace Corps was reading 40 this morning. I begrudgingly chose metro over bike today, didn’t go to aikido, and still feel like crap. Come on body, don’t you know it’s Sant Joan tomorrow, and GUADEC the week after that, and you have a lot of crap to do tomorrow!

But no, attempts to sleep at a prudent hour seem to elude me. So then, tidbits, dear readers, tidbits it is!

weekend

Peter Bernath came for an aikido seminar last weekend, as he did last year. Intensity! After the weekend I had to spend most of Sunday afternoon lazing on the floor. I had a delightful listen to Wizard People, Dear Reader while investigating the limits of horizontality.

guadec

Finally got the streaming situation together — in the tent and the three main rooms, there will be cameras streaming during the whole week. During the three main days I’ll make sure the archives get rotated so that we can get them up on the web quickly. So, if you wanted to see a talk in the After Hours set, rest assured. If the network is working, you’ll be live. Perhaps more interesting are the hackfests that will fill the blank spaces, which with a camera on them can have a stronger virtual component. BECAUSE THAT IS JUST HOW WE ROLL.

seen on the internet

Yow is the new lorem ipsum.

update

It seems persons with my last name will henceforth all be movie stars. I’ll let y’all know how that works out ok!

#

Luis writes about the effect that planet gnome has had on the character of project-scope conversations in Gnome. He left a bit of meat on that bone though: why is it that (mostly) the same people are having a different conversation? Is it just the lack of a few individuals with a high mail-to-code ratio?

If I were to guess, it would be that people treat web logs as a reflection of the self, and choose to express who they are rather than who they are not. Thus there’s not too much in the way of the print "You’re wrong."; while True: print "No, you’re wrong." loop. This does leave the planet “vulnerable” to monologues; however since all blogs have a monologue aspect to them, it would be difficult to limit this.

Aside from the identification of self with its written artifact, I feel another force at work, a definition of who is in and who is out. Those inside are more inclined to be supportive of each other, or at least tolerant, because hey, we’re all in this together. There’s a similar identification of the collective self with our writings. Stinging barbs as well-written as this one can only be written by those not on the planet, even though I know plenty of folks on the planet that think the same.

The other half of that would be the entity outside, free to deride: that which is not part of the planet may be freely mocked. For example, the listengnome thread of a few months ago.

I do think that the current balance means that on-planet people don’t get enough critical feedback. My threshold for sending off a mail saying “Hey, you’re being an ass” is much lower than writing a post on my own web log to the same effect.

#

10 Jun 2006 (updated 10 Jun 2006 at 19:50 UTC) »

When travelling, I have an enduring obsession with enumerating the ways that I have moved. So here we go: planes (7 thus far, in 9 airports), taxis, cars, motorcycle, sailboat, bike, buses, trains, metros.

If you are so disposed, you may choose to read “enduring” with the nonstandard pronunciation /ɪnˈdiːrɪŋ/ (en - DEE - ring).

I spent the last week in San Francisco. It’s a very civilized place. A lot of my friends from other threads in life have taken it as theirs. Besides the goodness of seeing my people, I really enjoy stepping into their lives, seeing the intersections between people and places.

Also, I would like to mention one thing. Burritoeater.com. Thank you that is all.

hack

Various hacks on guile-gnome recently, none committed — see we have a forest of GNU arch branches, and I can’t remember how they work exactly. In the end I would have to evaluate arch as interesting, but confusing, and ultimately a problem. The need to explicitly mirror and create private branches (two things where there should be zero) for offline operation is very irritating. We have an innovative SCM setup, but I realized a while back I’m not so interested in trading pain for gain anymore. Gain-only need apply! That’s why subversion is nice, and why bzr promises to be nicer.

In the short term however it seems I am stuck with pain for a while.

#

As I walk the streets I think of things I’d like to communicate somehow, but the thoughts are a bit too ephemeral for the keyboard. They are not with me now.

I write from a coffee shop in the Mission (San Francisco California US Earth &c), mostly to test the newness of the wordpress 2 editor. It’s pleasant, if a bit aquarian.

I also got around to looking at categories-as-tags, after my experiments with tags and photos. Wordpress already has the necessary infrastructure — I just needed to pull a cloud out of the database. There are quite a number of people who do this. I ended up settling randomly on the category tagging plugin from Michael Wöhrer. Works well enough with some tweaking.

#

Home

I’ve had a series of fine days. A festa major in a small street near Gràcia, a meatful party at the house in which no glass was broken, and for the first time I was chosen as the person on whom the aikido instructor would demonstrate a technique. Granted, 15 of the best folks at the school were gone to Japan to participate in the All Japan Aikido Demonstration, but hey, I take my moments as they come.

Other home

I’ll be flying out on Sunday to North Carolina. After running around a bit, I fly to San Francisco from Thursday to Thursday; hopefully my visa situation will be all ready for me to go to Washington on Friday, head back to Charlotte on Saturday, and hop the aluminum airtubebus back to Spain on the Sunday two weeks after arriving.

That is the theory. Practice may or may not follow the theory.

Work

I realized the other day that I’ve been hacking proprietary software for going on two months now. Not NDA-proprietary, not copyright-proprietary, but just proprietary. Software to run the Flumotion streaming server on a large cluster of machines. Proprietary in the sense that it’s only useful to some organization planning to host streams from many customers, streaming to many many listeners.

It’s been interesting, although I have to figure out how to get back to free software or otherwise rationalize my existence.

Hack

I wrote a lossless ogg/theora+vorbis cutter a few weeks ago and never said anything about it. It has a very simple installation process: just click here. You’ll need up-to-date gstreamer, gst-plugins-base, and gst-python. Ubuntu Dapper will do.

mungeariffic

For example, if you like the video on vilanova, but just wanted the part about the festa major, cut cut cut and you have a lossless cut of the video.

I’m pretty sure it produces correct files, although in general it outputs a few P frames before the first I frame. While correct, this confuses some common players until the first keyframe is processed. Ignore the man behind the curtain.

#

Photos

Went through and tagged the rest of my photos, resulting in an enormous tag cloud. The thresholds can be tweaked but I think it’s rather interesting.

As far as the software goes, recent changes: the bottom row of random thumbnails now draws only from recent photos, and I fixed some url-encoding issues so all tag names should work now. I keep claiming I won’t do any more web hacking, but I imagine I’ll implement view counts for photos.

States

Going home next Sunday for visa issues; hopefully will be returning Spain-side within a couple of weeks. However in the meantime I’m pretty excited about heading out to visit some folks in San Francisco. Sweetness.

Hacks

Steel Bank Common Lisp has an excellent profiler. I mentioned this before. It’s statistical, driven by SIGPROF, so that the stack samples it takes are evenly spaced in program execution time. That means that if you have more samples in one function, your program spent more time there. A simple idea, used also by the statprof profiler I hacked on for guile and python, among many others.
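The idea fits in a few lines. Here’s a minimal sketch in Python of a SIGPROF-driven sampler — not statprof itself, just the core trick, assuming only the stdlib signal module (the `busy` function and the 10ms interval are arbitrary choices for illustration):

```python
import signal
import time
from collections import Counter

samples = Counter()

def on_sigprof(signum, frame):
    # The timer fires after each slice of CPU time; record which
    # function was executing when it did. More ticks = more CPU time.
    samples[frame.f_code.co_name] += 1

signal.signal(signal.SIGPROF, on_sigprof)
# ITIMER_PROF counts process CPU time, so samples are evenly
# spaced in execution time, not wall-clock time.
signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)

def busy():
    # Burn roughly 0.3 seconds of CPU so the timer fires repeatedly.
    total = 0
    start = time.process_time()
    while time.process_time() - start < 0.3:
        total += 1
    return total

busy()
signal.setitimer(signal.ITIMER_PROF, 0)  # stop sampling

for name, count in samples.most_common():
    print(name, count)
```

Run it and nearly all the ticks land in `busy`, which is the whole report: a histogram of where the program spends its time.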

Juho Snellman did a nice hack recently, making the profiler ticks driven by allocations instead of SIGPROF. The result is an allocation profiler that will tell you which parts of your code are allocating memory the most. In garbage-collected languages this is often one of the best ways to microoptimize, as the cost of GC depends on the amount of allocation. Nice hack, Juho!

Meanwhile back at the bit ranch, holy shit! It seems that as early as 2003, a fellow named Alex Yakovlev wrote a Scheme interpreter in JavaScript with everything — call/cc, tail recursion, multiple values, dynamic-wind, a syntax-rules based macro system, everything. Then recently Chris Double ported it from IE6 to Firefox, leaving us with a nifty repl demo, replete with an erlang-like library for concurrency programming. Wow. Very neat. But oh, for a common VM.

come back from san francisco, it can’t be all that pretty

#
