Older blog entries for walken (starting at number 36)

OK, so I got married on December 26th. Yay :)

I recall there has been a discussion here about digital photography and how most photographers do not like to provide full-resolution pictures - just as 'film' photographers do not like to hand over negatives, because they prefer to make you pay for the reprints instead.

I'm happy to say I found a photographer who did not give me any hassles - he gave us all the high-resolution pictures he shot (all 850 of them :), and this is simply part of his standard contract. He also came with his wife, who was shooting too - this was nice because she could shoot the guests while he was shooting us and vice versa.

Anyway - for anyone who's looking for a wedding photographer in the SF bay area, I would highly recommend them: http://www.manuelandjulie.com/

The instructions are: Grab the nearest book, open it to page 23, find the 5th sentence, post the text of the sentence in your journal along with these instructions.

Say you have a 6% 30-year Treasury bond.

Or if I take the second closest book:

Oh! I know, it's not very flattering for an honest family man to be chatting up ladies on café terraces, but I'll offer as an excuse that my wife is as frigid as the whole North Pole.

Oh well :)

mpeg2 stuff

raph: You mention the lack of a common way to export mpeg2 flags in an exchange format to be used between a decoder and a recoder. One option I've been considering here would be to add these as a text comment in the pgm header. mpeg2dec has a pgmpipe output that is mainly intended for transcoding applications; maybe the textual flags (as you get with the -vvvv option) could be exported there as well.
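To make the idea concrete, here is a rough sketch (not actual mpeg2dec code - the "mpeg2:" key and the flags string are made up for illustration): PGM allows '#' comment lines in the header, so the per-picture flags can ride along there and a recoder reading the pipe can parse them back out.

```c
#include <stdio.h>

/* Sketch: emit a binary PGM (P5) header carrying the MPEG-2
 * per-picture flags as a '#' comment line, so a downstream
 * recoder can recover them from the pgmpipe stream.  The
 * comment key and flags format here are hypothetical. */
static void write_pgm_header(FILE *out, int width, int height,
                             const char *flags)
{
    fprintf(out, "P5\n");
    fprintf(out, "# mpeg2: %s\n", flags);   /* e.g. "I TFF progressive" */
    fprintf(out, "%d %d\n255\n", width, height);
}
```

A PGM reader that does not know about the convention will simply skip the comment line, so the stream stays compatible with plain viewers.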

Other than that... I suppose you could use libmpeg2 as a library... if you're looking for an already integrated solution, have you looked at ffmpeg and/or gstreamer?

mathieu: I have read that great article, How to Write Shared Libraries by Ulrich Drepper. I was quite enthusiastic - it seems there are a thousand things everyone writing a shared library should know, but I had never seen them written up anywhere. Great stuff.

In particular, I was very interested in Ulrich's comments about -fPIC code. I've heard a thousand people before him tell me that the -fPIC overhead is negligible, blah blah blah. Well, it's not in my experience - in libmpeg2 the -fPIC overhead is about 8% (on an Athlon CPU). To me, negligible means below 1%. But the very interesting thing I learnt from Ulrich's paper is that if you use gcc's visibility attribute so your library's internal symbols do not get exported, gcc (>= 3.1) can use that knowledge to avoid generating PIC code for calls to those symbols. Supposedly this should get rid of most of the -fPIC overhead.

So I was very excited to try it. But all I got out of it was a gcc warning: `visibility' attribute directive ignored. Even Ulrich's own C example gives me the same warning. And gcc still generates PIC code in places where it should not have to.
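For reference, the usage Ulrich describes looks like this (a minimal sketch; this is exactly the form my gcc currently warns about, but it is what gcc >= 3.1 is supposed to honor):

```c
/* Internal helper: marked hidden so it is not exported from the
 * shared object.  With a gcc that honors the attribute, calls to
 * it from inside the library can bypass the PLT/GOT even when
 * compiling with -fPIC.  (Non-static on purpose: it may be used
 * across translation units within the library.) */
__attribute__((visibility("hidden")))
int internal_scale(int x)
{
    return 2 * x;
}

/* Public API: the only symbol left visible outside the library. */
int lib_scale(int x)
{
    return internal_scale(x) + 1;
}
```

On a gcc that ignores the attribute the code still compiles and behaves identically - you just keep paying the PIC cost, which is the whole complaint.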

Now I feel cheated... The whole PIC infrastructure is a huge mess. libtool also has a -prefer-non-pic option for those who want to avoid the PIC overhead on architectures that can tolerate it... but the damn option is broken: it still tries to generate non-PIC code on some of the architectures that don't tolerate it. It's not even difficult to test for - it's a 20-line autoconf macro - but why use libtool, and why does it have a -prefer-non-pic option, if you're still required to worry about architecture-specific details?

Is anyone else using the AT&T (well, now Comcast) @Home service from a linux box, and occasionally (fairly frequently, in my case) getting high (say, 50%) packet loss? If you are, I might have a workaround. I'm guessing that windows users are not affected by this AT&T broadband bug, due to subtle differences in the behaviour of their ARP cache. When I finally understood what happens and how to fix the issue, I thought it was fairly funny :) I still have no idea what this network behaviour is supposed to accomplish, though.

hacker wrote:
The reason I bring this up, is that people will do what the technology allows, even if it breaks the law. In many cases, the person who is doing it, doesn't even realize that any laws were involved.

In many cases, I submit that people are fully aware that they're breaking the law, and they just don't care. Speeding is a perfect example of that, actually. I live in the SF bay area, and on the 280, for example, you really don't see anyone driving at 65 - I'd estimate the average traffic speed is more like 80-85. I'm fairly sure all these people know they are breaking the law, and don't care.

You're writing as if it's always immoral to break the law, and I don't agree with that either. Ideally, the law should simply put in writing what most people already agree is right. In a democracy, if the majority thinks some law is bullshit and can't be bothered to follow it, then the law is wrong and should be changed to what most people consider reasonable.

Ilan: I'm not sure where you want to go with your AUHDL license. Do you really think your typical HOWTO document would be easier to read if it had three colors and three pictures in it? And what would you use the pictures for in the first place, if you're writing up the doc for something like, say, sed?

Seems to me what we need is more people writing docs, instead of just bitching about it.

I scored 8 out of 9 on tk's GPL quiz. I guess I haven't been involved in quite enough licensing flamewars yet :)

(I messed up at question 8)

Just when I was getting happy with the AT&T cable service, they started changing IPs. I hope this is a one-time thing (it might be, as they've been setting up a different subnet and netmask, not just reassigning IPs inside the subnet).

Played a bit with traffic shaping over the weekend. I'm on a cable modem, and without shaping, frames leave my computer at 10Mbit/s and get buffered in the cable modem before they reach the 128Kbit/s uplink, which results in very bad lag whenever I upload anything.

I had never played with this before, and I was amazed how much it helps. Now my telnet sessions never get lagged at all, and the reduced latency also seems to help when I do downloads. For anyone interested, I would recommend:

"tbf" for the basic shaping - making sure we do not overfill the modem's internal buffers. I did that with "tc qdisc add dev eth0 root handle 1: tbf rate 120kbit burst 2000 mpu 128 limit 100000". I would recommend people to use the tbf patch from http://luxik.cdi.cz/~devik/qos/qos.htm - it allows you to put additional shaping disciplines "inside" of the tbf shaper. (I dont know if its still required if you run 2.4 - I'm still running a 2.2 kernel)

"prio" works good enough to do the prioritization, based on the Type Of Service field of the IP headers. Most linux applications set it correctly, so you dont have to scratch your head too hard to prioritize your packets. If I was doing a gateway for windows machines I guess my life would be harder though. For now, I just did "tc qdisc add dev eth0 parent 1:1 handle 2: prio". interactive traffic (telnet, ssh) is prioritized over control traffic (dns, ping, netscape apparently ends up there too), and the lowest priority is bulk data (wget, scp, ftp, fetchmail, ...)

"sfq" tries to make the shaping more fair - so that if in the same priority band you transfer files to different places at the same time, they will get roughly equivalent amounts of bandwidth, even if one is far away with more ping delay and stuff. So I added it in all three priority bands: "tc qdisc add dev eth0 parent 2:1 handle 10: sfq perturb 600", "tc qdisc add dev eth0 parent 2:2 handle 20: sfq perturb 600" and "tc qdisc add dev eth0 parent 2:3 handle 30: sfq perturb 600". The perturb parameter is there to work around some limitations in the sfq algorithm, I found out that lower values (10 or so) tend to slow down the transfers a bit as they make packets appear out of order at times and that confuses the TCP bandwidth management algorithms.

Now the funny thing is that I was only doing this to get started - my longer-term project was to set up shaping on downloads, which seems harder to do with the current linux code. AT&T cable had a very nasty cap at 1.5 mbit/s, enforced with 1-second granularity: you could download 160KB or so, which for a close server took about half a second, then things would freeze for half a second, then it would start again, etc. That was pretty bad if you had connections to a remote server open at the same time - those would usually time out after losing half their packets. So I was just getting ready to fight that by shaping on my end at a slightly lower speed, say 1.4 mbit/s - but AT&T beat me to it and fixed the problem on their end! Incredible - they made me a happy guy. So now I get smooth transfers in both directions and no delays. I guess I never liked cable so much before :)

