Older blog entries for pphaneuf (starting at number 338)

This One's Just For You

Okay, so I don't post links to YouTube videos a whole lot, but this one is special. Make sure you view it on the site itself, with the US English language setting (because it really is special), and make sure to catch what's in the tabs at the top of the page.

Syndicated 2008-11-14 22:05:36 from Pierre Phaneuf

(Almost) Got a New Bike

Today, I apparently felt very optimistic.

You see, on Monday I went to Cycle Technique and asked them if they had any used bikes, with the idea of getting a rain/winter bike. Turned out they did: their summer rental bikes, which were pretty nice, and (allegedly) they had a large one that would fit me. I figured I'd give it some thought, decided it was a good idea, and went back on Tuesday. The plan was to walk there, pick up the bike, then ride to work. Except that the large was actually a medium. So I walked to work instead.

Today, I figured I'd head over to Beaudry metro and check out Vélo Espresso and Revolution Montreal; surely I'd be able to get a decent used ride between the two of those, right? Well, no. Well, maybe. But mainly no. I had forgotten that Revolution mainly does custom-built bikes, meaning that, no, they did not have anything for sale right there. Vélo Espresso had a used bike, and while it could have done the job, it was quite worn. On their main floor, they had this rather weird bike, a Norco VFR 3 Internal. It's a fairly sporty frame, although not too aggressive, and it actually has clearance and lugs for fenders and racks (although I hear it's not always the best fit ever), but it has an internal hub and a chain cover. An internal hub and a chain cover, but no fenders? I keep seeing utility bikes that have fenders, racks and lights, but no chain cover or internal hub, and this bike has the reverse? Well, uh, it so happens that those are exactly the things you can't add afterwards, so I guess that's cool? I tried it out around a few blocks, and while it's nowhere near an upright riding position, it's still surprisingly relaxed. It also comes with clipless pedals and clip-on platforms like the ones I already have?!? What a weird bike!

After that, I went to ABC Cycles & Sports, but it was closed (closed only on Wednesdays, argh!). I stopped by Brakeless, since it was just down the street, but they only had the one fixie; it seems more of a trendy spot than a place I'd actually want to get a bike from. I then headed over to Le Yéti, where I had a rather informative chat, and saw a ridiculously fancy German bike (I think? I don't remember the make/model), which, while meeting pretty much all my requirements and piling on disc brakes on top (because I really like brakes that work well), is also almost three grand, although it's now on sale at a bit over two grand. Uh, tempting as it is, I'll have to pass.

After that, La Bicycletterie JR, Sport Dépôt, and Pignon sur Roues. The latter had an interesting bike, the Louis Garneau Cityzen One, but it's oddly missing just a chain cover (even though a blurb about the bike in Vélo Mag claims there's one?). Why are there almost no bikes with chain covers?

I ended up going back to Sport Dépôt and, after some pondering, getting a Marin Belvedere. I had already spotted that bike in some research on the web, and while I knew they had Marin bikes there, it turns out they pretty much only had this one, at 20% off, so it was a happy coincidence. But... their mechanic was off today, so they couldn't prep the bike, so once more, I did not ride to work.

Tomorrow, I shall ride home on my new ride! There's no stopping me! They may try, and Jeff might try to jinx me (I beat his Space Invaders high score to ward it off), but I'll be riding back tomorrow, rain or shine, and there'll be no stripe down my back if it's raining!

Syndicated 2008-09-25 05:38:23 from Pierre Phaneuf

No Nick Cave and a Cycling Problem

Damn! I wanted to see Nick Cave & the Bad Seeds, coming to the Metropolis in October, but it's sold out! Oh well, I'll try to go to more small shows, I think. It's been too long since I've been to La Sala Rossa, for example, and places like Zoobizarre deserve another visit. Oh, and I'm going to see Miss Kittin & The Hacker on the 27th at SAT! Awesome!

All those shows, I'd like to bike to them, but I've been finding my quest for fenders to put on my bike rather frustrating. It's a weird bike, rather easy to ride with its straight handlebars, but the rest is done in a racing style. Which means that there's basically no clearance anywhere between the tires, the frame, the fork, and the brakes, not a lug in sight for anything (well, except water bottles), and so on...

I'm also pondering a winter bike, as I'd like to try (to some degree, I always have my CAM in my bag!) to ride for at least part of the winter. I'm pondering what to do, as there are many parameters...

I'd like to have a city bike, one that would fill the role a car does for most people. It would have to be practical, something I'd be able to ride day in and day out. It shouldn't be a hassle to ride all the time; I do not want to be hardcore. I'd like to just dress normally, as if I had a car, and arrive maybe a bit rained on at worst, as if I had parked a bit far, but not drenched, and with no wet line along my back! It could be aluminium, to keep the weight (and the rust) down, but it wouldn't try to be super-light. I think an internal hub might be good, to minimize maintenance as much as possible. A chain cover, to protect my pants. Everything bolted on, so that locking it is easy and quick. Lights, possibly with a generator (but not an awful one like those that rub against the wheel).

One of the problems I'm having in this quest is that most bikes fulfilling these criteria (that I can find here) tend to go for a vintage look and have some of the features I listed only because most bikes in the fifties had them, not because they're sensible bikes. One bike that had all those items also had things like a seat with big springs (heavy would be fine if they were useful, but those are heavy and useless!) and a rear wheel cover (so that my longcoat doesn't get stuck in the spokes). Stylish, yes, but practical? I live right at the foot of the hill between René-Lévesque and St-Antoine; if my first experience every time I take out the bike is the feeling that I'm going to die, well, uh, I don't think that'll be encouraging!

Batavus seems to have some interesting models, and while I haven't seen many of them in Montreal, they have a Canadian site, and there are some resellers in Montreal (I've been to those shops before and don't remember seeing any, but I guess they can order them, in the worst case). Some details are still a bit off, like the integrated horseshoe locks, which are pretty nice but would require replacing all the inhabitants of Montreal with Danish people first, so they're a bit impractical.

Another thing that's causing me some grief is parking space. I don't think I want to give up my fast FCR for this hypothetical new bike, you see? On nice days, I don't see why I would deprive myself of the fun of zipping down Ste-Catherine at almost 40 kph! But at the moment, my spot in the basement is just big enough for one bike, maybe two if I could hang them (but it's a temporary setup, and I can't). And maybe I'll be wanting a crappier bike for the winter. And after trying out phython's fixie, I'm still longing for one myself (soooo smooooth!). Where am I going to put all of this bikery? I have my eye on the mezzanine at home, but it's not very practical, so it might be good to put the winter bike up there in the summer and vice versa, but getting stuff up and down from there is rather annoying.

Ah, what to do, what to do...


I think I'll deal with the winter first, and get myself one of those cheap-ish Marin hybrid/commuter bikes...

Syndicated 2008-09-13 17:50:33 (Updated 2008-09-13 17:54:26) from Pierre Phaneuf

Nothing Sucks Like A Vax!

We moved into our new office this week (photos courtesy of MAD, thanks!), and it's pretty damned awesome! Considering the small size of the office (in number of people), it is extremely nice, the food is great, and so is the view (we had a nice view from the 24th floor, but now we're more "in the action", which I like better). Plus, we can easily reach the wifi from the pub nearby, hehe!

While the move was ongoing, we had an off-site activity on Île-Ste-Hélène that was pretty cool, involving, among other things, geocaching, which I had never done before and which is a lot of fun. It can be surprisingly difficult to find a small item, even when given its location to within 10 feet! I biked there from home, and it was particularly nice, hitting 40 kph for fairly long stretches and all. On the return trip I was pretty confident that I'd get to the dinner location first, but when I realized that the likely reason for my swift ride out was a wicked strong tailwind, now a headwind, I wasn't so confident anymore. I did arrive first anyway, but I'm told they took a brief detour in a sketchy St-Henri bar first. Crazy people!

Today, we also obtained a vacuum cleaner at home. You're probably thinking that this doesn't really sound all that exciting, and normally, I'd agree with you, but that was before I met the Dyson DC20. As far as box-opening experiences go, relatively speaking (let's face it, it's still just a vacuum cleaner), it looks like they're taking lessons from Apple. One of the selling points is how it can fit into a small space, and when I got the box, I was a bit worried that it'd be missing, you know, maybe the whole thing?!? But no, it was all in there, and even when assembled, it packs into almost no space, and is very cleverly engineered.

Tomorrow, a long-overdue haircut.

Syndicated 2008-06-28 03:50:43 (Updated 2008-06-28 03:54:59) from Pierre Phaneuf

20 Jun 2008 (updated 15 Jul 2008 at 02:07 UTC)

Putting Thoughts Together

Something that I have said a number of times is that nowadays, there is almost no reason to pick C over C++ for a new project (one of the few reasons that I know of involves writing execute-in-place code for very small embedded systems, so no, GNOME definitely doesn't qualify!). Worst case, you write exactly the same code you'd have written in C, just avoiding the new keywords as identifiers, and you then get better warnings (remember, no templates would be involved) and stricter type checking (no more silent casting of void* to pointers to random things! No more setting enums from any random integral junk you happen to have at hand! No more forgetting a header and using a function with the wrong parameters!).
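
To make the stricter checking concrete, here's a quick sketch (just an illustration): this compiles cleanly as C, but a C++ compiler rejects the marked lines outright.

    #include <stdlib.h>

    enum color { RED, GREEN, BLUE };

    int main(void)
    {
        /* C silently converts void* to any object pointer;
           C++ refuses unless the cast is explicit. */
        int *p = malloc(10 * sizeof(int));        /* error in C++ */
        int *q = (int *)malloc(10 * sizeof(int)); /* fine in both */

        /* C lets you stuff any integer into an enum; C++ does not. */
        enum color c = 2;    /* error in C++ */
        enum color d = BLUE; /* fine in both */

        free(p);
        free(q);
        (void)c;
        (void)d;
        return 0;
    }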

But these slides really put it together, from someone who's generally thought of as neither insane nor dumb. It doesn't really have much to do with GCC in particular, other than the general fact that this is becoming so obvious that even GCC might be making the switch...

Edit: This article by Amit Patel is also pretty good on this subject.

Syndicated 2008-06-18 16:04:05 (Updated 2008-07-15 02:04:48) from Pierre Phaneuf

Moving On

Reg Braithwaite was writing not long ago about how we can be the biggest obstacle to our own growth. It made me realize how I've dropped things that I was once a staunch supporter of.

I was once a Borland Pascal programmer, and I believed that it was better than C or even C++. I believed that the flexibility of runtime typing would win over the static typing of C++ templates as computers got faster. I believed that RPC was a great idea, and even worked on an RPC system that would work over dial-up connections (because that's what I had back then). I put in a lot of time working on object persistence and databases. I thought that exceptions were fundamentally bad. I believed that threads were bad, and that event-driven was the way to go.

Now, I believe in message-passing and in letting the OS kernel manage concurrency (but I don't necessarily believe in threads, it's just what I happen to need in order to get efficient message-passing inside a concurrent application that lets the kernel do its work). I wonder when that will become wrong? And what is going to become right?

I like to think I had some vision, occasionally. For example, I once worked on an email processing system for FidoNet (thanks to Tom Jennings, a beacon of awesome!), and my friends called me a nutjob when I told them that I was designing the thing so that it was possible to send messages larger than two gigabytes. What I believed was that we'd get fantastic bandwidth someday, where messages this large were feasible (we did! but that was an easy call), and that you'd be able to subscribe to television shows for some small sum, where they would send them to you by email and you'd watch them at your convenience. That's never gonna happen, they said! Ha! HTTP (which I think is used in the iTunes Store) uses the very same chunked encoding that I put in my design back then...

Note that in some cases, I was partly right, but the world changed, and what was right became wrong. For example, the 32-bit variant of Borland Pascal, Delphi, is actually a pretty nice language (ask apenwarr!), and while it isn't going to beat C++ in system programming, like I believed it could, it's giving it a really hard time in Windows application programming, and that level of success despite being an almost entirely proprietary platform is quite amazing. Even Microsoft is buckling under the reality that openness is good for language platforms, trying to get as many people from the outside as possible contributing to .NET (another thing to note: C# was mainly designed by some of the Delphi designers). Imagine what could happen if Borland came to its senses and spat out a Delphi GCC front-end (and used it in their products, making it "the real one", not some afterthought)?

I doubt that's going to happen, though. For application development, I think it's more likely that "scripting languages" like Ruby, Python and JavaScript are going to reach up and take this away from insanely annoying compiled languages like C++ (and maybe even Java).

But hey, what do I know? I once thought RPC was going to be the future!

Syndicated 2008-05-28 15:29:20 (Updated 2008-05-28 20:06:04) from Pierre Phaneuf

Timeouts In Blocking Socket Code

I was wondering how to handle timeouts correctly while blocked for I/O on sockets, with as few system calls as possible.

Thanks to slamb for reminding me of SO_SNDTIMEO/SO_RCVTIMEO! Combined with recv() letting me do short reads, I think I've got what I need for something completely portable.
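
For the record, a minimal sketch of the idea (plain POSIX sockets, hypothetical helper name): set SO_RCVTIMEO on the socket, and a blocked recv() gives up with EAGAIN/EWOULDBLOCK when the timeout expires.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Wait at most 'seconds' for data. Returns bytes read (possibly a
       short read), 0 on EOF, or -1 with errno set to EAGAIN/EWOULDBLOCK
       if the timeout expired. */
    ssize_t recv_with_timeout(int fd, void *buf, size_t len, int seconds)
    {
        struct timeval tv;
        tv.tv_sec = seconds;
        tv.tv_usec = 0;
        if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
            return -1;
        return recv(fd, buf, len, 0);
    }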

Syndicated 2008-05-23 22:32:20 from Pierre Phaneuf

Following Up On The End Of The World

Being the end of the world and all, I figure I should go into a bit more detail, especially as [info]omnifarious went as far as commenting on this life-altering situation.

He's unfortunately correct about a shared-everything concurrency model being too hard for most people, mainly because the average programmer has a lizard's brain. There's not much I can do about that, unfortunately. This might be an operating system issue rather than a language issue, for that aspect. We can fake it in our Erlang and Newsqueak runtimes, but really, we can only pile so many schedulers on top of each other and convince ourselves that we still make sense. That theme comes back later in this post...

[info]omnifarious's other complaint about threads is that they introduce latency, but I think he's got it backward. Communication introduces latency. Threads let the operating system reduce the overall latency by letting others run whenever possible, instead of being stuck. But if you want to avoid the latency of a specific request, then you have to avoid communication, not threads. Now, the thing with a shared-everything model is that it's kind of promiscuous: not only is it tempting to poke around in memory that you shouldn't, but sometimes you even do it by accident, when multiple threads touch things that are on the same cache line (better allocators help with that, but you still have to be careful). More points in the "too hard for most people" column.
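
To make that accidental-sharing point concrete, here's a quick sketch in today's C++ (just an illustration, with made-up numbers): two threads bump two unrelated counters, and whether they slow each other down depends largely on whether the counters land on the same cache line.

    #include <atomic>
    #include <thread>

    // Each counter gets its own cache line (64 bytes is typical); drop
    // the alignas and the two threads fight over one line even though
    // they never touch each other's data.
    struct Counters {
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };

    int main()
    {
        Counters c;
        std::thread t1([&c] { for (int i = 0; i < 10000000; ++i) ++c.a; });
        std::thread t2([&c] { for (int i = 0; i < 10000000; ++i) ++c.b; });
        t1.join();
        t2.join();
        return 0;
    }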

His analogy between memcached and NUMA is also to the point. While memcached is at the cluster end of the spectrum, at the other end there is a similar phenomenon with SMP systems that aren't all that symmetrical; multi-cores add another layer, and hyper-threading yet another. All of this should emphasize how complicated it is to write a scheduler that does a good job of using this properly, and that I'm not particularly thrilled at the idea of having to do it myself when there are a number of rather clever people trying to do it in the kernel.

What really won me over to threading is the implicit I/O. I got screwed over by paging, so I fought back (I wasn't going to let myself be pushed around like that!), summoning the evil powers of mlockall(). That's when it struck me that I was forfeiting virtual memory at that point, and I figured there had to be some way that sucked less. To use multiple cores, I was already going to have to use threads (assuming workloads that need a higher level of integration than processes), so I was already exposed to sharing and synchronization, and as I was working things out, it got clearer that this was one of those things where the hard part is going from one thread to more than one. I was already in it, why not go all the way?

One of the things that didn't appeal to me in threads was getting preempted. It turns out that when you're not too greedy, you get rewarded! A single-threaded, event-driven program is very busy, because it always finds something interesting to do, and when it's really busy, it tends to exhaust its time slice. With blocking I/O and a thread-per-request design, most servers do not overrun their time slice before running into another blocking point. So in practice, the state machine that I tried so hard to implement in user space works itself out, as long as I don't eat all the virtual memory space with huge stacks. With futexes, synchronization is really only expensive in case of contention, so on a single-processor machine, it's actually just fine too! Seems ironic, but none of it would be useful without futexes and a good scheduler, both of which we only recently got.

There's still the case of CPU-intensive work, which could introduce thrashing between threads and reduce throughput. I haven't figured out the best way to do this yet, but it could be kept under control with something like a semaphore, perhaps? Have it set to the maximum number of CPU-intensive tasks you want going, have them wait on it before doing work, and post it when they're done (or when there's a good moment to yield)...
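
Something like this rough sketch with a plain POSIX semaphore (the names are just for illustration):

    #include <semaphore.h>

    // At most 'max_cpu_tasks' CPU-heavy tasks run at once; the others
    // block on the semaphore until a slot frees up.
    static sem_t cpu_slots;

    void init_cpu_slots(unsigned int max_cpu_tasks)
    {
        sem_init(&cpu_slots, 0, max_cpu_tasks);
    }

    void do_cpu_heavy_work()
    {
        sem_wait(&cpu_slots);   // take a slot (blocks if none are free)
        // ... the expensive computation goes here ...
        sem_post(&cpu_slots);   // give the slot back (or post/wait again
                                // around a good moment to yield)
    }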

[info]omnifarious is right that one should be careful to learn from what others have done. Clever use of shared_ptr and immutable data can serve as a form of RCU, and immutable data in general tends to make good friends with being replicated (safely) in many places.
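
Here's roughly the shape of that shared_ptr trick, as a sketch (the Config type is made up, and I'm using a plain mutex around the pointer to keep it simple): readers grab an immutable snapshot and use it without any further locking, and an old snapshot only goes away once its last reader drops it.

    #include <memory>
    #include <mutex>
    #include <string>

    // An immutable snapshot of some shared state (hypothetical type).
    struct Config {
        std::string listen_addr;
        int max_connections;
    };

    class ConfigHolder {
        std::shared_ptr<const Config> current_;
        std::mutex mutex_;   // guards only the pointer, never the data

    public:
        // Readers copy the pointer under a short lock, then read the
        // snapshot without locking; the snapshot itself never changes.
        std::shared_ptr<const Config> get() {
            std::lock_guard<std::mutex> lock(mutex_);
            return current_;
        }

        // The writer builds a brand-new immutable Config and swaps it
        // in. Readers still holding the old one keep it alive until
        // their last shared_ptr goes away, which is the RCU-ish part.
        void set(std::shared_ptr<const Config> next) {
            std::lock_guard<std::mutex> lock(mutex_);
            current_ = std::move(next);
        }
    };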

One of the great ironies of this, in my opinion, is that Java got NIO almost just in time for it to be obsolete, while we were doing this in C and C++ since, well, almost forever. Sun has this trick for being right, yet doing it wrong; it's amazing!

Syndicated 2008-05-19 06:50:15 from Pierre Phaneuf

The End Of The World (As We Know It)!

Ok, here we go:

Event-driven non-blocking I/O isn't the way anymore for high-performance network servers, blocking I/O on a bunch of threads is better now.

Wow, I can't believe I just wrote that! Here's a post that describes some of the reasons (it's talking more about Java, but the underlying reasons apply to C++ as well; it's not just JVMs getting wackier at optimizing locking). It depends on your platform (things don't change from being true to being false just out of the blue!), and more specifically, I have NPTL-based Linux 2.6 in mind, at the very least (NPTL is needed for better futex-based synchronization, and 2.6 for the O(1) scheduler and low overhead per thread). You also want to specify the smallest stacks you can get away with, and a 64-bit machine (it has a bigger address space, meaning it will explode later).
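
On the small-stacks point, here's a minimal sketch of what I mean (64 KB is just a number picked for illustration; size it for what your handlers really need, with PTHREAD_STACK_MIN as the floor):

    #include <pthread.h>

    // Spawn a detached thread with a small stack instead of the
    // multi-megabyte default, so thousands of them fit comfortably in
    // the address space.
    int spawn_small_stack_thread(void *(*fn)(void *), void *arg)
    {
        pthread_attr_t attr;
        pthread_t tid;
        int rc;

        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 64 * 1024);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

        rc = pthread_create(&tid, &attr, fn, arg);
        pthread_attr_destroy(&attr);
        return rc;
    }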

The most important thing you need is to think and not be an idiot, but that's not really new.

And when I say "bunch of threads", I really mean it! My current "ideal design" for a web server now involves not just a thread per connection, but a thread per request (of which there can be multiple requests per connection)! Basically, you want one thread reading a request from the socket, then once it's read, fork it off to let it do its work, and have the writing of the reply to the socket be done on the request thread. This allows for as much pipelining as possible.

Still, event-driven I/O is not completely useless; it is still handy for protocols with long-lived connections that stay quiet for a long time. Examples of that are IRC and LDAP servers, although it's possible that with connection keep-alive, one might want to do that with an HTTP server as well, using event notification to see that a request has arrived, then handing it off to a thread to actually process it.

I also now realize that I was thinking too hard in my previous thoughts on using multiple cores. One could simply have a "waiting strategy" (be it select() or epoll), and something else to process the events (an "executor", I think some people call that?). You could then have a simple single-threaded executor that just runs the callbacks right there and then, no more fuss (think of WvStreams' post_select()), or you could have a fancy-pants thread pool, whatever you fancied. I was so proud of my little design, and now it's all useless. Oh well, live and learn...
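
The split is pretty small once you see it; a minimal sketch (names made up): the waiting side hands ready callbacks to an Executor, and swapping the executor changes the threading model without touching the event loop.

    #include <functional>

    struct Executor {
        virtual ~Executor() {}
        virtual void execute(std::function<void()> task) = 0;
    };

    // Runs the callback right there on the event loop thread, no fuss
    // (in the spirit of WvStreams' post_select()). A thread pool would
    // just be another Executor with the same interface.
    struct InlineExecutor : Executor {
        void execute(std::function<void()> task) { task(); }
    };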

Syndicated 2008-05-16 23:44:10 from Pierre Phaneuf

25 Apr 2008 (updated 7 May 2008 at 17:08 UTC)

Old Fogeys

I became a member of Communauto last week, which, combined with getting my bike back, means that I'm at what is going to be my peak mobility for the next little while.

I used Communauto a couple of days later to go to a Quadra hackfest at Rémi's, with [info]slajoie as well. I've had a surge of interest in Quadra, but it is a delicate thing: we need to release a new stable version before we can hack on the "next generation" version, and while we're getting very close now, there is definitely a momentum thing that can be lost all too easily. And now the kinds of things left are packaging-related, which isn't the most exciting (so help us out, [info]dgryski!). We've got interesting ideas for future development, but we can't really do any of this for now, since it would make merging from the stable release very annoying (and it already isn't too wonderful at times)...

Getting my bike back meant going to work by bike, and that is ridiculously quick, on the order of six to seven minutes. That's faster than the metro, by a lot (it's only a bit more than the average waiting time, and I don't have to walk to Lionel-Groulx). In my opinion, that's not even good exercise; I hardly have time to break a sweat even if I go fast, so I might end up taking detours on good days (the Lachine Canal bike path is nearby).

Related to Quadra, I've been looking at SDL (which the next version of Quadra uses instead of its internal platform layer) and SDL_net. It's funny how game developers are so conservative sometimes! I don't know much about 3D games, but in 2D, people seem to develop more or less like they did on DOS more than 10 years ago, which was very limited back then: DOS didn't have much of a driver model, so anything more than page flipping and waiting for the vertical retrace (using polled PIO, of course) was specific to each video chipset. A game wanting to use accelerated blits had to basically have its own internal driver model, and when a card was not supported, either the game would look bad (because it would use a software fallback), or it would not work at all. In light of that, most games just assumed a basic VGA card (the "Super" part being made of vendor-specific extensions), using 320x200 in 256 colors (like Doom), or 640x480 in 16 colors (ever used Windows' "safe mode"?), with maybe a few extra extensions that were extremely common and mostly the same everywhere.

Then DirectX appeared, and all the fancy accelerations became available to games (window systems like X11 and Windows had their own driver models, but they could afford to, being bigger projects than most games, and they were pretty much the sole users of the accelerations, which is why they existed at all). What happened? Game developers kept going pretty much the same way. Some tests by Rémi back then found that video-memory-to-video-memory color-keyed accelerated blits (with DirectDraw) got hundreds of frames per second, where the software equivalent could barely pull thirty frames per second on the same machine. About an order of magnitude faster! You'd think game developers would be all over this, but no, they weren't. They were set in their ways, had their own libraries that did it the crappy way, and overall didn't bother. The biggest user of 2D color-keyed blitting is probably something like the Windows desktop icons.

Then 3D acceleration appeared, and they just didn't have a choice. The thing is, this hardware still isn't completely pervasive, especially for the target audience of a game like Quadra, who like nice little games and won't have big nVidia monsters in their machines, so using the 3D hardware for that kind of game would leave them in the dust. Nowadays, DirectDraw has been obsoleted and is now a compatibility wrapper on top of Direct3D, so oddly enough, we're back to 2D games having to avoid the acceleration.

Thankfully, in the meantime, the main CPUs and memory became much faster, so you can do pretty cool stuff all in software, but it's kind of a shame; I see all of this CPU being wasted. Think about it: Quadra pulls in at about 70% CPU usage on my 1.5 GHz laptop, so one could think it would "need" about 1 GHz to run adequately, right? Except it worked at just about full frame rate (its engine is capped at 100 frames per second) on my old 100 MHz 486DX! Something weird happened in between...

Game developers seem to be so used to blocking APIs and polling that it spills over into SDL_net, which uses its sockets in blocking mode, and where one could easily lock up a server remotely by doing something silly like hooking a debugger up to one of the clients and pausing it. Maybe unplugging the Ethernet cable would do it too, for a minute or two, until the connection timed out. How awful...
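
For reference, the usual antidote with plain POSIX sockets (this is not SDL_net's API, just the general fix) is to put the socket in non-blocking mode, so a stalled peer gets you EAGAIN instead of a wedged server:

    #include <fcntl.h>

    // Switch a socket to non-blocking mode: send()/recv() then return
    // -1 with errno set to EAGAIN/EWOULDBLOCK instead of blocking when
    // the peer has stalled.
    int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }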

Syndicated 2008-04-25 16:39:47 (Updated 2008-05-07 17:01:30) from Pierre Phaneuf
