Older blog entries for dan (starting at number 151)

Social notworking

After a bit over a month using Google Plus (with admittedly decreasing enthusiasm over the course of that time) I have no firm conclusions about what it’s good for, except that it’s incredibly good at reminding me how much I miss Usenet.

I could compare it with the other networks that people consider it “competition” for: it doesn’t replace Facebook – for me, anyway – because the whole world isn’t on it, and that means I can’t use it to stay in touch with friends and family. It doesn’t replace Twitter, as the lack of a message length limit means it’s useless for epigrams (which I like) and not much cop for status updates either (which I can live without) – though it does work as a “source of interesting links”, which in my opinion is the third arm of Twitter utility. And Google will, probably, be disappointed to learn that it doesn’t replace LinkedIn because, despite the best efforts of the Real Names policy enforcers, it still isn’t quite boring enough. Yet, anyway.

But that’s enough about Google+, what about Usenet?

  • The unit of discussion was an article. Not a two-line throwaway comment or a single bit of “me likes this” information. When you read something on Usenet that you felt strongly enough about to reply to, you hit ‘r’, you got the scary warning about “hundreds if not thousands of dollars”, and it dumped you in a full-screen text editor where you could compose your pearl of wisdom. Sure, you could alternatively compose your “ME TOO!”, but it wasn’t a teeny text widget which practically demands the latter response: the affordances were there for writing something with meat.
  • It was decentralised. No capricious site owner could take your comment down because someone might find it offensive, or ban all discussion of certain topics, or refuse to allow you to post links to other places, or even that he was going to pull the plug completely and delete all your words. You might be reading this and thinking Godfrey vs Demon and you’d be entirely correct that it wasn’t completely uncensored in practice – nor, I contend, should it have been – but there was at least a bit more effort involved in getting a post removed than clicking the ‘I am offended by this picture of a breast-feeding woman’ button, and that made potential complainants think a bit more carefully about whether it was worth it
  • It had user interfaces that didn’t get in the way. Really. I could sit in front of my computer for hours pressing only the space bar (maybe alternating with the ‘n’ key in less interesting groups) and it would keep the content coming. (And I did. I would blame my degree class on Usenet, if it weren’t that the time I spent fiddling with Linux was in itself sufficient to leave approximately 0 time for studying. But I digress.)

The reasons it’s dead are well-rehearsed, and boil down to this: it couldn’t cope with universal access. It was built back in the days when people had access through their institutions or employers, and for the most part knew they could lose it by acting like jerks - or at least by acting like jerks consistently enough and outrageously enough. Come the personal internet revolution – the Endless September - it had no protection against or meaningful sanctions for spammers and trolls, and so blogs/web forums sucked away most of the people who wanted to just talk, leaving behind people who were by and large too much concerned with the minutiae of meta and much less enthused about the actual posting of content.

But it did do stuff that nobody else has replicated since.

Syndicated 2011-09-18 20:36:00 from diary at Telent Netowrks

Openwrt "backfire" first impressions

Some notes on my first impressions of Openwrt 10.03 “Backfire”

Having happily run a Draytek Vigor 2600 in my last home for 2-3 years, the obvious thing to do when my exchange was upgraded to 21CN (that’s ADSL2+ to readers outside the UK) was to buy the same brand again and this time go for a model that supports the newer standard. I bought a 2700 on eBay on the basis that comparing the model numbers indicated it should be better by at least 64 (octal, right?). It wasn’t. Although I can’t prove that it’s the router’s fault it drops out twice a week (we also moved house at about the same time; it could be the line), I can say it’s not a mark of quality that when I access its web interface (e.g. to force a redial) I get an HTTP timeout on at least one of the three frames in the frameset – if you’re going to use framesets for your router admin interface, it would probably be smart to give it a web server that can answer more than two queries at the same time. And its syslog client has an approach to the standards which is most charitably described as “improvisational”. And I’ve talked before about the missing options for second subnet support that aren’t really missing.

Push eventually came to shove last month when my OfflineIMAP process decided that 2GB a day was a reasonable amount of traffic to incur checking my email (I disagree, for the record) and I hit my ISP monthly download allowance, and the router offered absolutely no help whatever in finding the source of the problem (between one wired computer, three wireless laptops, assorted smartphones and an iPod, and a wifi-enabled weighing scale it could really have been anywhere). So it was time to shove it, preferably in favour of something that would run Linux. Like an early WRT-54G won on eBay, coupled with a lightly hacked BT Voyager 220V left behind in a previous flat by a previous previous tenant and configured in bridge mode for the ADSL hookup.

Openwrt seems to be the most Debian-like of the popular Linux-based router firmwares (that’s intended as a compliment), in that it has a package manager, and it likes to be configured from the command line by editing files. My observations based on about 4 hours playing with it:

  • the documentation is fragmented and lacks any clear sense of order or editorial control. This is listed first not because it’s most important (it isn’t) but because it’s the first thing I noticed. Seriously, a Wiki is not a substitute for a user manual, and I say that as someone who’s written one. When resorting to Google you will find that a lot of what you read there is out of date. For example, there is no longer an ipkg command, but opkg seems to replace it.
  • It has a web interface called Luci. It’s a bit slow and clunky - though still better than the Vigor router’s was – but it’s helpful for getting started. I was confused by the interaction between the various ‘Save’, ‘Apply’, ‘Save and Apply’ buttons at the bottom of the page and the ‘Unsaved Changes’ link in the menu bar at the top: on the ‘Firewall’ page, for example, clicking ‘Save’ at the bottom causes the status at the top to go from ‘Changes: 0’ to ‘Unsaved Changes: 1’. To my way of thinking, clicking Save should reduce the number of unsaved changes not increase them, but this is probably just bad labelling.
  • I say it likes to be configured by editing files: it is, however, fussy about which files. If there’s a file under /etc/config with a relevant-looking setting in it, edit that in preference to whatever the upstream app’s usual config file would be, then run uci commit – although actually you might not need to run uci commit – see this lovely thread for the full confusing detail – then run the relevant /etc/init.d/foo scripts to restart services as needed. I am not sure if there’s a clear rule for what gets overridden or overwritten if you edit config files directly and conflict with UCI, but I suspect it’s pretty ad hoc.
  • the hardware doesn’t speak ADSL, hence the need for a separate box to do that. I set the Voyager up to do PPPoE and the WRT likewise: in Luci look for Network → Interfaces → WAN and set the Protocol to PPPoE: this should get you Username and Password boxes in which you put whatever your ISP told you to.
  • the wifi did not work no matter what I did in Luci, but eventually I found the problem was in /etc/config/wireless which had an entirely bogus mac address in its definition of radio1: I replaced it with the address printed by ifconfig wlan0 and suddenly everything started working.
  • it runs an ssh daemon, which is nice. Although it will authenticate using keys, it won’t look at /root/.ssh/authorized_keys as openssh does. I used the web interface to add my key, which worked fine.
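By way of illustration, here’s roughly what the PPPoE setup above looks like once it lands in /etc/config/network. This is a sketch from memory rather than a copy of my actual config: the eth0.1 interface name is an assumption (it varies by hardware), and the credentials are placeholders.

```
config interface 'wan'
        option ifname   'eth0.1'
        option proto    'pppoe'
        option username 'user@isp.example'
        option password 'secret'
```

After editing, run uci commit network and /etc/init.d/network restart as described above.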

Summary: although not currently suitable for the non-technical end user, if you have some Linux experience and a few hours to screw around with Google, it all eventually works fine. And I can run tcpdump on it, which more than makes up for all these minor problems 64 times over. Get in.

More on the BT Voyager in a later blog entry, but I leave you with some instructions for unlocking it which you may need if you are sensible enough to use an ISP that isn’t BT Retail.

Syndicated 2011-06-22 10:22:56 from diary at Telent Netowrks

ANN: Goldleaf - scripted Debian kvm image creation

After two weeks of work writing scripts to automate the creation of new production/test boxes for $WORK, and two days to get it into a state where I could post the result on Github without spilling all our internal secrets, I am pleased to announce Goldleaf. From the README:

Goldleaf is a tool for creating consistent and repeatable KVM Debian system images in which the packages installed, the versions of each package, and the answers to “debconf” configuration questions are specified precisely.

The package manifest and configuration files are text-based and are intended to be kept under Git version control.

The name ‘goldleaf’ comes from the article Golden Image or Foil Ball by Luke Kanies. On which note: it is unclear to me whether he would see this script as a good thing or as a bad thing, but I would argue that even if you reduce the number of images under your control to a single “stem cell” image, being able to recreate (any current or previous version of) that image on-demand is just as valuable as being able to recreate (any current or previous version of) any other application.

The README has full instructions. You can see the working example at https://github.com/telent/goldleaf-example and you can git clone or download (or just nose around) from https://github.com/telent/goldleaf

Feedback welcome. Bugs and stuff to the Github issue tracker.

Syndicated 2011-06-12 21:27:18 from diary at Telent Netowrks

Bullet time

Short bits

  • HTC have apparently reversed their policy on locking their phone bootloaders – i.e. in future, they say they won’t do it any more. I find it interesting that LWN have reported this as “The CEO of HTC has seemingly posted on Facebook that its phones will not be locked down” whereas every other report I’ve seen has assumed it’s real, and only LWN have thought to wonder if Facebook really is an official HTC communication channel. Anyway, if it does prove to be true (I hope so) I will be reversing my previous recommendation against buying their phones.
  • Here is a real working example (indeed, the primary use case) for use of thin-prefork. This blog is already running on it, and as of Tuesday so will be $WORK. And then maybe I can get back to some actual work. Hurrah.
  • After finishing the thin-prefork integration I spent some time on Friday trying to sort out $WORK’s exception handling. The exception handling stuff in Sinatra is … kind of involved, and if you test it by inserting
    raise Exception, "Error happens here"
    into a route, you may spend quite a while wondering why it doesn’t seem to be working properly. I wanted Sinatra to pass all exceptions up to Rack where I could use Rack::MailExceptions to send me email: eventually, by reading sinatra/base.rb for a while, I found (1) that enable :raise_errors, perhaps contrary to its documentation, doesn’t raise an error if there’s an error stanza in the application which would catch it; (2) that the default Sinatra::Base class installs exactly such an error clause for the base Exception class. So you may want to change your test code to use StandardError or something instead.
  • Once having done that, you will then find that Rack::MailExceptions requires Tmail, which has trouble with Ruby 1.9, and then you will find that the mail it sends you is not laid in any order you might reasonably hope for – the errant URL is buried two thirds of the way down – and eventually you will decide that you might as well just copy the entire file into your own code and edit it for your requirements. Which is what I did. There’s a ticket to fix the Tmail dependency: apparently porting it to the newer Mail gem is a non-starter due to its needing active_support, but it appears that recent versions of Mail no longer need active_support anyway, so maybe that decision can now be revisited.
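Part of why raising Exception behaves surprisingly in tests like this is Ruby’s rescue hierarchy: a bare rescue clause only matches StandardError, so an Exception raised directly slips straight past most “normal” error handling. A minimal illustration (the method name is mine):

```ruby
# A bare `rescue` is shorthand for `rescue StandardError`, so raising
# Exception directly is not caught by it.
def handle(error_class)
  raise error_class, "boom"
rescue => e
  "caught #{e.class}"
end

handle(RuntimeError)   # caught, as you'd expect
# handle(Exception)    # not caught: propagates out of the method
```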

Syndicated 2011-05-28 20:41:16 from diary at Telent Netowrks

Give me back my event loop

My new phone has now been emancipated, thanks in part to Atheer-pronounced-Arthur (or possibly vice versa) at the “Root” internet cafe on Edgware Road. They have an XTC Clip and he was able to do the S-OFF thing for me in about ten minutes and for the princely sum of £10. Recommended. I have been looking at AOSP build instructions, but actually doing the build and flashing the phone with a nice clean 2.3.4 Sense-free system will have to wait until I can devote a few more mental cycles to it.

In between the distractions of Shiny! New! Toy! I have been working on the projectr/thin-prefork pair – I am still reasonably convinced that they should be independent modules, though as I always seem to end up hacking on both at once I worry about the degree of coupling between them – to impose some sense on the interface for extending the thin-prefork server. Which I think is 80% there, but this morning I thought it was 100% there until I started trying to use it for real, so that’s a little bit annoying.

Which brings us to the rant/plea for today, as indicated in the title. Hands off my event loop! I’m sure I’ve already said this in other contexts and with regard to other platforms, but: I am not going to devote my process to calling Oojimaflip.run! when there are other things it should be doing concurrently with watching for Oojimaflips, and I see no reason either to start a new thread (or process) exclusively for your use when you could have just written a method that says whether there are fresh oojimaflips and another to say what they are.

I am prompted to say this by rb-inotify, which is a (quite nicely written) wrapper around some kernel functionality that communicates via a file descriptor. I’d like a wrapper like this to (1) give me the file descriptor so I can call Kernel.select on it, along with all the other files I’m looking at; (2) give me a method which I will call when select says the fd is ready to read, which will read them and (3) digest them into beautiful Ruby-friendly Event objects. What I’ve got is about two out of three (good odds if you’re Meat Loaf): there is a public method #to_io whose return value I can plug into select, there are beautiful Ruby-friendly Event objects, but to get those objects, unless I’m overlooking something (and I don’t mean that to sound passive-aggressive), I have to run one cycle of the rb-inotify event loop: call the #process method which calls my callback once per event, which has to find somewhere to store the events it’s passed, and then check the stored events when control eventually unwinds from #process and returns to me.

I’m actually being a bit harsh here, because the event-parsing code is there in the internals and not hard to grab. In the basement lavatory behind the “beware of the leopard” sign, I find a method called read_events, which if you don’t mind calling undocumented code can be used something like this. The preceding call to select would be better replaced by some code to put the file into non-blocking mode, but that’s a refinement that can wait for another time.
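The shape of the API I’m asking for is easy to sketch with stdlib pieces. Here a pipe stands in for the inotify file descriptor (in rb-inotify terms, what #to_io hands you), so this is an illustration of the pattern rather than real inotify code:

```ruby
# Sketch of the select-driven pattern: *my* event loop does the waiting,
# and the library-ish code only drains and parses when the fd is readable.
r, w = IO.pipe
w.puts "CREATE foo.txt"                # pretend the kernel queued an event

ready, = IO.select([r], [], [], 1)     # select alongside everything else I watch
events = ready ? [r.gets.chomp] : []   # drain into friendly event objects
```

The point is that nothing here ever blocks on the event source exclusively: the caller keeps the loop, and the wrapper is reduced to "is there anything?" and "what is it?".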

I have opened an issue on github saying something similar, which I expect is far more likely to have a useful effect than posting on this obscure blog. But, yeah, I like ranting.

Syndicated 2011-05-20 13:19:57 from diary at Telent Netowrks

Desire S: don't

If I had known a week ago about the lengths to which HTC are now going to prevent people from using their phones, I would have bought some other phone (like maybe the LG Optimus 2X) instead – and if you are the kind of person who prefers to choose your own software rather than let device manufacturers and mobile phone networks do it for you, I would recommend that you don’t buy it either.

That’s right, as far as I can determine it’s not (currently, at least) rootable. Finding this out is made harder than it needs to be because for any conceivable relevant search term, Google seems to prefer the xda-dev forum – by volume, 98% composed of script kiddies saying “OMG LOLZ” – over any results from people who know what they’re talking about. But here’s the summary:

  1. as delivered, there is no root access. Well, so far so perfectly standard
  2. there are a couple of promising-sounding exploits which work on other similar devices. The psneuter exploit – or at least the binary copy of psneuter of unknown provenance that I downloaded from some ad-filled binaries site – doesn’t work, erroring out with failed to set prot mask. GingerBreak, though, will get you a root shell. Or at least it will if you get the original version that runs from the command line and not the APK packaged version that a third party has created for the benefit of xda-dev forum users.
  3. the problem is that GingerBreak works by exploiting bugs in vold and as the side-effect is to render vold unusable, you can’t get access to your sd card after running it. So, you think, “no problem, I’ll remount /system as writable and install su or some setuid backdoor program that will let me back in”. This doesn’t work although it looks like it did right up until you reboot and notice your new binary has disappeared.
  4. (incidentally, if you run GingerBreak again after rebooting the phone and it fails, it’s because you need to remove /data/local/tmp/{sh,boomsh} by hand.)
  5. The explanation for this freakiness is that /system is mounted from an eMMC filesystem which is hardware write-protected in early boot (before the Linux kernel starts up) but Linux doesn’t know this, so the changes you make to it are cached by the kernel but don’t get flushed. There is a kernel module called wpthis designed for the G2/Desire Z which attempts to remove the write-protect flag by power-cycling the eMMC controller, but it appears that HTC have somehow plugged this bug on the Desire S. For completeness’ sake, I should add that every other mounted partition has noexec and/or nosuid settings, so /system is the only possible place to put the backdoor command.
  6. Um.

Avenues to explore right now: (1) the write-protect-at-boot behaviour is apparently governed by a “secu flag” which defaults to S-ON on retail devices but can be toggled to S-OFF using a hardware device called the “XTC Clip”, available in some phone unlocking shops. (2) Perhaps it is possible to become root without calling setuid by installing an APK containing a started-on-boot service and hacking /data/system/packages.xml so that the service launches with uid 0. (3) wait and see if anyone else has any good ideas. (4) study the Carphone Warehouse web site carefully and see if they have an option to return the phone for exchange, bearing in mind that I’ve been using it for seven days. Obviously those last two options are mutually incompatible.

Summary of the summary: HTC, you suck.

Incidentally, if you want to build the wpthis module for yourself there’s not a lot of useful documentation on building Android kernels (or modules for them) for devices: everything refers to a page on sources.google.com which appears to be 404. The short answers: first, the gcc 4.4.0 cross-compiler toolchain is in the NDK; second, the kernel source that corresponds to the on-device binary, at the time I write this, can be had from <http://dl4.htc.com/RomCode/Source_and_Binaries/saga-2.6.35-crc.tar.gz> (linked from <http://developer.htc.com/>); third, <https://github.com/tmzt/g2root-kmod/> doesn’t compile cleanly anyway: you’ll need to scatter #include <linux/slab.h> around a bit and to copy/paste the definition of mmc_delay from linux/drivers/mmc/core/core.h

Incidentally (2): <http://tjworld.net/wiki/Android> has lots of interesting stuff.

At this point, though, my advice remains that you should buy a different phone. Even if this one is rooted eventually (and I certainly hope it will be), HTC have deliberately made it more difficult than it needs to be, and why reward that kind of anti-social behaviour?

Syndicated 2011-05-15 13:10:27 from diary at Telent Netowrks

Testing a monolithic app - how not to

In the process of redesigning the interfaces to thin-prefork, I thought that if it’s going to be a design not a doodle I’d try to do it the TDD way and add some of that rspec goodness.

I’m not so proud of what I ended up with.

There are a number of issues with this code that are all kind of overlapped and linked with each other, and this post is, unless it sits as a draft for considerably longer than I intended to spend on it, going to be kind of inchoate because all I really plan to do is list them in the order they occur to me.

  • The first and most obvious hurdle is that once you call #run!, the server process and its kids go off and don’t come back: in real-world use, any interaction you might have with it after that is driven by external events (such as signals). In testing, we have to control the external environment of the server to give it the right stimuli at the right time, then we need some way to look inside it and see how it reacts. So we fork and run it in a child process. (Just to remind you, thin-prefork is a forking server, so we now have a parent and a child and some grandchildren.) This is messy already and leads to heuristics and potential race conditions: for example, there is a sleep 2 after the fork, which we hope is long enough for the server to be ready, but which is sure to fail somewhere and to be annoyingly and unnecessarily long somewhere else, especially as the number of tests grows.
  • We make some effort to kill the server off when we’re done, but it’s not robust: if the interpreter dies, for example, we may end up with random grandchild processes lying around and listening to TCP ports, and that means that future runs fail too.
  • Binding a socket to a particular interface is (in Unix-land) pretty portable. Determining what interfaces are available to bind to, less so. I rely on there most likely being a working loopback and hope that there is additionally another interface on which packets to github.com can be routed. I’m sure that’s not always true, but it’ll have to do for now. (Once again I am indebted to coderr’s neat trick for getting the local IP address – and no, gethostbyname(gethostname()) doesn’t work on a mobile or a badly-configured system where the hostname may be an alias for 127.0.0.1 in /etc/hosts)
  • We need the test stanzas (running in the parent code) somehow to call arbitrary methods on the server object (which exists in the child). I know, we’ll make our helper method start accept a block and install another signal handler in the child which yields to it. Ugh.
  • We needed a way to determine whether child processes have run the correct code for the commands we’re testing on them. Best idea I came up with was to have the command implementation and hook code set global variables, then do HTTP requests to the children which serve the value of those global variables. I’m sort of pleased with this. In a way.

Overall I think the process has been useful, but the end result feels brittle, it’s taken nearly as long as the code did to write, and it’s still not giving me the confidence to refactor (or indeed to rewrite) blindly that all the TDD/BDD advocates promote as the raison d’embêter

The brighter news is, perhaps, that I’m a lot more comfortable about the hook/event protocol this time round. There are still bits that need filling in, but have a look at Thin::Prefork::Worker::Lifecycle and module TestKidHooks for the worker lifecycle hooks, and then at the modules with names starting Test... for the nucleus of how to add a custom command.

Syndicated 2011-05-11 15:56:25 from diary at Telent Netowrks

HTC Desire S, unlocked, new, £359

No, I’m not selling it at that price. I just bought it at that price. Amazon and Handtec are both advertising it at about £371 and other well-known online retailers (Expansys, etc.) at even more: what I did was notice that the O2 Store sell it for £349 plus £10 topup, and that Carphone Warehouse have recently introduced a Network Price Promise, so even though their advertised price for the same phone and tariff is £399 they will give it to you for the £359 price if you insist. And because it’s CPW and not a tied shop, they will (well, most probably will, and certainly did in my case) give you the unlocked and unbranded handset. In fairness I should say I don’t know whether the handset in the O2 store would have been locked or not.

I’ve only had a few minutes to play with the phone so far so haven’t formed a strong opinion (HTC Sense may have to go …) on it yet, but it’s already very clearly an upgrade from my really rather elderly T-Mobile G1.

Syndicated 2011-05-09 14:06:27 from diary at Telent Netowrks

Sinatra and the class/instance distinction

The Sinatra microframework is described as enabling “Classy Web Development”, and it turns out this is more literally true than I previously thought.

The Rack Specification says

A Rack application is a Ruby object (not a class) that responds to call. It takes exactly one argument, the environment and returns an Array of exactly three values: The status, the headers, and the body.

(emphasis mine). When you write a Sinatra app, though, it seems to want a class: whether you call MyApp.run! directly (we assume throughout this post that MyApp is a Sinatra::Base subclass) or use a config.ru or any other way to start the app running, there is a conspicuous lack of MyApp.new anywhere around. Yet the Rack spec says an app is an instance.

At first I thought I was being silly or didn’t understand how Rack works or had in general just misunderstood something, but it turns out not. Some ferreting through Sinatra source code is needed to see how it does this, but the bottom line is that MyApp has a class method MyApp.call which Rack invokes, and this delegates to (after first, if necessary, instantiating) a singleton instance of MyApp stored in the prototype field. I am not at all sure why they did this. It may just be a hangover from Sinatra’s heritage and this stuff came along for the ride when Sinatra::Base was factored out of the Sinatra::Application classic app support. Or they may have a perfectly good reason (this is the hypothesis I am leaning towards, and I suspect that “Rack middleware pipelines” is that reason). For my purposes currently it’s probably sufficient to know that they do it without needing to know why, and that I should stop trying to write Sinatra::Base subclasses which take extra parameters to new.

:; irb -r sinatra/base
ruby-1.9.2-p0 > class MyApp < Sinatra::Base; end
 => nil 
ruby-1.9.2-p0 > MyApp.respond_to?(:call)
 => true 
ruby-1.9.2-p0 > begin; MyApp.call({}); rescue Exception => e ;nil;end
 => nil 
ruby-1.9.2-p0 > MyApp.prototype.class
 => Sinatra::ShowExceptions 

Ta, and with emphasis, da! (The begin/end around MyApp.call is because for the purpose of this example I am too lazy to craft a legitimate rack environment argument and just want to demonstrate that prototype is created. And we should not be surprised that the prototype’s class is not the same class as we created, because there is middleware chained to it. In summary, this example may be more confusing in its incidentals than it is illuminating in its essentials. Oh well)
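Stripped of the Sinatra specifics, the trick is just a class method that lazily builds and memoizes one instance and forwards call to it. Here’s a plain-Ruby sketch of that mechanism (my own simplified reconstruction, not Sinatra’s actual code, which also wraps the prototype in middleware):

```ruby
class MyAppSketch
  # The instance is the real Rack app, per the spec: one argument in,
  # a [status, headers, body] triple out.
  def call(env)
    [200, { "Content-Type" => "text/plain" }, ["hello"]]
  end

  # Lazily build and memoize a single instance.
  def self.prototype
    @prototype ||= new
  end

  # The class itself also responds to call, delegating to the prototype,
  # so the class can be handed to Rack as if it were an app.
  def self.call(env)
    prototype.call(env)
  end
end
```

So MyAppSketch.call({}) and MyAppSketch.prototype.call({}) hit the same object, which is why no explicit MyApp.new ever appears in a Sinatra config.ru.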

Syndicated 2011-05-04 12:39:05 from diary at Telent Netowrks

Preforking multi-process Sinatra serving (with Sequel)

Picture the scene. I have a largish Ruby web application (actually, a combination of several apps, all based on Sinatra, sharing a model layer, and tied together with Rack::URLMap), and I want a better way of reloading it on my development laptop when the files comprising it change.

At the same time, I have a largish Ruby web application (etc etc) and I wanted a better way of running several instances of it on the same machine on different ports, because running a one-request-at-a-time web server in production is not especially prudent if you can’t guarantee that (a) it will always generate a response very very quickly, and (b) there is no way that slow clients can stall it. So, I needed something like the thin command, but with more hooks to do stuff at worker startup time that I need to do but won’t bore you with.

And in the what-the-hell-why-not department I see no good reason that I shouldn’t be using the same code in development as is running in production and plenty of good reasons that I should. And a program that basically fork()s three times (for user-specified values of three) can’t be that hard to write, can it?

Version 0 of “thin-prefork” kind of escaped onto github and contains the germ of a good idea plus two big problems and an exceedingly boring name.

What’s good about it? It consists of a parent process and some workers that are started by fork(). There is a protocol for the master to send control messages to the workers over a socket (start, stop, reload, and basically whatever else you decide), and you subclass the Worker to implement these commands. This was found to be necessary because version -1 used signals between parent and child, and it was found eventually and empirically that EventMachine (or thin, or something else somewhere in the stack) likes to install signal handlers that overwrite the ones I was depending on. And at that point I had two commands which each needed a signal, and in accordance with the Zero-One-Infinity Rule I could easily foresee a future in which I would run out of spare Unix signals.
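The control-socket idea reduces to something like this toy sketch: one worker, line-oriented commands over a socketpair, nothing that a stray signal handler can trample (command names here are illustrative, not thin-prefork’s actual protocol):

```ruby
require 'socket'

# Master/worker control channel over a Unix socketpair instead of signals.
parent_end, child_end = UNIXSocket.pair

worker = fork do
  parent_end.close
  # Worker side: obey commands as they arrive on the control socket.
  while (line = child_end.gets)
    case line.chomp
    when "ping" then child_end.puts "pong"
    when "stop" then break
    end
  end
  exit!
end

child_end.close
parent_end.puts "ping"          # master: send a command...
reply = parent_end.gets.chomp   # ...and read the acknowledgement
parent_end.puts "stop"
Process.wait(worker)
```

Adding a new command is then just another case branch (or, with subclassing, another method), with no scarce global resource like signal numbers to ration.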

What’s not so good? Reloading – ironically, the whole reason we set out to write the thing. Reloading is implemented by having the master send a control message to the children, and the children then reload themselves (using Projectr or however else you want to). But when you have 300MB x n children to reload you’d much rather do the reload once in the parent and then kill and respawn the kids than you would have each of the kids go off and do it themselves – that way lies Thrash City, which is a better place for skateboarders than servers. (This would also be a bad thing for sharing pages between parent and child, but I am informed by someone who sounded convincingly knowledgeable that the garbage collector in MRI writes into pretty much every page anyway thus spitting all over COW, so this turns out not to be a concern at present. But someday, maybe – and in the meantime it’s still kinda ugly)

What’s also not so good is that the interaction between “baked in” stuff that needs to happen for some actions – like “quit” – and user-specified customizations is kind of fuzzy, and it’s not presently at all obvious if, for example, a worker subclass command should call super: if you want to do something before quitting, then obviously you should then hand off to the superclass to actually exit, but if you want to define a reload handler then you don’t want to call a non-existent superclass method when you’re done. But how do you know it doesn’t exist? Your worker might be based off another customisation that does want to do something important at reload time. So it’s back to the drawing board to work out the protocol there, though rereading what I’ve just written it sounds like I should make a distinction between notifiers and command implementations – “tell me when X is happening because I need to do something” vs “this is the code you should run to implement X”.

And why does the post title namecheck Sequel? Because my experience with other platforms is that holding database handles open across a fork() call can be somewhat fraught, and I wanted somewhere to document everything I know about how Sequel handles this.

Syndicated 2011-05-03 15:11:12 from diary at Telent Netowrks
