Older blog entries for dan (starting at number 154)

In Soviet Russia, ActiveRecord mocks YOU!

A week ago I attended the Ru3y Manor conference, which was Really Cool. Educational, entertaining, excellent value for money.

One of the talks was by Tom Stuart on Rails vs object-oriented design, which could be summarised as a run through the SOLID principles and a description of how well (or how badly) the affordances in Rails encourage adherence to each principle.

ActiveRecord came in for some stick. The primary offence is against the Single Responsibility Principle, which says that a class should have only one reason to change – or in the vernacular, should do only one thing. This is because AR is both an implementation of a persistence pattern and (usually, in most projects) a place to dump all the business logic and often a lot of the presentation logic as well.

Divesting the presentation logic is usually pretty simple. Decorators (Tom plugged the Draper gem, which I haven’t yet tried but looks pretty cool in the screencast) seem well-equipped to fix that.

But I wish he’d said more about persistence, because it’s a mess. And the root cause of the mess is, I conjecture, that an AR object is actually two things (although only one at a time). First, it reifies a database row – it provides a convenient set of OO-ey accessors to some tuples in a relational database, allowing mutation of the underlying attributes and following of relations. Second, it provides a container for some data that might some day appear in some database – or on the other hand, might not even be valid. I refer of course to the unsaved objects. They might not pass validation, the result of putting them in associations is ambiguous, they don’t have IDs … really, they’re not actually the same thing as a real AR::Model object. But because saving is expensive (network round trips to the database, disk writes, etc) people use them e.g. when writing tests and then get surprised when they don’t honour the same contract that real saved db-backed AR objects do. So, the clear answer there is “don’t do that then”.

Ideally, I think, there would be a separate layer for business functionality which uses the AR stuff just for talkum-to-database and can have that dependency neatly replaced by e.g. a Hash when all you want to do is test your business methods. I suggest this is the way to go because my experiences with testing AR-based classes have not been uniformly painless: when I want to test object A and mock B, and each time I run the test I find a new internal ActiveRecord method on B that needs stubbing, someone somewhere is Doing Something Wrong. Me, most likely. But what? I should be using Plain Old Ruby Objects which might delegate some stuff to the AR instances: then I should decide whether all those CRUD pages should be using my objects or the AR backing, then I should decide how to represent associations (as objects or arrays of objects or using some kind of lazy on-demand reference to avoid loading the entire object graph on each request, and will there need to be a consistent syntax for searching or will I just end up with a large number of methods orders_in_last_week, orders_in_last_month, open_orders each of which does some query or other and then wraps each returned AR object in the appropriate domain object) and whether the semantic distinction between an “aggregation” relation and a “references” relation (an Order has many OrderLines, but a Country doesn’t have many People – people can emigrate) has practical relevance. The length of the preceding sentence suggests that there’s a fair amount to consider. I don’t know of any good discussion of this in Ruby, and the prospect of wading through all the Java/.NET limitations-imposed-by-insufficiently-expressive-languages shit to find it in “enterprise” languages is not one I look forward to. Surely someone must have answers already?
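
To make that slightly less hand-wavy, here is the kind of shape I have in mind – a sketch only, with invented class and attribute names, not a recommendation of any particular gem:

    require "date"
    require "ostruct"

    # Plain old Ruby object holding the business logic; persistence is delegated
    # to whatever record-ish thing it is handed – an ActiveRecord instance in
    # production, a Hash/OpenStruct stand-in when unit-testing the business methods.
    class Order
      def initialize(record)
        @record = record
      end

      def total
        @record.line_items.inject(0) { |sum, li| sum + li.price * li.quantity }
      end

      def overdue?
        !@record.paid && @record.due_on < Date.today
      end
    end

    # In a test, no database and no stubbing of AR internals required:
    #   order = Order.new(OpenStruct.new(:paid => false, :due_on => Date.today - 7,
    #                                    :line_items => []))
    #   order.overdue?   #=> true

Whether the CRUD pages then talk to Order or to the AR class underneath is exactly the sort of question the preceding paragraph complains about, so this settles nothing beyond the testing story.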

There’s other stuff. Saving objects is expensive. Saving objects on every single update is expensive and wasteful when there’s probably another update imminent, so there’s some kind of case to be made for inventing a “to be saved” queue of AR objects which is eventually flushed by saving them once each at most. The flush method could be called from some suitable post-request method in the controller, or wherever the analogous “all done now” point is in a non-Web application. That would probably be a fairly easy task, although it would be no help for the initial object creation, because until we have an id field – and we need to ask the database to get a legitimate value for it – the behaviour of associations is officially anybody’s guess.
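
Something like this, perhaps – names invented for illustration, and no claim that it deals with ordering, transactions or the unsaved-object problem above:

    # A "to be saved" queue: updated objects accumulate here and are written out
    # once each, at most, when flush! is called from an end-of-request hook (or
    # whatever the "all done now" point is in a non-Web application).
    class SaveQueue
      def initialize
        @pending = []
      end

      def enqueue(record)
        @pending << record unless @pending.include?(record)
      end

      def flush!
        @pending.each(&:save!)
        @pending.clear
      end
    end

    # queue.enqueue(order); queue.enqueue(order)   # queued once, however often updated
    # queue.flush!                                 # one save per object, at most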

Rant over, and I apologise for the length but I am running out of time in which to make it shorter. In happier news: Pry – a replacement ruby toplevel that does useful stuff and that can be invoked from inside code. It’s like what Ruby developers would come up with after seeing SLIME.

Syndicated 2011-11-09 11:20:41 from diary at Telent Networks

Inanely great

A lot has been written – and I expect a lot more is yet to be written – about the attention to detail and unique grasp of design aesthetic that Steve Jobs exerted on Apple product development. A reasonable observation and not a new one. But the implication that goes with it which I find curious is that those slacker open source/free software people who are threatening to eat his lunch with Android or (perhaps less convincingly) with Ubuntu have no hope of ever replicating this setup because as they’re volunteer-based they have to spend too much time being nice to their contributors.

Ignoring the quibble that Android’s not actually a very good exemplar of open source development style (development directions are quite obviously set by Google, and at the time I write this there have been two major releases since they even pushed any open source stuff out at all), this argument falls down because it’s simply not true. Free software projects can be very good indeed at maintaining exacting standards in areas that they care about, and not apparently caring too much whose toes they tread on in the process – it’s just that the areas they care about are much more related to code quality and maintainability than typography and exact shades of yellow.

Taking the Linux kernel for an example, the particular story that prompted this observation was the Broadcom wireless drivers contribution, but I could add to that: Reiserfs, nvidia ethernet, Intel ethernet drivers, Android wake locks, and a zillion other less high-profile cases where badly coded patches have not been accepted, even when the rejection is due to something as trivial as whitespace[*]. (OK, maybe I was wrong to say they don’t care about typography ;-)) So, the social/organisational structures exist for an open source project to be quite incredibly demanding of high standards and yet remain successful – the question of why they don’t extend these standards to external factors and “UX” probably has to remain open. And don’t tell me it’s because they don’t appreciate good design when it is on offer, because the number of Macs I see at conferences invalidates that hypothesis straight off.

[*] I am reasonably sure this is not an exaggeration, although I can no longer find the mail from when it happened to me so I may be misremembering.

Syndicated 2011-10-25 09:35:17 from diary at Telent Networks

Pluto

My previous entry was not just a retro whinge about today’s centralised and balkanised Internet, but also a run up to a description of how things could be different. My efforts on and off over the last few weeks to make that difference have recently been blocked by too-much-$DAYJOB, so maybe this is a good time to stop coding and talk about it a bit.

When I was first playing around with the idea of a distributed social network my focus was on duplicating the interesting bits of Facebook, and one of the reasons I concluded it wasn’t really ever worth pursuing was that Facebook already exists and nobody (to a first approximation) needs an empty duplicate of it. If you want a network where you can tell your friends what you had for breakfast and post cat videos, you want it to be the network that your friends are on.

But in the course of thinking about how to implement it and reading about Atompub, I realised that it showed the way to something subtly different. And when I thought about that a bit more I realised I’d reinvented the blog aggregator. Um. But this is the threaded blog aggregator, which is better.

The Embrace

Well, the logic is unassailable: there are already lots of people on the internet publishing their thoughts using Atom (or its gelatinous structural isomorph RSS): what we need is an app that sucks all their posts, sorts them into categories (which we are calling “channels”), and allows the user to post their own articles (either ab initio or in reply to those they read) into the same channels. You can notify the people you’re replying to by sending them a copy of your reply (as an Atom POST to their published feed url, falling back to Trackback or Pingback or Slingback or Stickleback or whatever if that doesn’t work), and you can incorporate their replies to your articles in the same way when they come in. Stick a UI on the front that presents a trn-style threaded view of all unread articles by all authors in the channel, et voilà, you’ve just created a conversational view of stuff that’s out there already. And by and large it’s much better stuff than “paste this as your status and tag three people”.
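
The “send them a copy of your reply” step is about this much code – the threading-extension namespace is real, everything else (method name, argument shapes) is illustrative rather than a settled interface, and real code would escape the XML properly:

    require "net/http"
    require "uri"

    # POST a reply entry to another participant's published feed URL, marking
    # which of their entries it replies to via the Atom threading extension.
    def post_reply(feed_url, in_reply_to_id, title, body)
      entry = %{<entry xmlns="http://www.w3.org/2005/Atom"
             xmlns:thr="http://purl.org/syndication/thread/1.0">
        <title>#{title}</title>
        <thr:in-reply-to ref="#{in_reply_to_id}"/>
        <content type="text">#{body}</content>
      </entry>}

      uri = URI.parse(feed_url)
      request = Net::HTTP::Post.new(uri.request_uri,
                                    "Content-Type" => "application/atom+xml;type=entry")
      request.body = entry
      Net::HTTP.new(uri.host, uri.port).request(request)
    end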

The Extension

How do we turn that into a distributed resilient blah system like Usenet?

The key bit of NNPP was that each node answers proxy requests on behalf of its neighbours, for articles it’s loaded from its neighbours. So, if one of your usual feed sources is offline, you can fetch their articles from someone else who reads them. Combine that with PubSubHubbub and add some yet-to-be-decided peer-to-peer negotiation protocol so that a group of nodes can decide between themselves which will be the hub and which will subscribe to it.

This does make the issue of identity a bit more pressing: what’s to stop node B altering articles published by A, or even introducing entirely new ones that purport to come from A? Crypto, that’s what. I don’t give a stuff whether the name you go by is what your government calls you, but I do want to know whether, when someone with your moniker is claiming to have written article N, it is the same someone who previously wrote articles 1,2,3,… N-1. So, you get a PGP key (or some other asymmetric peer-to-peer public-key encryption system that doesn’t depend on a centralised certification authority). Then if the key associated with your feed changes without prior notification, my client shows me a big red warning that says you probably aren’t who you say you are. Key management by key continuity a.k.a. “what ssh does”. Perhaps once you’ve been posting stuff I like for a while I’ll sign your key as well, and other people – at least, other people who like what I post – will be more likely to trust you as a result.
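
Key continuity is cheap to sketch, too. The class and storage below are made up for illustration – in practice the fingerprints would be persisted and the web-of-trust signatures hung off them – but the trust-on-first-use logic is all there is to it:

    require "digest"

    # Remember the first key fingerprint seen for each feed URL and complain
    # loudly if it ever changes ("what ssh does").
    class KeyContinuity
      def initialize(store = {})
        @store = store                    # feed_url => hex fingerprint
      end

      def check(feed_url, public_key_material)
        fingerprint = Digest::SHA256.hexdigest(public_key_material)
        known = @store[feed_url]
        if known.nil?
          @store[feed_url] = fingerprint  # trust on first use
          :first_seen
        elsif known == fingerprint
          :ok
        else
          :changed                        # time for the big red warning
        end
      end
    end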

(NNPP also contains an outline sketch of a DNS protocol replacement. I presently think this is an optional extra, but that depends on how offensive you plan to be to deep-pocket corporates who will complain to your naming authority.)

Spam? No magic solutions, I’m afraid, but the “trusted introducer” thing goes some way. If people that you don’t already read send you articles that aren’t signed by keys you have a trust relationship with, they pile up in your “slush pile” (the analogue of the G+ Incoming feed) until you decide to look at them – you might decide to apply spam filtering tools of the same kind as we use for email, or you might just decide to junk them sight unread.

The End

It’s called Pluto. Because Planet is “a feed aggregator application designed to collect posts from the weblogs of members of an Internet community and display them on a single page” (thank you, Wikipedia) and Pluto is a dwarf planet. Sometime soon, I hope, there will be code on Github.

Catchy summary points:

  • we care about content and conversation – I’m happy to let Facebook and Twitter corner the market in ephemera: this is for keepers
  • protocols not platforms – we interoperate on equal terms with anything that speaks Atompub (and intend to provide adaptors for RSS or Facebook or scraped content or even an email-to-pluto gateway) - all the other authentication and distribution stuff is strictly opt-in

Syndicated 2011-09-26 20:19:16 from diary at Telent Networks

Social notworking

After a bit over a month using Google Plus (with admittedly decreasing enthusiasm over the course of that time) I have no firm conclusions about what it’s good for, except that it’s incredibly good at reminding me how much I miss Usenet.

I could compare it with the other networks that people consider it “competition” for: it doesn’t replace Facebook – for me anyway – because the whole world isn’t on it, and that means I can’t use it to stay in touch with friends and family. It doesn’t replace Twitter as the lack of a message length limit means it’s useless for epigrams (which I like) and not much cop for status updates either (which I can live without) – though it does work as “source of interesting links” which in my opinion is the third arm of Twitter utility. And Google will, probably, be disappointed to learn that it doesn’t replace LinkedIn because despite the best efforts of the Real Names policy enforcers, it still isn’t quite boring enough. Yet, anyway.

But that’s enough about Google+, what about Usenet?

  • The unit of discussion was an article. Not a two-line throwaway comment or a single bit of “me likes this” information. When you read something on Usenet that you felt strongly enough to reply to, you hit ‘r’, you got the scary warning about “hundreds if not thousands of dollars”, and it dumped you in a full screen text editor where you could compose your pearl of wisdom. Sure, so you could alternatively compose your “ME TOO!”, but it wasn’t a teeny text widget which practically demands the latter response: the affordances were there for writing something with meat
  • It was decentralised. No capricious site owner could take your comment down because someone might find it offensive, or ban all discussion of certain topics, or refuse to allow you to post links to other places, or even decide that he was going to pull the plug completely and delete all your words. You might be reading this and thinking Godfrey vs Demon and you’d be entirely correct that it wasn’t completely uncensored in practice – nor, I contend, should it have been – but there was at least a bit more effort involved in getting a post removed than clicking the ‘I am offended by this picture of a breast-feeding woman’ button, and that made potential complainants think a bit more carefully about whether it was worth it
  • It had user interfaces that didn’t get in the way. Really. I could sit in front of my computer for hours pressing only the space bar (maybe alternating with the ‘n’ key in less interesting groups) and it would keep the content coming. (And I did. I would blame my degree class on Usenet, if it weren’t that the time I spent fiddling with Linux was in itself sufficient to leave approximately 0 time for studying. But I digress.)

The reasons it’s dead are well-rehearsed, and boil down to this: it couldn’t cope with universal access. It was built back in the days when people had access through their institutions or employers, and for the most part knew they could lose it by acting like jerks – or at least by acting like jerks consistently enough and outrageously enough. Come the personal internet revolution – the Endless September – it had no protection against or meaningful sanctions for spammers and trolls, and so blogs/web forums sucked away most of the people who wanted to just talk, leaving behind people who were by and large too much concerned with the minutiae of meta and much less enthused about the actual posting of content.

But it did do stuff that nobody else has replicated since.

Other people:

Syndicated 2011-09-18 20:36:00 from diary at Telent Networks

Openwrt "backfire" first impressions

Some notes on my first impressions of Openwrt 10.03 “Backfire”

Having happily run a Draytek Vigor 2600 in my last home for 2-3 years, the obvious thing to do when my exchange was upgraded to 21CN (that’s ADSL2+ to readers outside the UK) was to buy the same brand again and this time go for a model that supports the newer standard. I bought a 2700 on ebay on the basis that comparing the model numbers indicated it should be better by at least 64 (octal, right?). It wasn’t. Although I can’t prove that it’s the router’s fault that it drops out twice a week (we also moved house at about the same time, it could be the line), I can say it’s not a mark of quality that when I access its web interface (e.g. to force a redial) I get an HTTP timeout on at least one of the three frames in the frameset – if you’re going to use framesets for your router admin interface, it would probably be smart to give it a web server that can answer more than two queries at the same time. And its syslog client has an approach to the standards which is most charitably described as “improvisational”. And I’ve talked before about the missing options for second subnet support that aren’t really missing.

Push eventually came to shove last month when my OfflineIMAP process decided that 2GB a day was a reasonable amount of traffic to incur checking my email (I disagree, for the record) and I hit my ISP’s monthly download allowance, and the router offered absolutely no help whatever in finding the source of the problem (between one wired computer, three wireless laptops, assorted smartphones and an iPod, and a wifi-enabled weighing scale it could really have been anywhere). So it was time to shove it, preferably in favour of something that would run Linux. Like an early WRT-54G won on ebay, coupled with a lightly hacked BT Voyager 220V left behind in a previous flat by a previous previous tenant and configured in bridge mode for the ADSL hookup.

Openwrt seems to be the most Debian-like of the popular Linux-based router firmwares (that’s intended as a compliment), in that it has a package manager, and it likes to be configured from the command line by editing files. My observations based on about 4 hours playing with it:

  • the documentation is fragmented and lacks any clear sense of order or editorial control. This is listed first not because it’s most important (it isn’t) but because it’s the first thing I noticed. Seriously, a Wiki is not a substitute for a user manual, and I say that as someone who’s written one. When resorting to Google you will find that a lot of what you read there is out of date. For example, there is no longer an ipkg command, but opkg seems to replace it.
  • It has a web interface called Luci. It’s a bit slow and clunky - though still better than the Vigor router’s was – but it’s helpful for getting started. I was confused by the interaction between the various ‘Save’, ‘Apply’, ‘Save and Apply’ buttons at the bottom of the page and the ‘Unsaved Changes’ link in the menu bar at the top: on the ‘Firewall’ page, for example, clicking ‘Save’ at the bottom causes the status at the top to go from ‘Changes: 0’ to ‘Unsaved Changes: 1’. To my way of thinking, clicking Save should reduce the number of unsaved changes not increase them, but this is probably just bad labelling.
  • I say it likes to be configured by editing files: it is however fussy about which files. If there’s a file under /etc/config with a relevant-looking setting in it, edit that in preference to whatever the upstream app’s usual config file would be, then run uci commit – although actually you might not need to run uci commit – see this lovely thread for the full confusing detail – then run the relevant /etc/init.d/foo scripts to restart services as needed (the shape of this workflow is sketched after this list). I am not sure if there’s a clear rule for what gets overridden or overwritten if you edit config files directly and conflict with UCI, but I suspect it’s pretty ad hoc.
  • the hardware doesn’t speak ADSL, hence the need for a separate box to do that. I set the Voyager up to do PPPoE and the WRT likewise: in Luci look for Network → Interfaces → WAN and set the Protocol to PPPoE: this should get you Username and Password boxes in which you put whatever your ISP told you to.
  • the wifi did not work no matter what I did in Luci, but eventually I found the problem was in /etc/config/wireless which had an entirely bogus mac address in its definition of radio1: I replaced it with the address printed by ifconfig wlan0 and suddenly everything started working.
  • it runs an ssh daemon, which is nice. Although it will authenticate using keys, it won’t look at /root/.ssh/authorized_keys as openssh does. I used the web interface to add my key, which worked fine.
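
For the record, here is the shape of that edit-commit-restart workflow for the PPPoE setup described above. Treat the option names and the eth0.1 interface as illustrative: they vary between hardware and Openwrt versions.

    # /etc/config/network – the WAN section after choosing PPPoE (excerpt)
    config 'interface' 'wan'
        option 'ifname'   'eth0.1'
        option 'proto'    'pppoe'
        option 'username' 'whatever-your-isp-told-you'
        option 'password' 'ditto'

    # then commit the UCI changes and restart the service that owns them
    uci commit network
    /etc/init.d/network restart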

Summary: although not currently suitable for the non-technical end user, if you have some Linux experience and a few hours to screw around with Google, it all eventually works fine. And I can run tcpdump on it, which more than makes up for all these minor problems 64 times over. Get in.

More on the BT Voyager in a later blog entry, but I leave you with some instructions for unlocking it which you may need if you are sensible enough to use an ISP that isn’t BT Retail.

Syndicated 2011-06-22 10:22:56 from diary at Telent Networks

ANN: Goldleaf - scripted Debian kvm image creation

After two weeks of work writing scripts to automate the creation of new production/test boxes for $WORK, and two days to get it into a state where I could post the result on Github without spilling all our internal secrets, I am pleased to announce Goldleaf. From the README:

Goldleaf is a tool for creating consistent and repeatable KVM Debian system images in which the packages installed, the versions of each package, and the answers to “debconf” configuration questions are specified precisely.

The package manifest and configuration files are text-based and are intended to be kept under Git version control.

The name ‘goldleaf’ comes from the article Golden Image or Foil Ball by Luke Kanies. On which note: it is unclear to me whether he would see this script as a good thing or as a bad thing, but I would argue that even if you reduce the number of images under your control to a single “stem cell” image, being able to recreate (any current or previous version of) that image on-demand is just as valuable as being able to recreate (any current or previous version of) any other application.

The README has full instructions. You can see the working example at https://github.com/telent/goldleaf-example and you can git clone or download (or just nose around) from https://github.com/telent/goldleaf

Feedback welcome. Bugs and stuff to the Github issue tracker

Syndicated 2011-06-12 21:27:18 from diary at Telent Networks

Bullet time

Short bits

  • HTC have apparently reversed their policy on locking their phone bootloaders – i.e. in future, they say they won’t do it any more. I find it interesting that LWN have reported this as “The CEO of HTC has seemingly posted on Facebook that its phones will not be locked down” whereas every other report I’ve seen has assumed it’s real, and only LWN have thought to wonder if Facebook really is an official HTC communication channel. Anyway, if it does prove to be true (I hope so) I will be reversing my previous recommendation against buying their phones
  • Here is a real working example (indeed, the primary use case) for use of thin-prefork. This blog is already running on it, and as of Tuesday so will be $WORK. And then maybe I can get back to some actual work. Hurrah.
  • After finishing the thin-prefork integration I spent some time on Friday trying to sort out $WORK’s exception handling. The exception handling stuff in Sinatra is … kind of involved, and if you test it by inserting
    raise Exception, "Error happens here"
    into a route, you may spend quite a while wondering why it doesn’t seem to be working properly. I wanted Sinatra to pass all exceptions up to Rack where I could use Rack::MailExceptions to send me email: eventually by reading sinatra/base.rb for a while I find (1) that enable :raise_errors, perhaps contrary to its documentation, doesn’t raise an error if there’s an error stanza in the application which would catch it; (2) that the default Sinatra::Base class installs exactly such an error clause for the base Exception class. So you may want to change your test code to use StandardError or something instead (a minimal sketch follows this list).
  • Having done that, you will then find that Rack::MailExceptions requires Tmail, which has trouble with Ruby 1.9, and then you will find that the mail it sends you is not laid out in any order you might reasonably hope for – the errant URL is buried two thirds of the way down – and eventually you will decide that you might as well just copy the entire file into your own code and edit it for your requirements. Which is what I did. There’s a ticket to fix the Tmail dependency: apparently porting it to the newer Mail gem is a non-starter due to its needing active_support, but it appears that recent versions of Mail no longer need active_support anyway, so maybe that decision can now be revisited.
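
For what it’s worth, here is a minimal sketch of the arrangement the two bullets above are groping towards – reconstructed for illustration rather than lifted from $WORK, with a hand-rolled middleware standing in for Rack::MailExceptions:

    require "sinatra/base"

    class BoomError < StandardError; end   # not Exception, for the reason above

    class App < Sinatra::Base
      set :raise_errors, true              # hand exceptions up to Rack
      set :show_exceptions, false          # don't render Sinatra's own error page

      get "/boom" do
        raise BoomError, "Error happens here"
      end
    end

    # Upstream middleware sees the exception and reports it (Rack::MailExceptions
    # from rack-contrib, or something hand-rolled like this) before answering 500.
    class ExceptionReporter
      def initialize(app)
        @app = app
      end

      def call(env)
        @app.call(env)
      rescue StandardError => e
        warn "#{env['PATH_INFO']}: #{e.class}: #{e.message}"   # send mail here instead
        [500, { "Content-Type" => "text/plain" }, ["internal error\n"]]
      end
    end

    # config.ru:
    #   use ExceptionReporter
    #   run App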

Syndicated 2011-05-28 20:41:16 from diary at Telent Networks

Give me back my event loop

My new phone has now been emancipated, thanks in part to Atheer-pronounced-Arthur (or possibly vice versa) at the “Root” internet cafe on Edgware Road. They have an XTC Clip and he was able to do the S-OFF thing for me in about ten minutes and for the princely sum of £10. Recommended. I have been looking at AOSP build instructions, but actually doing the build and flashing the phone with a nice clean 2.3.4 Sense-free system will have to wait until I can devote a few more mental cycles to it.

In between the distractions of Shiny! New! Toy! I have been working on the projectr/thin-prefork pair – I am still reasonably convinced that they should be independent modules, though as I always seem to end up hacking on both at once I worry about the degree of coupling between them – to impose some sense on the interface for extending the thin-prefork server. Which I think is 80% there, but this morning I thought it was 100% there until I started trying to use it for real, so that’s a little bit annoying.

Which brings us to the rant/plea for today, as indicated in the title. Hands off my event loop! I’m sure I’ve already said this in other contexts and with regard to other platforms, but: I am not going to devote my process to calling Oojimaflip.run! when there are other things it should be doing concurrently with watching for Oojimaflips, and I see no reason either to start a new thread (or process) exclusively for your use when you could have just written a method that says whether there are fresh oojimaflips and another to say what they are.

I am prompted to say this by rb-inotify, which is a (quite nicely written) wrapper around some kernel functionality that communicates via a file descriptor. I’d like a wrapper like this to (1) give me the file descriptor so I can call Kernel.select on it, along with all the other files I’m looking at; (2) give me a method which I will call when select says the fd is ready to read, which will read them and (3) digest them into beautiful Ruby-friendly Event objects. What I’ve got is about two out of three (good odds if you’re Meatloaf): there is a public method #to_io whose return value I can plug into select, there are beautiful Ruby-friendly Event objects, but to get those objects, unless I’m overlooking something (and I don’t mean that to sound passive-aggressive), I have to run one cycle of the rb-inotify event loop: call the #process method which calls my callback once per event, which has to find somewhere to store the events it’s passed, and then check the stored events when control eventually unwinds from #process and returns to me.

I’m actually being a bit harsh here, because the event-parsing code is there in the internals and not hard to grab. In the basement lavatory behind the “beware of the leopard” sign, I find a method called read_events, which if you don’t mind calling undocumented code can be used something like this. The preceding call to select would be better replaced by some code to put the file into non-blocking mode, but that’s a refinement that can wait for another time.
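
Concretely, and using only the methods already named above (read_events remains undocumented, so treat this as a sketch that may break with future versions):

    require "rb-inotify"

    notifier = INotify::Notifier.new
    notifier.watch("/tmp", :create, :modify) { }   # empty callback; we read events ourselves

    loop do
      # our own select loop: the inotify fd sits alongside whatever else we watch
      ready, = IO.select([notifier.to_io, $stdin])
      if ready.include?(notifier.to_io)
        notifier.read_events.each do |event|       # the undocumented method named above
          puts "inotify: #{event.name}"
        end
      end
      break if ready.include?($stdin) && $stdin.gets.nil?
    end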

I have opened an issue on github saying something similar, which I expect is far more likely to have a useful effect than posting on this obscure blog. But, yeah, I like ranting.

Syndicated 2011-05-20 13:19:57 from diary at Telent Networks

Desire S: don't

If I had known a week ago about the lengths to which HTC are now going to prevent people from using their phones, I would have bought some other phone (like maybe the LG Optimus 2X) instead – and if you are the kind of person who prefers to choose your own software rather than to let device manufacturers and mobile phone networks do it for you, I would recommend that you don’t buy it either.

That’s right, as far as I can determine it’s not (currently, at least) rootable. Finding this out is made harder than it needs to be because for any conceivable relevant search term, Google seems to prefer the xda-dev forum – by volume, 98% composed of script kiddies saying “OMG LOLZ” – over any results from people who know what they’re talking about. But here’s the summary:

  1. as delivered, there is no root access. Well, so far so perfectly standard
  2. there are a couple of promising-sounding exploits which work on other similar devices. The psneuter exploit – or at least the binary copy of psneuter of unknown provenance that I downloaded from some ad-filled binaries site – doesn’t work, erroring out with failed to set prot mask. GingerBreak, though, will get you a root shell. Or at least it will if you get the original version that runs from the command line and not the APK packaged version that a third party has created for the benefit of xda-dev forum users.
  3. the problem is that GingerBreak works by exploiting bugs in vold and as the side-effect is to render vold unusable, you can’t get access to your sd card after running it. So, you think, “no problem, I’ll remount /system as writable and install su or some setuid backdoor program that will let me back in”. This doesn’t work although it looks like it did right up until you reboot and notice your new binary has disappeared.
  4. (incidentally, if you run GingerBreak again after rebooting the phone and it fails, it’s because you need to remove /data/local/tmp/{sh,boomsh} by hand.)
  5. The explanation for this freakiness is that /system is mounted from an eMMC filesystem which is hardware write-protected in early boot (before the Linux kernel starts up) but Linux doesn’t know this, so the changes you make to it are cached by the kernel but don’t get flushed. There is a kernel module called wpthis designed for the G2/Desire Z which attempts to remove the write-protect flag by power-cycling the eMMC controller, but it appears that HTC have somehow plugged this bug on the Desire S. For completeness’ sake, I should add that every other mounted partition has noexec and/or nosuid settings, so /system is the only possible place to put the backdoor command.
  6. Um.

Avenues to explore right now: (1) the write-protect-at-boot behaviour is apparently governed by a “secu flag” which defaults to S-ON on retail devices but can be toggled to S-OFF using a hardware device called the “XTC Clip”, available in some phone unlocking shops. (2) Perhaps it is possible to become root without calling setuid by installing an APK containing a started-on-boot service and hacking /data/system/packages.xml so that the service launches with uid 0. (3) wait and see if anyone else has any good ideas. (4) study the Carphone Warehouse web site carefully and see if they have an option to return the phone for exchange, bearing in mind that I’ve been using it for seven days. Obviously those last two options are mutually incompatible.

Summary of the summary: HTC, you suck.

Incidentally, if you want to build the wpthis module for yourself there’s not a lot of useful documentation on building Android kernels (or modules for them) for devices: everything refers to a page on sources.google.com which appears to be 404. The short answers: first, the gcc 4.4.0 cross-compiler toolchain is in the NDK; second, the kernel source that corresponds to the on-device binary, at the time I write this, can be had from <http://dl4.htc.com/RomCode/Source_and_Binaries/saga-2.6.35-crc.tar.gz> (linked from <http://developer.htc.com/>); third, <https://github.com/tmzt/g2root-kmod/> doesn’t compile cleanly anyway: you’ll need to scatter #include <linux/slab.h> around a bit and to copy/paste the definition of mmc_delay from linux/drivers/mmc/core/core.h

Incidentally (2): <http://tjworld.net/wiki/Android> has lots of interesting stuff.

At this point, though, my advice remains that you should buy a different phone. Even if this one is rooted eventually (and I certainly hope it will be), HTC have deliberately made it more difficult than it needs to be, and why reward that kind of anti-social behaviour?

Syndicated 2011-05-15 13:10:27 from diary at Telent Networks

Testing a monolithic app - how not to

In the process of redesigning the interfaces to thin-prefork, I thought that if it’s going to be a design, not a doodle, I’d try to do it the TDD way and add some of that rspec goodness.

I’m not so proud of what I ended up with.

There are a number of issues with this code that are all kind of overlapped and linked with each other, and this post is, unless it sits as a draft for considerably longer than I intended to spend on it, going to be kind of inchoate because all I really plan to do is list them in the order they occur to me.

  • The first and most obvious hurdle is that once you call #run!, the server process and its kids go off and don’t come back: in real-world use, any interaction you might have with it after that is driven by external events (such as signals). In testing, we have to control the external environment of the server to give it the right stimuli at the right time, then we need some way to look inside it and see how it reacts. So we fork and run it in a child process. (Just to remind you, thin-prefork is a forking server, so we now have a parent and a child and some grandchildren.) This is messy already and leads to heuristics and potential race conditions: for example, there is a sleep 2 after the fork, which we hope is long enough for it to be ready after we fork it, but is sure to fail somewhere and to be annoyingly and unnecessarily long somewhere else, especially as the number of tests grows (one way to tighten this is sketched after the list).
  • We make some effort to kill the server off when we’re done, but it’s not robust: if the interpreter dies, for example, we may end up with random grandchild processes lying around and listening to TCP ports, and that means that future runs fail too.
  • Binding a socket to a particular interface is (in Unix-land) pretty portable. Determining what interfaces are available to bind to, less so. I rely on there most likely being a working loopback and hope that there is additionally another interface on which packets to github.com can be routed. I’m sure that’s not always true, but it’ll have to do for now. (Once again I am indebted to coderr’s neat trick for getting the local IP address – and no, gethostbyname(gethostname()) doesn’t work on a mobile or a badly-configured system where the hostname may be an alias for 127.0.0.1 in /etc/hosts.)
  • We need the test stanzas (running in the parent code) somehow to call arbitrary methods on the server object (which exists in the child). I know, we’ll make our helper method start accept a block and install another signal handler in the child which yields to it. Ugh
  • We needed a way to determine whether child processes have run the correct code for the commands we’re testing on them. Best idea I came up with was to have the command implementation and hook code set global variables, then do HTTP requests to the children which serve the value of those global variables. I’m sort of pleased with this. In a way.
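
As a footnote to the sleep 2 point above, the obvious incremental improvement – sketched here with invented file names and port, not lifted from the actual spec – is to poll the port the forked server should be listening on rather than guessing at a delay:

    require "socket"
    require "timeout"

    # Wait until something accepts connections on host:port, instead of `sleep 2`.
    def wait_for_port(host, port, timeout = 5)
      Timeout.timeout(timeout) do
        begin
          TCPSocket.new(host, port).close
          true
        rescue Errno::ECONNREFUSED, Errno::ETIMEDOUT
          sleep 0.05
          retry
        end
      end
    end

    server_pid = fork do
      exec "ruby", "spec/fixtures/start_server.rb"   # hypothetical server entry point
    end

    begin
      wait_for_port("127.0.0.1", 8080)               # assumed test port
      # ... drive it with signals / HTTP requests here ...
    ensure
      Process.kill("TERM", server_pid) rescue nil
      Process.wait(server_pid) rescue nil
    end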

Overall I think the process has been useful, but the end result feels brittle, it’s taken nearly as long as the code did to write, and it’s still not giving me the confidence to refactor (or indeed to rewrite) blindly that all the TDD/BDD advocates promote as the raison d’embêter

The brighter news is, perhaps, that I’m a lot more comfortable about the hook/event protocol this time round. There are still bits that need filling in, but have a look at Thin::Prefork::Worker::Lifecycle and module TestKidHooks for the worker lifecycle hooks, and then at the modules with names starting Test... for the nucleus of how to add a custom command.

Syndicated 2011-05-11 15:56:25 from diary at Telent Networks

