Older blog entries for Stevey (starting at number 603)

So I have a wheezy desktop

I look after a bunch of servers, which is no surprise given that I work for Bytemark, but I only touch a very small number of desktop systems.

precious - My desktop

This is the machine upon which I develop, check my personal mail, play my music, etc.

steve - My work machine

To keep the working-from-home separation going I have a machine I use only for work purposes.

travel/travel2 - EEPC box

I have two EEPC machines, a personal 701 and a work-provided 901.

Honestly these rarely get used. One is for when I'm on holiday or traveling, the second for when I'm on-call.

Yesterday I got round to upgrading both the toy EEPC machines to wheezy. The good news? Both of them upgraded/reinstalled easily. Hardware was all detected, sleeping, hibernation, wifi, etc all "just worked".

Unfortunately I am now running GNOME 3.x and the experience is unpleasant. This is a shame, because I've enjoyed GNOME 2.x & bluetile for the past few years.

The only other concern is that pwsafe appears to be scheduled for removal from Debian GNU/Linux - the list of open bugs shows some cause for that, but there are bugs there that are trivial to fix.

For the moment I've rebuilt the package and if I cannot find a suitable alternative - available for squeeze and wheezy - then I will host the package on my package repository.

In conclusion: Debian, you did good. GNOME, I've loved and appreciated you for years, but you might not be the desktop I want these days. It's not you, it's me.

Syndicated 2013-04-06 04:27:58 from Steve Kemp's Blog

30 Mar 2013 (updated 30 Mar 2013 at 12:12 UTC) »

Time passes, Thorin sits down and starts singing about gold.

This weekend I have mostly been reading Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time.

In modern times we divide the earth up into lines of latitude and longitude, as wikipedia will explain.

Finding your latitude is easy; finding your longitude is a difficult process, and it was vitally important when people started to sail large distances. The book contains lots of stories of sailors being suddenly surprised by the appearance of land, because they'd misjudged their position.

Having four ships, containing garlic, pepper, and other goods of value exceeding the total wealth of the UK, sink all at once was a major blow. Not to mention the large number of sailors who lost their lives.

There were several solutions proposed, involving steady hands and telescopes, etc, but the book mostly discusses John Harrison and his use of watches/clocks.

John Harrison was featured in Only Fools & Horses, as the designer of the watch that made Del Boy & Rodney millionaires.

That episode was "Time On Our Hands".

The idea of using a clock is that you take one with you, set to the time of your departure location. Using that clock you can compare the time to the local time, found by viewing the sun, etc. Calculating the difference between the two times lets you see how far away, in degrees, you are from your port, and thus how far you've traveled.

Until Harrison came along clocks weren't accurate enough to keep time at sea. His clocks would lose a second a month; until then clocks might lose 15 minutes a day. (With more variation depending on temperature, location, and pressure. Clearly things like pendulum clocks weren't suitable for rocking ships either.)
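The arithmetic behind the method is pleasingly small: the Earth turns 360 degrees in 24 hours, so each hour of time difference is 15 degrees of longitude. A quick illustration of my own (not from the book):

```javascript
// The Earth rotates 360 degrees in 24 hours: 15 degrees per hour.
const DEGREES_PER_HOUR = 360 / 24;

// Longitude offset, in degrees, from the reference port, given the
// difference between the ship-board clock and local solar time.
function longitudeFromTimeDiff(hours) {
  return hours * DEGREES_PER_HOUR;
}

// Why clock accuracy mattered so much: a clock that has drifted by
// some minutes puts your position out by this many degrees.
function errorFromDrift(minutes) {
  return (minutes / 60) * DEGREES_PER_HOUR;
}
```

So a three-hour difference means you are 45 degrees from your port, and a clock that has drifted the 15 minutes a pre-Harrison clock could lose in a single day mislocates you by 3.75 degrees, which at the equator is hundreds of kilometres.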

All in all this book was a great read; there were mentions of Galileo, Newton, and similar folk who we've all heard of. There was angst, drama, deceit, and some stunning craftsmanship.

Harrison was a woodworker, and he made his clocks out of wood (plus brass where necessary). He chose fast- or slow-grown wood depending on purpose, and using wood that secreted oils naturally allowed him to avoid lubrication - which improved accuracy, as lubricants tend to thin or thicken when temperature and pressure change.

A lovely read, thank you very much.

In other news I received several patches for my templer static-site generator, and this has resulted in much improvement. I've also started using Test::Exception now, and slowly updating all my perl code to use this.

Syndicated 2013-03-30 10:47:46 (Updated 2013-03-30 12:12:41) from Steve Kemp's Blog

Want to fight about it?

So via hackernews I recently learned about fight code, and my efforts have been fun. Currently my little robot is ranked ~400, but it seems to jump around a fair bit.

Otherwise I've done little coding recently:

I'm pondering libpcap a little, for work purposes. There is a plan to write a daemon which will count incoming SYN packets, per-IP, and drop access to clients that make "too many" requests "too quickly".

This plan is a simple anti-DoS tool which might or might not work in the real world. We do have a couple of clients that seem to be DoS magnets and this is better than using grep + sort against apache access logs.

For cases where a small number of source IPs make many, many requests it will help. For the class of attacks where a huge botnet has members making only a couple of requests each it won't do anything useful.
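The counting part of such a daemon is simple enough to sketch. Here is a minimal sliding-window counter (all names are my own illustration, not the real tool; the actual daemon would feed it source addresses parsed from libpcap):

```javascript
// Sliding-window counter: track SYN arrival times per source IP, and
// flag any address sending more than `maxSyns` within `windowMs`.
// Illustrative sketch only - not the real daemon's code.
class SynTracker {
  constructor(maxSyns, windowMs) {
    this.maxSyns = maxSyns;
    this.windowMs = windowMs;
    this.seen = new Map(); // ip -> array of timestamps (ms)
  }

  // Record one SYN; returns true if the source is now over the limit
  // and should be blocked.
  record(ip, now) {
    const times = this.seen.get(ip) || [];
    // Drop timestamps that have aged out of the window.
    const fresh = times.filter((t) => now - t < this.windowMs);
    fresh.push(now);
    this.seen.set(ip, fresh);
    return fresh.length > this.maxSyns;
  }
}
```

The real work, of course, is in the packet capture and in actually dropping access (a firewall rule, say), not in this bookkeeping.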

We'll see how it turns out.

Syndicated 2013-03-24 10:08:31 from Steve Kemp's Blog

Handling bookmarks?

I've a collection of about 500 bookmarks which I've barely touched for a few years. I started organizing them late the other night, because I'd been off work sick for two days and that was about the most I felt up for doing with a computer.

The intention was to "tidy" them, and then set up some way of syncing them across browsers/computers. In the end I didn't like any of the syncing plugins I could find - xmarks, etc - so I decided to take a step backwards.

I'd exported my bookmarks to an HTML page, via firefox, before I started, and then later, in a fit of pique, I deleted the whole damn lot of them.

So now a few years worth of bookmarks are stored in a single HTML file. But wait, we can use revision control can't we? We can host that file on github/similar. We can rely upon merges to deal with conflicts - simple if we just add lines to the end, or delete lines.

Maybe that's the best way to store bookmarks? I updated the bookmark file to read:

<ul>
<li tags="debian, personal"><a href="http://www.debian-administration.org/">Debian Admin</a></li>
..
</ul>

Adding "tags" to the LI-container and then some simple jQuery code gave me the ability to search/filter the bookmarks and auto-populate tags.
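Stripped of the jQuery DOM plumbing, the core of that filtering is nothing clever - match a chosen tag against each bookmark's tag list, and collect the distinct tags for auto-population. A sketch of that core (made-up data and names, not the actual code from my demo):

```javascript
// Each object mirrors one <li>: its title plus the "tags" attribute
// split into an array. Illustrative data, not my real bookmark file.
const bookmarks = [
  { title: "Debian Admin", tags: ["debian", "personal"] },
  { title: "jQuery docs", tags: ["javascript"] },
];

// Return the bookmarks carrying the given tag; on the real page the
// same test drives showing/hiding the <li> elements.
function filterByTag(items, tag) {
  return items.filter((b) => b.tags.includes(tag));
}

// Collect the distinct tags so the UI can auto-populate a tag list.
function allTags(items) {
  return [...new Set(items.flatMap((b) => b.tags))].sort();
}
```

The jQuery version just reads the `tags` attribute off each LI element and toggles visibility with the same logic.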

A small example placed online here:

The obvious comment is that this makes adding new bookmarks a bit harder, but we'll see. The javascript works in the browsers I tested, and for those without javascript the bookmarks will just be a simple unordered list, which should be universal.

I expect the javascript could be improved by a real developer.

Syndicated 2013-03-16 10:42:49 from Steve Kemp's Blog

So I'm a year older

Last week I had another birthday, which was nice. I'm now all mature, and everything. Honest.

I received a few surprise gifts from friends and strangers alike, which was pretty good. Other than that I didn't do too much.

This weekend I'm going to be using "airbnb" to spend the weekend in Dundee with my partner who is regularly commuting between Edinburgh and Perth/Dundee, to work in various hospitals. With all the commuting time she's not had too much time to explore the actual city, and I've only been there once before so I'm sure it will be a fun weekend.

The templer static site generator got a little bit of pimping on LWN.net the other day, thanks to Martin Michlmayr, although embarrassingly I seem to have read the article, repeated its content in the conclusion, and duplicated that in my own comment. Ooops.

Beyond that I've done little coding recently, although I suspect now that nodejs has had a stable release I might do something interesting soon. I don't want to dwell on the failure of Sim City - because I don't run windows and couldn't have tried it even if I wanted to - but I'm pondering the idea of a persistent grid-space where different items can be placed.

I've not tried anything browser-based before, but the popularity of things like minecraft makes me wonder: if you had an "infinite grid" where folk could store "stuff", and scroll around it in a browser, you might be able to do interesting things.

Starting small, with a 100x100 grid, and some kind of updated play-by-mail turfwars/drug-war like experience should be simple. But then again enthusiasm is easy to generate until you start working out how you'd interface with the server and what kind of client you'd need.

Now to enjoy some 21 year old whisky and call it a night..

Syndicated 2013-03-13 23:41:50 from Steve Kemp's Blog

Testing the blog feed

My previous entry, about templating, didn't make it into Planet Debian.

This entry is just a test to see if it is my fault.

Syndicated 2013-03-03 13:29:23 from Steve Kemp's Blog

Templer rocks.

For the past few days I've been making minor changes to my static-site generator, templer (source on github). The recent changes have all had one aim, which was to allow me to rebuild my main site.

Now I've finished http://www.steve.org.uk is up to date, and the source code to the website is stored in a mercurial repository.

No real functional changes have been made, but I've rationalized several ad-hoc bits of the site, marked areas as deprecated/unsupported where appropriate, and removed a few things that were completely broken.

I almost removed the software for Microsoft Windows, but didn't. By a strange coincidence I was recognized as the author of a windows utility back in 2004 - almost ten years ago now - on Hacker News. Guess I made the right choice.

I'm going to spend a while working on my slaughter documentation in the next week or two, although "the definitive guide" is a great starting point.

"Yes, this is dog" - Landscape in The Mist (1984).

Syndicated 2013-03-01 07:27:29 from Steve Kemp's Blog

Let there be slaughter-documentation, and cake.

Tonight I've made a new release of my slaughter automation tool.

Recent emails led me to believe I've now got two more users, so I hope they appreciate this:

That covers installation, setup, usage, and more. It took a while to write, but I actually enjoyed it. I'm sure further additions will be made going forward. Until then I'm going to call it a night and enjoy some delicious cake.

Syndicated 2013-02-07 19:16:27 from Steve Kemp's Blog

More competition for server management and automation is good

It was interesting to recently read Martin F. Krafft's proposal for a botnet-like configuration management system.

Professionally I've used CFEngine, which in version 2.x supported a bare minimum of primitives, along with a distribution system to control access to a central server. Using these minimal primitives you could do almost anything:

  • Copy files, and restart services that depend upon them.
  • Make minor edits to files. (Appending lines not present, replacing lines you no longer wanted, etc)
  • Install / remove packages.
  • More ..

Now that I have my mini cluster (and even before, when I had 3-5 machines) it was time to look around for something for myself.

I didn't like the overhead of puppet, and many of the other systems. Similarly I didn't want to mess around with weird configuration systems. From CFEngine I'd learned that using only a few simple primitives would be sufficient to manage many machines provided you could wrap them in a real language - for control flow, loops, conditionals, etc. What more natural choice was there than perl, the sysadmin army-knife?

To that end slaughter was born:

  • Download policies (i.e. rules) to apply from a central machine, using nothing more complex than HTTP.
  • Entirely client-driven, and scheduled via cron.

Over time it evolved so that HTTP wasn't the only transport. Now you can fetch your policies, and the files you might serve, via git, hg, rsync, http, and more.

Today I've added one final addition, and now it is possible to distribute "modules" alongside policies and files. Modules are nothing more than perl modules, so they can be as portable as you are careful.

I envisage writing a couple of sample modules; for example one allowing you to list available sites in Apache, disable the live ones, enable/disable mod_rewrite, etc.

These modules will be decoupled from the policies, and will thus be shareable.

Anyway, I'm always curious to learn about configuration management systems, but I think that even though I've reinvented the wheel I've done so usefully. The DSL that other systems use can be fiddly and annoying - using a real language at the core of the system seems like a good win.

There are systems layered upon SSH, such as fabric, ansible, etc, and that was almost a route I went down - but ultimately I prefer the notion of client-pull to server-push, although it is possible that in the future we'll launch a mini-daemon to allow a central host, or hosts, to initiate a run.

Syndicated 2013-02-02 21:27:09 from Steve Kemp's Blog

Shame there isn't more competition for self-hosted analytics

Today I've been mostly replanting spider-plants, aloe-vera plants, and shuffling trees around inside my flat.

Beyond that I've been updating my trivial dashboard skeleton, which was put together as part of this simple introduction article. (So there is now a standalone visualization server using redis & sinatra.)

After working on the display I was suddenly reminded that I run a cluster now. That means I have four servers each writing a local Apache logfile, and no central way of viewing all my visitor-data.

There are several open-source analytics packages, such as piwik and openwebanalytics - but they require MySQL & PHP at the back-end.

Given that node.js is "teh new shiny" it is a surprise there isn't something out there using that, and web sockets perhaps, to collect visitor data.

I found a few toy projects, but nothing that seemed to be a clear winner. Adding some javascript to webpages to submit:

  • Browser version
  • Referer
  • Screen size
  • window.location
  • etc

Is trivial. The hard part is storing that and visualizing it in a neat way. Making data pretty is something I'm notoriously bad at - unless it is turning numbers into graphs using a good library I'm out of luck most of the time.
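To show just how trivial the collection side is, here is a sketch of building that payload. The browser globals are passed in as plain objects so the shape is clear; every name here is hypothetical, not from any existing project:

```javascript
// Build the visitor-data payload the page-side javascript would POST
// to the collector. `nav`, `doc` and `scr` stand in for the browser's
// navigator, document and screen globals. Illustrative names only.
function buildPayload(nav, doc, scr, location) {
  return {
    browser: nav.userAgent,     // browser version string
    referer: doc.referrer,      // where the visitor came from
    screen: scr.width + "x" + scr.height,
    page: location,             // window.location of the current page
  };
}
```

On a real page this object would be JSON-encoded and fired at the node service with XMLHttpRequest or similar; the service then just has to store and aggregate it.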

Anyway I will keep digging. Ideally I'll have a scalable node service that'll receive submissions, bung them in redis, and then show real-time activity in a sexy fashion. I can dream?

Syndicated 2013-01-25 19:28:46 from Steve Kemp's Blog

