Recent blog entries for Stevey

lumail2 nears another release

I'm pleased with the way that Lumail2 development is proceeding, and it is reaching a point where there will be a second source-release.

I've made a lot of changes to the repository recently, and most of them boil down to moving code from the C++ side of the application, over to the Lua side.

This morning, for example, I updated the handling of index.limit to be entirely Lua-based.

When you open a Maildir folder you see the list of messages it contains, as you would expect.

The notion of the index.limit is that you can limit the messages displayed, for example:

  • See all messages: Config:set( "index.limit", "all")
  • See only new/unread messages: Config:set( "index.limit", "new")
  • See only messages which arrived today: Config:set( "index.limit", "today")
  • See only messages which contain "Steve" in their formatted version: Config:set( "index.limit", "steve")

These are just examples that are present as defaults, but they give an idea of how things can work. I guess it isn't so different to Mutt's "limit" facilities - but thanks to the dynamic Lua nature of the application you can add your own with relative ease.

One of the biggest changes, recently, was the ability to display coloured text! That was always possible before, but a single line could only be one colour. Now colours can be mixed within a line, so this works as you might imagine:

Panel:append( "$[RED]This is red, $[GREEN]green, $[WHITE]white, and $[CYAN]cyan!" )

Other changes include a persistent cache of "stuff", which is Lua-based, the inclusion of at least one luarocks library to parse Date: headers, and a simple API for all our objects.

All good stuff. Perhaps time for a break in the next few weeks, but right now I think I'm making useful updates every other evening or so.

Syndicated 2015-11-16 22:04:44 from Steve Kemp's Blog

lumail2 approaches readiness

So the work on lumail2 is going well, and already I can see that it is a good idea. The main reason for (re)writing it is to unify a lot of the previous ad-hoc primitives (i.e. Lua functions) and to try to push as much of the code into Lua, and out of C++, as possible. This work is already paying off with the introduction of new display-modes and a simpler implementation.

View modes are an important part of lumail, because it is a modal mail-client. You're always in one mode:

  • maildir-mode
    • Shows you lists of Maildir-folders.
  • index-mode
    • Shows you lists of messages inside the maildir you selected.
  • message-mode
    • Shows you a single message.

This is nothing new, but there are two new modes:

  • attachment-mode
    • Shows you the attachments associated with the current message.
  • lua-mode
    • Shows you your configuration-settings and trivia.

Each of these modes draws lines of text on the screen, and those lines consist of things that Lua generated. So there is a direct mapping:

Mode      Lua function
maildir   maildir_view()
index     index_view()
message   message_view()
lua       lua_view()

With that in mind it is possible to write a function to scroll to the next line containing a pattern like so:

function find()
   local pattern = Screen:get_line( "Search for:" )

   -- Get the global mode.
   local mode = Config:get("global.mode")

   -- Use that to get the lines we're currently displaying
   loadstring( "out = " .. mode .. "_view()" )()

   -- At this point "out" is a table containing lines that
   -- the current mode wishes to display.

   -- .. do searching here.
end

Thus the whole thing is dynamic and mode-agnostic.

The other big change is pushing things to Lua. Replying to an email - populating the new message and appending your ~/.signature - is handled by Lua, as is forwarding a message or composing a new mail.

The downside is that the configuration-file is now almost 1000 lines long, thanks to the many little function definitions, and key-binding setup.

At this rate the first test-release will be out at the weekend, but the API documentation and sample configuration file might make interesting reading until then.

Syndicated 2015-11-05 21:52:02 from Steve Kemp's Blog

It begins - a new mail client, with lua scripting

Once upon a time I wrote a mail-client, which worked in the console directly via Maildir manipulation.

My mail client was written in C++, and used Lua for scripting, unlike clients such as mutt, alpine, and similar alternatives which don't have complete scripting support.

I've pondered several times whether to restart this project, but I think it is the right thing to do.

The original lumail client has a rich API, but it is very ad-hoc and random. Functions were added where they seemed like a good idea, but with no real planning, and although there are grouped functions that operate similarly there isn't a lot of consistency. The implementation is clean in places, elegant in others, and horrid in yet more parts.

This time round everything is an object, accessible to Lua, with Lua, and for Lua. This time round all the drawing-magic will be written in Lua.

So to display a list of Maildirs I create a bunch of objects, one for each Maildir, and then the Lua function Maildir.to_string is called. That function looks like this:

-- This method returns the text which is displayed when a maildir is
-- to be shown in maildir-mode.
function Maildir.to_string(self)
   local total  = self:total_messages()
   local unread = self:unread_messages()
   local path   = self:path()

   local output = string.format( "[%05d / %05d] - %s", unread, total, path )

   if ( unread > 0 ) then
      output = "$[RED]" .. output
   end

   if ( string.find( output, "Automated." ) ) then
      output = strip_colour( output )
      output = "$[YELLOW]" .. output
   end

   return output
end

The end result is something that looks like this:

[00001 / 00010 ] -
[00000 / 00023 ] - Automated.root

The formatting can thus be adjusted clearly, easily, and without hacking the core of the client, providing I implement the appropriate methods on the Maildir object.

It's still work in progress. You can view maildirs, view indexes, and view messages. You cannot reply, forward, or scroll properly. That said the hard part is done now, and I'm reasonably happy with it.

The sample configuration file is a bit verbose, but a good demonstration regardless.

See the code, if you wish, online here:

Syndicated 2015-10-26 22:01:10 from Steve Kemp's Blog

Robbing Peter to pay Paul, or location spoofing via DNS

I rarely watched TV online when I lived in the UK, but now that I've moved to Finland, where the local TV choices are appalling, it has become more common.

The biggest problem with trying to watch BBC's iPlayer, and similar services, is the location restrictions.

Not a huge problem though:

  • Rent a virtual machine.
  • Configure an OpenVPN server on it.
  • Connect from $current-country to it.

The next part is the harder one - making your traffic pass over the VPN. If you were simple you'd just say "Send everything over the VPN". But that would slow down local traffic, so instead you have to use trickery.

My approach was just to run a series of routing additions, similar to this (except I did it in the openvpn configuration, via pushed-routes):

ip -4 route add .... dev tun0

This works, but it is a pain as you have to add more and more routes. The simpler solution, which I switched to after a while, was to run mitmproxy on the remote OpenVPN end-point and configure that as the proxy in the browser. With the proxy enabled in your browser all of its traffic goes over the VPN link, but nothing else does.

I've got a network device on order, which will let me watch Netflix, etc, from my TV, and I'm led to believe it won't let you set up proxies, or similar, to bypass the region restrictions.

It occurs to me that I can configure my router to give out bogus DNS responses - when the device asks for the hostname of a streaming service it can be given the address of the remote host running the proxy instead.

I imagined this would be nice and simple, and thought I was being clever:

  • Remote OpenVPN server.
  • MITM proxy on remote VPN-host
    • Which is basically a transparent HTTP/HTTPS proxy.
  • Route traffic to it via DNS.
    • e.g. For any DNS request that matches a streaming-service domain, return the IP of the proxy host.

Because I can handle DNS-magic on the router I can essentially spoof my location for all the devices on the internal LAN, which is a good thing.

Anyway I was reasonably pleased with the idea of using DNS to route traffic over the VPN, in combination with a transparent proxy. I was even going to blog about it, and say "Hey! This is a cool idea I've never heard of before".

Instead I did a quick google(.fi) and discovered that there are companies offering this as a service. They don't mention the proxying bit, but it's clearly what they're doing - for example OverPlay's SmartDNS.

So in conclusion I can keep my current setup, or I can use the income I receive from DNS hosting to pay for SmartDNS, or other DNS-based location-fakers.

Regardless. DNS. VPN. Good combination. Try it if you get bored.

Syndicated 2015-10-17 08:57:19 from Steve Kemp's Blog

So about that idea of using ssh-keygen on untrusted input?

My previous blog post related to using ssh-keygen to generate fingerprints from SSH public keys.

At the back of my mind was the fear that running the command against untrusted, user-supplied, keys might be a bad plan. So I figured I'd do some fuzzing to reassure myself.

The most excellent LWN recently published a piece on Fuzzing with american fuzzy lop, so with that to guide me I generated a pair of SSH public keys, and set to work.

Two days later I found an SSH public key that would make ssh-keygen segfault - and the SSH client too, since it uses the same parser - so that was a shock.

The good news is that my Perl module to fingerprint keys is used like so:

my $helper = SSHKey::Fingerprint->new( key => "ssh ...." );
if ( $helper->valid() ) {
   my $fingerprint = $helper->fingerprint();
}

The validity-test catches my bogus key, so in my personal use-cases this is OK. That said it's a surprise to see this:

skx@shelob ~ $ ssh -i 
Segmentation fault

Similarly running "ssh-keygen -l -f ~/" results in an identical segfault.

In practice this is a low-risk issue, hence mentioning it and filing the bug-report publicly, even if code execution is possible. Because in practice, how often do people fingerprint keys from unknown sources? Except for things like GitHub's key management page?

Some people probably do it, but I assume they do it infrequently and only after some minimal checking.

Anyway we'll say this is my first security issue of 2015, we'll call it #roadhouse, and we'll get right on trademarking the term, designing the logo, and selling out for all the filthy filthy lucre ;)

Syndicated 2015-10-12 10:40:34 from Steve Kemp's Blog

Generating fingerprints from SSH keys

I've been allowing users to upload SSH public-keys, and displaying them online in a form. Displaying an SSH public key is a pain, because they're typically long. That means you need to wrap them, or truncate them, or you introduce a horizontal scroll-bar.

So rather than displaying them I figured I'd generate a fingerprint when the key was uploaded and show that instead - this is exactly how GitHub shows your SSH keys.

Annoyingly there is only one reasonable way to get a fingerprint from a key:

  • Write it to a temporary file.
  • Run "ssh-keygen -lf temporary/file/name".

You can sometimes calculate the fingerprint via more direct, but less obvious, methods:

awk '{print $2}' ~/.ssh/ | base64 -d | md5sum

But that won't work for all key-types.
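
For completeness, the same calculation can be done without shelling out at all - the MD5 fingerprint is just the digest of the base64-decoded key blob. Here is a rough sketch in Go which, like the awk pipeline above, assumes the modern "type blob comment" key format:

// fingerprint.go - a sketch of computing the classic MD5 fingerprint of an
// OpenSSH public key. Like the awk pipeline above it assumes the modern
// "type base64-blob comment" key format, so it won't handle every key type either.
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
	"strings"
)

func fingerprint(keyLine string) (string, error) {
	fields := strings.Fields(keyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed key line")
	}

	// The fingerprint is the MD5 digest of the decoded key blob,
	// rendered as colon-separated hex pairs.
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := md5.Sum(blob)

	hex := make([]string, len(sum))
	for i, b := range sum {
		hex[i] = fmt.Sprintf("%02x", b)
	}
	return strings.Join(hex, ":"), nil
}

func main() {
	key := "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMcT04t6UpewqQHWI4gfyBpP/ueSjbcGEze22vdlq0mW skx@shelob"
	fp, err := fingerprint(key)
	if err != nil {
		panic(err)
	}
	fmt.Println(fp)
}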

It is interesting to look at the various key-types which are available these days:

mkdir ~/ssh/
cd ~/ssh/
for i in dsa ecdsa ed25519 rsa rsa1 ; do
  ssh-keygen -P "" -t $i -f ${i}-key
done

I've never seen an ed25519 key in the wild. It looks like this:

$ cat ~/ssh/
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMcT04t6UpewqQHWI4gfyBpP/ueSjbcGEze22vdlq0mW skx@shelob

Similarly curve-based keys are short too, but not as short:

ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLTJ5+  \
 rWoq5cNcjXdhzRiEK3Yq6tFSYr4DBsqkRI0ZqJdb+7RxbhJYUOq5jsBlHUzktYhOahEDlc9Lezz3ZUqXg= skx@shelob

Remember what I said about wrapping? Ha!

Anyway for the moment I've hacked up a simple perl module SSH::Key::Fingerprint which will accept a public key and return the fingerprint, as well as validating the key is well-formed and of a known-type. I might make it public in the future, but I think the name is all wrong.

The only code I could easily find to do a similar job is this node.js package, but it doesn't work on all key-types. Shame.

And that concludes this week's super-happy fun-time TODO-list item.

Syndicated 2015-10-07 10:56:49 from Steve Kemp's Blog

All about sharing files easily

Although I've been writing a bit recently about file-storage, this post is about something much simpler: just making a random file or two available on an ad-hoc basis.

In the past I used to have my email and website(s) hosted on the same machine, and that machine was well connected. Making a file visible just involved running ~/bin/publish, which used scp to write a file beneath an apache document-root.

These days I use "my computer", "my work computer", and "my work laptop", amongst other hosts. The SSH-keys required to access my personal boxes are not necessarily available on all of these hosts. Add in firewall constraints and suddenly there isn't an obvious way for me to say "Publish this file online, and show me the root".

I asked on twitter but nothing useful jumped out. So I ended up writing a simple server, via sinatra which would allow:

  • Login via the site, and a browser. The login-form looks sexy via bootstrap.
  • Upload via a web-form, once logged in. The upload-form looks sexy via bootstrap.
  • Or, entirely separately, with HTTP basic-auth and an HTTP POST (i.e. curl)

This worked, and was even secure-enough, given that I run SSL if you import my CA file.

But using basic auth felt like cheating, and I've been learning more Go recently, and I figured I should start taking it more seriously, so I created a small repository of learning-programs. The learning programs started out simply, but I did wire up a simple TOTP authenticator.

Having TOTP available made me rethink things - suddenly, even if you're not using SSL, an eavesdropper can't compromise future uploads.

I'd also spent a few hours working out how to make extensible commands in Go, the kind of thing that lets you run:

cmd sub-command1 arg1 arg2
cmd sub-command2 arg1 .. argN

The solution I came up with wasn't perfect, but it did work, and it allowed the separation of the different sub-commands' logic.
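
The general shape is easy enough to sketch, though: a map of sub-command names to handlers, with each handler behind a small interface. The names and interface below are illustrative rather than lifted from my code:

// subcmd.go - an illustrative sketch of "cmd sub-command args..." dispatch.
// The Runner interface and the command names are made up for this example.
package main

import (
	"fmt"
	"os"
)

// Runner is anything that can execute a sub-command with its arguments.
type Runner interface {
	Run(args []string) error
}

type initCmd struct{}

func (initCmd) Run(args []string) error {
	fmt.Println("init called with", args)
	return nil
}

type serveCmd struct{}

func (serveCmd) Run(args []string) error {
	fmt.Println("serve called with", args)
	return nil
}

func main() {
	// Each sub-command is registered under its name.
	commands := map[string]Runner{
		"init":  initCmd{},
		"serve": serveCmd{},
	}

	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: cmd sub-command [args]")
		os.Exit(1)
	}

	cmd, ok := commands[os.Args[1]]
	if !ok {
		fmt.Fprintf(os.Stderr, "unknown sub-command %q\n", os.Args[1])
		os.Exit(1)
	}

	if err := cmd.Run(os.Args[2:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Adding a new sub-command is then just a matter of writing another type and registering it in the map.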

So suddenly I have the ability to run "subcommands", and the ability to authenticate against a time-based secret. What is next? Well the hard part with golang is that there are so many things to choose from - I went with gorilla/mux as my HTTP-router, then I spent several hours filling in the blanks.
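
To give an idea of how the pieces fit together, here is a minimal sketch of a TOTP-protected upload handler built on gorilla/mux. It is not the publishr source, and the pquerna/otp package and the X-Auth-Code header are assumptions used purely for illustration:

// upload.go - a minimal sketch of a TOTP-protected upload handler.
// NOT the publishr source: the pquerna/otp package and the X-Auth-Code
// header are assumptions used purely for illustration.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"

	"github.com/gorilla/mux"
	"github.com/pquerna/otp/totp"
)

// secret would normally be generated once ("publishr init") and stored.
const secret = "JBSWY3DPEHPK3PXP"

func upload(w http.ResponseWriter, r *http.Request) {
	// The client sends the current TOTP code with every request, so a
	// captured request is useless once the 30-second window has passed.
	if !totp.Validate(r.Header.Get("X-Auth-Code"), secret) {
		http.Error(w, "bad TOTP code", http.StatusForbidden)
		return
	}

	file, header, err := r.FormFile("file")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	defer file.Close()

	// Drop the upload into ./public, as described below.
	os.MkdirAll("public", 0755)
	out, err := os.Create(filepath.Join("public", filepath.Base(header.Filename)))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer out.Close()

	if _, err := io.Copy(out, file); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "/%s\n", filepath.Base(header.Filename))
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/upload", upload).Methods("POST")
	log.Fatal(http.ListenAndServe(":8080", r))
}

Uploading with curl then just means supplying the current code from your authenticator alongside the file.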

The upshot is now that I have a TOTP-protected file upload site:

publishr init    - Generates the secret
publishr secret  - Shows you the secret for import to your authenticator
publishr serve   - Starts the HTTP daemon

Other than a lack of comments, and test-cases, it is complete. And stand-alone. Uploads get dropped into ./public, and short-links are generated for free.

If you want to take a peek the code is here:

The only annoyance is the handling of dependencies - which need to be "go got ..". I guess I need to look at godep or similar, for my next learning project.

I guess there's a minor gain in making this service available via golang. I've gained protection against replay attacks, assuming a non-SSL environment, and I've simplified deployment. The downside is I can no longer log in over the web, and I must use curl, or similar, to upload. Acceptable tradeoff.

Syndicated 2015-09-13 12:39:10 from Steve Kemp's Blog

The Jessie 8.2 point-release broke for me

I have about 18 personal hosts, all running the Jessie release of Debian GNU/Linux. To keep up with security updates I use unattended-upgrades.

The intention is that every day, via cron, the system will look for updates and apply them. Although I mostly expect it to handle security updates I also have it configured such that point-releases will be applied by magic too.

Unfortunately this weekend, with the 8.2 release, things broke in a significant way - the cron daemon was left in a broken state, such that all cronjobs failed to execute.

I was amazed that nobody had reported a bug, as several people on twitter had the same experience as me, but today I read through a lot of bug-reports and discovered that #783683 is to blame:

  • Old-cron runs.
  • Scheduled unattended-upgrades runs.
  • This causes cron to restart.
  • When cron restarts the jobs it was running are killed.
  • The system is in a broken state.

The solution:

# dpkg --configure -a
# apt-get upgrade

I guess the good news is I spotted it promptly. With the benefit of hindsight the bug report does warn of this as being a concern, but I guess there wasn't a great solution.

Anyway I hope others see this, or otherwise spot the problem themselves.


In unrelated news the seaweedfs file-store I previously introduced is looking more and more attractive to me.

I reported a documentation-related bug, which was promptly handled (even though it turned out I was wrong), and I contributed CIDR support for whitelisting hosts, which was merged in cleanly.

I've got a two-node "cluster" setup at the moment, and will be expanding that shortly.

I've been doing a lot of little toy-projects in Go recently. This weekend I was mostly playing with the message-bus, and tying it together with sinatra.

Syndicated 2015-09-07 09:37:05 from Steve Kemp's Blog

Making an old android phone useful again

I've got an HTC Desire, running Android 2.2. It is old enough that installing applications such as those from my bank, etc, fails.

The process of upgrading the stock ROM/firmware seems to be:

  • Download an unsigned zip file, from a shady website/forum.
  • Boot the phone in recovery mode.
  • Wipe the phone / reset to default state.
  • Install the update, and hope it works.
  • Assume you're not running trojaned binaries.
  • Hope the thing still works.
  • Reboot into the new O/S.

All in all .. not ideal .. in any sense.

I wish there were a more "official" way to go. For the moment I guess I'll ignore the problem for another year. My nokia phone does look pretty good ..

Syndicated 2015-08-13 14:44:38 from Steve Kemp's Blog

A brief look at the weed file store

Now that I've got a citizen-ID, a pair of Finnish bank accounts, and have enrolled in a Finnish language-course (due to start next month) I guess I can go back to looking at object stores, and replicated filesystems.

To recap, my current favourite, despite the lack of documentation, is the Camlistore project, which is written in Go.

Looking around there are lots of interesting projects being written in Go, and my next one is seaweedfs, which despite its name is not a filesystem at all, but a store which is accessed via HTTP.

Installation is simple, if you have a working go-lang environment:

go get

Once that completes you'll find you have the executable bin/weed placed beneath your $GOPATH. This single binary is used for everything, though it is worth noting that there are distinct roles:

  • A key concept in weed is "volumes". Volumes are areas to which files are written. Volumes may be replicated, and this replication is decided on a per-volume basis, rather than a per-upload one.
  • Clients talk to a master. The master notices when volumes spring into existence, or go away. For high-availability you can run multiple masters, and they elect the real master (via RAFT).

In our demo we'll have three hosts: one, which is the master, and two and three, which are storage nodes. First of all we start the master:

root@one:~# mkdir /
root@one:~# weed master -mdir / -defaultReplication=001

Then on the storage nodes we start them up:

root@two:~# mkdir /data;
root@two:~# weed volume -dir=/data -max=1  -mserver=one.our.domain:9333

Then the second storage-node:

root@three:~# mkdir /data;
root@three:~# weed volume -dir=/data -max=1 -mserver=one.our.domain:9333

At this point we have a master to which we'll talk (on port :9333), and a pair of storage-nodes which will accept commands over :8080. We've configured replication such that all uploads will go to both volumes. (The -max=1 configuration ensures that each volume-store will only create one volume each. This is in the interest of simplicity.)

Uploading content works in two phases:

  • First tell the master you wish to upload something, to gain an ID in response.
  • Then using the upload-ID actually upload the object.

We'll do that like so:

laptop ~ $ curl -X POST http://one.our.domain:9333/dir/assign

client ~ $ curl -X PUT -F file=@/etc/passwd,06c3add5c3

In the first command we call /dir/assign, and receive a JSON response which contains the IPs/ports of the storage-nodes, along with a "file ID", or fid. In the second command we pick one of the hosts at random (which are the IPs of our storage nodes) and make the upload using the given ID.
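
The same two-phase dance is simple from code as well. Here is a rough Go sketch against the demo master above - note that the JSON field names ("fid", "url") are assumptions based on what /dir/assign returns, so check them against your version:

// weedput.go - a sketch of the two-phase upload against the demo master above.
// The JSON field names ("fid", "url") are assumptions based on the response
// /dir/assign returns; check them against your seaweedfs version.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

type assignment struct {
	Fid string `json:"fid"`
	URL string `json:"url"`
}

func main() {
	// Phase one: ask the master for a file ID and a volume-server to talk to.
	resp, err := http.Post("http://one.our.domain:9333/dir/assign", "", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var a assignment
	if err := json.NewDecoder(resp.Body).Decode(&a); err != nil {
		panic(err)
	}

	// Phase two: upload the file, as a multipart form, to the volume-server
	// we were given, using the assigned fid - just like the curl example.
	var buf bytes.Buffer
	form := multipart.NewWriter(&buf)
	part, _ := form.CreateFormFile("file", "passwd")
	src, err := os.Open("/etc/passwd")
	if err != nil {
		panic(err)
	}
	io.Copy(part, src)
	src.Close()
	form.Close()

	req, _ := http.NewRequest("PUT", fmt.Sprintf("http://%s/%s", a.URL, a.Fid), &buf)
	req.Header.Set("Content-Type", form.FormDataContentType())

	up, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer up.Body.Close()
	fmt.Println("uploaded as", a.Fid)
}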

If the upload succeeds it will be written to both volumes, which we can see directly by running strings on the files beneath /data on the two nodes.

The next part is retrieving a file by ID, and we can do that by asking the master server where that ID lives:

client ~ $ curl http://one.our.domain:9333/dir/lookup?volumeId=1,06c3add5c3

Or, if we prefer we could just fetch via the master - it will issue a redirect to one of the volumes that contains the file:

client ~$ curl http://one.our.domain:9333/1,06c3add5c3
<a href=",06c3add5c3">Moved Permanently</a>

If you follow redirections then it'll download, as you'd expect:

client ~ $ curl -L http://one.our.domain:9333/1,06c3add5c3

That's about all you need to know to decide if this is for you - in short uploads require two requests, one to claim an identifier, and one to use it. Downloads require that your storage-volumes be publicly accessible, and will probably require a proxy of some kind to make them visible on :80, or :443.

A single "weed volume .." process, which runs as a volume-server can support multiple volumes, which are created on-demand, but I've explicitly preferred to limit them here. I'm not 100% sure yet whether it's a good idea to allow creation of multiple volumes or not. There are space implications, and you need to read about replication before you go too far down the rabbit-hole. There is the notion of "data centres", and "racks", such that you can pretend different IPs are different locations and ensure that data is replicated across them, or only within-them, but these choices will depend on your needs.

Writing a thin middleware/shim to allow uploads to be atomic seems simple enough, and there are options to allow exporting the data from the volumes as .tar files, so I have no undue worries about data-storage.

This system seems reliable, and it seems well designed, but people keep saying "I'm not using it in production because .. nobody else is" which is an unfortunate problem to have.

Anyway, I like it. The biggest omission is really authentication. All files are public if you know their IDs, but at least they're not sequential ..

Syndicated 2015-08-10 13:29:10 from Steve Kemp's Blog
