California postcard 2012
build a dynamic webapp in Yesod in just three days
This screencast shows what I built. Scroll down for my Yesod braindump.
I've been astonished how quickly this went together. This is my first time using any sort of web framework, and I used a still unusual one, Yesod. It's my first time using Bootstrap too. It's also the first time I've done any AJAX programming!
Bootstrap was something I'd heard of and seen some suspiciously similar looking sites built with, but it was really a pleasant surprise. Being able to make things that work and look good on the web without struggling with CSS is such a nice change. For the first time, it makes the web feel like a UI toolkit to me.
Overall, I'm really enjoying Yesod, and it's making me productive in new ways.
I also see a lot of potential in Yesod to improve from where it is.
I'm betting this will be integrated into Yesod eventually. They have an active wiki page about it.
There's a WAI library for building local webapps with Yesod, but it was not suitable for my needs (for one thing, it lacks security; for another, it kills the haskell program when the web page is closed), so I built my own webapp library. A problem with my current pace of development is that I'm building lots of reusable libraries, but I don't have the time to stabilize them and make them generally available. That one goes in the pile of 2k+ lines of such code.
Yesod needs a version of the Hamlet markup that can be edited by people who only understand html. That means it should allow closing tags, and tabs, and not have nasty whitespace gotchas. I think this would be easy to build, starting from Hamlet. It could be called "Hecate".. I don't have time right now.
The compile time error messages are often beyond atrocious. Seriously, I'm tempted to write a filter to strip out common patterns where there's one line about a syntax error in a Hamlet file sandwiched in between 150 lines of type error gobbledygook and generated code.
Some really nice things could be done integrating Yesod with Bootstrap. Like the ability to build entire sites by just composing together Bootstrap components, with no need to write a single line of html or css. I'm very tempted to try to build this library.
    webpage = bootstrap Dynamic $ do
        setTitle "FooCorp"
        login <- getLogin
        navbar [FixedTop] $ do
            brand "FooCorp"
            link AboutR
            link BlogR
            nav [PullRight] $ link . maybe LoginR ProfileR login
        div [ContainerFluid] $ content login
      where
        content Nothing = heroUnit $ do
            para $ "We're the FooCorp for you."
            button "Register Today" [Primary, Large] SignUpR
            carousel
                [ amazingFeatures
                , aboutFooCorp
                , pricing
                ]
        content (Just user) = do
            para $ "Welcome back " ++ name user ++ "!"
            showProfile user
ghc threaded runtime gotchas
"How do you get a program working with GHC's threaded runtime?" That's a seemingly simple question I started asking various people two weeks ago. I didn't get many useful answers, but now I have experience doing it myself, and so here's a blog post brain dump.
I have been trying to convert git-annex to use GHC's threaded runtime, for
a variety of reasons. Naively adding the
-threaded option resulted in a
git-annex command that seemed to randomly freeze, but only sometimes
(and, infuriatingly, never when I straced it), and a test suite that froze
at a random point almost every time I ran it. Not a good result, and
lacking any knowledge about gotchas with using the threaded runtime, I was
at a loss for a long time (most of DebConf) about how to fix it.
I now know of at least three classes of problems that enabling the threaded runtime can turn up in programs that have been developed using the non-threaded runtime.
MissingH has some code similar to this, which works ok with the non-threaded runtime:
    forkProcess $ do
        debugM "about to run some command"
        executeFile ...
In the above example, debugM accesses an MVar. Doing that after
forkProcess can result in an MVar deadlock, as it tries to access an MVar
value that is, apparently, not accessible to the forked process.
(Bug report with test case)
System.Cmd.Utils from MissingH is asking for trouble.
I switched all my code to the newer, and apparently threaded-runtime-safe, System.Process.
Even when not accessing a MVar after
forkProcess, it's very unsafe to
use. It's easy to miss the warning attached to forkProcess, when the code
seems to work. But with the threaded runtime, I've found that most
any call to
forkProcess will sometimes succeed, and sometimes freeze
the whole program. This might only happen around 1 time in 1000.
Then you'll find this warning and do a lot of head-scratching about what
it really means:
forkProcess comes with a giant warning: since any other running threads are not copied into the child process, it's easy to go wrong: e.g. by accessing some shared resource that was held by another thread in the parent.
The hangs I saw could be due to laziness issues deferring code to run
inside forkProcess that you'd expect to have run before it ... or
who knows what else.
It's not clear to me that it's possible to use
forkProcess safely in
Haskell code. I think it's notable that
System.Process runs the whole
fork/exec in C code instead.
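For concreteness, here's a minimal sketch (my example, not from the post) of replacing a forkProcess/executeFile pair with System.Process, which does the fork/exec in C and so sidesteps the hazards above:

```haskell
import System.Process (readProcess)

-- A minimal sketch, not from the original post: rather than
-- forkProcess + executeFile, let System.Process run the command.
-- readProcess forks/execs in C and returns the command's stdout.
main :: IO ()
main = do
    out <- readProcess "echo" ["hello"] ""
    putStr out
```

This works the same with or without -threaded, which is the point: no Haskell-level fork is involved.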
According to most of the documentation you'll find in eg, the Haskell wiki,
Real World Haskell, etc, the only difference between the
safe and unsafe imports in the FFI is that
unsafe is faster, and shouldn't be
used for C code that calls back into Haskell code.
But the documentation is out of date. Actually, if you're using the FFI,
and the foreign function can block, you need to use
safe. When using
unsafe, a blocking foreign function can block all threads of the program.
In my case, I was using
kqueue to wait for changes to files, and this
indeed blocked my whole program when linked with
-threaded. Marking it
safe fixed this.
The details are well described in this paper: http://community.haskell.org/~simonmar/papers/conc-ffi.pdf
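To make that concrete, here's a minimal sketch (mine, not from the paper) of importing a blocking C function as safe, so that under -threaded it runs on its own OS thread instead of blocking every Haskell thread:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CUInt (..))

-- A blocking C call should be imported "safe" under -threaded;
-- marked "unsafe", this sleep could block the whole program
-- for its entire duration.
foreign import ccall safe "unistd.h sleep"
    c_sleep :: CUInt -> IO CUInt

main :: IO ()
main = do
    _ <- c_sleep 1   -- other Haskell threads keep running meanwhile
    putStrLn "done"
```

Switching `safe` to `unsafe` here is exactly the kqueue mistake described above, just with a more obvious blocking call.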
Somewhat surprisingly, this blocking only happens when using the threaded
runtime. If you're using the non-threaded runtime with
FFI functions, your other pseudo-threads won't be blocked. This is because
the non-threaded runtime has a SIGALRM timer that interrupts (most)
blocking system calls. This leads to other troubles of its own (like
needing to restart interrupted FFI functions, or blocking the other
pseudo-threads from running if the C code ignores SIGALRM), but that's
off-topic for this post.
Converting a large Haskell code base from the default, non-threaded runtime to the threaded runtime can be quite tricky. None of the problems are the sort of thing that Haskell helps to manage either. I will be developing new programs using the threaded runtime from the beginning from now on.
By the way, don't take this post to say that threading in Haskell sucks.
I've really been enjoying writing threaded Haskell code. The control
Haskell gives over isolation of state to threads, and the excellent and
wide-ranging suite of thread communication data types (SampleVar, etc)
have made developing a complex threaded program something I feel
comfortable doing for the first time, in any language.
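As a small taste of those communication primitives, here's a minimal sketch (my example, not from the post) of two threads talking over a Chan:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

-- One thread produces values on a Chan; the main thread consumes
-- them in FIFO order. readChan blocks until a value is available,
-- so no explicit synchronization is needed.
main :: IO ()
main = do
    chan <- newChan
    _ <- forkIO $ mapM_ (writeChan chan) [1 .. 3 :: Int]
    xs <- sequence [readChan chan, readChan chan, readChan chan]
    print xs
```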
I am become Joey, destroyer of drives
To add to the fun of being in Nicaragua, my laptop's solid state drive died the second day here. Seems this is the failure mode for an SSD: get a little slow, and then a switch flips and it stops accepting any writes, at all, becoming read-only media. There was never an indication of a problem in SMART etc.
Did you know that Nicaragua has neither names for most roads, nor addresses? Rather than trying to shoehorn the directions to my hotel into the address fields of Amazon.com, I shipped the replacement SSD to home, and thought I'd limp along.
I destroyed my second drive one hour after getting it working. It was a borrowed USB flash drive, which seems to have melted under the unusual load while I was away at lunch. Perhaps putting an ext3 filesystem on it was a very bad idea, although I have successfully run other USB flash drives that way for years. Perhaps it was a cheap drive that only pretended to hold 32 GB.
For several days I happily used the third drive, a USB hard drive, as my temporary root filesystem. Until I destroyed it by knocking it off my bed.
Now my netbook is running from a Debian Live USB key, with a second USB key for my customisations. So far I have not managed to destroy these, but there's still a day left in my trip..
debian-cd work at DebCamp
I've been working the past two days on debian-cd, which was the main thing (besides git-annex assistant) I planned to work on at DebCamp.
Yesterday, Steve McIntyre and I cleaned up some cruft in debian-cd's package lists. This freed somewhere in the area of 30 MB. I also took a grep through d-i and made sure the CDs include packages that d-i can optionally install in some situations.
Today, I investigated how Recommends are ordered on the CD, and concluded it's as close to optimal as can be easily achieved. So I was not able to save space there, but I did find a way to reorganize the desktop task that avoids needing to include a lot of printing stuff and some other stuff on the first CD. While that helped some, it still didn't get either Gnome or Kde to entirely fit, so getting there will probably involve rebuilding 100 or so packages with xz, if someone decides to do that.
So it's still TBD whether Gnome or Kde will fit on a single CD in Debian wheezy. At this point I think most of us are getting tired of fighting this increasingly losing battle every release, and so other options like only having a desktop DVD are looking more appealing, as they solve the problem long-term. This would also free up the first CD for other interesting use cases, perhaps xfce, or perhaps a CD targeted at server users, and/or containing high-popularity non-desktop packages.
I reached a nice milestone on my git-annex assistant in my first day's work at DebCamp in Nicaragua. Here's a screencast demoing it.
git-annex-assistant.ogg (12 MB)
By the way, the weather, food, and people here are all excellent.
notes for a caretaker
I recently had the sort of weird experience of a recent blog post being on the top of Hacker News and there being a fair amount of interest in the details of my wacky living situation.
I could write a detailed post explaining everything, but that'd be boring .. instead I'll tease the stalkers with more oblique references to it. So here are my notes for a caretaker who will be here while I'm away at DebConf.
Please make yourself at home!
AA:AA:AA:AA:AA:AA. You may have to enter this information manually.) Slow is normal.
Obnam 1.0 was released during a several-month stretch when I had no well-connected server to use for backups. Yesterday I installed a terabyte disk in a basement with a fiber optic network connection, so my backupless time is over.
Now, granted, I have a very multi-layered approach to backups; all my data is stored in git, most of it with dozens of copies automatically maintained, and with archival data managed by git-annex. But I still like to have a "real" backup system underneath, to catch anything else. And to back up those parts of my users' data that I have not given them tools to put into git yet...
My backup server is not in my basement, so I need to securely encrypt
the backups stored there. Encrypting your offsite backups is such a good
idea that I've always been surprised at the paucity of tools to do it. I
got by with
duplicity for years, but it's increasingly creaky, and the
few times I've needed to restore, it's been a terrific pain. So I'm excited
to be trying Obnam today.
So far I quite like it. The only real problem is that it can be slow, when there's a transatlantic link between the client and the server. Each file backed up requires several TCP round-trips, and the latency kills the bandwidth. Large files are still sent fast, and obnam uses little resources on either the client or server while running. And this mostly only affects the initial, full backup.
But the encryption and ease of use more than make up for this. The real
killer feature with Obnam's encryption isn't that it's industry-standard
encryption with gpg, which can be trivially enabled with a single option
(--encrypt-with=DEADBEEF). No, the great thing about it is its key management.
I generate a new gpg key for each system I back up. This prevents systems reading each other's backups. But that means you have to back up the backup keys.. or when a system is lost, the backup would be inaccessible.
With Obnam, I can instead just grant my personal gpg key access to a
backup repository: obnam add-key --keyid 2512E3C7. Now both the machine's
key and my gpg key can access the data. Great system; can't revoke access,
but otherwise perfect. I liked this so much I stole the design and used
it in git-annex too. :)
I'm also pleased I can lock down
.ssh/authorized_keys on my backup
server, to prevent clients running arbitrary commands. Duplicity runs
ad-hoc commands over ssh, which defeated me from ever locking it down.
Obnam can be easily locked down, like this:
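The example itself didn't survive in this copy of the post. As a hedged sketch (mine, not the original), an authorized_keys line restricting a client to sftp, which Obnam uses for transport, might look something like this:

```
command="internal-sftp",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... backupclient@example
```

The forced command prevents the client from running arbitrary shell commands, while still allowing sftp file transfer to the backup repository.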
This could still be improved, since clients can still read the whole filesystem with sftp. I'd like to have something like git-annex's git-annex-shell, which can limit access to only a specific repository. Hmm, if Obnam had its own server-side program like this, it could stream backup data to it using a protocol that avoids the roundtrips needed by the SFTP protocol, and fix the latency issue too. Lars, I know you've been looking for a Haskell starter project ... perhaps this is it? :)
I work for The Internet now
I have an interesting problem: How do I shoehorn "hired by The Internet for a full year to work on Free Software" into my resume?
Yes, the git-annex Kickstarter went well. :) I had asked for enough to get by for three months. Multiple people thought I should instead work on it for a full year and really do the idea justice. Jason Scott was especially enthusiastic about this idea. So I added new goals and eventually it got there.
Don Marti thinks the success of my Kickstarter validates crowdfunding for Free Software. Hard to say; this is not the first Free Software to be funded on Kickstarter. Remember Diaspora?
Here's what I think worked to make this a success:
I have a pretty good reach with this blog. I reached my original goal in the first 24 hours, and during that time, probably 75% of contributions were from people I know, people who use git-annex already, or people who probably read this blog. Oh, and these contributors were amazingly generous.
I had a small, realistic, easily achievable goal. This ensured my project was quickly visible in "successful projects" on Kickstarter, and stayed visible. If I had asked for a year up front, I might not have fared as well. It also led to my project being a "Staff Pick" for a week on Kickstarter, which exposed it to a much wider audience. In the end, nearly half my funding came from people who stumbled over the project on Kickstarter.
The git-annex assistant is an easy idea to grasp, at varying levels of technical expertise. It can be explained by analogy to Dropbox, or as something on top of git, or as an approach to avoid cloud vendor lock-in. Most of my previous work would be much harder to explain to a broad audience in a Kickstarter. But this still appeals to very technical audiences too. I hit a sweet spot here.
I'm enhancing software I've already written. This made my Kickstarter a lot less vaporware than some other software projects on Kickstarter. I even had a branch in git where I'd made sure I could pull off the basic idea of tying git-annex and inotify together.
I put in a lot of time on the Kickstarter side. My 3 minute video, amateurish as it is, took three solid days' work to put together. (And 14 thousand people watched it... eep!) I added new and better rewards, posted fairly frequent updates, answered numerous questions, etc.
I managed to have at least some Kickstarter rewards that are connected to the project in relevant ways. This is a hard part of Kickstarter for Free Software; just giving backers a copy of the software is not an incentive for most of them. A credits file mention pulled in a massive 50% of all backers, but they were mostly casual backers. On the other end, 30% of funds came from USB keychains, which will be a high-quality reward and have a real use case with git-annex.
The surprising, but gratifying part of the rewards was that 30% of funds came from rewards that were essentially "participate in this free software project" -- ie, "help set my goals" and "beta tester". It's cool to see people see real value in participating in Free Software.
I was flexible when users asked for more. I only hope I can deliver on the Android port. It's gonna be a real challenge. I even eventually agreed to spend a month trying to port it to Windows. (I refused to develop an iOS port even though it could have probably increased my funding; Steve's controlling ghost and I don't get along.)
It seemed to help the slope of the graph when I started actually working on the project, while the Kickstarter was still ongoing. I'd reached my original goal, so why not?
I've already put two weeks work into developing the git-annex assistant.
I'm blogging about my progress every day on its development blog.
The first new feature, a
git annex watch command that automatically
commits changes to git, will be in the next release of git-annex.
I can already tell this is going to be a year of hard work, difficult problems, and great results. Thank you for helping make it happen.
going to DebConf 12
Nicaragua here I come! I plan to be at DebCamp for a few days too, taking advantage of some time on internet-better-than-dialup to work on fixing the tasks on the CDs. Also will be available for one-on-one sessions on debhelper, debconf, debian-installer development, pristine-tar, or git-annex. Give me a yell if you'd like to spend some time learning about any of these.
Also, my git-annex Kickstarter ends in 3 days. It has reached heights that will fund me, at a modest rate, for a full year of development!
(I'm also about to start renting my house for the first time. Whee!)