Older blog entries for wlach (starting at number 146)

First Eideticker Responsiveness Tests

[ For more information on the Eideticker software I'm referring to, see this entry ]

Time for another update on Eideticker. In the last quarter, I’ve been working on two main items:

  1. Responsiveness tests (Android / FirefoxOS)
  2. Eideticker for FirefoxOS

The focus of this post is the responsiveness work. I’ll talk about Eideticker for FirefoxOS soon. :)

So what do I mean by responsiveness? At a high level, I mean how quickly one sees a response after performing an action on the device. For example, if I perform a swipe gesture to scroll the content down while browsing CNN.com, how long does it take after I start the gesture for the content to visibly scroll down? If you break it down, there’s a multi-step process that happens behind the scenes after a user action like this:

[Image: input-events]

If there is a significant delay anywhere in the steps above, the user experience is likely to be bad. Usability research suggests that any lag consistently above 100 milliseconds will lead the user to perceive things as laggy. To keep our users happy, we need to do our bit to make sure that we respond quickly at all levels that we control (just the application layer on Android, but pretty much everything on FirefoxOS). Even if we can’t complete the work required on our end to fully respond to the user’s action, we should at least display something to acknowledge that things have changed.

But you can’t improve what you can’t measure. Fortunately, we have the means to calculate the time delta between most of the steps above. I learned from Taras Glek this weekend that it should be possible to simulate the actual capacitive touch event on a modern touch screen. We can recognize when the hardware event is available to be consumed by userspace by monitoring the `/dev/input` subsystem. And once the event reaches the application (the Android or FirefoxOS application), there’s no reason we can’t add instrumentation in all sorts of places to track the processing of both the event and the rendering of the response.
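As an illustration, here’s a minimal Python sketch of what monitoring `/dev/input` looks like. The device path is an assumption (the actual node varies from device to device, and reading it generally requires root); the record layout is the standard Linux input_event struct:

import struct

EVENT_DEVICE = '/dev/input/event0'  # hypothetical; the actual node varies by device
EVENT_FORMAT = 'llHHi'              # struct input_event: timeval, type, code, value
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

with open(EVENT_DEVICE, 'rb') as dev:
    while True:
        data = dev.read(EVENT_SIZE)
        if len(data) < EVENT_SIZE:
            break
        sec, usec, ev_type, code, value = struct.unpack(EVENT_FORMAT, data)
        # the kernel stamps each event, so this tells us when the hardware
        # event became visible to userspace
        print('%d.%06d type=%d code=%d value=%d' % (sec, usec, ev_type, code, value))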

My working hypothesis is that it’s application-level latency (i.e. the time between the application receiving the event and being able to act on it) that dominates, so that’s what I decided to measure. This is purely based on intuition and by no means proven, so we should test this (it would certainly be an interesting exercise!). However, even if it turns out that there are significant problems here, we still care about the other bits of the stack — there’s lots of potentially-latency-introducing churn there and the risk of regression in our own code is probably higher than it is elsewhere since it changes so much.

Last year, I wrote up a tool called Orangutan that can directly inject input events into an input device on Android or FirefoxOS. Extending it to output timestamps when these events were registered seemed fairly straightforward. It was. By synchronizing the time between the device and the machine doing the capturing, we can then correlate the input timestamps with what we see in the capture. To help visualize what’s going on, I generated this view:

[Image: taskjs-framediff-view]

[Link to original]

The X axis in that graph represents time. The Y axis represents the difference, in number of pixels, between the frame at that time and the previous one. The red regions represent periods in the capture when events are ongoing (we use different colours only to distinguish distinct events). [1]

For a first pass at measuring responsiveness, I decided to measure the time between the first event being initiated and there being a significant frame difference (i.e. an observable response to the action). You can see some preliminary results on the eideticker dashboard:

[Image: taskjs-responsiveness]

[Link to original]
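In code terms, the measurement is roughly the following (a sketch only, with a made-up frame rate and threshold — the real Eideticker analysis differs in its details):

import numpy as np

def first_response_time(frames, event_time, fps=60.0, threshold=1000):
    # frames: list of 2D grayscale arrays decoded from the capture
    # event_time: seconds into the capture when the input event started
    for i in range(1, len(frames)):
        frame_time = i / fps
        if frame_time < event_time:
            continue
        # number of pixels that changed relative to the previous frame
        if np.count_nonzero(frames[i] != frames[i - 1]) > threshold:
            return frame_time - event_time
    return None  # no observable response in this capture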

The results seemed highly variable at first because I was synchronizing time between the device and an external ntp server, rather than with the host machine. I believe this is now fixed, hopefully giving us results that will indicate when regressions occur. As time goes by, we may want to craft some special eideticker tests specifically for responsiveness (e.g. a site with heavy javascript background processing).

[1] Incidentally, these “frame difference” graphs are also quite useful for understanding where and how application startup has regressed in Fennec — try opening these two startup views side-by-side (before/after a large regression) and spot the difference: [1] and [2]

Syndicated 2013-10-08 00:36:22 from William Lachance's Log

Early morning questions

Last night while I was lying in bed the mystery of my being here, present, again occurred to me. Pondered that a bit upon waking up. Let me formulate two mysteries that, as far as I know, no one has given really satisfactory answers to:

  1. Why does anything exist at all? And given that things do exist, why should they take the form that they do (planets, suns, nebulae, even life)?
  2. What accounts for the “subjectivity” of experience? That is, why is it that life is not only here, but that (in humanity’s case at least, probably in the case of other higher-order life, and possibly all life) there is a *conscious* experience that goes along with our perceptions of the world? It does not seem necessary for (1), does it?

Perhaps the answer here is just that the way our minds work (and hence anything we could form into thought or language) is based on descriptions of the world according to our perception. But (1) and (2) are, in a sense, beyond this. I think in the case of (1) it is obvious why. In the case of (2) this might just be a limitation of our language/thought — certainly we can express that someone/something is conscious in a third-party sort of way (i.e. “she perceived red”), though this does not (as far as I can tell) express the realness of the experience. It’s a description, not the experience. To really understand experience from a third-person perspective (and hence why it exists?), you would need to go outside experience — but description is part of experience! It is impossible outside of it.

[ Maybe I am just restating Kant here ]

Syndicated 2013-09-25 14:04:44 from William Lachance's Log

How to make great coffee that doesn’t generate 966 million pounds of waste a year

I was kind of appalled today to see this:

[Image: Story of Stuff Picture]

I initially thought this had to be a tall tale told by hippies, but after doing a back-of-the-envelope calculation, I realized that such a figure is entirely possible. Assume each packet weighs 0.05 pounds. Typing that into python, I get:

>>> 966*(10**6)/0.05
19320000000.0

19 billion packets. Seems awfully big. But divide that by, say, 10 million people:

>>> x = 966*(10**6)/0.05
>>> x/10**7
1932.0

1932 cups per person. Hmm, still seems big — that’s more than 5 cups a day. But if we say 30 million people are drinking this stuff, we rapidly get into the zone of plausibility.
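Putting it all together (with the same assumed packet weight), that works out to under two cups per person per day, which seems entirely believable:

>>> packets = 966*(10**6)/0.05
>>> round(packets/(30*10**6)/365, 2)
1.76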

People, it doesn’t have to be this way. You can have way better coffee that produces zero waste for only marginally more effort. Allow me to present the Will method of coffee production. First off, you use this thing:

[Image: bialetti_coffee_maker]

I have tried alternatives: french presses, filter coffee, “cowboy” percolators, even “professional” espresso makers. I maintain that the Bialetti filter produces the best cup of coffee: one full cup of espresso goodness. Not too strong, not too weak. Just perfect. Add some milk and you have an amazing café au lait. Of course, part of getting the best cup is using the right beans. If you’re brewing at home, you can afford to go a little fancy. Here’s what I’m currently using:

[Image: portlandia_coffee]

Yep, that’s right. A slice of Portlandia. Got this bag of espresso from Cafe Myriad, a rather upscale coffee joint. I think it was 15 dollars. A small bag like this is good for 30 cups or so. A keurig k-pack is $17.45 for 24. I’d say I’m still ahead. If you’re on a tighter budget you can get fair trade beans for cheaper ($10 a pound?) from Santropol in Montréal. Or whatever. Even generic stuff is probably fine (though I encourage fair trade if you can possibly afford it).

And what do I do with the waste? The only waste product of the Bialetti filter is coffee grinds. If I happened to live in a borough of Montréal with composting, I could dump it there. Unfortunately I don’t (if you live in NDG, please vote for these people in the upcoming municipal election; municipal composting is part of their platform, amongst other awesomeness) so I have a vermicompost. My morning ritual is dump yesterday’s coffee grinds into this bin:

[Image: vermicompost_pic]

… and then my numerous worms do the work of turning it into beautiful soil, which I use in my balcony garden to grow tomatoes, kale, swiss chard, basil, and oregano.

What I want to emphasize most of all is that my ritual takes very little time. Scraping out and cleaning my Bialetti over the worm compost bin takes around a minute. Refilling it with water and coffee takes maybe 30 seconds. Yes, once a year I have to take the worm castings out of my vermicompost bin. That takes longer (maybe 30 minutes to an hour), but it’s a once-a-year thing and you avoid having to go to the store to buy fertilizer. Less waste. Way better coffee. Only marginally more time spent. To me, this is a no-brainer.

Syndicated 2013-09-18 03:42:58 from William Lachance's Log

NIXI Update

I’ve been working on a new, mobile-friendly version of Nixi on and off for the past year and a bit. I’m not sure when it’s ever going to be finished, so I thought I might as well post the work in progress, which has these noteworthy improvements:

  • Even faster than before (using the Bootstrap library behind the scenes; no longer using a slow canvas library to update the map)
  • Sexier graphics (thanks to the aforementioned Bootstrap library)
  • Now uses client-side URLs to keep track of state as you navigate through the site. This allows you to bookmark a favorite spot (e.g. your home) and then go back to it later. For example, this link will give you a list of BIXI docks near Station C, the coworking space I belong to.

If you use BIXI at all, check it out and let me know what you think!

[Image: nixi screenshot]

Syndicated 2013-08-25 21:28:53 from William Lachance's Log

Simple command-line ntp client for Android and FirefoxOS

Today I did a quick port of Larry Doolittle’s ntpclient program to Android and FirefoxOS. Basically, this lets you easily synchronize your device’s time to that of a central server. Yes, there are lots and lots of Android “applications” that let you do this, but I wanted to be able to do it from the command line, because that’s how I roll. If you’re interested, source and instructions are here:

https://github.com/wlach/ntpclient-android

For those curious, no, I didn’t just do this for fun. :) For next quarter, we want to write some Eideticker-based responsiveness tests for FirefoxOS and Android. For example, how long does it take from the time you tap on an icon in the homescreen on FirefoxOS to when the application is fully loaded? Or on Android, how long does it take to see a full list of sites in the awesomebar from the time you tap on the URL field and enter your search term?

Because an Eideticker test run involves two different machines (a host machine which controls the device and captures video of it in action, as well as the device itself), we need to use timestamps to really understand when and how events are being sent to the device. To do that reliably, we really need some easy way of synchronizing time between two machines (or at least accounting for their differences, which amounts to about the same thing). NTP struck me as being the easiest, most standard way of doing this.
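For the curious, the offset calculation at the heart of NTP is simple enough to sketch in a few lines of Python. This is just an illustration of the idea, not ntpclient itself — the server name is arbitrary and the packet handling is stripped to the bare minimum:

import socket
import struct
import time

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the unix epoch (1970)

def ntp_timestamp_to_unix(data, offset):
    # NTP timestamps are 32 bits of seconds plus 32 bits of fraction
    sec, frac = struct.unpack('!II', data[offset:offset + 8])
    return sec - NTP_DELTA + frac / 2.0**32

def clock_offset(server='pool.ntp.org'):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5)
    t1 = time.time()
    s.sendto(b'\x1b' + 47 * b'\0', (server, 123))  # minimal SNTP client request
    data, _ = s.recvfrom(48)
    t4 = time.time()
    s.close()
    t2 = ntp_timestamp_to_unix(data, 32)  # server receive time
    t3 = ntp_timestamp_to_unix(data, 40)  # server transmit time
    # the standard NTP offset estimate, which cancels out symmetric network delay
    return ((t2 - t1) + (t3 - t4)) / 2.0

print('offset from server: %.3f seconds' % clock_offset())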

Syndicated 2013-07-08 23:22:49 from William Lachance's Log

Proof of concept Eideticker dashboard for FirefoxOS

[ For more information on the Eideticker software I'm referring to, see this entry ]

I just put up a proof-of-concept Eideticker dashboard for FirefoxOS here. Right now it has two days’ worth of data, manually sampled from an Unagi device running b2g18. There are two tests so far: one that measures the “speed” of the contacts application scrolling, another that measures the amount of time it takes for the contacts application to be fully loaded.

For those not already familiar with it, Eideticker is a benchmarking suite that captures live video data coming from a device and analyzes it to determine performance. This lets us get data that is more representative of actual user experience (as opposed to an often artificial benchmark). For example, Eideticker measures contacts startup as taking anywhere between 3.5 and 4.5 seconds, versus the 0.5 to 1 seconds that the existing datazilla benchmarks show. What accounts for the difference? If you step through an eideticker-captured video, you can see that even though something appears very quickly, not all the contacts are displayed until the 3.5 second mark. There is a gap between an app being reported as “loaded” and it being fully available for use, which we had not been measuring until now.
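One way to pin down “fully loaded” from a capture is to find the point after which the image stops changing. Here’s a rough sketch of that idea (the frame rate and threshold are invented for illustration, and the real analysis is fuzzier than this):

import numpy as np

def time_to_stable_frame(frames, fps=60.0, threshold=100):
    # walk backwards: find the last frame that differed meaningfully
    # from its predecessor; everything after that point is stable
    for i in range(len(frames) - 1, 0, -1):
        if np.count_nonzero(frames[i] != frames[i - 1]) > threshold:
            return i / fps
    return 0.0  # the capture never changed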

At this point, I am most interested in hearing from FirefoxOS developers on new tests that would be interesting and useful to track performance of the system on an ongoing basis. I’d obviously prefer to focus on things which have been difficult to measure accurately through other means. My setup is rather fiddly right now, but hopefully soon we can get some useful numbers going on an ongoing basis, as we do already for Firefox for Android.

Syndicated 2013-05-06 22:23:16 from William Lachance's Log

Further meditative practice

[Image: biodome]

Okay, remember last time when I said I was going to continue my “sham of a human existence” and not commit to a Zen practice? Well, I came back to the idea sooner than I thought: the experience was just too compelling for me not to do some further exploration. By some strange coincidence, Hacker News had a great thread on meditation just after I wrote my last blog entry, where a few people recommended a book called Mindfulness in Plain English. I figured doing meditation at home didn’t involve any kind of huge commitment (don’t like it? just stop!), so I decided to order it online and give it a try.

Mindfulness in Plain English is really fascinating stuff. It describes how to do a type of Vipassana (insight) meditation, which is practiced with a great deal of ritual in places like Thailand, India, and Sri Lanka. The book, however, strips out most of the ritual and just gives you a set of techniques that is quite accessible for a (presumably) western audience. It seems like the goal of Vipassana is quite similar to that of Zen (enlightenment; release from attachment and dualism), though the methods and rituals around it are quite different (e.g. there are no koans). Perhaps it’s akin to the difference between GIMP and Photoshop: just as those two programs are both aimed at the manipulation of images, both Vipassana and Zen are aimed at the manipulation of the mind. There are differences in the script of how to do so, but the overarching purpose is the same.

Regardless of the ultimate differences between the two traditions, the portion of the Vipassana method that the book describes is almost exactly what I tried at the Zen workshop: sit still and pay attention to your breathing. There are a few minor differences in terms of the suggested posture (the book recommends either sitting cross-legged or in the lotus position) and the focal point (Mindfulness recommends the tip of the nostrils). But essentially it’s the same stuff. Focus on the breath — counting it if necessary — rinse, repeat.

As I mentioned before, this is actually really hard to do properly. The mind keeps wandering and wandering on all sorts of tangents: plans, daydreams, even thoughts about the meditation itself. Where I found Mindfulness in Plain English helpful was in the advice it gave for dealing with this “monkey mind” phenomenon. The subject is dealt with throughout the book (with two chapters on it and nothing else), but all the advice boils down to “treat it as part of the meditation”. Don’t try to avoid it, just treat it as something to be aware of in the same way as breathing. Then once you have acknowledged it, move the attention back to the breath.

“Mindfulness” can be described as a non-judgemental awareness of what we are doing (and what we are supposed to be doing). Every time a distraction is noticed, felt, and understood, you’ve just experienced some approximation of the end goal of the meditation. As with other things (an exercise regimen, learning to play a musical instrument), every small victory should push you further along the path to where you want to go. With enough practice, mindfulness might just become part of your day-to-day experience.

Or so I’m told. Up to now, I haven’t enjoyed any long-lasting effects aside from (possibly?) a bit more mental clarity in my day-to-day tasks. But I’ve found the meditation practice to be extremely interesting, both from the point of view of understanding my own thought and as something rather relaxing in and of itself. So while I’m curious as to what comes next, I’m happy enough with things as they are in the present. More updates as appropriate.

Syndicated 2013-04-28 20:55:18 from William Lachance's Log

Actual useful FirefoxOS Eideticker results at last

Another update on getting Eideticker working with FirefoxOS. Once again this is sort of high-level; I’m looking forward to writing something more in-depth soon, now that we have the basics working. :)

I finally got the last kinks out of the rig I was using to capture live video from FirefoxOS phones with the Point Grey devices last week. To make things workable, I had to write some custom code to isolate the actual device screen from the rest of the capture, among a few other things. The setup looks interesting (it reminds me a bit of something out of The War of the Worlds):

[Image: eideticker-pointgrey-mounted]

Here’s some example video of a test I wrote to measure contacts scrolling performance (measured at a very respectable 44 frames per second, in case you were wondering):

Surprisingly enough, I didn’t wind up having to write any code to compensate for a noisy image. Of course there’s a certain amount of variance in every frame depending on how much light is hitting the camera sensor at any particular moment, but apparently not enough to interfere with getting useful results in the tests I’ve been running.

Likely next step: create some kind of chassis for mounting both the camera and the device on a permanent basis (instead of an ad hoc one on my desk) so we can start running these sorts of tests on a daily basis, much like we currently do with Android on the Eideticker Dashboard.

As an aside, I’ve been really impressed with both the Marionette framework and the gaiatest python module that was written for FirefoxOS. Writing the above test took just 5 minutes — and the code is quite straightforward. Quite the pleasant change from my various efforts in Android automation.
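To give a flavour of what driving a device over Marionette looks like, here’s a minimal sketch. It assumes a device with a Marionette server listening on the usual port, and it scrolls via script injection for simplicity — the real test uses gaiatest’s own helpers and gestures:

from marionette import Marionette

client = Marionette(host='localhost', port=2828)
client.start_session()
# scroll the current frame's content down a bit, as a stand-in for
# the real test's flick gesture over the contacts list
client.execute_script('window.scrollBy(0, 400);')
client.delete_session()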

Syndicated 2013-04-22 15:32:51 from William Lachance's Log

The need for a modern open source email client and Geary’s fundraiser

One of my frustrations with the Linux desktop is the lack of an email client that’s in the same league as GMail or Apple’s mail.app. Thunderbird is ok as far as it goes (I use it for my day-to-day Mozilla correspondence), but I miss having a decent conversation view of email (yes, I tried the conversation view extension — it didn’t work particularly well), and the search functionality is rather slow and cumbersome. I’d like to be optimistic about these problems being fixed at some point… but after nearly 2 years of using the product without much visible improvement, my expectation of that happening is rather low.

The Yorba non-profit recently started a fundraiser to work on the next edition of Geary, an email client which I hope will fill the niche that I’m talking about. It’s still pretty rough around the edges, but even at this early stage the conversation view is beautiful and more or less exactly what I want. The example of Shotwell (their photo management application) suggests that they know a thing or two about creating robust and usable software, not a common thing in this day and age. In any case, their pitch was compelling enough for me to donate a few dollars to the cause. If you care about having a great email experience that is completely under your control (and not that of an advertising or product company with their own agenda), then maybe you could too?

Syndicated 2013-04-20 03:02:17 from William Lachance's Log

A visit to the Montreal Zen Center

The Road to the Montreal Zen Center

So for a bit of a departure from the usual technical content, a personal anecdote. I went to the Montreal Zen Center today for a workshop, which was a most illuminating experience. I’d been pretty fascinated with the idea of zen for a while (see this post of mine from 2006, for example) but was pretty stuck on how to put it into practice (aside from being sure it was something you had to live). So, this was a step in that direction. After having gone to it, I wouldn’t say I’ve figured anything out (in fact I’m more confused than ever), but I would say one thing with conviction: this is the way to learn more.

It was pretty simple stuff: exactly how they describe on the web page I linked to. A short verbal introduction on some of the ideas of zen, then a tea break, then instruction on how to begin practising meditation, another tea break (this time with biscuits), then actually practising meditation, then question & answer about the meditation. It doesn’t really sound like much, and it wasn’t. But nonetheless I can’t stop thinking about the experience.

As far as I can gather, the “revelation” offered by Zen Buddhism is simple: our existence as separate, unique beings is an illusion of the mind. This illusion makes us suffer. However, it is possible with practice to overcome this illusion and realize your true nature as being one with the world. I’m probably butchering it a little by writing about it in this way — to a certain extent that’s on me, but it’s also rather unavoidable, since in a way the concepts are beyond words (words imply a dualism). Regardless, the important thing isn’t to grasp zen intellectually, but to come to a natural understanding through the practice of meditation (aka “the practice”).

And on that note, the meditation is austere and almost certainly less than you’d expect. There is no prayer and very minimal ritual. Just a simple breath-counting exercise conducted in a seated posture for 20 minutes, followed by a short walking exercise that lasts 5 minutes, then the breath-counting exercise repeated for another 20 minutes. For its utter simplicity, I found it incredibly difficult. I imagine that, like anything, with weeks, months, or years of practice it (and the variations of it that experienced practitioners use, where they meditate on koans) would become easier.

I’m still giving thought to whether I want to take the next steps with them and begin a regular meditation practice. It sounds like really hard work (meditation practice 6 days a week by yourself, plus regular visits to the zen center), which brings up the question: why do you want to do this? There’s some kind of weird contradiction between realizing that you as a self don’t really exist and committing yourself radically to this kind of practice. The only thing I can call it is a “leap of faith”. My current thinking is that I’m not quite ready for that right now, but maybe in a while. For now I think I’m pretty happy going to yoga a few times a week and living my sham of a human existence. ;)

Syndicated 2013-03-24 20:47:18 from William Lachance's Log
