Older blog entries for wlach (starting at number 139)

Actual useful FirefoxOS Eideticker results at last

Another update on getting Eideticker working with FirefoxOS. Once again this is sort of high-level; I’m looking forward to writing something more in-depth soon, now that we have the basics working. :)

I finally got the last kinks out of the rig I was using to capture live video from FirefoxOS phones with the Point Grey devices last week. To make the results usable I had to write some custom code to isolate the actual device screen from the rest of the capture, among a few other things. The setup looks interesting (reminds me a bit of something out of the War of the Worlds):

[image: eideticker-pointgrey-mounted]
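The screen-isolation part is essentially a crop of every captured frame down to the device. A sketch of the idea (the bounding-box numbers are made up and frames are assumed to be numpy arrays; this isn’t the actual Eideticker code):

    # hypothetical, pre-measured rectangle locating the phone's screen
    # within the captured frame: x, y, width, height
    DEVICE_BBOX = (212, 80, 480, 800)

    def crop_to_device(frame, bbox=DEVICE_BBOX):
        # throw away everything in the frame except the device screen
        x, y, w, h = bbox
        return frame[y:y + h, x:x + w]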

Here’s some example video of a test I wrote to measure contacts scrolling performance (measured at a very respectable 44 frames per second, in case you were wondering):

Surprisingly enough, I didn’t wind up having to write any code to compensate for a noisy image. Of course there’s a certain amount of variance in every frame depending on how much light is hitting the camera sensor at any particular moment, but apparently not enough to interfere with getting useful results in the tests I’ve been running.

Likely next step: Create some kind of chassis for mounting both the camera and the device on a permanent basis (instead of an ad hoc one on my desk) so we can start running these sorts of tests on a daily basis, much like we currently do with Android on the Eideticker Dashboard.

As an aside, I’ve been really impressed with both the Marionette framework and the gaiatest python module that was written for FirefoxOS. Writing the above test took just 5 minutes — and the code is quite straightforward. Quite the pleasant change from my various efforts in Android automation.
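To give a flavour of why I’m so impressed, here’s roughly what such a test looks like (a sketch from memory rather than the actual test; the app name and the scrolling mechanism in particular are illustrative):

    from gaiatest import GaiaTestCase

    class TestContactsScrolling(GaiaTestCase):

        def test_scrolling(self):
            # launch the app under test via gaiatest's application helper
            self.apps.launch('Contacts')

            # pan the list a few times while the camera records; the real
            # test drives a native flick gesture, but injected JavaScript
            # gives the general idea
            for _ in range(5):
                self.marionette.execute_script(
                    'window.wrappedJSObject.scrollBy(0, window.innerHeight);')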

Syndicated 2013-04-22 15:32:51 from William Lachance's Log

The need for a modern open source email client and Geary’s fundraiser

One of my frustrations with the Linux desktop is the lack of an email client that’s in the same league as GMail or Apple’s mail.app. Thunderbird is ok as far as it goes (I use it for my day-to-day Mozilla correspondence), but I miss having a decent conversation view of email (yes, I tried the conversation view extension — it didn’t work particularly well), and the search functionality is rather slow and cumbersome. I’d like to be optimistic about these problems being fixed at some point… but after nearly 2 years of using the product without much visible improvement, my expectation of that happening is rather low.

The Yorba non-profit recently started a fundraiser to work on the next edition of Geary, an email client which I hope will fill the niche that I’m talking about. It’s pretty rough around the edges still, but even at this early stage the conversation view is beautiful and more or less exactly what I want. The example of Shotwell (their photo management application) suggests that they know a thing or two about creating robust and usable software, not a common thing in this day and age. In any case, their pitch was compelling enough for me to donate a few dollars to the cause. If you care about having a great email experience that is completely under your control (and not that of an advertising or product company with their own agenda), then maybe you could too?

Syndicated 2013-04-20 03:02:17 from William Lachance's Log

A visit to the Montreal Zen Center

The Road to the Montreal Zen Center

So for a bit of a departure from the usual technical content, a personal anecdote. I went to the Montreal Zen Center today for a workshop, which was a most illuminating experience. I’d been fascinated by the idea of zen for a while (see this post of mine from 2006, for example) but was pretty stuck on how to put it into practice (aside from being sure it was something you had to live). So, this was a step in that direction. Having gone, I wouldn’t say I’ve figured anything out (in fact I’m more confused than ever), but I would say one thing with conviction: this is the way to learn more.

It was pretty simple stuff: exactly how they describe on the web page I linked to. A short verbal introduction on some of the ideas of zen, then a tea break, then instruction on how to begin practising meditation, another tea break (this time with biscuits), then actually practising meditation, then question & answer about the meditation. It doesn’t really sound like much, and it wasn’t. But nonetheless I can’t stop thinking about the experience.

As far as I can gather, the “revelation” offered by Zen Buddhism is simple: our existence as separate, unique beings is an illusion of the mind. This illusion makes us suffer. However, it is possible with practice to overcome this illusion and realize your true nature as being one with the world. I’m probably butchering it a little bit by writing about it in this way; to a certain extent that’s my own limitation, but it’s also rather unavoidable, since in a way the concepts are beyond words (words imply a dualism). Regardless, the important thing isn’t to grasp zen intellectually, but to come to a natural understanding through the practice of meditation (aka “the practice”).

And on that note, the meditation is austere and almost certainly less than you’d expect. There is no prayer and very minimal ritual. Just a simple breath-counting exercise conducted in a seated posture for 20 minutes, followed by a short walking exercise that lasts 5 minutes, then the breath-counting exercise repeated for another 20 minutes. For its utter simplicity, I found it incredibly difficult. I imagine that, like anything, with weeks, months, or years of practice, it (and the variations of it that experienced practitioners use, where they meditate on koans) would become easier.

I’m still giving thought to whether I want to take the next steps with them and begin a regular meditation practice. It sounds like really hard work (meditation practice 6 days a week on your own, plus regular visits to the zen center), which brings up the question: why do you want to do this? There’s some kind of weird contradiction between realizing that you as a self don’t really exist and committing yourself radically to this kind of practice. The only thing I can call it is a “leap of faith”. My current thinking is that I’m not quite ready for that right now, but maybe in a while. For now I think I’m pretty happy going to yoga a few times a week and living my sham of a human existence. ;)

Syndicated 2013-03-24 20:47:18 from William Lachance's Log

Eideticker: Limitations in cross-browser performance testing

Last summer I wrote a bit about using Eideticker to measure the relative performance of Firefox for Android versus other browsers (Chrome, stock, etc.). At the time I was pretty optimistic about Eideticker’s usefulness as a truly “objective” measure of user experience that would give us a more accurate view of how we compared against the competition than traditional benchmarking suites (which, more often than not, measure things that a user will never see when browsing the web normally). Since then, there have been some things I’ve discovered, as well as some developments in the “state of the art” of mobile browsing, that have caused me to reconsider that view — while I haven’t given up entirely on this concept (and I’m still very much convinced of eideticker’s utility as an internal benchmarking tool), there are definitely some limitations in terms of what we can do that I’m not sure how to overcome.

Essentially, there are currently three different types of Eideticker performance tests:

  • Animation tests: Measure the smoothness of an animation by comparing successive frames and seeing how many are different. Currently the only example of this is the canvas “clock” test, but many others are possible (the sketch just after this list shows the basic frame-comparison idea).
  • Startup tests: Measure the amount of time it takes from when the application is launched to when the browser is fully running/available. There are currently two variants of this test in the dashboard, both of which measure the time taken to fully render Firefox’s home screen (the only difference between the two is whether the browser profile is fully initialized). The dirty profile benchmark probably most closely resembles what a user would usually experience.
  • Scrolling tests: Measure the amount of undrawn area when the user is panning a website. Most of the current eideticker tests are of this kind. A good example of this is the taskjs benchmark.
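For the animation case, the core frame-comparison idea fits in a few lines. Here’s a sketch (assuming the captured frames arrive as numpy arrays; this is illustrative, not Eideticker’s actual analysis code):

    import numpy

    def count_unique_frames(frames):
        # Of the frames captured at the camera's fixed rate, count those
        # that differ from their predecessor: a dropped frame in the
        # animation shows up as an exact duplicate of the previous one.
        unique = 1 if len(frames) else 0
        for previous, current in zip(frames, frames[1:]):
            if numpy.count_nonzero(previous != current):
                unique += 1
        return unique

Dividing that count by the capture’s duration in seconds gives an effective frames-per-second figure for the animation.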

In this blog post, I’m going to focus on startup and scrolling tests. Animation tests are interesting, but they are also generally the sorts of tests that are easiest to measure in synthetic ways (e.g. by putting a frame counter in your javascript code) and have thus far not been a huge focus for Eideticker development.

As it turns out, it’s unfortunately been rather difficult to create truly objective tests which measure the difference between browsers in these two categories. I’ll go over them in order.

Startup tests

There are essentially two types of startup tests: one where you load a web page in the browser from another app (e.g. by clicking on a link in the Twitter application), and another where you measure the amount of time it takes to get to the browser’s home screen when you explicitly launch the app (e.g. by pressing the Firefox icon in the app chooser).

The first is actually fairly easy to test across browsers, although we are not currently doing so. There’s not really a good reason for that; it was just an oversight, so I filed bug 852744 to add something like this.
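Measuring that first case mostly amounts to firing the same VIEW intent at each browser and timing the result against the video capture. Something along these lines (the activity names are the ones these browsers used around this time, but treat the whole snippet as illustrative rather than Eideticker’s actual code):

    import subprocess

    BROWSERS = {
        'firefox': 'org.mozilla.firefox/.App',
        'chrome': 'com.android.chrome/com.google.android.apps.chrome.Main',
        'stock': 'com.android.browser/.BrowserActivity',
    }

    def open_url_in(browser, url):
        # fire a VIEW intent at the chosen browser; the (already rolling)
        # video capture is what actually measures how long rendering takes
        subprocess.check_call(['adb', 'shell', 'am', 'start',
                               '-a', 'android.intent.action.VIEW',
                               '-n', BROWSERS[browser],
                               '-d', url])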

The second case (startup to the browser’s home screen) is a bit more difficult. The problem here, in a nutshell, is that an apples-to-apples comparison is very difficult if not impossible, simply because different browsers do different things when the user presses the application icon. Here’s what we see with Firefox:

And here’s what we see with Chrome:

And here’s what we see with the stock browser:

As you can see Chrome and the stock browser are totally different: they try to “restore” the browser back to its state from the last time (in Chrome’s case, I was last visiting taskjs.org, in Stock’s case, I was just on the homepage).

Personally I prefer Firefox’s behaviour (generally I want to browse somewhere new when I press the icon on my phone), but that’s really beside the point. It’s possible to hack around what Chrome is doing by restoring the profile between sessions to some sort of clean “new tab” state, but at that point you’re not really reproducing a realistic user scenario. Sure, we can draw a comparison, but how valid is it really? It seems to me that the comparison is mostly only useful in a very broad “how quickly does the user see something useful” sense.

Panning tests

I had quite a bit of hope for these initially. They seemed like a place where Eideticker could do something that conventional benchmarking suites couldn’t, as things like panning a web page are not presently possible to do in JavaScript. The main measure I tried to compare against was something called “checkerboarding”, which essentially represents the amount of time that the user waits for the page to redraw when panning around.

At the time that I wrote these tests, most browsers displayed regions that were not yet drawn while panning using the page background. We figured that it would thus be possible to detect regions of the page which were not yet drawn by looking for the background color while initiating a panning action. I thus hacked up existing web pages to have a magenta background, then wrote some image analysis code to detect regions of that color (under the assumption that magenta only rarely occurs in web pages). It worked pretty well.
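The analysis itself boils down to counting pixels of that colour in each frame. A simplified sketch (assuming RGB frames as numpy arrays and an exact colour match; the real code has to tolerate some slop):

    import numpy

    MAGENTA = (255, 0, 255)  # the background colour given to the test pages

    def checkerboard_fraction(frame):
        # fraction of the frame still showing the magenta background,
        # i.e. the area the browser has not yet managed to draw
        r, g, b = frame[:, :, 0], frame[:, :, 1], frame[:, :, 2]
        undrawn = (r == MAGENTA[0]) & (g == MAGENTA[1]) & (b == MAGENTA[2])
        return numpy.count_nonzero(undrawn) / float(undrawn.size)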

The world’s moved on a bit since I wrote that: modern browsers like Chrome and Firefox now use something like progressive drawing to display a lower-resolution “tile” where possible, so the user at least sees something resembling the actual page while panning on a slower device. To see what I mean, try visiting a slow-to-render site like taskjs.org and panning down quickly. You should see something like this (click to expand):

Unfortunately, while this is certainly a better user experience, it is not so easy to detect and measure. :) For Firefox, we’ve disabled this behaviour so that we see the old checkerboard pattern. This is useful for our internal measurements (we can see whether both our drawing code and our heuristics about when to draw are getting better or worse over time), but it only works for us.

If anyone has any suggestions on what to do here, let me know as I’m a bit stuck. There are other metrics we could still compare against (e.g. how smooth the panning animation is, aka frames per second), but these aren’t nearly as interesting.

Syndicated 2013-03-20 22:29:20 from William Lachance's Log

Documentation for mozdevice

Just wanted to give a quick heads up that as part of the ateam’s ongoing effort to improve the documentation of our automated testing infrastructure, we now have online documentation for mozdevice, the python library we use for interacting with Android- and FirefoxOS-based devices in automated testing.

Mozdevice is used in pretty much every one of our testing frameworks that has mobile support, including mochitest, reftest, talos, autophone, and eideticker. Additionally, mozdevice is used by release engineering to clean up, monitor, and otherwise manage our hundred-odd tegra and panda development boards that we use in tbpl. See sut_tools (old, buildbot-based, what we currently use) and mozpool (the new and shiny future).
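If you haven’t used mozdevice before, the basic flavour is something like this (a minimal sketch; consult the documentation above for the real method names and signatures):

    from mozdevice import DeviceManagerADB

    dm = DeviceManagerADB()  # assumes a single device visible to adb

    # print some basic information about the attached device
    print dm.getInfo('os')

    # copy a file over to the device
    dm.pushFile('local.txt', '/mnt/sdcard/tests/local.txt')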

Syndicated 2013-03-11 13:53:57 from William Lachance's Log

Follow up on “Finding a Camera for Eideticker”

Quick update on my last post about finding some kind of camera suitable for use in automated performance testing of fennec and b2g with eideticker. Shortly after I wrote that, I found out about a company called Point Grey Research which manufactures custom web cameras intended for exactly the sorts of things we’re doing. Initial results with the Flea3 camera I ordered from them are quite promising:

(the actual capture is even higher quality than that would suggest… some detail is lost in the conversion to webm)

There seems to be some sort of issue with the white balance in that capture which is causing a flashing effect (I suspect this just involves flipping some kind of software setting… still trying to grok their SDK), and we’ll need to create some sort of enclosure for the setup so ambient light doesn’t interfere with the capture… but overall I’m pretty optimistic about this baby. 60 frames per second, very high resolution (1280×960), no issues with HDMI (since it’s just a USB camera), relatively inexpensive.

Syndicated 2013-03-08 23:10:31 from William Lachance's Log

Finding a camera for Eideticker

[ For more information on the Eideticker software I'm referring to, see this entry ]

Ok, so as I mentioned last time, I’ve been looking into making Eideticker work for devices without native HDMI output by capturing their output with some kind of camera. So far I’ve tried four different devices for this task, all of which have been inadequate for different reasons. I was originally just going to write an email about this to a few concerned parties, but then figured I may as well structure it into a blog post. Maybe someone will find it useful (or better yet, have some ideas).

Elmo MO-1

This is the device I mentioned last time. Easy to set up, plays nicely with the Decklink capture card we’re using for Eideticker. It seemed almost perfect, except that:

  1. The image quality is really bad (beaten even by a $200 standard digital camera): tons of noise. Not *necessarily* a deal breaker, but it still sucks.
  2. More importantly, there seems to be no way of turning off the auto white balance adjustment. This makes automated image analysis impossible if the picture changes, as is highlighted in this video:

Canon Rebel T4i

This is the first camera that was recommended to me at the camera shop I’ve been going to. It does have an HDMI output signal, but it’s not “clean”. This blog post explains the details better than I could. Next.

Nikon D600

Supposedly does native clean 720p output, but unfortunately the output is in a “box”, so it isn’t recognized by the Decklink cards that we’re using (which insist on a full 1280×720 HDMI signal to work). Next.

Nikon D800

It is possible to configure this one to not put a box around the output, so the Decklink card does recognize it. Except that the camera shuts off the HDMI signal whenever the input parameters change on the card or the signal input is turned on, which essentially makes it useless for Eideticker (this happens every time we start the Eideticker harness). Quite a shame, as the HDMI signal is quite nice otherwise.

To be clear, with the exception of the Elmo all the devices above seem like fine cameras, and should more than do for manual captures of B2G or Android phones (which is something we probably want to do anyway). But for Eideticker, we need something that works in automation, and none of the above fit the bill. I guess I could explore using a “real” video camera as opposed to a DSLR acting like one, though I suspect I might run into some of the same sorts of issues depending on how the HDMI output of those devices behaves.

Part of me wonders whether a custom solution wouldn’t work better. How complicated could it be to construct your own digital camera anyway? ;) Hook up a fancy camera sensor to a pandaboard, get it to output through the HDMI port, and then we’re set? Or better yet, maybe just get a fancy webcam like the Playstation Eye and hook it up directly to a computer? That would eliminate the need for our expensive video capture card setup altogether.

Syndicated 2013-02-19 19:22:59 from William Lachance's Log

Eideticker for FirefoxOS

[ For more information on the Eideticker software I'm referring to, see this entry ]

Here’s a long overdue update on where we’re at with Eideticker for FirefoxOS. While we’ve had a good amount of success getting useful, actionable data out of Eideticker for Android, so far we haven’t been able to replicate that success for FirefoxOS. This is not for lack of trying: first Malini Das and then I have been working at it since summer 2012.

When it comes right down to it, instrumenting Eideticker for B2G is just a whole lot more complex. On Android, we could take the operating system (including support for all the things we needed, like HDMI capture) as a given. The only tricky part was instrumenting the capture so the right things happened at the right moment. With FirefoxOS, we need to run these tests on entire builds of a whole operating system that is constantly changing. Not nearly as simple. That said, I’m starting to see light at the end of the tunnel.

Platforms

We initially selected the pandaboard as the main device to use for eideticker testing, for two reasons. First, it’s the same hardware platform we’re targeting for other b2g testing in tbpl (mochitest, reftest, etc.), and is the platform we’re using for running Gaia UI tests. Second, unlike every other device that we’re prototyping FirefoxOS on (to my knowledge), it has HDMI-out capability, so we can directly interface it with the Eideticker video capture setup.

However, the panda also has some serious shortcomings. First, it’s obviously not a platform we’re shipping, so the performance we’re seeing from it is subject to different factors that we might not see with a phone actually shipped to users. For the same reason, we’ve had many problems getting B2G running reliably on it, as it’s not something most developers have been hacking on a day to day basis. Thanks to the heroic efforts of Thomas Zimmerman, we’ve mostly got things working ok now, but it was a fairly long road to get here (several months last fall).

More recently, we became aware of something called an Elmo, which might let us combine the best of both worlds. An Elmo is really just a tiny mounted video camera with a bunch of outputs, and is, as I understand it, most commonly used to project documents in a classroom/presentation setting. However, it seems to do a great job of capturing mobile phones in action as well. :)

The nice thing about using an external camera for the video capture part of eideticker is that we are no longer limited to devices with HDMI out — we can run the standard set of automated tests on ANYTHING. We’ve already used this with some success to get videos of FirefoxOS startup times versus Android on the Unagi (a development phone that we’re using internally) for manual analysis. Automating this process may be trickier because the video capture is no longer “perfect”, but we may be able to work around that (more discussion of this later).

FirefoxOS web page tests

These are the same tests we run on Android. They should give us a rough idea of where our performance is when browsing / panning web sites like CNN. So far, I’ve only run these tests on the Pandaboard and they are INCREDIBLY slow (like 1-3 frames per second when scrolling). So much so that I have to think there is something broken about our hardware acceleration on this platform.

FirefoxOS application tests

These are some new tests written in a framework that allows you to script arbitrary interactions in FirefoxOS, like launching applications or opening the task switcher.

I’m pretty happy with this. It seems to work well. The only problems I’m seeing are with the platform we’re running these tests on. With the pandaboard, applications look weird (since the screen resolution doesn’t remotely resemble the 320×480 resolution of our current devices) and performance is abysmal. Take, for example, this capture of application switching performance, which runs at only roughly 3-4 fps:

So what now?

I’m not 100% sure yet (partly it will depend on what others say as well as my own investigation), but I have a feeling that capturing video of real devices running FirefoxOS using the Elmo is the way forward. First, the hardware and driver situation will be much more representative of what we’ll actually be shipping to users. Second, we can flash new builds of FirefoxOS onto them automatically, unlike the pandaboards where you currently either need to manually flash and reset (a time consuming and error prone process) or set up an instance of mozpool (which I understand is quite complicated).

The main use case I see with eideticker-on-panda would be where we wanted to run a suite of tests on checkin (in tbpl-like fashion) and we’d need to scale to many devices. While cool, this sounds like an expensive project (both in terms of time and hardware) and I think we’d do better with getting something slightly smaller-scale running first.

So, the real question is whether or not the capture produced by the Elmo is amenable to the same analysis that we do on the raw HDMI output. At the very least, some of eideticker’s image analysis code will have to be adapted to handle a much “noisier” capture. As opposed to capturing the raw HDMI signal, we now have to deal with the real world and its irritating fluctuations in ambient light levels and all the rest. I have no doubt it is *possible* to compensate for this (after all, this is what the human eye/brain does all the time), but the question is how much work it will be. I can’t speak for anyone else at Mozilla, but I’m not sure I really have the time to start a Ph.D-level research project in computational vision. ;) I’m optimistic that won’t be necessary, but we’ll just have to wait and see.
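For what it’s worth, I suspect the first line of defence is well short of a research project: instead of demanding exact pixel equality between frames, ignore per-pixel differences below a threshold calibrated against a static capture. A sketch of that idea (the constants here are invented, and this is one possible approach rather than what Eideticker ended up doing):

    import numpy

    NOISE_THRESHOLD = 12      # invented; would be calibrated on a static scene
    MIN_CHANGED_PIXELS = 100  # likewise invented

    def frames_differ(previous, current):
        # a frame comparison that shrugs off small per-pixel fluctuations
        # from the sensor and ambient light, rather than demanding equality;
        # frames are (height, width, channel) uint8 arrays, widened before
        # subtracting to avoid wraparound
        delta = numpy.abs(previous.astype(int) - current.astype(int))
        changed = numpy.count_nonzero(delta.max(axis=2) > NOISE_THRESHOLD)
        return changed > MIN_CHANGED_PIXELS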

Syndicated 2013-02-01 17:50:16 from William Lachance's Log

Using the dm utility to interact with Android or FirefoxOS devices

I promised a few people I’d blog about this, so here you go. :)

To help with the business of making Android or FirefoxOS devices do our bidding, Mozilla Automation & Tools developed a Python library called mozdevice which allows you to control these devices either using the Android Debug Bridge protocol (which is actually not Android specific: FirefoxOS devices use it too) or the System Under Test protocol (a Mozilla-specific thing).

Anyone familiar with debugging these devices is doubtless familiar with adb, which provides a command line interface that allows you to push/pull files, run a shell, etc. To help test our python code (as well as expand the scope of what’s possible on the command line), I created a similar utility a few months ago called “dm” which provides a command-line interface to the aforementioned mozdevice code. It’s shipped as part of mozdevice, and testing it out is pretty simple if you have virtualenv installed:

virtualenv mozdevice
cd mozdevice
./bin/pip install mozdevice
source bin/activate
dm --help

I generally use this utility for two things:

  1. Testing out mozdevice code. For example, today we discovered an (unfortunate) bug in devicemanagerADB’s killProcess routine. It was easy to verify both the problem and that my fix did what I expected by starting my custom build of fennec and running this command:

    dm --package-name org.mozilla.fennec_wlach killapp org.mozilla.fennec_wlach
    

    (yes, it’s a bit unfortunate that this bug occurred in the first place: devicemanagerADB really needs unit tests)

  2. Day-to-day menial tasks, like getting device info/status, capturing screenshots, etc. You can get a full list of what this utility is capable of by running --help. E.g.:

    (mozbase)wlach@eideticker:~/src/eideticker$ dm --help
    Usage: dm [options] <command> [<args>]

    device commands:
      info [os|id|uptime|systime|screen|memory|processes] - get
          information on a specified aspect of the device (if no argument
          given, print all available information)
      install <file> - push this package file to the device and install it
      killapp <process name> - kills any processes with a particular name
          on device
      launchapp <appname> - launches application on device
      ls <path> - list files on device
      ps - get information on running processes on device
      pull <local> [remote] - copy file/dir from device
      push <local> <remote> - copy file/dir to device
      rm <path> - remove file from device
      rmdir <path> - recursively remove directory from device
      screencap <png file> - capture screenshot of device in action
      shell <command> - run shell command on device

    Options:
      -h, --help            show this help message and exit
      -v, --verbose         Verbose output from DeviceManager
      --host=HOST           Device hostname (only if using TCP/IP)
      -p PORT, --port=PORT  Custom device port (if using SUTAgent or adb-over-tcp)
      -m DMTYPE, --dmtype=DMTYPE
                            DeviceManager type (adb or sut, defaults to adb)
      -d HWID, --hwid=HWID  HWID
      --package-name=PACKAGENAME
                            Packagename (if using DeviceManagerADB)


    Before you ask, yes, it’s technically possible to do much of this with the original adb utility. But (1) some of our internal stuff can’t use adb for a variety of reasons, and (2) some of the tasks above (e.g. launching an app, getting a screenshot) involve considerably more typing with adb than with dm. So it’s still a win.

Happy command-lining!

Syndicated 2012-10-18 19:30:21 from William Lachance's Log

Catching problems early with python

Just a few quick notes on how to avoid a class of errors I’ve been seeing in Mozilla’s automation over the last year. Since python interprets code dynamically, it’s pretty easy for problems like undefined variables to slip through, especially if they’re in a codepath that isn’t frequently tested. The most recent example I found was in some cleanup-after-error code for remote mochitest/reftest, which tried to call a “self.cleanup” from a standalone method.

def main():
    ...
    try:
        dm.recordLogcat()
        retVal = mochitest.runTests(options)
        logcat = dm.getLogcat()
    except:
        print "TEST-UNEXPECTED-FAIL | %s | Exception caught while running tests." % sys.exc_info()[1]
        mochitest.stopWebServer(options)
        mochitest.stopWebSocketServer(options)
        try:
            self.cleanup(None, options)
        except:
            pass

testing/mochitest/runtestsremote.py

The end result of this is that the cleanup code will *never* be called; it will just silently fail to run. Probably not the end of the world in this case (there are other parts of our mobile automation which will perform the same cleanup later), but it’s easy to imagine another case where this would be a more serious problem.

There are two very easy ways to help stop errors like this before they hit our code:

  1. Try to avoid using a blanket try…except. In addition to catching legitimate problems which we want to ignore (in the remote case for example, devicemanager exceptions), it also catches (and thus obscures) things like syntax, name, or type errors. Instead, try just catching the exception you’re looking for. For example, we might rewrite the case above as:

    try:
      mochitest.cleanup(None, options)
    except devicemanager.DMError:
      print "WARNING: Device error while cleaning up"
    
  2. pyflakes, pyflakes, pyflakes. Pyflakes is a fantastic tool for analyzing your python code for common problems. It’s kind of analogous to jslint, for those of you familiar with that. Here’s what happens when we run pyflakes against the offending code:

    wlach@eideticker:~/src/mozilla-central$ pyflakes testing/mochitest/runtestsremote.py
    testing/mochitest/runtestsremote.py:7: 'time' imported but unused
    testing/mochitest/runtestsremote.py:481: undefined name 'self'
    testing/mochitest/runtestsremote.py:500: undefined name 'self'
    

    I’ve found pyflakes to be an indispensable part of my workflow. I generally run it after making any substantial change to a python file, and certainly before pushing anything to be consumed by others.

Ultimately there’s no substitute for actually thoroughly testing your code, no matter what language you’re using. But using the right techniques and tools can certainly make your life easier. :)

[ for those wondering, a fix for the issue mentioned in this post is part of bug 801652 ]

Syndicated 2012-10-15 16:30:35 from William Lachance's Log
