W2E Day 2: Morning Presentations
I’ve resigned myself to being a day behind in blogging. I’m just too slow at organizing my thoughts to get them into an order worthy of publishing in the evenings. Better to come at it fresh in the morning.
The shift from Monday’s 3-hour workshops to yesterday’s shorter presentations was pretty jarring. I kept thinking the presenters were just finishing their introduction and finally about to go into detail – and the talk would be over. So my biggest complaint about a lot of these talks is that they gave such a broad overview that I didn’t really learn much.
How to Develop a Rich, Native-quality User Experience for Mobile Using Web Standards
by David Kaneda of Sencha, a company that was namedropped quite a few times by Jonathan Stark yesterday as the makers of a really high quality Javascript UI toolkit.
I expected this to overlap quite a bit with Jonathan’s workshop, but I wanted to see it anyway because I spoke with David briefly afterwards and I wanted to see how his opinions differed.
He offered quite a few nuggets of info like this interesting graph: in January 2010 the breakdown of US mobile web traffic was 47% iOS (iPhone/iPad), 39% Android, 7% RIM, 3% WebOS (Palm-now-HP Pre), 2% Windows Mobile, 2% Other – so 95% WebKit. (Actually David was wrong about that – if these numbers are from January, the RIM figure would have to be pre-Torch, so that’s only 86% WebKit. For now.) Even at the height of IE’s popularity that kind of homogeneity is new for the web. It means mobile developers, unlike desktop web developers, are free to target the WebKit-specific subset of HTML5 without worrying about, for a trivial example, including both -webkit- and -moz-prefixed CSS properties. That’s good because with limited bandwidth, avoiding redundancy is important. I suspect it’s temporary, though, as new features are added and phones diverge in which exact WebKit versions they support.
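To make that concrete, here’s what the duplication looks like on the desktop web versus a WebKit-only target (the selector and rule are hypothetical; the prefixes are the real 2010-era ones):

```css
/* Targeting only WebKit, one prefixed property is enough: */
.card {
  -webkit-border-radius: 8px;
}

/* On the desktop web, the same effect means carrying every
   vendor's prefix plus the unprefixed form: */
.card {
  -webkit-border-radius: 8px;
     -moz-border-radius: 8px;
          border-radius: 8px;
}
```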
(Of course, this list reflects web usage, not phone ownership: these are the phones whose users spend the most time online. Even still I was surprised to see the Pre passing Windows Mobile.)
Wow, there are a lot more Android phones than I realized. David believes a year from now the basic free phone you get with your mobile account will be a shovelware Android phone with a touch screen, GPU, etc, and a good browser, so really rich web apps will be available to everyone, not just the high end. (But, as I mentioned above, WebKit will continue to evolve, and these cheap phones are exactly the ones least likely to ever see browser upgrades.)
As I expected, David mentioned a lot of the same things yesterday’s workshops did: he glossed over web storage and form validation (as well as web workers, which everybody seems to mention and then say “but I don’t have time to discuss that”). He did cover, briefly, the new input types, video/audio (mentioning that video can be styled using the new CSS transforms, which stands to reason but hadn’t occurred to me to try; I assume it doesn’t distort the contents – that would be crazy!), meta viewport tags, CSS3 features, and the application cache manifest. That’s my biggest eye-opener so far – I helped to test the cache manifest in the Torch browser, although someone else did the implementation, and when I read the spec I thought it was ridiculous and would never catch on. But all the mobile app developers here seem really excited about it. It’s a huge pain in the ass to use, though, so clearly there’s some room for improvement (or at least tool support).
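For reference, a manifest is just a plain text file (file names below are placeholders), served with the text/cache-manifest MIME type and hooked up via a manifest attribute on the html element:

```
CACHE MANIFEST
# v1 2010-09-28 -- editing this comment is the standard trick
# to force browsers to refetch everything

CACHE:
index.html
app.js
style.css

NETWORK:
*
```

Much of the pain is in the update model: the browser always serves the cached copy first and only checks the manifest in the background, so a changed manifest doesn’t take effect until the next page load.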
David then went on to talk about some glitches in the mobile implementation, which is valuable info:
Touchscreen events: apparently there is a hardcoded 350ms delay between tapping an element and the click event firing. That’s crazy sluggish! (He speculated that the reason was an attempt to allow people to cancel quickly if they touch the wrong thing, since fingers are thick and clumsy. I’ve seen other posts online guessing that this is to allow the user to continue and turn the click into a gesture.) David recommends working around this by binding custom events to touch down and touch up, which generate a click event immediately if the user does touch down and then touch up within a small radius. I dunno – sounds hackish and fragile to me. (I looked this up to see if it was in a spec somewhere or just an implementation detail, and all I could find was posts complaining about it on the iPhone. I’ll have to check with our UI people and see if it’s iPhone-specific or built into WebKit.)
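A minimal sketch of that workaround might look like this (the 10px radius and all the wiring are my guesses at the approach, not David’s actual code):

```javascript
// "Fast click" workaround: synthesize our own click on touchend
// if the finger stayed within a small radius, instead of waiting
// ~350ms for the delayed native click.
var TAP_RADIUS = 10; // px -- assumed threshold, tune per device

// Pure helper: did this touch qualify as a tap?
function isTap(startX, startY, endX, endY, radius) {
  var dx = endX - startX, dy = endY - startY;
  return dx * dx + dy * dy <= radius * radius;
}

// DOM wiring (browser only; guarded so the sketch runs elsewhere too)
if (typeof document !== 'undefined') {
  var start = null;
  document.addEventListener('touchstart', function (e) {
    var t = e.touches[0];
    start = { x: t.clientX, y: t.clientY, target: e.target };
  }, false);
  document.addEventListener('touchend', function (e) {
    var t = e.changedTouches[0];
    if (start && isTap(start.x, start.y, t.clientX, t.clientY, TAP_RADIUS)) {
      var click = document.createEvent('MouseEvents');
      click.initMouseEvent('click', true, true, window, 1,
        t.screenX, t.screenY, t.clientX, t.clientY,
        false, false, false, false, 0, null);
      start.target.dispatchEvent(click);
      e.preventDefault(); // suppress the delayed native click
    }
    start = null;
  }, false);
}
```

The fragility David’s critics worry about is visible even here: you have to get preventDefault exactly right or the user sees double clicks.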
No fixed elements: position:fixed doesn’t work on mobile – he didn’t say why, so I had to look it up. Actually it works fine to fix an item to a specific position on the page, that’s just not very useful on mobile since most people want it fixed to the viewport. This makes building pages with independently scrolling areas difficult. To fix this, he again recommends writing custom touch handlers to track movement events and update content based on that – a simple implementation would just move linearly, but to match the platform native scrolling, you would need to track acceleration and add bounce at the end, which gets pretty complex.
Wow. That’s even worse! After all the talk of moving transitions and animations into CSS so they can be implemented in the browser and accelerated, it saddens me deeply to hear people talk about implementing kinetic scrolling of all things in Javascript.
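For what it’s worth, the core of the kinetic math such handlers have to reimplement is roughly this (the constants are made-up illustrative values, not from any particular library):

```javascript
// Momentum scrolling: after the finger lifts, the content keeps
// coasting, losing a fraction of its velocity each frame.
var FRICTION = 0.95;     // fraction of velocity retained per frame
var MIN_VELOCITY = 0.5;  // px/frame below which we stop coasting

// Given a starting offset and release velocity (px/frame), return
// the successive offsets the content should coast through.
function coast(startOffset, velocity) {
  var offsets = [];
  var offset = startOffset, v = velocity;
  while (Math.abs(v) >= MIN_VELOCITY) {
    offset += v;
    v *= FRICTION;
    offsets.push(offset);
  }
  return offsets;
}
```

A real handler would apply each offset with -webkit-transform on a timer, clamp at the content edges, and add the rubber-band bounce at the end – which is where the complexity really piles up.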
Finally he listed some Javascript frameworks to help with mobile development:
iScroll and TouchScroll both wrap touch scrolling as described above, and contain their own implementations of acceleration.
jQTouch he described as “limited” and “hypercard-like”; I guess he’s allowed to denigrate it since he wrote it. It fixes the 350 msec delay but not scrolling.
And Sencha Touch, his company’s product which is in beta, abstracts all touch events to generate artificial events like doubletap, swipe, and rotate, implements independently scrollable areas, and adds a ton of other features.
For deployment he mentioned OpenAppMkt, an app store for web apps, and PhoneGap again for wrapping web apps in a native shell.
David’s slides and additional links are online at his site, including a lot of links to further resources. (Including a bunch I took the time to Google for myself. Oops.)
And at this point my fingers started to cramp up and my notes became much sketchier, so the rest of these writeups will be much, much shorter.
The Browser Performance Toolkit
by Matt Sweeney of YUI/Yahoo!
I was expecting this to describe a specific product, but it turned out to be a metaphorical toolkit – a list of performance measurement tools.
I thought this talk could have benefited from some more demonstration: how to read a waterfall diagram, how to use a profiler. It came off as just a list of products to check out later. It also didn’t do much to distinguish between them: there was a lot of “and this one has a network waterfall display which is a lot like the last one.” So why should I choose this one over the last one? When somebody asked which products he personally used, Matt’s answer was “all of them,” which isn’t too helpful. I would have liked more depth here.
I won’t bother linking to these since there are so many and they’re easy to Google:
Firebug: Firefox’s built-in debugger, gives a waterfall display of asset loading, and can break out each load into DNS lookup time, connection time, data transfer time, etc; also has a Javascript profiler, which unfortunately is limited because it only gives function names and call counts, not full call stacks.
YSlow: an open source Firebug add-on from Yahoo, which gives a grade on various categories of network usage and detailed stats, good for getting a high-level view of network usage.
Page Speed: another open source Firebug add-on, from Google; similar to YSlow but also tracks some runtime performance. It can profile deferable Javascript by listing functions not yet called at onload time (candidates for deferral), and displays waterfall diagrams including separate JS parse and execute times.
Web Inspector: WebKit’s built-in debugger. Similar to Firebug, plus an “Audits” tab similar to YSlow/Page Speed. It includes the standard network loading waterfall, and a JS profiler which unlike Firebug’s includes system events like GC and DOM calls, and does include call stacks but not call counts, just run times.
IE8 “has a robust set of developer tools”, but he didn’t say any more about them. IE9 has a network usage tracker and profiler which seem on par with Firebug and Web Inspector, with one huge caveat: the profiler’s timer only ticks in 15 msec increments! So runtimes are nowhere near accurate due to rounding. At least call counts are still useful.
DynaTrace AJAX Edition is a free IE add-on supporting versions 6-8, with IE9 and Firefox support coming soon. It has the standard network waterfall, plus some nice high-level pie charts showing percentage of time spent loading various resources. Its profiler tracks builtins like Web Inspector’s, and can also capture function arguments, which sounds very useful, and has the nicest UI demonstrated, including pretty-printing the source code when looking at a function (especially useful when debugging obfuscated JS).
Matt also mentioned a few online tools: apparently showslow.com aggregates YSlow, Page Speed and DynaTrace scores, and publishes the results so it’s a good way to compare your page to others. But when I tried to go there I got what looked like a domain squatter, and I see no mentions of it in Google – did I copy the name down wrong? Webpagetest.org does network analysis for IE7.
Mentioned but not detailed were Fiddler and Charles (no relation), proxies which among other things can be used to see Flash resource requests over the wire.
His final point was that, since browsers vary, you can’t just use one tool – you need to profile in multiple browsers with multiple tools. Which makes sense, but it would have been nice to hear more about what YSlow or Page Speed give you over the browsers’ built-in debuggers.
NPR Everywhere: The Power of Flexible Content
by Zach Brand, head of NPR’s digital media group
I could have gone to the HTML5 vs Flash presentation in this time slot, but after seeing 3 variants on HTML5 in a row I figured I should see something a little different instead. Here, the head of NPR’s digital media group described the challenges in getting NPR’s original news reporting formatted in a way that could be used in many contexts: large screens, small mobile devices, summarized on affiliates’ web sites, serialized through RSS, embedded in blogs.
This was another presentation where I would have liked to see more technical detail and less overview, but I guess there wasn’t time. The talk was interesting, but didn’t contain anything directly applicable to my work so I won’t bother to summarize it.
I will list two important links, though:
An API for public access to NPR’s news stories is at npr.org/api.
Zach promised a followup on the blog at npr.org/blog/inside
(While I’m attending this conference on behalf of Research In Motion, this blog and its contents are my personal opinions and do not represent the views of my employers. That ghost is full of cake.)
Syndicated 2010-09-29 16:53:00 from I Am Not Charles