Older blog entries for danbri (starting at number 205)

Learning WebGL on your iPhone: Radial Blur in GLSL

A misleading title perhaps, since WebGL isn’t generally available to iOS platform developers. Hacks aside, if you’re learning WebGL and have an iPhone, the phone is still a very educational environment. WebGL essentially wraps OpenGL ES for the modern Web browser: you can feed data in and out as textures associated with browser canvas areas, manipulating it either per-vertex or per-pixel by writing ‘vertex’ and ‘fragment’ shaders in the GLSL language. Although there are fantastic tools out there like Three.js to hide some of these details, sooner or later you’ll encounter GLSL. The iPhone, thanks to apps like GLSL Studio and Paragraf, is a great place for playing with GLSL. And playing is a great way of learning.
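For concreteness, here is about the smallest useful shader pair, written as a sketch in WebGL-flavoured GLSL (the ‘position’ attribute name is illustrative; the host page wires it up):

// Vertex shader: runs once per vertex, passing positions straight through.
attribute vec3 position;
void main() {
  gl_Position = vec4(position, 1.0);
}

// Fragment shader: runs once per pixel the geometry covers.
precision mediump float;
void main() {
  gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); // paint every pixel orange
}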

GLSL fragment shaders are all about thinking about visuals “per-pixel”. You can get a quick feel for what’s possible by exploring the GLSL Sandbox site. The sandbox lets you live-edit GLSL shaders, which are then applied to a display area with trivial geometry – the viewing area is just two big triangles. See Iñigo Quilez’s livecoding videos or ‘rendering worlds with two triangles’ for more inspiration.
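A minimal sandbox-style shader gives the flavour of per-pixel thinking (a sketch, assuming the sandbox’s usual ‘time’ and ‘resolution’ uniforms):

precision mediump float;
uniform float time;       // seconds since load, supplied by the sandbox
uniform vec2 resolution;  // viewport size in pixels

void main() {
  // Normalise this pixel’s coordinate to 0..1, then animate a colour wave.
  vec2 uv = gl_FragCoord.xy / resolution;
  float wave = 0.5 + 0.5 * sin(10.0 * uv.x + time);
  gl_FragColor = vec4(uv.x * wave, uv.y, wave, 1.0);
}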

All of which is still rocket science to me, but I was surprised at how accessible some of these ideas and effects can be. Back to the iPhone: using Paragraf, you can write GLSL fragment shaders, whose inputs include multi-touch events and textures from device cameras and photo galleries. This is more than enough to learn the basics of GLSL, even with realtime streaming video. Meanwhile, back in your Web browser, the new WebRTC video standards work is making such streams accessible to WebGL.
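Before the main example, a feel for Paragraf’s conventions: you write a draw() function that returns a colour for the current pixel. A trivial sketch (judging from the radial blur example below, ‘p’ is the current pixel position, ‘t1’ the last touch point, and cam() samples the camera):

vec3 draw() {
  // Show the camera image, fading to black with distance from the last touch.
  float d = distance(p, t1);
  return cam(p) * clamp(1.5 - 2.0 * d, 0.0, 1.0);
}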

Here is a quick example based on Thibaut Despoulain’s recent three.js-based tutorials showing techniques for compositing, animation and glow effects in WebGL. His Volumetric Light Approximation post provides a fragment shader for computing radial blur; see his live demo for a control panel showing all the parameters that can be tweaked. Thanks to Paragraf, we can also adapt that shader to run on a phone, blurring the camera input around the location of the last on-screen touch (‘t1’). Here is the original, embedded within a .js library. And here is a cut-down version adapted to use the pre-declared structures from Paragraf (or see the gist for a cleaner copy):

vec3 draw() {
  vec2 vUv = p;                 // current pixel position
  float fX = t1.x, fY = t1.y;   // blur centre: the last on-screen touch
  float fExposure = 0.2,        // overall brightness of the result
        fDecay = 0.93,          // per-sample falloff
        fDensity = 0.3,         // how far back towards the centre we reach
        fWeight = 0.4,          // contribution of each sample
        fClamp = 1.0;           // upper bound on the final colour
  float illuminationDecay = 1.0;
  const int iSamples = 8;       // samples taken along the ray
  // Step vector from this pixel back towards the blur centre.
  vec2 delta = (vUv - vec2(fX, fY)) / float(iSamples) * fDensity;
  vec2 coord = vUv;
  vec4 FragColor = vec4(0.0);
  // Accumulate camera samples along the ray, each dimmer than the last.
  for (int i = 0; i < iSamples; i++) {
    coord -= delta;
    vec4 texel = vec4(cam(coord), 0.0);
    texel *= illuminationDecay * fWeight;
    FragColor += texel;
    illuminationDecay *= fDecay;
  }
  FragColor *= fExposure;
  FragColor = clamp(FragColor, 0.0, fClamp);
  return vec3(FragColor);
}

[Screenshot: cat photo – the camera input]

[Screenshot: the same scene with radial blur applied]
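For comparison, here is roughly how the same effect reads as a standalone WebGL fragment shader, back in the browser (a sketch; the uniform names are my own, with the camera or video frame bound as an ordinary texture):

precision mediump float;
uniform sampler2D uVideo;  // camera/video frame uploaded as a texture (name assumed)
uniform vec2 uCentre;      // blur centre in 0..1 texture coordinates (name assumed)
varying vec2 vUv;          // texture coordinate interpolated from the vertex shader

void main() {
  const int SAMPLES = 8;
  float decay = 1.0;
  vec2 delta = (vUv - uCentre) / float(SAMPLES) * 0.3;
  vec2 coord = vUv;
  vec4 colour = vec4(0.0);
  for (int i = 0; i < SAMPLES; i++) {
    coord -= delta;
    colour += texture2D(uVideo, coord) * decay * 0.4; // weight each sample
    decay *= 0.93;                                    // then dim the next one
  }
  gl_FragColor = clamp(colour * 0.2, 0.0, 1.0);       // exposure, then clamp
}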

As I write this, I realise I’m blurring the lines between ‘radial blur’ and its application to create ‘god-rays’ in a richer setting. As I say, I’m not an expert here (and I’ve only posted a quick example and two hasty screenshots). My main purpose was rather to communicate that tools for learning more about such things are now quite literally in many people’s hands. And also that using GLSL for real-time per-pixel processing of smartphone camera input is a really fun way to dig deeper.

Syndicated 2012-07-23 11:43:59 from danbri's foaf stories

Schema.org and One Hundred Years of Search

A talk from the London SemWeb meetup, hosted by the BBC Academy in London, Mar 30 2012.

Slides and video are already on the Web, but I wanted to post this as an excuse to plug the new Web History Community Group that Max and I have just started at W3C. The talk was part of the Libraries, Media and the Semantic Web meetup hosted by the BBC in March. It gave me an opportunity to run through some forgotten history, linking Paul Otlet, the Universal Decimal Classification, schema.org and some 100-year-old search logs from Otlet’s Mundaneum. Having worked with the BBC Lonclass system (a descendant of Otlet’s UDC), and collaborated with Aida Slavic of the UDC on their publication of Linked Data, I was happy to be given the chance to try to spell out these hidden connections. It also turned out that Google colleagues have been working to support the Mundaneum and the memory of this early work, and I’m glad that the talk led to discussions with both the Mundaneum and the Computer History Museum about the new Web History group at W3C.

So, everything’s connected. Many thanks to W. Boyd Rayward (Otlet’s biographer) for sharing the ancient logs that inspired the talk (see slides/video for a few more details). I hope we can find more such things to share in the Web History group, because the history of the Web didn’t begin with the Web…