Older blog entries for conrad (starting at number 44)

CFFPP: linux.conf.au 2010

The call for papers for linux.conf.au 2010 has been open for a few weeks, and closes soon (July 24).

I really want to encourage some talks about functional programming! The conference has a pretty strong developer focus, and most talks are about a practical topic. More importantly, we're looking for talks that inspire people to try new techniques, to approach design and troubleshooting with clarity and vigor (yarr!), to boldly consider that they should perhaps spend some time honing their craft before writing yet another application that inexplicably fails at runtime -- all in a friendly and entirely non-condescending environment of hackers having fun hacking.

Here are some suggestions for the kinds of talks that I think could be interesting:

  • systems programming in Haskell/OCaml/whatever: how you wrote an interface to some hardware, handled lots of IO, controlled a robot, whatever
  • functional programming for kernel development: verification, security etc.
  • game programming: higher order design for 3D, AI etc.
  • proof vs. testing: (can anyone do a tutorial on proof without Greek letters? Not that Patryk Zadarnowski's talk about the Curry-Howard Isomorphism a few years ago wasn't *awesome*, but as a result of that people are clamoring (clamoring!) for some advice about how to prove their programs have no bugs).
  • some ... other ... practical benefit of functional programming!

The conference is in Wellington in January. January! It'll be windy, and it's in New Zealand!

Syndicated 2009-07-15 03:53:00 (Updated 2009-07-15 03:58:02) from Conrad Parker

Release: libfishsound 0.9.2

Fishsound has moved to Xiph.org! The new home page is at http://www.xiph.org/fishsound/.

New in this release

This release contains security and other bugfixes:

  • Security fixes related to Mozilla bugs 468293, 480014, 480521, 481601.
  • Fix bounds checking of mode in Speex header
  • Handle allocation failures throughout, due to out-of-memory conditions
  • Added support for libFLAC 1.1.3
  • Add conditional support for speex_lib_get_mode() from libspeex 1.1.7. If available, this function is used in place of static mode definitions. For ticket:419
  • Check for Vorbis libs via pkgconfig, required for MacPorts etc.

Syndicated 2009-04-07 22:48:00 (Updated 2009-04-07 22:56:53) from Conrad Parker

A proposal for generalizing the byte-range referral HTTP Response header

Re: the Media Fragments WD. Here I am using the term "byte-range referral" for multiple concatenated HTTP requests, for the purpose of improving cacheability; this is called a "4-way handshake" in the current working draft.

Shortcomings of the existing byte-range referral scheme

The above WD, and the current Annodex scheme, are specified to allow sharing of non-header data between different temporal views of media resources. They limit the positioning of custom data to the media headers, allowing different segments to have different headers, which is useful for Ogg but not necessarily so for other formats.

Even for Ogg, it could be useful to refer to the codebooks separately from the Skeleton for more finely-grained data re-use. A client could then locally cache the codebooks and know not to bother retrieving them over and over, while still retrieving the updated Skeleton and keyframe data for temporal segment requests.

Hence, I am proposing that we specify an ordered list of (URI, byte range) tuples whose concatenation is byte-wise identical to the byte contents of the requested URI.

This response can also contain data, so if you want to refer to this response you can include a tuple of (this, range), where this is the literal string "this" and refers to the body of the current response.

This syntax then allows the server to include parts from many different URLs. The custom data is centralized in this response and can be used in any part of the construction, including tail data (such as ID3 tags, DivX seek tables, etc.).

List and tuple separator characters

The list separator should be a comma, as this allows the list to be split across multiple HTTP response header lines (without re-ordering).

Hence the tuple separator should not be a comma; it can simply be whitespace:

Range-Referral: http://www.example.com/video.ogv?headers 0-1280
Range-Referral: http://content1.example.com/video.ogv 5380-48204
Range-Referral: this 0-950
Range-Referral: http://content1.example.com/video.ogv 60880-238382

By comma replacement, this set of headers is equivalent to the single header:

Range-Referral: http://www.example.com/video.ogv?headers 0-1280, http://content1.example.com/video.ogv 5380-48204, this 0-950, http://content1.example.com/video.ogv 60880-238382
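
As an illustration of the intended parsing, here is a minimal Haskell sketch (the helper names are hypothetical, not part of the proposal) that splits a header value into (URI, byte range) tuples:

-- Split the header value at commas (the list separator).
splitCommas :: String -> [String]
splitCommas s = case break (== ',') s of
    (field, ',' : rest) -> field : splitCommas rest
    (field, _)          -> [field]

-- Each field is a whitespace-separated (URI, byte range) pair.
parseReferral :: String -> [(String, String)]
parseReferral = map pair . splitCommas
  where pair field = case words field of
            [uri, range] -> (uri, range)
            _            -> error "malformed Range-Referral tuple"

eg. parseReferral "this 0-950, http://content1.example.com/video.ogv 60880-238382" yields [("this","0-950"), ("http://content1.example.com/video.ogv","60880-238382")].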

Interpretation of other response headers

The body of this response is simply all the custom parts for this view, concatenated bytewise. The Range-Referral header explains how to use this data.

Content-Length: is the length of the body.

A Range request is made relative to the body. So for example a client could just do a HEAD request to get the Range-Referral headers, and then do multiple Range requests to retrieve the required parts in sequence (rather than locally caching all the data for tail data etc.). Coherence of the concatenated responses can be assured by the use of existing HTTP/1.1 caching identifiers.
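
For example (a hypothetical exchange; the URL and time offset are invented for illustration, following the headers above):

HEAD /video.ogv?t=60 HTTP/1.1
Host: www.example.com

followed by a Range request relative to the body of that response, retrieving the custom part referred to as this 0-950:

GET /video.ogv?t=60 HTTP/1.1
Host: www.example.com
Range: bytes=0-950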

So, this constructed response is only special in that a user agent knows how to use it in conjunction with other URI response data to display a media segment. Otherwise it is standard HTTP, and can have caching headers/tags attached, be cached by intermediate proxies, and itself be the subject of range requests.

Generalization to other segment types

This mechanism allows a complex sequence of byte-ranges to be specified. It explicitly marks data ranges which are re-usable, allowing them to be cached. It generalizes so that any complex data subview can be served, where re-usable data is keyed canonically and can be cached on the network.

For example, it may be useful for specifying the data for a spatial subrange of video.

Syndicated 2009-04-07 16:32:00 (Updated 2009-04-07 16:44:06) from Conrad Parker

liboggplay, liboggz, libfishsound migrated to git.xiph.org

The source repositories for some Ogg libraries developed as part of the Annodex project have moved from svn.annodex.net to git.xiph.org. These libraries are:

  • liboggplay, an Ogg Theora playback library used by Mozilla Firefox;
  • libfishsound, a simplified API for using audio codecs, used by liboggplay and by the DirectShow Oggcodecs; and
  • liboggz, a library for seeking, reading and writing Ogg (used by liboggplay), and tools for managing Ogg streams. This includes oggz-chop, which is used by various sites including the Internet Archive to serve Ogg files.

Reasons for the migration

Xiph.org, which develops free codecs (Ogg Vorbis, Theora, Dirac, Speex, CELT, FLAC), already provided the hosting for Annodex.net projects. The move to the xiph.org domain reflects that these libraries are recommended for general use by projects requiring Ogg support.

The move from Subversion to Git allows for distributed development, letting developers without write access to the central Subversion repository develop code using a version control system, and making it easier for developers and packagers to track multiple independent changes. Among distributed version control systems, Git was chosen for its flexibility and popularity. It is already used within Xiph.org for Speex, the ultra-low latency, high quality audio codec CELT, and the experimental text overlay codec Kate.

Checking out the sources

To do a fresh checkout of the code, clone a new git repository. This assumes that you begin with an empty working directory:

$ git clone git://git.xiph.org/liboggz.git

Adding a remote to an existing git-svn checkout

Many developers already used git-svn to access the previous svn repositories. If so, you will already have a local git clone of the sources, perhaps with your own local changes. In that case, simply add a new remote to your existing repository, eg.:
$ git remote add xiph git://git.xiph.org/liboggz.git
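
You can then fetch the new upstream and rebase any local work onto it, eg. (assuming your local branch is based on master):
$ git fetch xiph
$ git rebase xiph/master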

Syndicated 2009-04-03 06:13:00 (Updated 2009-04-03 06:17:52) from Conrad Parker

Discovery and fallback for media segment addressing over HTTP

This post concerns the use of queries or fragments in the URI specification for accessing segments of media over HTTP. We outline the user-visible differences between the two approaches, including the form of the URIs seen by users in each scenario and the consequent user interface activity, and then explain the HTTP request and response mechanisms that result. The purpose of this analysis is to better understand the trade-offs in usability and the impact on network performance, with reference to existing implementations rather than hypothetical scenarios.

I will make the case that the user-visible differences between the two syntaxes are immaterial, and that a more important distinction is that they induce different protocols. I will also claim that the use of the fragment syntax introduces unnecessary complexity in that it lacks a discovery mechanism and has no useful fallback to existing HTTP.

User-visible differences

We are constructing a URI syntax for addressing segments of media data. Taking the simple case of addressing some video content beginning at an offset of 10 seconds, we consider the two forms:

  • Query syntax: http://www.example.com/media.ogv?t=10
  • Fragment syntax: http://www.example.com/media.ogv#t=10

For simplicity here we are using a shortened segment identifier t=10; I touched on the topic of segment identifiers in a recent article about pretty printing durations.

Regarding the direct HTTP semantics of these two forms, if the user is already viewing the specified media.ogv, the query syntax reloads the portion from 10 seconds as a new resource, whereas the fragment syntax modifies the view of the current resource.

Although developers are rightly wary of a page refresh due to the time required to render complex HTML, in practice no visible change occurs when reloading a video. The query syntax has been used to control video seeking in JavaScript (using the Java cortado video player plugin, or an earlier Oggplay plugin), and also natively in the current Firefox 3.5 implementation.

In any case, this distinction is only user-visible if the video is the top-level resource. In the common case of a web page that embeds a video, the user-visible resource is the HTML page. In this case, the mechanism for controlling video is under the control of the embedding web page via JavaScript.

For example, URIs to YouTube pages allow a time segment to be appended using a fragment syntax. However, this fragment is used by JavaScript to control the embedded Flash video player; the mechanism for retrieving video data is then managed by the Flash player. Similarly, in HTML5 Ogg <video> implementations, a fragment identifier appended to the HTML page may be interpreted by JavaScript to control seeking in the <video> source using a non-fragment mechanism, like query syntax.

Differences in request mechanisms

Either way we introduce a new behaviour that user agents can use to retrieve media segments over HTTP.

When handling a media segment which is specified by a query, the user agent initiates a standard HTTP request. It connects to port 80 on the specified host, and uses the entire path, including the query specifier, in the GET request. The server then begins transferring the required data representing that segment of the media.

To retrieve the URI http://www.example.com/media.ogv?t=10:

GET /media.ogv?t=10 HTTP/1.1
Host: www.example.com

However the proposed request mechanism for handling a segment specified by a fragment is not standard HTTP. In conventional HTTP, a fragment specifier is stripped by the user agent and not sent to the server at all; rather, the server sends the requested response (representing the entire resource), and after retrieval, the user-agent uses the fragment specifier to select the view shown to the user.

A recently proposed behaviour for handling media segments involves placing the segment specifier into the Range HTTP Request header, with a new unit of seconds.

To retrieve the URI http://www.example.com/media.ogv#t=10:

GET /media.ogv HTTP/1.1
Host: www.example.com
Range: seconds=10-

Response mechanism: byte-range redirection

The byte-range redirection response mechanism involves identifying parts of the segment view which are byte-wise identical to the original resource, and specifying redirections to those.

How discovery works

A user-agent will only receive a byte-range redirection response if it has indicated that it is capable of interpreting that, by including an extra HTTP request header. For example, here using a media segment URL specified with a query parameter:

GET /media.ogv?t=10 HTTP/1.1
Host: www.example.com
X-Accept-Range-Redirect: bytes

If the server is capable of handling the byte-range redirection mechanism, it will do so and indicate that it has done so explicitly in its response headers.
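
For example, the acknowledgement might look like the following (hypothetical: this post does not name the acknowledging header, so the Range-Referral header from the proposal above is reused, with invented byte offsets):

HTTP/1.1 200 OK
Range-Referral: http://www.example.com/media.ogv?headers 0-1280
Range-Referral: http://www.example.com/media.ogv 1048576-2097151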

Query syntax has a sensible fallback to standard HTTP

However, if the extra request header is not present, the server will simply send an entire response corresponding to the requested segment. Similarly, if the header is present but the server is not capable of this new mechanism, it will simply continue with a standard HTTP response. The client can tell whether or not the response is a segment response by the presence of an acknowledging response header.

If either client or server does not understand the byte-range redirection protocol, the request falls back to standard HTTP and the required segment is correctly returned. The cost of this fallback, compared to the case where both client and server understand the new request/response headers, is a loss of cacheability for subsequent overlapping segment requests.

Fragment syntax has a high cost of failure

The mechanism involving the fragment specifier does not have a fallback to standard HTTP: if the client does not understand that it should add the Range header with newly defined units, then it will end up simply requesting the entire resource. Similarly, if the server does not understand the new header then it will simply respond with the entire resource. If the cost of failure is to download some number of hours of extra video, as it would be in the case of MetaVid's congress proceedings, that is a prohibitive cost.

Summary

  • The distinction is one of protocol mechanism
  • For the common case of video displayed in HTML, the distinction is not user-visible
  • The use of fragment specifiers does not have a fallback to standard HTTP
  • The cost of discovery failure for fragments is high (retrieval of entire resource)

Actions

  • To clarify within the Media Fragments WG how queries can be used effectively, for both of the user scenarios considered.
  • To consider how the byte-range redirection mechanism can be generalized for other segment specifiers, such as spatial regions.

Syndicated 2009-04-01 06:56:00 (Updated 2009-04-01 07:04:27) from Conrad Parker

The economics of Twitter spam

Recently more and more people have reported that they are being followed by spammers on Twitter. It's easy to track this problem: just search for #spam. Being followed by a Twitter spammer isn't like being stalked by a murderer; actually in the current environment, these guys are a fairly benign parasite that can work in your favor. So let's look at the economics of Twitter spam.

The upside for spammers is the usual obvious SEO shite: you've got something useless to peddle (yourself, your scam, your illegitimate business selling poor copies of pretentious luxury goods, your legitimate business selling enhancement placebos to suckers); you spend your time trying to defile fine and upstanding web pages with links to your pathetic piece of virtual real estate; Twitter comes along and your primitive brain realizes it can post its links there. You follow people so that they get a notification in their email pointing to your Twitter feed. Maybe they read it, maybe they click the tinyurl-obscured link. You cream yourself if they choose to follow you, because then they'll get all your spam, and you'll look more legit by having actual followers (like, real people from outside your cluster of bots and morons).

Now, what's the upside for normal humans in being followed by these scum?

Knowledge is work, a means for putting food on the table; information is power, a means for taking food from others.

Following as many people as you can on Twitter is a useful way to stay in front of your game: you know what people are up to, you see trends evolve, you get notice of articles before they're syndicated, you watch news unfold in your little niche of the world. And of course, the more people that follow you, the further your own message spreads: how great you are, how you're beating the system, how your pretentious beautiful designs and products can uplift and empower.

So there's an incentive to increase both the number of people you follow and the number of people who follow you. The first is easy; you just find people and press their button. The second is more difficult: you need to say something worthwhile in your tweets. Sometimes, not always, people will reciprocate when you follow them -- (SEO tip here!:) it helps if your own tweets are interesting.

However, there is a 2000 following limit: you can't follow more than 2000 people until you have 2000 followers. So, if you want to expand your reach into the info-verse, every follower counts -- even those spambots. So, now, these guys have evolved a little symbiotic, parasitic relationship with their hosts (you). You feel the first bite when they follow, but it feeds your ego. All you need is followers! no-one's going to do background checks on your popularity!

Relevance ranking anyone?

There's more to it though: Twitter search is currently being rolled out across the default user interface, and various bloggers are describing Twitter as a "search engine" (apparently that's the appropriate noun to describe someone that collects ideas). Twitter search is currently a realtime feed of query matches (the zeitgeist! *fap* *fap* *fap*) with no relevance ranking. As the search feature gains usage, people will want relevant results to more complex queries. An obviously useful ranking input is the number of followers that a Twit has. These spambots will make you appear relevant!

We can follow this down silly paths -- eg. the more you tweet, the more spambot-followers you get, the more ranking relevance you have. The spammers introduce an incentive to post often, and that mechanism has positive feedback.

More useful ranking mechanisms are things like reply frequency and analysis of re-tweets. Re-tweets are interesting to track because you can find the users who originate popular ideas: give them the microphone, dammit.

Action items

So there's an imbalance in the Twitter economy. Spammers are using Twitter and the environment encourages it.

Wishlist for Twitter:

  • Track how often users are blocked, warn against them, and auto-ban them.
  • Add user-initiated "Report spammer" buttons.
  • Implement detection of spammer clusters and auto-ban them.

Action items for Twitter users:

  • Block spammers on Twitter.
  • Block spammers on Twitter.
  • Block spammers on Twitter.

Please rant about how much you love the symbiotic parasitic relationship with your spambot-followers!

Syndicated 2009-03-08 23:59:00 (Updated 2009-03-09 01:40:27) from Conrad Parker

Random code: Pretty printing durations in Haskell

Recently I've really enjoyed reading blog posts which just explain a little bit of code, so that's what this is. I had this code lying around from a few months ago so I added some context and links. It combines two of my favourite things: Annodex and Haskell!

YouTube's video offset syntax

Some time last year, YouTube introduced a feature which allows you to specify a hyperlink that plays a video from a given time offset. If you used the syntax on a random video site, it would look like this:

http://www.example.com/player.html#t=3m54s

The syntax for this is very close to the one we use in Annodex for Temporal URIs, now running on Archive.org (and soon on Wikipedia):

http://www.example.com/video.ogv?t=3:54

Two differences:

1. YouTube uses a fragment instead of a query parameter.

A fragment is something starting with '#' that tells the client to jump to a particular offset in the document -- in general the fragment text is never seen by the server. In the case of YouTube the HTML page contains JavaScript that tells the embedded Flash video player to seek to the offset in the video.

Fragments are useful in this use case, where you are instructing the embedding web page to play the video from a given time offset. How it actually retrieves the video from the network is not specified, but importantly there is no requirement for the embedding web page to be reloaded.

(This distinction between fragments and queries is part of the W3 Media Fragments WG discussion on syntax).

2. YouTube's syntax uses unit markers h, m, s to separate the parts of the time, whereas our specification uses the kind of specifiers common in industrial equipment (and clock radios).

Perhaps one advantage of the format YouTube have chosen is readability: sometimes it is difficult to read times such as 03:36:14.

http://www.example.com/video.ogv?t=3:54
http://www.example.com/video.ogv?t=00:03:54.000
http://www.example.com/video.ogv?t=npt:00:03:54.000
http://www.example.com/video.ogv?t=smpte-25:00:03:54::0

We had a recent discussion about these issues in the Media Fragments WG: Action-28: updated syntax document with time formats. I'm pretty happy with the syntax we have settled on, allowing for both readable short timestamps and more accurate long ones.

Pretty printing of durations

Anyway, I was bored so I hacked up a sweet fold to display the format used by YouTube.

Haskell hackers use folds like C programmers use for loops; the Haskell wiki page Fold is a beautiful introduction to the topic. My favourite Web 1.0 interactive visualization of a left fold is at foldl.com (and also be sure to check out its companion site for right folds, foldr.com).
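
For example, a left fold threads an accumulator through the list from the left:

> total = foldl (+) 0 [1, 2, 3]   -- evaluates as ((0 + 1) + 2) + 3 == 6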

Here's a concise fold that gets us most of the way to the right syntax:

> ts = [("ms", 1000), ("s", 60), ("m", 60), ("d", 24), ("y", 365)]
>
> duz ms = ss
>   where (ss, _) = foldl (\(ss, x) (s, y) -> (show (rem x y) ++ s ++ ss, quot x y)) ("", ms) ts

Yeah, concise. Read it slow! If it were in C or Python, that one-liner would be a 10 or 5 line loop.
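
eg. in ghci:

*Main> duz 234000
"0y0d0h3m54s0ms"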

You might say that you use the fold function to iterate through a list of time units, and at each step of the iteration you do an integer division by the unit, label the remainder, and pass the quotient on to the next step of the iteration. A real Haskell programmer, however, might say something like "you fold the duration quotiently through the units, labelling into the syntax!", with much wringing of hands and wishful glances for abstract ponies. Fold is a verb, because functions are alive! Quotiently is not a word.

A problem with duz (apart from the crappy name) is that it shows times like 0y0d0h3m54s0ms. The next implementation of duration strips the leading and trailing zeroes:

> dur ms = years:rest
>   where (rest, years) = foldl (\(ds, x) y -> ((rem x y):ds, quot x y)) ([], ms) [1000, 60, 60, 24, 365]
>
> duration ms = concat $ map (\(n, s) -> show n ++ s) (takeWhile (not . zero) $ dropWhile zero labelled)
>   where labelled = zip (dur ms) ["y", "d", "h", "m", "s", "ms"]
>         zero (n, _) = (n==0)

eg. to display the duration of 2^32 milliseconds:

*Main> duration (2^32)
"49d17h2m47s296ms"

*Main> duration 3600000
"1h"

Fold is a generic list processing device; if you want to limit the amount of the list that is processed, you can use functions like takeWhile and dropWhile. These will take, or drop, elements from the list as long as some criterion is satisfied; you can use them both together to trim both the start and end of the list. Of course you can use these on the input list to limit what data is processed; but because Haskell evaluates lazily, you can also use these on the output list to limit how much of the processing is actually done (like in duration above). The bits of the evaluation that don't really need to get done, aren't: the idea of doing them is written down (on a "thunk") and thrown away. Burn your todo lists! Be lazy lazy lazy! Haskell rules. Do you like verbs?
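
For instance, the trimming idiom used in duration, in isolation:

> trim = takeWhile (/= 0) . dropWhile (== 0)

*Main> trim [0, 0, 3, 54, 0]
[3,54]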

Syndicated 2009-03-01 03:13:00 (Updated 2009-03-03 14:03:18) from Conrad Parker

Is OpenMAX important for Free Software?

Much as OpenGL gives you access to 3D hardware, OpenMAX allows you to take advantage of hardware codecs. This is a brief overview introducing what OpenMAX is, explaining why it is useful for the open source community, and outlining steps for integration with free codecs, and open source multimedia frameworks and applications.

What is OpenMAX?

OpenMAX is a set of C APIs specified by the Khronos Group (who also co-ordinate standards like OpenGL and OpenAL). Whereas media frameworks like GStreamer and DirectShow are quite generic, providing all capabilities from codec integration through to synchronization of playback and recording and network access, OpenMAX more strictly defines three layers of operation:
  • OpenMAX IL (Integration Layer) is an interface to multimedia codecs implemented in hardware or software. It does not provide any interfaces for synchronized capture or playback of video and audio.
  • OpenMAX DL (Development Layer) APIs "specify audio, video and imaging functions that can be implemented and optimized on new CPUs, hardware engines, and DSPs and then used for a wide range of accelerated codec functionality such as MPEG-4, H.264, MP3, AAC and JPEG."
  • OpenMAX AL (Application Layer) provides acceleration of capture and presentation of audio, video, and images.
The significance of this layering is that it allows hardware and software developers to implement conformance to a particular layer, so that device manufacturers can more reliably integrate components from each. This creates a free market for media components as commodities; and of course open source businesses are well suited to operating in such an environment.

OpenMAX is already available in generally open source platforms like Maemo and Android. As part of my work with Renesas I've been developing OpenMAX IL components for the video encoding and decoding hardware on the SH-Mobile processor series. (However, this post does not necessarily reflect the views of my employer).

Open Source implementations

OpenMAX components implement a specific C API. All components need to manage their ports and synchronize access to their input and output data buffers, so implementations generally include a shared library for the IL core, as well as some OpenMAX components required to pass Khronos conformance tests. There are (at least) three open source implementations of OpenMAX IL:
  • Bellagio, developed mainly by STMicroelectronics and Nokia.
  • TI have an implementation of OpenMAX for OMAP.
  • OpenCore, the multimedia framework used by the Android platform, includes an open source implementation of OpenMAX IL. [gitweb]
So far I've been working with Bellagio, which has an active open source community. It has a good balance between commercial concerns like manufacturer deadlines and conformance testing, and openness to the community by encouraging and integrating development forks, and having a responsive mailing list and bug tracker.

Xiph OpenMAX

I haven't mentioned specific codecs yet; OpenMAX currently encourages use of non-free codecs like MP3, MPEG-4 and H.264. This in itself is not good for the aims of Free Software, but I think that the API standardization that OpenMAX offers can simplify the productization of hardware implementations of free codecs.

Xiph.org develops free codecs (Ogg Vorbis, Theora, Dirac, Speex, CELT, FLAC). Ogg Vorbis is required by the OpenMAX IL specification, but there are not yet any OpenMAX IL implementations of the other codecs. Developing software OpenMAX IL components will allow application developers to implement Ogg support ahead of hardware support. It would also give hardware manufacturers a set of specific, well-defined goals for implementing Ogg support, with the understanding that the hardware components, when shipped with these software control APIs, will work in a variety of open source applications with minimal modifications.

There were a few Xiph.org people at FOMS 2009, so I introduced what we'd need to do to implement OpenMAX IL components for Xiph.org codecs:

  • Choose an OpenMAX IL framework
  • Implement generic Ogg mux/demux components (instead of single Ogg Vorbis component)
  • Implement IL components for each codec (Theora, Dirac, Speex, CELT, FLAC)
  • Implement GStreamer OpenMAX plugins for each codec

A recent thread, [Flac-dev] FLAC support for Android?, discusses requirements for implementing an OpenMAX IL component for the lossless audio codec FLAC.

Free Software application support

In order to make use of OpenMAX components, applications need to either use the OpenMAX APIs directly or use a framework which does. For example, there is already an OpenMAX-GStreamer project which implements GStreamer plugin wrappers for Bellagio OpenMAX IL components. This allows any GStreamer application to take advantage of hardware codecs when they are available, or fall back to software implementations otherwise. This fits well with the GStreamer project's stated aim of not implementing codecs, but providing routing, discovery and synchronization.

Other applications will need to use OpenMAX directly; good candidates would be applications that target mobile/embedded systems like Gnash, Fennec, WebKit and VoIP clients, as well as server-side transcoding or rendering software that needs high throughput.

Remember this:

  • Mobile processors increasingly have hardware units for video encoding and decoding, as well as audio and image processing
  • OpenMAX gives you access to hardware codecs (audio/video, image processing etc.)
  • Implementing OpenMAX components for free codecs will give manufacturers a clear path to hardware implementation

At some point in the near future it'd be great to get a few open source OpenMAX implementers together at a conference, ideally at a more general multimedia workshop like FOMS to discuss application integration. Perhaps at FOMS 2010, or FOMS Europe? In any case it'd be good to get some more discussion going: do you think OpenMAX is important for Open Source, and for Free Software? What other barriers do you think there are to hardware support for free codecs? And would you be interested in helping out with developing and testing OpenMAX support for your favourite codecs, and in your favourite applications?

Syndicated 2009-02-24 10:37:00 (Updated 2009-02-25 06:47:17) from Conrad Parker

A month of Mondays

The last month or so has been fairly busy. I'll write more about each of these activities, but here's a quick summary of what I was up to (from about mid-January to mid-February):

Somewhere in there I also managed to fit in a few days skiing in Hokkaido, and some time in the office in Tokyo. I've spent the last week relaxing back home in Kyoto and taking stock before getting back into things.

Break's over! Onwards and upwards.

I love Mondays! Over the last two years I've learned a lot about how to get multiple tasks done in parallel -- a mix of GTD and some other techniques I've been developing. Unfortunately it sometimes means making sacrifices, like getting up in the wee hours of the morning to get work done on a ski trip; but every few months I also need a reset; I tend to go a bit crazy whenever I visit friends back in Sydney ;-)

If you've got any advice or words of encouragement please share them in the comments!

Syndicated 2009-02-21 02:24:00 (Updated 2009-02-21 04:09:31) from Conrad Parker

Tractorgen on github

Tractorgen is now on github:

REPOSITORIAL

The contents of this revision controlled document repository are a computer
source code implementation of TRACTORGEN, being a model of ASCII tractor
mechanics.

It is recommended that one study these documents closely in order to better
understand the finer details of the subject at hand. The authors firmly
believe that only through such preparation, preferably during the course of
one's daily study regimen, can a deeper appreciation of the theory be
attained.

As a side note, it has been noted by correspondents that it is possible to
derive a computer readable binary executable from these documents through
the use of sophisticated compiler technology. On the off chance that any
readers would wish to pursue this path, we include the apparent preparation
for doing so herein, as quoted:

$ automake -a
$ autoreconf

Upon completion of this procedure, which we expect should take on the
order of one to two weeks (of course the actual time depends on the
staffing resources of your local computer centre), a new document shall
be generated _as though from nought!_ [emphasis added]. The name of
this document is expected to be "configure", and it may itself be
executed thus:

$ ./configure

We recommend scheduling a vacation!

Upon your return, type "make", then "make install", and prepare your
experimental apparati forthwith:

$ tractorgen

Generates ASCII tractors.

Commit messages

One must eschew the typically terse and perfunctory style of commit messages that are common in software projects, and ensure that the purpose, significance, and experimental procedure for each incremental change are appropriately recorded.

Obviously, commit messages are a good place to store source code for important tools: 9112c05.

         r-------
        _|
       / |_______\_    \\
      |          |o|----\\
      |_____________\_--_\\
     (O)_O_O_O_O_O_(O)    \\

Syndicated 2008-12-23 23:56:00 (Updated 2008-12-24 00:23:07) from Conrad Parker
