Older blog entries for pabos (starting at number 3)


I followed a link Raph mentioned recently to David McCusker's site. I was amazed at the openness with which David shares his personal life, seemingly intentionally, with the general public. Here is a quandary for me. Reading a personal diary evokes a personal response. But I don't know David at all (well, apart from what I've read). Are social norms sufficiently different in this context from regular conversational contexts that a response to (hard) personal matters from a stranger would be normal here? The additional complication is that not everything I would like to say seems appropriate for 'first contact'. More specifically, if a friend were to unburden his heart to me, there is an openness in the friendship that allows both comfort and criticism. Can this level of familiarity be forged in a moment? I don't think so, and that leads me to say that a response is more likely to come from someone who has an established relationship with David already. But in that case, why is everyone told? In any case, having publicly aired my confusion, I will say that I hope for the best for all involved without going into what exactly that means.


While refactoring some Quixotic code I accidentally obliterated large chunks of working code. Coupled with the increasing need to organize my Makefile into something sensible, I attempted a swan dive into the GNU pool. Having crawled out of the pool, I now have stinging, red skin from the gigantic belly flop.

More specifically, I tried to start a personal CVS repository, conquer the auto* tools and experiment with gcc, gdb and gprof simultaneously. I am slightly confused by what happened. I don't think there are any conceptual barriers to the tools; in fact I'm fairly sure I understand what need each of them addresses. On the other hand, I'm not as surprised as I might make it sound. I've known for a long time why I haven't approached many of the tools - they seem inflexible and inelegant.

That may sound harsh, but that's not my intention. I feel the need to explain further but have the dreaded feeling that this is going to come out all wrong. Nonetheless, I'll try to give a few examples (which may not be strictly true of the GNU tools but are hopefully generally true of the way they're used -- see, I'm qualifying already):

  • inflexible

    So using Makefiles you can cross-compile to a plethora of platforms, write shell scripts, call any external tools you need, and it's inflexible?

    • no structure to create within

      The majority of platforms (to my limited knowledge) on which the auto* tools are used are Unix variants with frustratingly petty differences (my impression; I remain ignorant). It is nice to be isolated from these differences as much as possible. Nicer still would be to fix stupid differences. Shells are not guaranteed but may be entrenched. Bash scripting also has excellently archaic syntax. External tools become dependencies which need to be pre-installed, and as such are used only in a restricted manner.

    • structure to create within is annoying :)

      Hardcoded directory formats and files. The guts of the build process are always hanging out all over the place. I'd prefer build-related files to be somewhere nice, like in a directory called build.

  • inelegant
    • strange consistency and cryptic, historically evolved techniques (survival of the obscure?)

      bin_PROGRAMS = hello
      hello_SOURCES = hello.h hello.cc main.cc
      This may be familiar to some, and it's not a major conceptual barrier - it's an annoyance barrier. Why did compound words suddenly appear and take on special meaning? It just doesn't *feel* right. configure.in and Makefile.am *seem* like a gigantic bundle of IFs, GOTOs and black boxes (M4 macros).

      This point could be summarized by saying that using Makefiles isn't an a-ha or clean or fun experience. It's more of an 'alright, here we go...' manual-labour kind of thing.

    • Hierarchies and relationships appear to be hard to accomplish in a clean manner.

      It seems hard to make multiple build pathways (ie. different compilers, different compiler flags, etc.). It seems harder still to visualize these pathways as time goes on. It appears difficult to chain makefiles together efficiently, either inward to sub-projects or outward to connect broader projects together.

    • Poor means of establishing boundaries and contexts

      For instance, I'd like to be able to create a project with sub-projects which might be shared libraries, or loadable modules, or give a name to some aspect of the project which I treat as a unit. More broadly, there seems to be no way to name things and then refer to them by name.

While I'm on the soapbox, I'll also take this opportunity to say that it has been my experience that this is a universal problem. Makefile problems seem to be a very common frustration when experimenting with source, the easiest thing for maintainers to dismiss, the least likely to be fixed and the most prone to magic hand-waving explanations ("experts only please") rather than thorough investigations.

Also, bear in mind that this is more of an emotional rant borne of ignorance and mild frustration.

Oh, and finally, several auto* tutorials and sites recommend reading the "Recursive Make Considered Harmful" paper. Are its techniques in practical use anywhere?

Drex & a creamy white paper request

This was what I was going to talk about originally, so I'll keep going despite the length of this post.
I've also been thinking of beginning to develop some of the concepts for Drex Authoring (-ing in a title is weird, isn't it?) that I've thought about. I prefer the term authoring to the processing of words, but it will have similarities to a word processor - ie. content is predominantly text. However, presentation of the content will exist symbiotically with the content rather than being integrated with it. More simply, the content can be created/edited and presented separately as needed.

Drex Authoring is entirely about on-screen editing, so I don't want to deal with physical limits like pages. I'd like the text to be displayed larger than your average word processor text (probably pseudo-zoomable), wrapping to fit the screen, not the page, but with typographic-quality spacing. Additionally, I'd like a creamy white paper texture rather than a stark white fill for the background. Think of the nicest paper you've seen and nice crisp, black letters - that's it. I'd like to find a good procedural algorithm for the creamy paper that consistently generates the same texture for a given (x,y) offset and random seed (ie. no tiling).
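The "same texture for a given (x,y) offset and seed" requirement is exactly what a stateless coordinate hash gives you. Here is a minimal sketch of the idea - the mixing constants, base colour and wobble range are all illustrative assumptions, not anything from Drex:

```python
def grain(x: int, y: int, seed: int) -> int:
    """Deterministic pseudo-random value in [0, 2**32) for (x, y, seed).

    Because the value depends only on the inputs, any texel can be
    regenerated at any time with no stored bitmap and no tiling.
    """
    h = seed & 0xFFFFFFFF
    h ^= (x * 0x9E3779B1) & 0xFFFFFFFF       # mix in x (golden-ratio constant)
    h = ((h ^ (h >> 16)) * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= (y * 0xC2B2AE35) & 0xFFFFFFFF       # mix in y with a different constant
    h = ((h ^ (h >> 13)) * 0x27D4EB2F) & 0xFFFFFFFF
    return (h ^ (h >> 16)) & 0xFFFFFFFF

def paper_luma(x: int, y: int, seed: int) -> int:
    """A subtle brightness wobble around a creamy base value (246)."""
    wobble = grain(x, y, seed) % 9 - 4       # -4 .. +4
    return 246 + wobble                      # 242 .. 250, a gentle paper grain

if __name__ == "__main__":
    # Determinism: the same offset and seed always yield the same texel.
    print(paper_luma(10, 20, 42) == paper_luma(10, 20, 42))  # True
```

A real implementation would smooth this raw grain (value or Perlin-style noise) and tint it toward cream rather than grey, but the determinism property shown here is the part that avoids tiling.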

Quixotic : Repainting, Paths & Fonts

My foray into 2d graphics and typography is proving to be very stimulating. I've been exploring how to implement the rendering pipeline for my Quixotic project recently.

I have a vague notion in my head that when redrawing an image, an efficient and complete redraw will do better overall than using logic and data structures to minimize redraw areas. Undoubtedly this is a simplification, but I persist in thinking it while trying to avoid it at the same time. Mostly this is a combination of prior experience and inexperience. Some years ago I made some assembly language demos in DOS which often involved blitting some image to the screen as fast as possible to draw some sprite, animated fire or the like. The concept which has stuck with me is "wait for the refresh signal and blit like mad". Still, anything beyond basic 2d vector shapes will require bitmap intermediaries, so blitting will undoubtedly be a very important component of the whole solution. The inexperience factor is my unfamiliarity with how data structures behave when random lookups, caching, and computations all play variable parts in a time-constrained redraw. 'Behave' sounds like an odd word to my ears, but I think it describes the situation well enough: I can think of what the data structures should do, but I don't have a feeling for how they will actually perform.

Since I'm trying to get some experience with data structures suitable for editable, easily renderable objects, I've been playing with paths and trying to decide on a good way to do partial path renderings. Redrawing the entire path if the path's bounding box is dirtied seems inelegant at best and, more importantly, the inevitable chained redraws that result would be a killer. For example, redrawing a path may cause a redraw of another path which partially overlaps it, which may cause the redrawing of text touching it, etc. This becomes enormously expensive very quickly, especially if the paths are filled with some form of bitmap paint (ie. gradient, image, pattern), masked, filtered and then composited. For that matter, since I'd like to support very large images redrawing even a single path can be expensive.
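One common mitigation for exactly this chained-redraw cascade (a hedged sketch of a general technique, not necessarily what Quixotic will do): collect dirty rectangles first, merge any that overlap into their union, and only repaint once nothing overlaps, so each screen area is redrawn at most once per frame. Rects here are (x0, y0, x1, y1) tuples in canvas coordinates:

```python
def overlaps(a, b):
    """True if axis-aligned boxes a and b share any area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def union(a, b):
    """Smallest box containing both a and b."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def coalesce(rects):
    """Merge overlapping dirty rects until none overlap.

    A cascade of paths dirtying each other collapses into a few
    disjoint regions, each repainted exactly once.
    """
    merged = list(rects)
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if overlaps(merged[i], merged[j]):
                    merged[i] = union(merged[i], merged[j])
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged

if __name__ == "__main__":
    dirty = [(0, 0, 10, 10), (5, 5, 20, 20), (30, 30, 40, 40)]
    print(coalesce(dirty))  # [(0, 0, 20, 20), (30, 30, 40, 40)]
```

The trade-off is that the union can over-approximate badly (two small rects in opposite corners merge into nearly the whole canvas), which is one reason the "just redraw everything efficiently" intuition keeps resurfacing.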

The problem I'm having is that I can't find a good way to represent partial paths. I've recently read that libart has a bounding box for each segment of its Sorted Vector Paths (SVPs) which could help at the expense of much higher memory use and the need to continually regenerate the SVP of paths being edited. However, my target canvas is in physical coordinates, not pixels, and I think SVP coordinates are already rasterized to pixel coordinates. I'll have to check into this further.

After trying to form all kinds of data structures for this purpose, I suddenly realized that clipping each path to the region of the canvas being redrawn would effectively bound the redraw to a limited area and draw only the portions of the path that I need. SVPs may prove useful here.
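The clip-to-redraw-region idea can be sketched at the bounding-box level: before touching a path at all, intersect its bounding box with the region being repainted. An empty intersection means the path is skipped entirely; a non-empty one bounds how much of the path must be rasterized. (Real path clipping, e.g. via libart's SVPs, would then clip the outline itself - this hypothetical sketch only shows the bounding-box stage.)

```python
def intersect(a, b):
    """Intersection of boxes (x0, y0, x1, y1), or None if they don't meet."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    if x0 >= x1 or y0 >= y1:
        return None          # empty intersection: nothing to redraw
    return (x0, y0, x1, y1)

def paths_to_redraw(path_bboxes, dirty_region):
    """Yield (index, clipped_bbox) for each path touching the dirty region."""
    for i, bbox in enumerate(path_bboxes):
        clipped = intersect(bbox, dirty_region)
        if clipped is not None:
            yield i, clipped

if __name__ == "__main__":
    boxes = [(0, 0, 50, 50), (100, 100, 200, 200), (40, 40, 120, 120)]
    print(list(paths_to_redraw(boxes, (45, 45, 90, 90))))
    # [(0, (45, 45, 50, 50)), (2, (45, 45, 90, 90))]
```

Note that the boxes can stay in physical canvas coordinates throughout; only the final rasterization step needs to convert the clipped box to pixels.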

Mailing Lists and Journals

Since subscribing to desktop-devel and gtk-devel a few weeks (months?) ago I often have the desire to respond to some particular topic. I usually don't, though, because I feel like there are too many issues I don't understand, and when I revisit my initial reactionary response a few days after I might have posted it, it is usually less insightful or helpful than I originally felt it was. It seems as though this Advogato journal will be a good middle-ground alternative for me - an opportunity to give somewhat thoughtful, somewhat reactionary responses.

I think this is possible because Advogato is intentionally personal - my journal is my journal. Often on mailing lists I have the impression that everyone feels they must argue their particular case; as a result, discussion quickly degenerates into criticism of who was unfair to whom, who was misrepresented, who wasn't consulted, the list goes on... Mailing lists are public forums where each voice tries to be heard above another. Posts which provoke response co-opt the entire mailing list. A journal, on the other hand, is also public, but it invites rather than challenges. If people want to hear what I have to say, they come to read it. If they disagree, it's in a less confrontational manner and we can discuss it. If they really dislike it, they can avoid it.

Now, having said that, some journals can also be used in the same way as flames in mailing lists, and I have vague recollections of skipping journals like that when I browsed through Advogato journals before joining. Additionally, when I joined two days ago, I explored the site a little more thoroughly and for the first time discovered the "Recent Diary Entries" link, which I had previously assumed was a heading to the journal entries below it. When journals are viewed this way, it's a little more like a mailing list where you just get a barrage of messages regardless of who wrote them, so you may end up reading more inflammatory pieces. Still, even in this case, the nature and frequency of journal postings doesn't build up to the level of flame threads because the responses are less frequent and not directly linked into a thread-like hierarchy. Dilution makes it less potent and therefore less irritating.

I wrote some more about a specific issue on desktop-devel, but the HTML form entry won't accept the entire journal so I'll just post this for now. I'll have to look into other ways of entering a journal soon.

My first post.

A preview of some initial topics I'll likely write about

  • Quixotic
    A desktop publishing canvas. Initial focus will be on low-level source layout and performance experiments for the rendering and caching systems. Tests will probably begin with path rendering. The canvas is intended to be composed of modular, and therefore replaceable, components at as many layers as is practical and worthwhile. Two examples of where this might be useful are a replaceable text layout system and a replaceable canvas view system. Primary canvas views will be a freetype/libart/Xshm view and an Xft/Xrender view.

  • Verso
    A fully versioned file system within a file system - primarily for Drex.

  • Drex
    A desktop publishing suite. Depends on Quixotic and Verso so comments will likely be high-level descriptions and planning notes for the immediate future.

  • unnamed: notes & associations
    A program for taking unstructured notes quickly and applying arbitrary associative relationships between elements at any point in time afterwards.

  • Gnome
    Comments on, suggested improvements and enhancements to, etc. Discussion regarding a tentative series of articles detailing how to develop software using the Gnome libraries. The tentative title of the series is "the Exemplary Gnome". As the title suggests, the idea would be to create articles that capture best practices for particular libraries, with a heavy use of examples. Part of the appeal may, or may not, be that it would be written by someone who is coming to the Gnome libraries for the first time himself.
