Older blog entries for cinamod (starting at number 99)

5 Feb 2006 (updated 5 Feb 2006 at 20:49 UTC) »

In a thoughtful email, I've been told that I didn't accurately address Davyd's concern in my last post, so let me excerpt the email and further clarify and qualify my statements.

In your reply to Davyd, you're basing your argument on a different use case than he. [snip] It's not about finding out the file-type. What Davyd talked about, and what I also regularly do, is the opposite: Finding files of a certain type. [snip] You are right in saying that simply putting the file-type as text into the icon might not be the optimal choice but so far, I at least am not aware of a better one.

So let's run with that.

Let's forget that you can probably do this based on the file's name easily enough. Let's put aside the fact that having this info embedded in a smallish image probably isn't ideal. Let's put aside the fact that representing text as an image is lousy for a variety of reasons and recommended against by the GNOME HIG (most likely for i18n purposes). Let's see what other tools Nautilus gives you to solve this problem.

If one wants to, one can organize the icons by their type. One can choose to display the file's type in its caption. One can also choose to use the list view, which includes the file's type in the view by default.

There are at least three other easy ways to view and use this information for that particular use case. I contend that there are alternate (and, I'd argue, better) ways of viewing this information than embedding it inside the file's icon.

[update]

Christian has suggested that if you really want this behavior, it might be accomplishable via Nautilus emblems. Dave Camp's blog from yesteryear suggests that it might be possible.

As Christian mentions, breaking API/ABI isn't something to be done lightly, even if your module isn't constrained by GNOME's API/ABI policies. My argument doesn't concern itself with that, however, only with the apparent "loss of information and usability" that has been claimed.

Even if you think the purported usability arguments are thin, I think that I've shown that the usability arguments for keeping text in icons are at least equally thin. In that case, please at least consider deferring to the HIG, which quite clearly says "no text in icons", and decide on a better way of achieving the semantics you're after.

5 Feb 2006 (updated 5 Feb 2006 at 17:21 UTC) »

I'm going to have to go ahead and disagree with Davyd here. Maybe you just picked bad examples that don't really illustrate your point. In both cases, you can easily find out what type of file or share they are via the Nautilus contextual menu. Heck, you even have extensions showing, so saying that you don't know if "FOO.JPG" is a JPEG image because "JPEG" is not in the file's icon is complete rubbish.

My mother doesn't care whether the share is an SSH one, but if you do, right click->properties. The contextual information should be accessible via the "context" menu. If it's not, I'd argue that the bug lies there. The information in the icon is ugly, gets in the way, and is probably redundant. These are exactly the kinds of things you want a drill-down interface for, not bubble-up (IMHO, of course).

Sometimes I wonder whether people are perceiving real problems, or if they are just acting out their past frustrations with Dobey...

I'm going to have to disagree with some of Raph's points about the auto* toolchain. While I can't say that I'm pleased with many of auto*'s decisions, I've yet to see a build system that attempts to fill auto*'s niche and fills it as well as auto* currently does.

  1. The project's original goals are still relevant. Maybe you don't care about toolchains other than the GNU one. But I (and more importantly, my customers) care about binaries being compiled with the native compiler and linked with the native linker. Face it, even GCC 4.x isn't as good performance-wise as most platform-specific compilers, and it suffers from occasional ABI-related issues on platforms where it's not the native compiler.
  2. Auto* does solve real-world portability problems, especially the Windows portability problem. For example, compiling in an MSYS/MinGW or Cygwin environment is trivial. At my current job, I was able to take a few million lines of C source code managed with auto*, add 2 lines to my configure.ac file for improved Win32 libtool support, add a reimplementation of mmap (a sketch of such a shim follows this list), and it all "just worked". Auto* is also capable of cross-compiling Win32 binaries from Linux platforms (workrave's Win32 nightly builds do that, for instance). I hear that it's even possible to use Microsoft's cl.exe and link.exe with auto*.
  3. Regarding error reporting, the errors auto* reports are only as good as the people who wrote the error messages. Just because FooProject's author didn't write a good AC_MSG_ERROR or AC_MSG_WARN message doesn't mean that it's not possible to do so, or that their shortcomings are somehow the tool's fault. The tool is only as good as those who would use it.
  4. Regarding auto*'s tendency to work around deficiencies in ld/cc/nm/etc..., all I can counter with is "we don't control the horizontal and the vertical". Sure, it'd be great if problems were fixed at their root, or if they were never introduced in the first place. But please, let's be pragmatic. I live in the real world. Imperfect software gets produced all the time. And the support lifespan of a typical *NIX system can easily be measured in decades. Sure, there's timed/planned obsolescence, within reason. But there's a reason why people still code to C89 standards instead of C99: that timed obsolescence is measured in similar decades-long increments.
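
For the curious, the mmap reimplementation mentioned in point 2 is just a small shim over the Win32 file-mapping APIs. Here's a minimal sketch of the technique (read-only mappings only, and not the exact code in question; a complete shim has to honor the rest of the mmap() contract):

    /* A minimal, read-only mmap() emulation over the Win32 file-mapping
       APIs.  A sketch of the technique only; a complete shim needs to
       handle protection flags, offsets, and error reporting. */
    #include <windows.h>
    #include <io.h>
    #include <stddef.h>

    void *mmap_ro (int fd, size_t length)
    {
        HANDLE file = (HANDLE) _get_osfhandle (fd);
        HANDLE mapping = CreateFileMapping (file, NULL, PAGE_READONLY, 0, 0, NULL);
        void *ptr;

        if (mapping == NULL)
            return NULL;

        ptr = MapViewOfFile (mapping, FILE_MAP_READ, 0, 0, length);
        /* The view holds its own reference to the mapping object,
           so our handle can be closed right away. */
        CloseHandle (mapping);
        return ptr;
    }

    void munmap_ro (void *ptr)
    {
        UnmapViewOfFile (ptr);
    }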

Is auto* a perfect toolchain? No. Like you, I wish that we had a better alternative and feel that one is long past due. But in an imperfect world of having to support legacy linkers, compilers, and ancient platforms, auto* still fills a useful niche. Things are not as bad as you make them seem. You're just lamenting that the tools attempt to address a niche that you'd rather ignore entirely. That's fine for you. But writing off the tool entirely for those reasons is just sour grapes...

Neutering the DMCA

What follows aren't particularly well-formed thoughts, just ideas floating around that I need to jot down so that I don't forget them. These are original (in the sense that I'm not copying these from anywhere that I'm aware of), but I'm not sure if they're unique or even valid ideas. I'd like to investigate them more in the near future. I'm admittedly lacking in both IP and Constitutional law education and experience, so there are probably glaring holes and errors in my argument. Take what you read here with a grain of salt.

  • Article I, Section 8 of the US Constitution empowers Congress to enact Copyright protections in order to "promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries".
  • This time-limited "exclusive right" is, in effect, a monopoly on said Writings and Discoveries with an expiration date attached.
  • The DMCA further extends Copyright protections, making it illegal to reverse-engineer, re-implement, or otherwise circumvent a "Copyright-enforcing" mechanism.
  • The Sherman Antitrust Act forbids a company with a monopoly in one market from using that monopoly to expand its power into other markets. Illegal actions include:
    • Price fixing
    • Bid rigging
    • Market allocation/segmentation schemes
  • Consider (for example) that a monopoly is not limited to the "only company X makes printers" case but can also be drawn along product lines, such as "only company X makes $brand_name printers".
  • Consider (for example) that by putting access restrictions on ink cartridges so that company X can control who makes and distributes ink for $brand_name printers, company X is fixing prices and employing market allocation schemes. In doing so, it extends its $brand_name printer monopoly to also include inks made for that printer.
  • Lather, rinse, repeat for the product/technology of your choice (e.g., DVD/CSS region encoding, eBooks, etc...)
  • Hypothesize that exercising DMCA-protected vendor lock-in amounts to a violation of the Sherman Act, as it necessarily results in segmented-market oligopolies.
  • Hypothesize that a "catch-22" exists. While it may be Constitutionally legal for Congress to forbid its citizenry from circumventing copy-protection schemes, it would be illegal for companies operating in the US to employ such schemes.
  • Hypothesize that invoking the DMCA as defense of said schemes amounts to an admission of guilt with regards to Sherman.
  • ...
  • Profit???

Inkscape and SVG filters

It seems as though SVG filters are an oft-requested feature that the Inkscape developers haven't gotten around to implementing quite yet.

In case any of them are reading this syndicated somewhere, I'd just like to offer them a small piece of the solution and some advice, which they're free to do with as they will. Librsvg has implemented the drawing-side code for filters. No doubt, it could use some cleaning up and restructuring (and we'd love contributions back), but the code is there, is freely available, and just works. You'll still need to write all of the code for creating and manipulating the filters, but at least the rendering part is already done.

From what I understand from their emails, they're blocking filters on Cairo/Xara integration, which I don't quite understand. The filters all operate on pixel data, not SVG maths. As such, they're independent of Cairo, Libart, or whatever one wants to draw with. The small-but-important exception is that one needs to be able to turn a specified region of the source graphic into a pixel buffer in order to use it with these filter functions (which Inkscape probably almost already has, since it uses libart to draw), and to blit pixel data back onto the source graphic (which Inkscape necessarily has, since it supports the <image> tag). In librsvg, this has always been a pretty small shim, regardless of whether we were using Cairo or libart.
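
To make that concrete, here's roughly the sort of shim I mean, with a Cairo image surface standing in as the pixel buffer. The function names are illustrative, not librsvg's actual API:

    /* Render a region of the source graphic into a raw pixel buffer
       that the filter primitives can operate on.  Illustrative only. */
    #include <cairo.h>

    static cairo_surface_t *
    region_to_pixels (cairo_surface_t *source, int x, int y, int w, int h)
    {
        cairo_surface_t *pixels =
            cairo_image_surface_create (CAIRO_FORMAT_ARGB32, w, h);
        cairo_t *cr = cairo_create (pixels);

        cairo_set_source_surface (cr, source, -x, -y);
        cairo_paint (cr);
        cairo_destroy (cr);
        return pixels;
    }

    /* Blit the filtered pixels back onto the destination at (x, y);
       essentially what drawing an <image> element already requires. */
    static void
    pixels_to_region (cairo_t *dest, cairo_surface_t *pixels, int x, int y)
    {
        cairo_set_source_surface (dest, pixels, x, y);
        cairo_paint (dest);
    }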

Also, they claim that Cairo is "slow". From librsvg's experience, it's anything but, at least compared to what we were using. Like Inkscape, we once used libart_lgpl. Cairo's image surface has been between 1.5x and 10x faster than libart for us. My best guesses as to why are that it uses ARGB rather than RGBA pixel ordering, and that libpixman's gradient-generating code is pretty good. Sure, it might not be able to (currently) draw 10 gazillion polygons per second like Xara claims to do, but does Inkscape really need that level of performance? I'd also like to see the code/tests Xara used in their performance comparison.

Anyway, we abstracted out our rendering into a 6-method interface that's implementable via Cairo or libart in ~1100 lines of C code. It's proven indispensable. Once you've chosen a good enough rendering abstraction, swapping in Cairo, Xara, or $flavour-of-the-week under the hood shouldn't be a problem down the road.
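
To give a flavor of what that looks like, the abstraction is shaped something like the following sketch. The method names here are illustrative rather than our exact interface:

    /* The rough shape of a swappable rendering interface; the method
       names are illustrative, not librsvg's exact ones. */
    typedef struct _Render Render;

    struct _Render {
        void (* free)                (Render *render);
        void (* render_path)         (Render *render, const char *path_d);
        void (* render_image)        (Render *render, const void *pixels,
                                      double x, double y, double w, double h);
        void (* push_discrete_layer) (Render *render);
        void (* pop_discrete_layer)  (Render *render);
        void (* add_clipping_rect)   (Render *render, double x, double y,
                                      double w, double h);
    };

Each backend fills in the table with its own implementations, and the rest of the library only ever calls through the function pointers, so the parsing and DOM code never knows which renderer is underneath.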

If SVG filters are your most requested feature, and it really is more than "just a one person job", please consider stealing liberally from us. GNOME needs more pretty icons and widget themes.

30 Dec 2005 (updated 30 Dec 2005 at 05:57 UTC) »
Expensive evening...

So, Ruth and I were driving from her grandparents' place in upstate Pennsylvania to Boston tonight. It's a long drive (7 hours in ideal conditions), and it was rainy/foggy most of the way. After about 5 hours, Ruth got tired and I took over driving. All was well for about 1.5 hours, until I got ticketed: I was going 63mph in a 40mph zone. This 40mph zone was near an otherwise *vacant* toll booth area where the speed limit drops from 65mph to 50mph to 40mph and then to 30mph in a matter of 300 meters. I pulled my foot off the gas pedal and applied the brake, but apparently not fast enough for the state trooper's liking. Utter B.S...

Then we got home and opened a week's worth of mail that had been waiting for us. The choice piece of mail was a letter from the I.R.S. telling Ruth that she owes about $3000 in back "self employment" taxes. Ruth is a graduate student who makes about $18k per year in taxable income (on which she pays taxes quarterly, like she's supposed to) from an extremely modest stipend. She is in no way "self employed", nor should the government feel entitled to another 1/6th of her measly income ON TOP of the other taxes she'd already paid. What The FSCK...

Today's lesson learned: government. Can't live with 'em; can't run or hide from 'em either.

24 Dec 2005 (updated 24 Dec 2005 at 15:56 UTC) »
Burgundavia,

Why is ODF a leapfrog? While it doesn't appear to be so, it offers something MS Office is never going to be able to offer, full portability of documents regardless of program. I call that a killer feature.

I think that you're forgetting other "open" specs like RTF, which offer full portability of documents regardless of the program used to edit or render them. And you also forget that the other MS Office formats are pretty well understood these days by a *lot* of other programs. To say that you don't have something close to full portability of your documents today is to live in ignorance. To say that we don't have "simply documents" these days is to abound in ignorance.

ODT does have a few things that the MSFT formats don't have:

  1. A standards body around the format
  2. Buzzword compliance
  3. Ignorant zealots (aren't they all?)
  4. Momentum

From a technical perspective, ODT, DOC/XLS, etc. are on fairly equal playing ground and enjoy broad application support. Why I have to claim otherwise in order to "be there socially", as you claim, I have no idea. Your excluding/ignoring/rejecting other people's well-founded technical opinions because they aren't your type of zealotry is an inexcusable offense.

I'm not claiming that ODT isn't technically good or morally good. It is both. I'd rather support it than MSFT's formats any day. But you're arguing all the wrong things for all the right reasons.

21 Dec 2005 (updated 21 Dec 2005 at 16:45 UTC) »
Burgundavia, our ODT support is on-par with our RTF and DOC support, which is to say that it's pretty darn good right now. It will be in our upcoming 2.4.2 release, due out in a few days. If you like to live dangerously, try out the CVS HEAD build.

I don't know who the "they" is in your last anecdote, but I never said that ODT support wasn't important; rather, I don't understand the hype around the format. And if you'd been using MS file formats, you'd still have the choice of switching to Abi+Gnumeric or (gasp) even Microsoft Office. OpenDocument is surely a useful tool, but it is not the panacea that people make it out to be.

10 Dec 2005 (updated 10 Dec 2005 at 21:09 UTC) »
svg hacking

Today I deprecated the old 'rsvg' command-line tool in favor of the new hotness, rsvg-convert. rsvg is now just a small python wrapper around rsvg-convert. rsvg-convert has a bunch of useful improvements over its predecessor:

  • It's faster and lighter on RAM, since we no longer go through the intermediate step of converting the SVG to a GdkPixbuf, and then the GdkPixbuf to a PNG
  • It can accept input from stdin [default behavior]
  • It can emit output to stdout [default behavior]
  • It can preserve the image's aspect ratio when you scale it
  • It can set the image's base URI, so you can download an SVG from the web but not the resources (PNGs, JPEGs, etc.) relative to it, and it will all "just work"
  • It can emit things in the PDF, PS, and SVGPrint (!!) formats, thanks to Cairo hotness
  • It can merge multiple SVGs specified on the command-line into a single PDF/PS/SVGPrint document (caveat: due to Cairo API limitations, all pages are constrained to the size of the first image)
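
For the curious, the speedup in the first point comes from rendering straight onto a Cairo image surface and writing the PNG from there. A minimal sketch of the idea (not rsvg-convert's actual source, and the librsvg entry points assumed here may differ in detail between releases):

    /* svg2png.c: render an SVG directly onto a Cairo image surface and
       write it out as a PNG, with no GdkPixbuf intermediate.  A sketch
       of the idea only. */
    #include <cairo.h>
    #include <librsvg/rsvg.h>
    #include <librsvg/rsvg-cairo.h>

    int main (int argc, char **argv)
    {
        RsvgHandle *handle;
        RsvgDimensionData dim;
        cairo_surface_t *surface;
        cairo_t *cr;

        if (argc < 3)
            return 1;

        g_type_init ();

        handle = rsvg_handle_new_from_file (argv[1], NULL);
        if (handle == NULL)
            return 1;

        rsvg_handle_get_dimensions (handle, &dim);

        surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32,
                                              dim.width, dim.height);
        cr = cairo_create (surface);

        rsvg_handle_render_cairo (handle, cr);
        cairo_surface_write_to_png (surface, argv[2]);

        cairo_destroy (cr);
        cairo_surface_destroy (surface);
        g_object_unref (handle);
        return 0;
    }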

The multi-page merging in that last bullet was made possible in large part by Emmanuel Pacaud's nice work on an experimental Cairo SVG backend. Today, I did my part and modified his SVG backend to be an SVGPrint backend so we can get multi-page output. Not that any renderers I'm aware of support more than the first page of output. But that's probably a chicken-vs.-egg kind of problem.

8 Dec 2005 (updated 8 Dec 2005 at 15:50 UTC) »

So, I took the LSAT last weekend at Northeastern University. I think that I did pretty well, but of course, we'll know for certain in a few weeks. I'm hoping to do well enough to be known as "the white Luis", as Ruthie affectionately calls me now.

So, with the admission slip, there's a list of rules. One of them says to "dress appropriately" because you can't control the temperature of the testing room. Also, it enumerates about a dozen prohibited devices and activities that might be distracting to one's fellow test-takers - alarms, eating and drinking, cell phones, etc.

So the room I'm taking the test in is apparently the only room on Northeastern's campus with heat on this Saturday. About 45 people are stuffed into a relatively small room, and its big glass windows face east on a sunny morning. All of this adds up to one hot classroom and a bunch of drowsy test-takers.

I'm sitting next to a very attractive girl - maybe 22-23 years old. At the beginning of the test, she has about 7 layers of clothing on - it's < 0C outside and she followed the LSAC's clothing recommendations. During the test, she gets hot and starts stripping. One layer after the next comes off, until about 3/4 of the way through the test, she's sitting next to me in a bra, with a tattoo between her shoulders and a belly button ring. Far more distracting than any beeping alarm clock, let me tell you! These certainly weren't on the list of prohibited, distracting items ;^) Not that I'm complaining...
