Older blog entries for Raphael (starting at number 33)

Unmaintainable Code?

How To Write Unmaintainable Code

Two days ago, I enjoyed reading the collection of tricks titled How To Write Unmaintainable Code and I mentioned it to a colleague. We both had fun reading it and commenting on some entries, but then forgot about it.

The Mysterious JSP Bug

Yesterday, he came to me to check if I could help him debug an application. It was a bit of JSP code that I had written some time ago and that he had extended. Note that I seldom write JSP or even Java - he is a much better Java programmer than I am. The problem was that after his modifications, the JSP page no longer produced the expected results: it was supposed to display some results after a form submission, but it didn't. There was a rather large amount of code in that page, but I will spoil the fun for you by quoting only the part that caused the problem (of course, he initially thought that the problem came from a completely different part of the code):

    String submit = request.getParameter("submit");
    if (submit == null) {
        /* if the user did not confirm, go to the exit page */
        %><jsp:forward page="./SomeExitPage.jsp" /><%
    }

Nothing very fancy in that code. Now, since he was testing a modification of that code, he was not sure that the form submission would always be correct. So he did the obvious thing and commented out some parts of the code that were not ready yet for testing, including the one that I just mentioned. That part of the code now looked like this:

    String submit = request.getParameter("submit");
    // if (submit == null) {
    //     /* if the user did not confirm, go to the exit page */
    //     %><jsp:forward page="./SomeExitPage.jsp" /><%
    // }

Nothing unusual, right? Just commenting out a few lines that are not ready yet. Well, this is wrong! I found out that the problem was precisely there: the unexpected results that he got were just the contents of the exit page. The problem did not come from some other part of the code that we were looking at. It came from the lines that were commented out.

Why? Well, it should have been obvious: the JSP tags <% ... %> and <jsp:.../> are processed when the page is translated into a servlet, before the embedded Java code - including its // and /* */ comments - is even compiled. As a result, the <jsp:forward.../> was not commented out at all. On the contrary, it was now executed unconditionally, since the if condition around it had been commented away. That was a nasty trick!
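Given that translation order, the safe way to disable such a block is a JSP comment (<%-- ... --%>), which is stripped at translation time before either the tags or the Java code exist. A sketch of how the disabled code could have been written (note that the surrounding scriptlet has to be closed and reopened around the comment):

    String submit = request.getParameter("submit");
    %><%--
    if (submit == null) {
        /* if the user did not confirm, go to the exit page */
        %><jsp:forward page="./SomeExitPage.jsp" /><%
    }
    --%><%

Everything between <%-- and --%>, including the <jsp:forward.../> tag, disappears before the servlet is generated.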


The bug was fixed quickly, but we thought again about one of the interesting examples in “How To Write Unmaintainable Code”, specifically the one titled “Code That Masquerades As Comments and Vice Versa”.

Syndicated 2005-11-24 11:29:40 from Raphaël's Last Minutes

Visited countries

Following the meme started on planet.debian.net (but one week late), here is a list of countries that I have visited…

Rather dense in Europe, but unfortunately not much outside of it. I am planning to change that.

North America is shown as one big piece, but to tell the truth, my visits to Canada have been limited to Quebec (plus one airport stop that doesn't count) and my visits to the US include only CA, AZ, NV and UT (plus DC and IL if you include airports). Note that the isolated red dots (islands) around Hawaii are incorrect and came as a side effect of selecting the US, but most of the other ones are correct.

Update: Two months later, I managed to fix the title of this entry. It turns out that NewsBruiser sometimes gets confused by its authentication cookie and displays the unhelpful error message "Error: I don't think you meant to enter that as the title." if you put anything in the title. Submitting an entry without a title worked, though. Solution: go to the configuration page, select "Security", re-enter your password and enable or disable the authentication cookie. After that, you can enter titles again. <sarcasm>Why didn't I think of this obvious solution before?</sarcasm>

Syndicated 2005-09-22 13:02:42 from Raphaël's Last Minutes

ADSL adventures, part 2

Once I managed to get the necessary information for configuring my Speedtouch 350 DSL modem (see part 1), the next logical thing to do was to start using it. Or at least try to.

The first problem was that Belgacom apparently never sent me the letter containing the user name and password that I was supposed to use for accessing their services. After spending a few minutes on the phone (that music sounds familiar) I got a login and password that I could use. Well, that’s what I thought. I learned later that what I got was not the login/password pair that I asked for, but just a pair of passwords (for PPPoE and for POP). No login. Doh!

My second call to the support center (ah, that music again!) was barely more successful: this time I got a user name and a (new) password, but again I discovered later that the user name that I got was incomplete (last characters missing).

The third call was more interesting. After 20 minutes of music (I really know it by now), a technician told me about the missing characters in my user name and asked me to try logging in while he was monitoring their side of the DSL line. This time, the PPP authentication was successful but then the PPP connection went down immediately after that. Strange! The modem re-tried a few seconds later, with the same results. And again, and again… After a few more minutes of debugging, he told me that he was resetting their card and asked me to power-cycle my modem. I did that and when the line came back, the connection worked and I was able to access the Internet. Oh joy! But I also noticed something else while looking at the system log of the modem: the connection speed after the reset had dropped from 3 Mbps to 1 Mbps. I mentioned that to the guy, who said that it was normal. Ah well, at least the ADSL connection was usable so I was happy (after wasting two hours on that).

According to a colleague who had a similar experience, the reason why my line went down immediately after a successful authentication was related to the 3 Mbps. By default, the DSL access is configured for 384K/3M up/down. But the offer that I had accepted had a cap at 1 Mbps (apparently, because I never got the letter with the details of the offer). Although the telco part of Belgacom handling the DSL access was happy to let me in with 3 Mbps, the ISP part of the company was not happy with that and dropped the connection immediately. That could make sense, but I am still wondering why the access line had not been configured correctly on their side in the first place and why it took so long for the problem to be identified. Ah well, at least I can use my connection now… And I am glad that I could do all the tests using the built-in web interface of the modem over Ethernet instead of USB. I’m wondering what would have happened if they had required me to use some Windows software for configuring the stuff.

Syndicated 2005-08-04 08:21:06 from Raphaël's Last Minutes

I have moved to blogs.gnome.org. I don't know yet if I will update that blog more frequently than this diary. We'll see...

Long time no write.... My last diary entry was almost one year ago!

Playing with LILO and Slashdot

This morning, I loaded the Slashdot home page and... Oops! What's there in the story at the top of the page? Three links to my LILO pages. Ouch! This is going to hurt... Welcome to the Slashdot effect! Quick look at the logs of the web server: since this morning, the server has already seen more than 20,000 visitors making more than 300,000 requests. And many people in the US are still in bed at this time. All these downloads are going to suck a significant amount of bandwidth...
Playing with LILO is fun. It is also interesting because it encourages good programming practices. Testing a modified boot screen requires a reboot of the PC, and any fatal error in the program is likely to prevent the computer from booting at all. So I take the time to re-read my code before rebooting. This reminds me of the good old days when I was programming in Z80 assembler on my ZX Spectrum.

Playing with the Linux kernel

Yesterday, I had to run some tests at work with a modified version of the Linux TCP stack. The goal was to change the initial size of the congestion window and to run some performance tests on a dedicated network (with high bandwidth*delay product). Of course, there is no /proc interface for changing that, because this would violate the standards. So I decided to add my own. I had never looked closely at the Linux kernel code before, and I never touched the TCP stack.
It took me a while to find the file that I had to change, but find, grep and emacs are very useful tools. Once I found the file (net/ipv4/tcp_input.c), it was really easy to change the way the cwnd was initialized. Half an hour later, I had created two new interfaces in /proc/sys/net/ipv4 and everything was working. I even added a new option in net/ipv4/Config.in to make these features optional. By reading or writing to the pseudo-files in /proc, I could dynamically alter the behavior of the TCP stack and make it standards-compliant or not.
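For flavor, in a 2.4-era kernel (this entry predates sysfs) adding such a knob boils down to one extra entry in the ipv4 sysctl table plus a global variable. A rough sketch, with a hypothetical name, constant and variable, since I did not publish the actual patch:

```c
/* net/ipv4/sysctl_net_ipv4.c (sketch, Linux 2.4 style). The name
 * "tcp_init_cwnd", the NET_IPV4_TCP_INIT_CWND constant and the variable
 * below are hypothetical stand-ins for whatever the real patch used. */
extern int sysctl_tcp_init_cwnd;        /* read where cwnd is initialized */

static ctl_table ipv4_table[] = {
        /* ... existing entries ... */
        {NET_IPV4_TCP_INIT_CWND, "tcp_init_cwnd",
         &sysctl_tcp_init_cwnd, sizeof(int), 0644, NULL,
         &proc_dointvec},
        {0}
};
```

After that, writing to /proc/sys/net/ipv4/tcp_init_cwnd (for example with echo or sysctl -w) changes the value at run time, which is the "dynamically alter the behavior" part mentioned above.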
This was a very interesting experience for me, because I have been working on free software for a long time, but still I did not expect that it would be so easy to add a new feature to something as complex as the TCP stack of Linux. Of course, I only had to do a very small change that was limited to a few files, but it was interesting for me to see how easy it was to understand how the /proc interfaces work and how the kernel configuration works, considering that it was the very first time that I looked at it. So I have to congratulate the kernel hackers for all this nice work.

There is a pointer to the improvements for TCP in the Ericsson Eifel license that I mentioned. The first paragraph contains a reference to the Internet-Draft that describes the Eifel algorithm. Mind you, this is a draft and not yet an RFC.
In the References section of the draft, there is a link to a paper that gives a bit more information about why the Eifel algorithm could be useful for TCP.
Oh and by the way, I come from the French-speaking part of Belgium, not from France. ;-)

I just saw your AskAdvogato message in which you ask how to keep ants out without killing them. Although killing them is usually the easiest solution (using boxes with small ant-sized holes containing a poison that the ants eat), the best way to keep them out is to make it hard for them to get in. If it is not possible for you to seal all openings in your house, you can try to smear grease in their path, or to use chalk or talc powder around the openings through which the ants enter your house. They hate these things because it makes it harder for them to walk, and they give up after a while... or find another opening that you had forgotten. Good luck!

More patents usable in free software...

Following the example set by Raph with his royalty free license for using his patents in free software (released under the GPL), there is now a similar license granted by Ericsson for some proposed improvements of the TCP protocol (the Eifel algorithm). More power to free software!

That license allows GPLed software to include the proposed improvements to the TCP stack, as well as any operating system that is entirely Open Source. So this covers Linux, FreeBSD, OpenBSD and NetBSD, among others.

(Disclaimer: I work for Ericsson and I contributed to the wording of that license, but I am currently only speaking for myself, not for my employer.)

David O'Toole writes:

[...] Looking at stuff like this makes me get just a tiny bit upset about how badly the linux world is dragging its political feet with respect to improving the interface. I'm not talking about making all the OK buttons respond to the Enter key (currently my biggest pet peeve about GNOME, and it's slowly being fixed---recent GIMP etc.)

I'm talking about the imaging model. I don't want to criticize X unfairly. The X Window System was brilliant for its time and in its environment. But it simply does not support what people want to do now well enough to continue. Fast vector imaging, transparency, high-resolution monitors, antialiasing. Yes, you can implement software on top but there's no standard and it's slow.

The first defense I hear all the time is network transparency. I respond: who cares.

Well... I, for one, care very much about the network transparency of X. I am currently typing this from a Solaris machine on which I have other windows displayed remotely from a Linux machine and other Solaris machines. Not only some XTerms and Emacs that could also work over telnet/rsh/ssh, but also graphical applications like Purify, Quantify, Netscape, XMMS and some other goodies. They are all on the same LAN so speed is not really an issue. Without X's ability to display anything anywhere, writing and debugging my programs would be much harder.

So maybe I am among the 1% of people who really use remote displays and would not be satisfied with text-based remote logins. This does not mean that nothing should be done for the other 99% who would like to get much better performance from the applications running on the local display.

I don't think that it is necessary to throw X away and to start again from scratch. The DGA extension (available on OpenWindows and XFree86) proves that you can get decent performance out of X, although this requires some specific code that is rather ugly and not easy to write and maintain. Most programmers do not want to write some additional code for specific X extensions, and indeed they should not be required to do so.

But it would be possible to get better performance while keeping the X API. Imagine that someone modifies the shared X library (libX11.so) so that when the client connects to the local server, all X calls that are normally sent to the X server over a socket are translated into optimized drawing operations accessing the video buffer directly. The shared X library would more or less contain some bits of the server code (actually, a stub could dlopen the correct code). If the X client connects to a remote server, the X function calls would fall back to the standard X protocol. All clients dynamically linked to that modified library would automatically benefit from these improvements without requiring any changes to their code. So it can be done without throwing away the benefits of X. Actually, I believe that some people are working on that at the moment...
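The dispatch idea above can be sketched in a few lines of C: one public drawing API, two backends, and the backend chosen once when the connection is opened. All names here are invented for illustration; this is not the real libX11 code.

```c
/* One public drawing API with two interchangeable backends, selected once
 * at connection time. Names are invented; this is only a sketch. */

typedef int (*draw_fn)(int x, int y);

/* Fast path: would poke the framebuffer directly on a local display. */
static int draw_direct(int x, int y)   { (void)x; (void)y; return 1; }

/* Fallback: would marshal a request into the standard X wire protocol. */
static int draw_protocol(int x, int y) { (void)x; (void)y; return 2; }

/* Chosen when the "connection" is opened, e.g. inside XOpenDisplay(). */
static draw_fn draw_impl = draw_protocol;

void connect_display(const char *display)
{
    /* ":0"-style names mean a local server -> take the fast path;
     * "host:0"-style names keep going through the wire protocol. */
    draw_impl = (display[0] == ':') ? draw_direct : draw_protocol;
}

/* The unchanged public entry point that every client keeps calling. */
int XDrawPoint_sketch(int x, int y)
{
    return draw_impl(x, y);
}
```

The point of the design is that clients only ever see XDrawPoint_sketch(); whether a call hits the framebuffer or the socket is decided once, behind the same API.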

Question: maximum information density in the print-scan process?

Does anybody know how much information can be stored and reliably retrieved from a piece of paper, using a standard printer (inkjet or laser, 300dpi) and a scanner (1200 dpi)? Since a piece of paper can be affected by bit rot (literally) and can be damaged in various ways, some error correction (e.g. Reed Solomon) and detection (e.g. CRC) is necessary. Also, I do not want to rely on high-quality paper so I have to accept some ink diffusion and "background noise" introduced by defects in the paper.

I found some references to 2D barcodes (such as DataMatrix, PDF-417 and others) but these codes are designed to be scanned efficiently by relatively cheap and fast CCD scanners. I am not worried about the scanning time (I am using a flatbed scanner) or the processing time (I can accept some heavy image processing). Also, I would like to encode raw bits and pack as much information as possible on a sheet of paper, regardless of its size. These 2D barcodes have a fixed or maximum symbol size and it is necessary to use several of them if I want to fill a sheet of paper, wasting space in the duplicated calibration areas and guard areas.

PDF-417 has a maximum density of 106 bytes per square centimeter (686 bytes per square inch, for you retrogrades), which is quite low. It is certainly possible to do better, but I would like to know if there are any standards for doing that. I am especially interested in methods that are in the public domain, because most 2D barcodes are patented (e.g. PDF-417 is covered by US patent 5,243,655 and DataMatrix is covered by 4,939,354, 5,053,609 and 5,124,536).
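As a back-of-the-envelope sanity check (my own arbitrary assumptions, not from any standard): suppose each payload bit occupies a 2x2-dot cell at 300 dpi, and a Reed-Solomon(255,223) code takes its cut for error correction. A small C helper makes the arithmetic explicit:

```c
/* Back-of-the-envelope payload estimate for the print-and-scan question.
 * The 2x2-dot cell size and the RS(255,223) code rate are arbitrary
 * assumptions chosen for illustration, not taken from any standard. */
double payload_bytes_per_sq_inch(int dpi, int cell, double ecc_rate)
{
    double dots = (double)dpi * dpi;    /* addressable dots per square inch */
    double bits = dots / (cell * cell); /* one payload bit per cell x cell block */
    return bits / 8.0 * ecc_rate;       /* bytes left after ECC overhead */
}
```

With 300 dpi, 2x2 cells and RS(255,223) this gives roughly 2460 bytes per square inch, more than three times the 686 bytes per square inch quoted above for PDF-417, which suggests there is indeed room to do much better if the scanner can resolve the cells reliably.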

If you know any good references, please post them in a diary entry (I try to check the recent diaries once a day, but I may miss some of them) or send them to me by e-mail: quinet (at) gamers (dot) org. Thanks!

Hmmm... This is a bit long for a diary entry. But I don't think that such a question deserves an article on the front page. If you think that I should have posted this as an article, then send me an e-mail and I will re-post this question and edit it out of my diary.

I posted my opinion on using GdkRgb in Ghostscript, in the LinuxToday article about Raph's open letter to the Ghostscript community. IMHO, GdkRgb is the best solution, and those who see it as an attempt to force "Gnome stuff" onto their desktop do not understand how Ghostscript works or what GdkRgb is.

This is not new, but it looks like anything that mentions Gnome is flamed by KDE bigots, and vice-versa (yes, it does happen both ways). The interesting thing here is that the most vocal critics are not developers and/or show clearly that they do not understand what they are talking about. Sure, they want someone (who?) to fork GhostScript, presumably to create a highly productive KDE branch or something like that. What a bright idea! Sure, they could get rid of any Bonobo linking, but throwing GdkRgb away would be stupid.

Sigh! Even if you are careful about what you communicate (I think that Raph's letter was nice and explained very well that using GdkRgb would have no influence on KDE), some morons will find a way to interpret it in a different way.

