Older blog entries for slamb (starting at number 49)

lkcl, SQLite

when you get up to 30mbyte files, such as those used by embedded.org, the access time will drive you nuts.

People use SQLite databases much larger than that without problem, so your blanket statement is false - there's more to it than that. You said this is a monotone database? Maybe it's not normalized well, it isn't creating the right indexes, its queries are ill-considered, or it's doing unusual things like creating huge blobs in the database. (The last seems likely, since it's a version control system. I don't know if anyone has bothered to optimize this case in SQLite.) Has anyone benchmarked an equivalent PostgreSQL database for comparison?

Admittedly, there are ways SQLite could be causing your performance problem. I don't think the query planner is very sophisticated. And worse, SQLite locks the whole database rather than individual tables or rows, so there's essentially no write concurrency, though there's talk on their wiki of implementing something finer-grained.

p.s. salmoni: if you get a crash in the middle of a write to a flat-file, then, just like ms-access, your data is corrupted. don't use it.

I assert otherwise, with the exception that your application corrupting SQLite's data structures can of course make anything happen. It sounds like you've encountered a problem with a pretty complex application that happens to use SQLite. Link to your bug report with a small demo application and I'll believe you when you say that SQLite is at fault.

27 Nov 2006 (updated 27 Nov 2006 at 22:56 UTC) »
salmoni, SQLite

SQLite's reliability is quite good. If you set pragma synchronous = full (the default in SQLite 3.0) and pragma fullfsync = 1 (for OS X, where they've decided to make fsync(2) dangerously worthless and nonstandard), SQLite is an ACID database. All bets are off if you corrupt its memory, which is possible since it runs in-process. Otherwise, it's the real deal.
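
For example, from Python's sqlite3 module, that's just a couple of statements at connection time (a trivial sketch; since synchronous = full is already the default, the first pragma is only being explicit):

    import sqlite3

    conn = sqlite3.connect('example.db')
    # synchronous = FULL is the default; set it explicitly anyway.
    conn.execute('PRAGMA synchronous = FULL')
    # On OS X, also ask SQLite to use F_FULLFSYNC so writes really reach the platter.
    conn.execute('PRAGMA fullfsync = 1')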

9 Nov 2006 (updated 9 Nov 2006 at 21:45 UTC) »
robogato

I was one who pushed phpgurru over the spam threshold. It's a shame his last diary post was lost, because that's what convinced me he was a spammer. He said something about knowing a guy who will provide you content for your website so you get more AdWords money or something, and linked to the guy's spammy website.

ncm, C++ thread cancellation

That's interesting! But I bet against you - they'll fail. I'd say pthread cancellation has six problems that Java's interrupt() does not:

  1. widely inconsistent, broken implementations [1]
  2. it's a C standard that proposes a C++ way (exception stacks) rather than a C way (ECANCELED)
  3. inability to pass through language boundaries, like through the Python interpreter
  4. not reversible [2], making it worthless for canceling a single task in a long-running worker thread
  5. allows what I describe in the sigsafe documentation as the "after race" [3]. The standard says: [4]
    ... if the thread is suspended at a cancellation point and the event for which it is waiting occurs before the cancellation request is acted upon, it is unspecified whether the cancellation request is acted upon or whether the cancellation request remains pending and the thread resumes normal execution.
  6. (for the people who implemented it via a C++ exception) crashes if cancellation happens in a destructor while an exception is being handled. (close(2) is a cancellation point, so this is likely! The solution I've seen is for the user to always disable cancellation before calling it in this case, but they won't. It'll just crash.)

The C++ people will probably avoid #2, #3, and #4, and introduce a #7:

  7. no standard way of signaling cancellation to underlying C code that makes blocking system calls

Given that almost everyone using C++ wraps some third-party C library or makes some direct system call, I think #7 is particularly important. C++ just can't pull it off without C. The pthread situation needs to be fixed first. #2 just can't be fixed, short of them admitting they screwed up, ditching it, apologizing to everyone who tried to use it, and going with ECANCELED. I'd be just a tiny bit surprised to see a standards body do such a thing...

I think the screwed-up pthread cancellation is a consequence of premature standardization. After working with DSL Forum (worst standards body ever), I'm quite familiar with the problem. The POSIX people stamped approval on a cool new thing before there were three useful, compatible implementations, so the problems weren't discovered until it was too late to change anything.

So what hope is there for people who want to use cancellation? Well, you can easily build it on top of sigsafe. Unfortunately, sigsafe's only practical for people who make direct system calls (e.g., people who write servers instead of gtk-based GUI clients), and you either need to (1) port it to each platform you will use (it takes me a few hours per platform) or (2) have a "correct" path and a "racy but portable" path.

I think portability is overrated - everyone should use new versions of Linux, FreeBSD, or (maybe) OS X. Trying to port to old versions or to a million different systems is a colossal waste of time. [5] But my view is unpopular; thus the lack of major applications using sigsafe.

7 Nov 2006 (updated 8 Nov 2006 at 02:40 UTC) »
ncm

You were in Boston before, right? Welcome to the Bay Area...

robogato

Thank you! In response to your solving one of my long-standing complaints about the recentlog, I'll try to post something interesting to it.

Load test grapher

Background: I've been slowly playing with a suite of software for looking at performance. It's one of those problems I think few people look at, because even though there's real computer science involved, any project with the word "test" attached gets sneered at by most people as mere QA work. (Another project I've been playing with is a monitoring system for sysadmins; it gets the same reaction. Sysadmin tools suck.) It's possible there's a little more computer science involved than necessary, because lately I've been wanting to do a little less software engineering and a little more computer science...

No release, documentation, or even a name yet, but there's a little bit of code in my Subversion repository here.

Today's problem: I made a graph of transaction duration vs. time of a quick Apache load test.

It's worthless. You have basically no idea how long the average transaction takes because the scale is blown by a few outliers. I tried a couple obvious things to solve this:

  • switching to log scale
  • setting ymax to be some function of the mean and standard deviation

but wasn't happy with the result. What I really wanted was what network people do for billing: look at the 95th percentile. The obvious way to do that is to sort the data and pick element 0.95n. But my program works with huge data sets - this graph was generated from 1.3 million transactions. Everything else runs in (nearly) linear time and doesn't use much disk. I don't want to sort stuff.

Off to the ACM Digital Library:

  1. Jain & Chlamtac, The P^2 algorithm for dynamic calculation of quantiles and histograms without storing observations
  2. Greenwald & Khanna, Space-efficient online computation of quantile summaries

These are really cool algorithms for estimating quantiles like the median or 95th percentile. The first has no defined error bounds but uses O(1) space and O(1) time per observation. The second returns a value whose rank r' is within [r-εN, r+εN] (with ε chosen in advance). It runs in a single pass using O((1/ε) log(εN)) space. Neither requires you to know N ahead of time.

I implemented the first one. A really simple test:


    def testShuffledUniformSequence(self):
        r = random.Random()
        e = QuantileEstimator(0.5)
        numbers = range(100)
        r.shuffle(numbers)
        for i in numbers:
            e.append(i)
        self.assertTrue(40 < e.get() < 60)

Though there aren't well-defined error bounds, it seems to do quite well on my data set. That range of 40 to 60 in the test case is crazy liberal; I just wanted to catch my blatant implementation errors.
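
For anyone curious how the P^2 estimator works without digging up the paper, here's a stripped-down sketch of the idea (not the exact code in my repository, so treat it as illustrative): keep five marker heights, and as observations stream in, nudge the middle markers toward the positions they'd occupy in the sorted data, adjusting their heights with a parabolic interpolation.

    class QuantileEstimator(object):
        """Stripped-down sketch of the P^2 estimator for the p-quantile."""

        def __init__(self, p):
            self.p = p
            self.heights = []                  # marker heights q[0..4]
            self.positions = [1, 2, 3, 4, 5]   # actual marker positions n[0..4]
            self.desired = [1, 1 + 2*p, 1 + 4*p, 3 + 2*p, 5]
            self.increments = [0, p/2.0, p, (1 + p)/2.0, 1]

        def append(self, x):
            q, n = self.heights, self.positions
            if len(q) < 5:                     # the first five observations seed the markers
                q.append(x)
                q.sort()
                return
            # Find the cell the new observation falls into, extending the extremes.
            if x < q[0]:
                q[0] = x
                k = 0
            elif x >= q[4]:
                q[4] = x
                k = 3
            else:
                k = 0
                while x >= q[k + 1]:
                    k += 1
            for i in range(k + 1, 5):          # markers above that cell shift right
                n[i] += 1
            for i in range(5):
                self.desired[i] += self.increments[i]
            # Nudge the three middle markers toward their desired positions.
            for i in (1, 2, 3):
                d = self.desired[i] - n[i]
                if ((d >= 1 and n[i+1] - n[i] > 1) or
                        (d <= -1 and n[i-1] - n[i] < -1)):
                    d = 1 if d > 0 else -1
                    new_q = self._parabolic(i, d)
                    if not q[i-1] < new_q < q[i+1]:
                        new_q = self._linear(i, d)
                    q[i] = new_q
                    n[i] += d

        def _parabolic(self, i, d):
            q, n = self.heights, self.positions
            return q[i] + float(d) / (n[i+1] - n[i-1]) * (
                (n[i] - n[i-1] + d) * (q[i+1] - q[i]) / float(n[i+1] - n[i]) +
                (n[i+1] - n[i] - d) * (q[i] - q[i-1]) / float(n[i] - n[i-1]))

        def _linear(self, i, d):
            q, n = self.heights, self.positions
            return q[i] + d * (q[i+d] - q[i]) / float(n[i+d] - n[i])

        def get(self):
            if len(self.heights) < 5:          # too little data; just sort it
                s = sorted(self.heights)
                return s[int(self.p * (len(s) - 1))] if s else None
            return self.heights[2]             # the middle marker is the estimate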

I fed that into my graph tool to bound the y-axis at five times the 95th percentile.

Much better.

Now I have a general-purpose statistics class I can use all over the place. Maybe I'll plug it into Axamol SQL Library's SQL statement timing to get the median and 95th percentile there, too.

23 Sep 2006 (updated 23 Sep 2006 at 16:39 UTC) »
redi, lloydwood:

I'd go further than redi. Look at one of my projects, sigsafe. It's not just quiet; it's inactive. It's never been exactly overflowing with users. And wow, apparently it's been over two years since I made a release. And there are things I should do to it:

  • a bugfix release - Marcin Kowalczyk pointed out a few problems, which I've fixed in SVN.
  • more documentation - I have way more documentation than a lot of projects do, but there are a bunch more areas I'd intended to fill out as well.
  • new platforms - Darwin/x86 and Darwin/x86_64 are suddenly quite popular. I've got some half-finished code for them in my working copy, but sadly no machine to try it on.
  • maybe use Linux's newer system call mechanism; it's a little faster.
  • maybe port the race condition checker to other platforms.
  • maybe move all of the documentation to a trac-based wiki so it's not dependent on my updating it
  • maybe set up a buildbot for continuous integration against all those platforms.

In its current state, is it dead? And since it's at the 89.44th percentile, are all the projects below it dead, too? Maybe. But the code I've released should still work as well on the very newest versions of all those operating systems as it did on the versions I wrote it for. And the documentation is perhaps more useful than the software itself - it points out the problem and introduces a few other techniques for solving it. I get a lot of hits on it (few of which come through SourceForge). I'd like to think that it's still providing a service to the world, even though I haven't touched it in so long.

Obligatory: long time since last entry, been busy, went to Africa, blah blah blah.

Django Pronouncement

Titus: that's interesting. Think anything will come of his hope that Django and TurboGears "converge"? TurboGears guy says no; the approaches are too fundamentally different.

I've been using Django, unaware of TurboGears' existence. (Though I was using MochiKit, which is apparently part of it. Still, I missed the memo.) I love Django as a whole, but I hate their ORM. I want to be able to actually use the database properly, which means doing stuff like this:

    cursor.execute("update blah blah")
    if cursor.rowcount == expected:
        conn.commit()
        return HttpResponseRedirect('/') # back to here as a GET
    else:
        conn.rollback()
        return HttpResponse(tConflict.render(...)) # or whatever
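
To make that less abstract, here's the same pattern with a made-up story table carrying a version column (the table, columns, and function name are hypothetical; conn is a plain DB-API connection with autocommit off, using the %s placeholder style Django's cursors expect):

    def save_story(conn, story_id, expected_version, new_body):
        # Optimistic locking: the WHERE clause checks the version the edit form
        # was rendered from. If someone else committed a change in the meantime,
        # the version no longer matches, zero rows are updated, and we roll back
        # instead of silently clobbering their edit.
        cursor = conn.cursor()
        cursor.execute("UPDATE story SET body = %s, version = version + 1"
                       " WHERE id = %s AND version = %s",
                       [new_body, story_id, expected_version])
        if cursor.rowcount == 1:
            conn.commit()
            return True    # saved; redirect back as a GET
        else:
            conn.rollback()
            return False   # conflict; re-render the form with a warning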

Here I'm doing optimistic locking. I also use transactions to add tightly interrelated data at once; a lot of software which handles your primary keys for you doesn't do this well. I execute non-trivial joins. I tend to hand-code my SQL for this reason. But ORMs are trendy and everyone wants one. SQLAlchemy's website at least tells me that they understand my complaints. Their feature list includes:

  • SQLAlchemy doesn't view databases as just collections of tables; it sees them as relational algebra engines.
  • Eager-load a graph of objects and their dependencies via joins
  • Commit entire graphs of object changes in one step
  • You can use the connection pool by itself and deal with raw connections

ORMs which don't have those features are useless to me. Think Django will switch?

5 Sep 2005 (updated 6 Sep 2005 at 20:43 UTC) »
Life

In the time since I last posted, I quit my job, moved to Santa Clara, California, applied for jobs, and started working at 2wire. (Yes, in that order.) New surroundings, new challenges. I like it.

Jython

I've been playing around with Jython. I've discovered it's incredibly useful for:

  • making small scripts that use Java APIs. For example, I made a bunch of scripts to do IMAP operations that Mail.app and Thunderbird don't support. It was much more pleasant doing this with JavaMail than with Python's IMAP API, and much more pleasant doing this with Python than with Java. It was much nicer to pass around functions than it would have been to write inner classes for all that stuff. (There's a tiny sketch of what this looks like after this list.)
  • experimenting on large Java systems. Reproducing bugs, doing benchmarks.
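
Here's roughly what one of those scripts looks like (the hostname and credentials are made up, and the JavaMail jars need to be on the classpath):

    # Run with jython, with mail.jar and activation.jar on the classpath.
    from java.lang import System
    from javax.mail import Session, Folder

    session = Session.getInstance(System.getProperties())
    store = session.getStore('imap')
    store.connect('mail.example.com', 'someuser', 'not-my-real-password')
    folder = store.getFolder('INBOX')
    folder.open(Folder.READ_ONLY)
    print 'INBOX has %d messages' % folder.getMessageCount()
    # ...walk folders, copy and expunge messages, whatever the GUI clients won't do...
    folder.close(False)
    store.close()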

It's not obvious from the Jython webpage, but Jython is under active development again. If you haven't seen 2.2a1, download it and play with it. It's buggy, but I'm impressed. Generators, some integration with the Java collections classes, bug fixes, etc.

SSL proxy

After running into SSL performance problems with Java at work, I benchmarked a few different SSL proxy implementations. I wrote a crude distributed SSL load tester. Each transaction connects, handshakes, sends 4KiB, receives 4KiB, and disconnects (there's a sketch of one transaction after the results). I found that:

  • my SSLProxy.java could do 7 transactions/sec. I wrote this in about 15 minutes. It uses two threads per session: one reads from the client and sends to the server; the other does the opposite. This is how you have to do things in Java 1.4 or below, since they only made the SSL engine work with the non-blocking IO API in Java 1.5.
  • stunnel could do about 110 transactions/sec. stunnel uses one process per connection. Each process does non-blocking IO to handle both client->server and server->client in one execution context.
  • my async_ssl_proxy can do 240 transactions/sec, I think. (I had the wrong hardware to test it properly. I need the tester machine(s) to be significantly faster than the testee, since my test client is process-per-connection.) This is a libevent-based server that uses one process for all connections. Don't start deleting your stunnel installations, though - it's seriously lacking in polish.
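
For reference, a single test transaction boils down to something like this (a simplified sketch using Python's old socket.ssl wrapper; the real tester is cruder and spread across several machines):

    import socket

    def one_transaction(host, port, payload='x' * 4096):
        # Connect, handshake, send 4KiB, read 4KiB back, disconnect.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, port))
        ssl_sock = socket.ssl(s)        # performs the SSL handshake
        ssl_sock.write(payload)
        received = 0
        while received < len(payload):
            chunk = ssl_sock.read(len(payload) - received)
            if not chunk:
                break
            received += len(chunk)
        s.close()
        return received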

I'm not sure why the stunnel people chose to use a process for every session. It's not any simpler than my design, since they're already using non-blocking IO. And apparently it's a lot slower.

The Java version's performance is too bad to be explained solely by its threading model. Apparently the Java SSL library just sucks. This was the cause of the problems at work.

socket tests

In the process of writing async_ssl_proxy, I discovered that I don't understand UNIX sockets as well as I'd thought. The problem is that the standards don't specify much. I found some websites discussing behavior, but they were too vague. "Some systems, under some circumstances..." What systems? What circumstances?

I want to know if these problems are relevant to me - if modern systems have these problems. If someone tells me "your program doesn't work on Domain/OS SR10.0", I'll say "Here's a quarter. Go buy yourself a Linux system." But if it happens on my shiny OS X laptop, I'll work around it.

I also want to actually see these problems in action. If I can't see the weird behavior, how do I know that I've accounted for it properly?

I'm writing experiments to answer these questions. They're Python unittest scripts designed to make all of the corner cases happen consistently. I'm simulating network failures by manipulating firewall rules during each test.
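
The firewall trick looks roughly like this on Linux (the helper names and port number are made up; the OS X version uses ipfw instead of iptables, and either way the tests have to run as root):

    import subprocess

    TEST_PORT = 14850   # arbitrary port the test's server socket listens on

    def simulate_network_failure():
        # Silently drop everything sent to the test port, as if the network
        # died mid-connection.
        subprocess.check_call(['iptables', '-I', 'INPUT', '-p', 'tcp',
                               '--dport', str(TEST_PORT), '-j', 'DROP'])

    def repair_network():
        subprocess.check_call(['iptables', '-D', 'INPUT', '-p', 'tcp',
                               '--dport', str(TEST_PORT), '-j', 'DROP'])

Each test case sets up a connection, calls simulate_network_failure() at the interesting moment, pokes at the socket to see what error (if any) comes back, and removes the rule again in tearDown.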

close_tests is still incomplete. I'd like to be able to go from "some socket operation failed" to the real underlying reason, like ECONNRESET or ENETDOWN. My program's behavior wouldn't change, but I like to pass the underlying cause on to the user. Some operations, like shutdown, give bizarre errors like EINVAL on OS X. But I'm making progress.

4 Feb 2005 (updated 5 Feb 2005 at 18:04 UTC) »

In the process of getting ready to apply for jobs, I've been polishing my projects' webpages, bundling releases, and creating some documentation. I just submitted to freshmeat:

  • NetGrowler - (OS X only) displays pop-up notifications of network events (joining new wireless networks, IP address changes, etc.) through Growl.
  • Axamol SQL Library - executes SQL statements stored in external library files from Java code, with named parameters. Separating SQL and Java code increases readability, eases maintenance, and allows separate testing and documentation.

3 Jan 2005 (updated 16 Oct 2006 at 23:53 UTC) »
Life

...goes well. I graduated from the University of Iowa last month with a B.S. in computer science and a minor in physics. I want to stay in Iowa City through the summer, so I'm getting a local job. I'll likely just step up to a professional position at the hospital, since the job listings I've found sound horribly boring.

Microsoft

...sucks. Windows ate my Linux partition! I just found this KnowledgeBase article. It says that if you have more than one "primary" partition[*] and select any but the first to install to, Windows XP must also reformat all partitions before it. That's bad enough, but what the article doesn't say is that there are no prompts or warnings about this. It gave me the usual reformatting warning, but everything led me to believe it was talking about its little area. When I booted up the Fedora rescue disk to put grub back (as Windows has always silently overwritten the MBR), I was shocked to discover that my Linux partition's type had changed to 0x07 (HPFS/NTFS) and e2fsck found nothing there. No valid magic number at the superblock or any of the backups. It wasn't just unbootable, it was gone.

Just to add insult to injury, it didn't install Windows XP properly, either. It saw the existing Windows 2000 partition on the primary slave (older) hard disk and used that as the "boot" volume (the one containing the NTLDR), again without asking me. In addition to making my drive letters quite weird (C: refers to the second drive and H: refers to the first drive), this means it doesn't boot when I remove the second drive. I need to do some trick in the recovery console to make it work.

If a Linux distribution had this bug, people would raise hell. How does Microsoft get away with this shit?

[*] - The partition table supports up to four partitions, which can be either "primary" or "extended". "Primary" partitions are the simple kind; "extended" ones are useless by themselves but can contain "logical" partitions. I think you should only have one extended partition. Only one partition should be "active" (the one the simple Windows MBR will try to boot), and it must be primary. The active flag is actually stored as a boolean in each entry, though, IIRC, so you could have zero or more active partitions. Who knows what Windows would do then. What a stupid scheme this is.

The actual partition layout:


Disk /dev/hda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes


   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        9729    78148161    5  Extended
/dev/hda2   *        9730       19457    78140160    7  HPFS/NTFS
/dev/hda5               1         125     1003999+  82  Linux swap / Solaris
/dev/hda6             126        9729    77144098+  83  Linux

18 Dec 2004 (updated 20 Dec 2004 at 02:41 UTC) »
.NET seminar

I attended a .NET seminar for U of I developers today. Jeff Brand was the speaker.

First half: Worthless. Marketing. He acted apologetic, but he still spoke the jargon like a native marketroid. He even brought up the pet store example. While he did admit the line counts were questionable, he really drove home the number of hours spent tuning the stupid, convoluted EJB version. He didn't mention the good Java implementation. And he made the strange assertion that Microsoft's VM is better because it uses JIT. He hedged a little, saying there are implementation differences on the Java side, but he implied that JVMs do not use JIT. (He might simply have been misleading us by saying that Microsoft's runtime JIT-compiles everything for near-native speed. Sun is smarter than that; their HotSpot JVM doesn't bother JITing infrequently used code.) I wanted to ask him to name a single major JVM that does not JIT... but I remembered the cardinal rule of seminars: never heckle before they feed you.

Lunch: A disappointment. Microsoft didn't understand how much food we eat. And it was almost all older, full-time developers - I wonder what would have happened if more student developers had shown up. We would have eaten his laptop. They ordered more pizza, but it did me little good - I had to jet right afterward. Maybe I should have heckled after all.

Second half: Very interesting. He focused on web development with real demos. The controls they have save a lot of work (even the simple ones - for instance, a database-driven table that sorts when you click the column names and supports paging, all themeable), and the development tools are unbelievable. Eclipse is nice, but after what I saw today, I don't think it's at the same level. I'm sure something similar to the controls they showed exists in the Java world; I will have to find it and design AXP taglibs to interface with it. Maybe JavaServer Faces; I've put off investigating them for too long.

I wouldn't want to use the .NET functionality as-is, though. You'd lose all the advantages if you strayed a little off their path. For example, the post-back functionality he described as coming in .NET 2.0 is IE-only. There's no good technical reason why; Google can pull off similar things on any modern browser. This is where open source shines: an OSS framework could ship the same partial implementation and I wouldn't care, because it'd be easy to hack the source myself.

Update: Jeff Brand just emailed me. They do support the post-back functionality in other browsers. He was thinking of early alpha releases when he said it was IE-only.
