Older blog entries for slamb (starting at number 52)

ekashp, re: advogato recentlog quality

There are two things you can do to make advogato's recentlog more focused on open source development:

  • Set a good example. Talk about something you've contributed to open source recently. Maybe people will get the hint and do something trendy like syndicating http://their.blog.here/?tag=opensource instead of http://their.blog.here/, thus keeping their open source stuff while suppressing their animé commentary.
  • Use advogato's "interestingness" metric. It will suppress stuff under a threshold, and it will even have some effect on entries you haven't rated at all. I don't know the exact implementation, but for entries you haven't rated, the metric is supposed to guess based on ratings from people whose diaries you've rated highly. My recentlog says "9 entries suppressed at threshold 3" at the bottom, and the first number will be higher the next time the metric is recomputed.
4 Jan 2007 (updated 4 Jan 2007 at 23:07 UTC) »
apenwarr, re: scheduling

Thanks for your entry on Schedulator a while back. At work, we have been struggling with scheduling. I'd suggested making more use of bugzilla to track issues and tasks instead of just bugs. Then we had some management changes. Finally, we saw your entry. Looks like it was the tipping point. Your code is useless to us (for starters, we use Linux, Bugzilla, MediaWiki, and Python rather than Windows, FogBugz, GratefulTavi, and C#), but the documentation is wonderful.

We're looking at using Bugzilla's existing time-tracking features and adding better reporting. Your slip factor, automatic due dates, and load factors will feature prominently, as well as Joel's original vs. current estimates. (Our existing schedules have had a certain "we've always been at war with Eurasia" feel, so there's no way to review historical slip factors.) We have a lot of other balls bouncing, but hopefully we'll be sending some patches back upstream eventually.
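
For what it's worth, the slip-factor arithmetic we have in mind is roughly the following - this is my reading of the idea, not Schedulator's actual code, and the task fields are hypothetical:

    # Hypothetical task records: dicts with an original estimate, the current
    # estimate, and the hours actually logged so far.
    def slip_factor(finished_tasks):
        """Ratio of actual time to originally estimated time for finished work."""
        estimated = sum(t["original_estimate"] for t in finished_tasks)
        actual = sum(t["actual_hours"] for t in finished_tasks)
        return float(actual) / estimated if estimated else 1.0

    def projected_remaining(open_tasks, factor):
        """Scale the remaining current estimates by the historical slip factor."""
        return sum(t["current_estimate"] for t in open_tasks) * factor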

1 Jan 2007 (updated 1 Jan 2007 at 21:10 UTC) »

The spammers are getting more subtle. Case in point: 733tguru. A legitimate[*] post about MySQL threading performance on Linux vs. FreeBSD, with a few spammy links stuck in the middle.

[*] - though stolen, badly formatted, and rather outdated; it refers to FreeBSD 5.x as under development. I'm running 6.1-RELEASE.

lkcl, SQLite

when you get up to 30mbyte files, such as those used by embedded.org, the access time will drive you nuts.

People use SQLite databases much larger than that without problem, so your blanket statement is false - there's more to it than that. You said this is a monotone database? Maybe it's not normalized well, it isn't creating the right indexes, its queries are ill-considered, or it's doing unusual things like creating huge blobs in the database. (The last seems likely, since it's a version control system. I don't know if anyone has bothered to optimize this case in SQLite.) Has anyone benchmarked an equivalent PostgreSQL database for comparison?

Admittedly, there are ways SQLite could be causing your performance problem. I don't think the query planner is very sophisticated. And worse, SQLite's concurrency is coarse - a write locks the entire database - though there's talk on their wiki of finer-grained locking.

p.s. salmoni: if you get a crash in the middle of a write to a flat-file, then, just like ms-access, your data is corrupted. don't use it.

I assert otherwise - with the caveat that if your application corrupts SQLite's in-process data structures, of course anything can happen. It sounds like you've encountered a problem with a pretty complex application that happens to use SQLite. Link to your bug report with a small demo application and I'll believe you when you say that SQLite is at fault.

27 Nov 2006 (updated 27 Nov 2006 at 22:56 UTC) »
salmoni, SQLite

SQLite's reliability is quite good. If you set pragma synchronous = full (the default in SQLite 3.0) and pragma fullfsync = 1 (for OS X, where they've decided to make fsync(2) dangerously worthless and nonstandard), SQLite is an ACID database. All bets are off if you corrupt its memory, which is possible since it runs in-process. Otherwise, it's the real deal.
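
From Python, for instance, turning those on looks something like this (a minimal sketch; the database path is a placeholder):

    import sqlite3

    conn = sqlite3.connect("/path/to/example.db")  # placeholder path
    # synchronous=FULL is already the default in SQLite 3; being explicit doesn't hurt.
    conn.execute("PRAGMA synchronous = FULL")
    # On OS X, ask SQLite to use F_FULLFSYNC so the data really hits the platter.
    conn.execute("PRAGMA fullfsync = 1")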

9 Nov 2006 (updated 9 Nov 2006 at 21:45 UTC) »
robogato

I was one who pushed phpgurru over the spam threshold. It's a shame his last diary post was lost, because that's what convinced me he was a spammer. He said something about knowing a guy who will provide you content for your website so you get more AdWords money or something, and linked to the guy's spammy website.

ncm, C++ thread cancellation

That's interesting! But I bet against you - they'll fail. I'd say pthread cancellation has six problems that Java's interrupt() does not:

  1. widely inconsistent, broken implementations [1]
  2. it's a C standard that proposes a C++ way (exception stacks) rather than a C way (ECANCELED)
  3. inability to pass through language boundaries, like through the Python interpreter
  4. not reversible [2], making it worthless for canceling a single task in a long-running worker thread
  5. allows what I describe in the sigsafe documentation as the "after race" [3]. The standard says [4]:
    ... if the thread is suspended at a cancellation point and the event for which it is waiting occurs before the cancellation request is acted upon, it is unspecified whether the cancellation request is acted upon or whether the cancellation request remains pending and the thread resumes normal execution.
  6. (for the people who implemented it via a C++ exception) crashes if cancellation happens in a destructor while an exception is being handled. (close(2) is a cancellation point, so this is likely! The solution I've seen is for the user to always disable cancellation before calling it in this case, but they won't. It'll just crash.)

The C++ people will probably avoid #2, #3, and #4, and introduce #7:

  7. no standard way of signaling cancellation to underlying C code that makes blocking system calls

Given that almost everyone using C++ wraps some third-party C library or makes some direct system call, I think #7 is particularly important. C++ just can't pull it off without C. The pthread situation needs to be fixed first. #2 just can't be fixed, short of them admitting they screwed up, ditching it, apologizing to everyone who tried to use it, and going with ECANCELED. I'd be just a tiny bit surprised to see a standards body do such a thing...

I think the screwed-up pthread cancellation is a consequence of premature standardization. After working with the DSL Forum (worst standards body ever), I'm quite familiar with the problem: approval gets stamped on a cool new thing before there are three useful, compatible implementations, so the problems aren't discovered until it's too late to change them.

So what hope is there for people who want to use cancellation? Well, you can easily build it on top of sigsafe. Unfortunately, sigsafe's only practical for people who make direct system calls (e.g., people who write servers instead of gtk-based GUI clients), and you either need to (1) port it to each platform you will use (takes me a few hours for each) or (2) have a "correct" path and a "racey but portable" path.

I think portability is overrated - everyone should use new versions of Linux, FreeBSD, or (maybe) OS X. Trying to port to old versions or to a million different systems is a colossal waste of time. [5] But my view is unpopular; thus the lack of major applications using sigsafe.

7 Nov 2006 (updated 8 Nov 2006 at 02:40 UTC) »
ncm

You were in Boston before, right? Welcome to the Bay Area...

robogato

Thank you! In response to your solving one of my long-standing complaints about the recentlog, I'll try to post something interesting to it.

Load test grapher

Background: I've been slowly playing with a suite of software for looking at performance. It's one of those problems that I think few people look at, because even though there's real computer science involved, any project relating to the word "test" is sneered at by most people as QA work. (Another project I've been playing with is a monitoring system for sysadmins; that's basically the same way. Sysadmin tools suck.) It's possible there's a little more computer science involved than necessary, because lately I've been wanting to do a little less software engineering and a little more computer science...

No release, documentation, or even a name yet, but there's a little bit of code in my Subversion repository here.

Today's problem: I made a graph of transaction duration vs. time of a quick Apache load test.

It's worthless. You have basically no idea how long the average transaction takes because the scale is blown by a few outliers. I tried a couple obvious things to solve this:

  • switching to log scale
  • setting ymax to be some function of the mean and standard deviation

but wasn't happy with the result. What I really wanted was what network people do for billing: look at the 95th percentile. The obvious way to do that is to sort the data and pick the element at rank .95·n. But my program works with huge data sets - this graph was generated from 1.3 million transactions. Everything else runs in (nearly) linear time and doesn't use much disk. I don't want to sort stuff.

Off to the ACM Digital Library:

  1. The P^2 algorithm for dynamic calculation of quantiles and histograms without storing observations.
  2. Space-efficient online computation of quantile summaries

These are really cool algorithms to estimate quantiles like the median or 95th percentile. The first has no defined error bounds but uses O(1) space and time. The second returns a value whose rank r' is within [r - εN, r + εN] (with ε chosen in advance). It runs in a single pass using O((1/ε) log(εN)) space. Neither requires you to know N ahead of time.

I implemented the first one. A really simple test:


    def testShuffledUniformSequence(self):
        r = random.Random()
        e = QuantileEstimator(0.5)
        numbers = range(100)
        r.shuffle(numbers)
        for i in numbers:
            e.append(i)
        self.assertTrue(40 < e.get() < 60)

Though there aren't well-defined error bounds, it seems to do quite well on my data set. That range of 40 to 60 in the test case is crazy liberal; I just wanted to catch my blatant implementation errors.
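
For anyone curious what's behind that interface, here's a rough sketch of the P^2 bookkeeping for a single quantile - my reconstruction from the paper, not the code in my repository, so treat the details with suspicion:

    import math

    class QuantileEstimator(object):
        """Streaming estimate of the p-quantile via the P^2 algorithm (sketch)."""

        def __init__(self, p):
            self.p = p
            self.heights = []                     # marker heights q[0..4]
            self.positions = [1, 2, 3, 4, 5]      # actual marker positions
            self.desired = [1, 1 + 2*p, 1 + 4*p, 3 + 2*p, 5]  # desired positions
            self.increments = [0.0, p / 2.0, p, (1 + p) / 2.0, 1.0]

        def append(self, x):
            q, n = self.heights, self.positions
            if len(q) < 5:                        # collect the first five exactly
                q.append(x)
                q.sort()
                return
            # Find the cell containing x, widening the extreme markers if needed.
            if x < q[0]:
                q[0] = x
                k = 0
            elif x >= q[4]:
                q[4] = x
                k = 3
            else:
                k = 0
                while x >= q[k + 1]:
                    k += 1
            for i in range(k + 1, 5):
                n[i] += 1
            for i in range(5):
                self.desired[i] += self.increments[i]
            # Nudge the three middle markers back toward their desired positions.
            for i in (1, 2, 3):
                d = self.desired[i] - n[i]
                if (d >= 1 and n[i+1] - n[i] > 1) or (d <= -1 and n[i-1] - n[i] < -1):
                    d = int(math.copysign(1, d))
                    new_q = self._parabolic(i, d)
                    if not q[i-1] < new_q < q[i+1]:
                        new_q = self._linear(i, d)
                    q[i] = new_q
                    n[i] += d

        def _parabolic(self, i, d):
            q, n = self.heights, self.positions
            return q[i] + d / float(n[i+1] - n[i-1]) * (
                (n[i] - n[i-1] + d) * (q[i+1] - q[i]) / float(n[i+1] - n[i])
                + (n[i+1] - n[i] - d) * (q[i] - q[i-1]) / float(n[i] - n[i-1]))

        def _linear(self, i, d):
            q, n = self.heights, self.positions
            return q[i] + d * (q[i+d] - q[i]) / float(n[i+d] - n[i])

        def get(self):
            if len(self.heights) < 5:             # tiny input: nearest rank
                s = sorted(self.heights)
                return s[int(self.p * (len(s) - 1))] if s else None
            return self.heights[2]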

I fed the estimate into my graph tool to bound the graph at five times the 95th percentile.

Much better.

Now I have a general-purpose statistics class I can use all over the place. Maybe I'll plug it into Axamol SQL Library's SQL statement timing to get the median and 95th percentile.

23 Sep 2006 (updated 23 Sep 2006 at 16:39 UTC) »
redi, lloydwood:

I'd go further than redi. Look at one of my projects, sigsafe. It's not just quiet; it's inactive. It's never been exactly overflowing with users. And wow, apparently it's been over two years since I made a release. And there are things I should do to it:

  • a bugfix release - Marcin Kowalczyk pointed out a few problems, which I've fixed in SVN.
  • more documentation - I have way more than a lot of projects, but there are a bunch more areas I'd intended to fill out as well.
  • new platforms - Darwin/x86 and Darwin/x86_64 are suddenly quite popular. I've got some half-finished code for them in my working copy, but sadly no machine to try it on.
  • maybe use Linux's newer system call mechanism; it's a little faster.
  • maybe port the race condition checker to other platforms.
  • maybe move all of the documentation to a trac-based wiki so it's not dependent on my updating it
  • maybe set up a buildbot for continuous integration against all those platforms.

In its current state, is it dead? And since it's at the 89.44th percentile, are all the projects below it dead, too? Maybe. But the code I've released should still work as well on the very newest versions of all those operating systems as it did on the versions I wrote it for. And the documentation is perhaps more useful than the software itself - it points out the problem and introduces a few other techniques to solve it. I get a lot of hits on it (few of which go through SourceForge). I'd like to think that it's still providing a service to the world, even though I haven't touched it in so long.

Obligatory: long time since last entry, been busy, went to Africa, blah blah blah.

Django Pronouncement

Titus: that's interesting. Think anything will come of his hope that Django and TurboGears "converge"? TurboGears guy says no; the approaches are too fundamentally different.

I've been using Django, unaware of TurboGears' existence. (Though I was using MochiKit, which is apparently part of it. Still, I missed the memo.) I love Django as a whole, but I hate their ORM. I want to be able to actually use the database properly, which means doing stuff like this:

    cursor.execute("update blah blah")
    if cursor.rowcount == expected:
        conn.commit()
        return HttpResponseRedirect('/') # back to here as a GET
    else:
        conn.rollback()
        return HttpResponse(tConflict.render(...)) # or whatever

Here I'm doing optimistic locking (a more concrete sketch of the pattern is at the end of this entry). I also use transactions to add tightly interrelated data at once; a lot of software that handles your primary keys for you doesn't do this well. I execute non-trivial joins. I tend to hand-code my SQL for these reasons. But ORMs are trendy and everyone wants one. SQLAlchemy's website at least tells me that they understand my complaints. Their feature list includes:

  • SQLAlchemy doesn't view databases as just collections of tables; it sees them as relational algebra engines.
  • Eager-load a graph of objects and their dependencies via joins
  • Commit entire graphs of object changes in one step
  • You can use the connection pool by itself and deal with raw connections

ORMs which don't have those features are useless to me. Think Django will switch?
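
To make the optimistic-locking pattern above concrete, here's roughly what I mean - the table and column names (a "ticket" table with a "version" counter) are made up for illustration:

    def save_ticket(conn, ticket_id, old_version, new_summary):
        cursor = conn.cursor()
        # The UPDATE only matches if nobody has bumped the version since we read it.
        cursor.execute(
            "update ticket set summary = %s, version = version + 1 "
            "where id = %s and version = %s",
            [new_summary, ticket_id, old_version])
        if cursor.rowcount == 1:
            conn.commit()       # our read was still current; the write sticks
            return True
        else:
            conn.rollback()     # someone else got there first; caller should re-read
            return False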

5 Sep 2005 (updated 6 Sep 2005 at 20:43 UTC) »
Life

In the time since I last posted, I quit my job, moved to Santa Clara, California, applied for jobs, and started working at 2wire. (Yes, in that order.) New surroundings, new challenges. I like it.

Jython

I've been playing around with Jython. I've discovered it's incredibly useful for:

  • making small scripts that use Java APIs. For example, I made a bunch of scripts to do IMAP operations that Mail.app and Thunderbird don't support (something like the sketch after this list). It was much more pleasant doing this with JavaMail than with Python's IMAP API, and much more pleasant doing this with Python than with Java. It was much nicer to pass around functions than it would have been to write inner classes for all that stuff.
  • experimenting on large Java systems. Reproducing bugs, doing benchmarks.
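
As an illustration, a Jython script of that flavor looks something like this - the host, account, and the particular operation are made up, and I'm writing from memory rather than pasting one of the actual scripts:

    from java.util import Properties
    from javax.mail import Session, Folder

    session = Session.getInstance(Properties())
    store = session.getStore("imaps")
    store.connect("imap.example.com", "someuser", "somepassword")  # placeholders

    inbox = store.getFolder("INBOX")
    inbox.open(Folder.READ_ONLY)
    # Java's Message[] array iterates like any Python sequence under Jython.
    for message in inbox.getMessages():
        print message.getSubject()
    inbox.close(False)
    store.close()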

It's not obvious from the Jython webpage, but Jython is under active development again. If you haven't seen 2.2a1, download it and play with it. It's buggy, but I'm impressed. Generators, some integration with the Java collections classes, bug fixes, etc.

SSL proxy

After running into SSL performance problems with Java at work, I benchmarked a few different SSL proxy implementations. I wrote a crude distributed SSL load tester. It connects, handshakes, sends 4KiB, receives 4KiB, and disconnects (see the sketch after these results). I found that:

  • my SSLProxy.java could do 7 transactions/sec. I wrote this in about 15 minutes. It uses two threads per session: one reads from the client and sends to the server; the other does the opposite. This is how you have to do things in Java 1.4 or below, since they only made the SSL engine work with the non-blocking IO API in Java 1.5.
  • stunnel could do about 110 transactions/sec. stunnel uses one process per connection. Each process does non-blocking IO to handle both client->server and server->client in one execution context.
  • my async_ssl_proxy can do 240 transactions/sec, I think. (I had the wrong hardware to test it properly. I need the tester machine(s) to be significantly faster than the testee, since my test client is process-per-connection.) This is a libevent-based server that uses one process for all connections. Don't start deleting your stunnel installations, though - it's seriously lacking in polish.
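
For the curious, a single tester transaction boils down to something like this - a sketch using modern Python's ssl module rather than whatever I actually wrote, with host, port, and certificate handling as placeholders:

    import socket
    import ssl

    def one_transaction(host, port, nbytes=4096):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE          # it's a load test, not a security check
        with socket.create_connection((host, port)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as s:   # TLS handshake
                s.sendall(b"x" * nbytes)         # send 4KiB
                received = 0
                while received < nbytes:         # read 4KiB back
                    chunk = s.recv(nbytes - received)
                    if not chunk:
                        break
                    received += len(chunk)
        return received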

I'm not sure why the stunnel people chose to use a process for every session. It's not any simpler than my design, since they're already using non-blocking IO. And apparently it's a lot slower.

The Java version's performance is too bad to be explained solely by its threading model. Apparently the Java SSL library just sucks. This was the cause of the problems at work.

socket tests

In the process of writing async_ssl_proxy, I discovered that I don't understand UNIX sockets as well as I'd thought. The problem is that the standards don't specify much. I found some websites discussing behavior, but they were too vague. "Some systems, under some circumstances..." What systems? What circumstances?

I want to know if these problems are relevant to me - if modern systems have these problems. If someone tells me "your program doesn't work on Domain/OS SR10.0", I'll say "Here's a quarter. Go buy yourself a Linux system." But if it happens on my shiny OS X laptop, I'll work around it.

I also want to actually see these problems in action. If I can't see the weird behavior, how do I know that I've accounted for it properly?

I'm writing experiments to fix this. They're Python unittest scripts designed to make all of the corner cases happen consistently. I'm simulating network failures by manipulating firewall rules during each test.

close_tests is still incomplete. I'd like to go from "some socket operation failed" to the real underlying reason, like ECONNRESET or ENETDOWN. My program's behavior wouldn't change, but I like to pass the underlying cause on to the user. Some operations, like shutdown, give bizarre errors like EINVAL on OS X. But I'm making progress.

