28 Nov 2006 slamb   » (Journeyer)

lkcl, SQLite

when you get up to 30mbyte files, such as those used by embedded.org, the access time will drive you nuts.

People use SQLite databases much larger than that without problems, so your blanket statement is false - there's more to it than that. You said this is a monotone database? Maybe it isn't normalized well, doesn't create the right indexes, runs ill-considered queries, or does unusual things like storing huge blobs in the database. (The last seems likely, since it's a version control system; I don't know if anyone has bothered to optimize that case in SQLite.) Has anyone benchmarked an equivalent PostgreSQL database for comparison?
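To illustrate the missing-index point: SQLite's EXPLAIN QUERY PLAN shows whether a query walks the whole table or uses an index. Here's a minimal sketch using Python's sqlite3 module against a made-up schema - the table and index names are hypothetical, not monotone's actual schema:

```python
import sqlite3

# Hypothetical schema loosely modeled on a version-control file table;
# this is NOT monotone's real schema, just an illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT, data BLOB)")
conn.executemany("INSERT INTO files (name, data) VALUES (?, ?)",
                 [("file%d" % i, b"x") for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute the query.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

before = plan("SELECT id FROM files WHERE name = 'file500'")
conn.execute("CREATE INDEX idx_files_name ON files(name)")
after = plan("SELECT id FROM files WHERE name = 'file500'")

print(before)  # a full table scan: every lookup touches all 1000 rows
print(after)   # an index search: the lookup goes straight to the row
```

On a 30 MB database, the difference between a scan and an index search on every lookup is exactly the kind of thing that would "drive you nuts" regardless of which database engine is underneath.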

Admittedly, there are ways SQLite itself could be causing your performance problem. I don't think the query planner is very sophisticated. And worse, SQLite's concurrency is coarse-grained - readers can run in parallel, but a writer locks the entire database file - though there's talk on their wiki of improving this.
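The whole-database write lock is easy to demonstrate. A minimal sketch, again in Python's sqlite3, assuming two connections to the same file: once the first connection takes a write lock, a second would-be writer is refused with "database is locked" rather than proceeding on a different table or row:

```python
import os
import sqlite3
import tempfile

# Two connections to one database file; isolation_level=None lets us
# issue BEGIN/COMMIT ourselves, and a short timeout makes the refusal fast.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
w1 = sqlite3.connect(path, timeout=0.1, isolation_level=None)
w2 = sqlite3.connect(path, timeout=0.1, isolation_level=None)

w1.execute("CREATE TABLE t (x)")
w1.execute("BEGIN IMMEDIATE")          # first writer takes the write lock
w1.execute("INSERT INTO t VALUES (1)")

try:
    w2.execute("BEGIN IMMEDIATE")      # second writer asks for the same lock
    locked = False
except sqlite3.OperationalError:       # ...and is turned away: the lock
    locked = True                      # covers the whole file, not one table
w1.commit()

print(locked)  # True
```

For a single-user tool on a desktop this rarely matters; for anything with multiple concurrent writers it matters a great deal.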

p.s. salmoni: if you get a crash in the middle of a write to a flat-file, then, just like ms-access, your data is corrupted. don't use it.

I assert otherwise - with the exception that if your application corrupts SQLite's data structures, anything can of course happen. It sounds like you've encountered a problem in a fairly complex application that happens to use SQLite. Link to your bug report with a small demo application and I'll believe you when you say SQLite is at fault.
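Unlike a naive flat file, SQLite wraps writes in journaled transactions, so a write that doesn't commit simply never becomes visible. A minimal sketch of that guarantee - the explicit rollback here stands in for the recovery SQLite performs from its journal after a real crash:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES ('a', 'original')")
conn.commit()

# Simulate an interrupted write: modify data inside a transaction,
# then "crash" before the commit ever happens.
try:
    conn.execute("UPDATE kv SET v = 'partial' WHERE k = 'a'")
    raise RuntimeError("simulated crash mid-write")
except RuntimeError:
    conn.rollback()  # the journal restores the pre-transaction state

value = conn.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0]
print(value)  # 'original' -- the half-finished update never became visible
```

A flat file overwritten in place has no equivalent of that journal; half a write really does mean corrupted data, which is precisely the difference being glossed over above.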

