Older blog entries for k (starting at number 75)

I've spent the weekend building and testing a new VM server to deploy next week.

I've taken the opportunity to test out FreeBSD/Xen support. I've put my images up here:

http://wiki.freebsd.org/AdrianChadd/XenImages

It actually works, which is a good starting point!

18 May 2009 (updated 18 May 2009 at 13:22 UTC) »

.. hm, it's been a while since I braindumped into here. I really should pick one or two blog sites (nerd, work, personal) instead of having dozens of sites spread around everywhere.

Anyway. I've just been bootstrapping FreeBSD-current/Xen and attempting to document the process so others can also test it out.

http://wiki.freebsd.org/AdrianChadd/XenHackery

I've also forked Squid-2 off into a separate project - lusca

I've also built a small open source CDN out of it which I need to spend more time on (but can't because I have to make money somehow..) - Cacheboy

I'm also doing bits of web/VPS hosting, squid/network/systems consulting and bits of other work - Xenion - this is how I'm trying to pay the bills so I can spend more time working on useful open source stuff.

More to come!

The status of the Squid cyclic filesystem (COSS):

COSS was originally implemented as an on-disk LRU. I'll now describe the original implementation as I inherited it from Eric Stern.

A filesystem is just a single large file or physical device.

A membuf - 1 megabyte in size - is initially allocated to represent the first megabyte of the filesystem. Objects are copied into the membuf if their size is known up front (and thus space can be 'pre-allocated' in the stripe.) When the stripe is filled up it is marked as "full" and written to the filesystem. Objects are added to the beginning of a linked list as this happens.
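The write path above can be sketched roughly as follows. This is a minimal illustration under my reading of the scheme, not the actual COSS code; the struct and function names are invented:

```c
#include <assert.h>
#include <string.h>

#define STRIPE_SIZE (1024 * 1024)   /* one membuf == one on-disk stripe */

/* Hypothetical in-memory stripe buffer: objects are packed into it
 * front-to-back; when it fills, the whole megabyte is written out. */
struct membuf {
    char data[STRIPE_SIZE];
    size_t used;        /* bytes already allocated in this stripe */
    long disk_offset;   /* where this stripe will land on disk */
};

/* Pre-allocate space for an object whose size is known up front and
 * copy it in. Returns the object's offset within the filesystem, or
 * -1 if it doesn't fit and the stripe must first be flushed. */
static long membuf_alloc(struct membuf *mb, const void *obj, size_t len)
{
    if (mb->used + len > STRIPE_SIZE)
        return -1;                  /* stripe full: flush, start the next */
    memcpy(mb->data + mb->used, obj, len);
    long off = mb->disk_offset + (long)mb->used;
    mb->used += len;
    return off;
}
```

The returned offset is what Squid stores as the object's file pointer, which is why relocating an object later means handing Squid a new pointer.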

Objects are referenced by their offset on the disk: any read is first checked against the in-memory membuf list. If an object is found in a membuf, a copy of the object data is taken and handed back to Squid. If an object is not found in a membuf it is read from the filesystem, placed at the head of the current membuf (and re-added to the head of the linked list), and the Squid file pointer is updated to point to this new position.
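The hit check against an in-flight membuf boils down to a range test against the stripe's disk window. A rough sketch, with invented names rather than Squid's actual COSS symbols:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal view of a membuf for the read path: only its disk window
 * matters when deciding whether a read can be served from memory. */
struct membuf {
    long disk_offset;   /* where this stripe lands on disk */
    size_t used;        /* bytes of the stripe filled so far */
};

/* A read for [off, off+len) is a memory hit iff it falls entirely
 * inside the membuf's written region; otherwise it goes to disk and
 * the object is relocated into the current membuf. */
static int membuf_hit(const struct membuf *mb, long off, size_t len)
{
    return off >= mb->disk_offset &&
           off + (long)len <= mb->disk_offset + (long)mb->used;
}
```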

As stripes are successively allocated and written to the filesystem in order the 'popular' objects stay near the 'head'. This happens until the last stripe on disk is written: at which point the write pointer is cycled to the beginning of the filesystem.

At this point the LRU implementation kicks in: the objects which are at the end of this linked list match those at the beginning of the filesystem. COSS will start at the end of the linked list and move backwards, deallocating objects, until it reaches the beginning of the next stripe. It then has enough room to allocate a 1 megabyte stripe (and its membuf.) at the beginning of the disk. It then fills this membuf as described above.

When this membuf is filled it writes the stripe to disk, frees the objects in the next megabyte of disk and then allocates a membuf and fills that.
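The reclaim step at wrap-around amounts to walking the object list and dropping every object that lives inside the stripe about to be overwritten. A simplified sketch (invented names; real code would free each object and tell Squid's index):

```c
#include <assert.h>
#include <stddef.h>

#define STRIPE_SIZE (1024 * 1024)

/* Illustrative object record on a singly linked list kept
 * newest-first; the tail holds the objects written longest ago,
 * which occupy the next stripe past the cycled write pointer. */
struct obj {
    long disk_offset;
    size_t size;
    struct obj *next;
};

/* Unlink every object falling inside the stripe that begins at
 * stripe_off, freeing that megabyte of disk for a fresh membuf.
 * Returns the (possibly new) list head. */
static struct obj *reclaim_stripe(struct obj *head, long stripe_off)
{
    struct obj **pp = &head;
    while (*pp) {
        struct obj *o = *pp;
        if (o->disk_offset >= stripe_off &&
            o->disk_offset < stripe_off + STRIPE_SIZE)
            *pp = o->next;      /* unlink: object is expired */
        else
            pp = &o->next;
    }
    return head;
}
```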

This implementation wasn't complete:

  • The rebuild-from-logfile didn't seem to work right
  • There was no support for rebuild-from-disk (in case the logfile was missing or corrupt)
  • The implementation used file_read() and file_write() - callback-based methods of scheduling disk file descriptor IO - but assumed synchronous behaviour.

When I adapted the codebase to use POSIX AIO I discovered a number of race conditions in the COSS code:

  • Objects which were being read from disk and written into the current membuf had their Squid file pointer numbers updated immediately. Subsequent reads of such an object would be copied from the current membuf - but the async disk IO didn't guarantee the data was there until some time after scheduling. This resulted in a lot of swapin failures as NULL data was handed back
  • It was possible, but so far unlikely, that a disk operation would be scheduled for an object which was then overwritten by a full stripe.

The nice features of COSS were the simple writing and object pool maintenance: writes were aggregated and predictable (being in 1 megabyte chunks), and popular objects had a better chance of staying in the current membuf.

I recently took the code and began fixing the bugs. These included:

  • All disk stripes were now an even multiple of the membuf size (1 megabyte.) Eric's original implementation would note when a membuf was free, write the membuf to disk and then start the new membuf at the end of the old membuf. This meant a few bytes weren't being wasted but it did make dividing the filesystem up for analysis/repair/rebuild difficult.
  • Object relocations are now tracked from their creation to completion
  • When an object is relocated its data - and any subsequent read request - is stalled until the object data has been read in from the filesystem.
  • A check (and loud log message!) has been added to catch attempts to write a stripe while a pending relocate is still occurring (i.e. the read hasn't completed), hopefully catching (but not yet repairing) instances where said read would return then-bogus data
  • Rebuild logic has been added - it's now easy to read the disk in as 1 megabyte chunks and do basic checks on each stripe. If a stripe has been partially or badly written to disk its contents can be thrown away without affecting the rest of the filesystem
  • Objects no longer live in a single linked list. Each on-disk stripe region has an in-memory structure used to track various statistics, including a linked list of the objects currently in it. This makes freeing any arbitrary stripe easy, allowing for a much cleaner object expiry and filesystem index rebuild process.
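With every stripe an even multiple of the membuf size, any disk offset maps to exactly one stripe, which is what makes per-stripe bookkeeping and whole-stripe rebuild checks cheap. A sketch of the idea (struct and field names invented for illustration):

```c
#include <assert.h>

#define STRIPE_SIZE (1024 * 1024)

/* Hypothetical per-stripe bookkeeping: because stripes are aligned,
 * a whole stripe can be validated or thrown away during rebuild
 * without touching its neighbours. */
struct stripe_info {
    unsigned nobjects;          /* objects currently resident here */
    unsigned pending_relocates; /* reads in flight out of this stripe */
};

/* Map a disk offset to the index of the stripe containing it. */
static unsigned stripe_index(long disk_offset)
{
    return (unsigned)(disk_offset / STRIPE_SIZE);
}
```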

The problems seen so far:

  • The write rate is a function not only of the cacheable data coming in from the network but also of the hit rate - and the subsequent relocation of popular objects - which means the write volume can quickly spiral out of control
  • Some hit-rate issues which I haven't figured out yet. They may be related to my relatively small test caches (~8-10 gigabytes) versus the Polygraph workloads using a much bigger cache data set.

Possible directions to take (although I do need some actual world-testing and statistics first!):

  • Find out what percentage of objects are read in and never referenced again vs objects re-referenced once, twice, four times, eight times, etc.
  • Consider allocating stripe areas as "hot object" stripes which aren't part of the disk LRU. Place popular objects in these stripes and don't relocate them once they're there - this should cut down on the constant object relocation and therefore cut back on the write bandwidth. They can be managed by a different LRU or other replacement scheme.
  • Consider implementing some form of object locality; which will need cooperation from other areas of squid.

Interested in the work? I'm placing snapshots up on my squid website - here.

Henrik Nordstrom pointed out the pread() and pwrite() syscalls, which are supported under Linux. The code can now use either the POSIX AIO calls or the Linux AUFS store (user-space threads implementing disk IO), which uses pread()/pwrite().
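The point of pread()/pwrite() here is that they take an explicit offset, so no lseek() is needed and concurrent operations on one descriptor can't race on the shared file offset - exactly what a threaded disk IO scheme wants. A minimal round-trip sketch (hypothetical file name, error handling trimmed):

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a small record at an explicit offset and read it back,
 * without ever touching the descriptor's seek pointer.
 * Returns 0 on success, -1 on any failure. */
static int positioned_roundtrip(const char *path)
{
    char out[8] = "STRIPE0";
    char in[8] = {0};
    int ok = -1;

    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;

    /* Positioned write at offset 4096, then a positioned read back. */
    if (pwrite(fd, out, sizeof out, 4096) == (ssize_t)sizeof out &&
        pread(fd, in, sizeof in, 4096) == (ssize_t)sizeof in &&
        memcmp(in, out, sizeof out) == 0)
        ok = 0;

    close(fd);
    unlink(path);
    return ok;
}
```

Two threads can issue these on the same fd simultaneously, which is why the per-fd queueing of the user-space AIO implementation (mentioned below) is so frustrating by comparison.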

In short; it works, and it works fine.

The trouble now: how to rebuild the store index from disk during startup.

I've been fixing the COSS code in Squid - it's a cyclic-style filesystem with a twist: instead of being completely cyclic it implements an on-disk LRU.

I've got it mostly stable. The main problem right now? The Linux user-space POSIX AIO support only seems to de-queue one op at a time per FD. I think this is severely hampering disk performance, but there's little I can do about it for now. Grr.

Today's amusement: how I messed up the precariously-balanced backup system at work. Grr.

Today's amusement #2: my little co-operative multitasking message-passing thingy is running and passing messages between modules. The most it can do? 10 million events in 14 seconds, and not one memory leak. I wonder if it'll leak memory during error conditions..

22 Mar 2006 (updated 22 Mar 2006 at 06:20 UTC) »

Ah, yay.

Updates:

* still owe a lot on my credit card. Wow, who would've thought financial planning was so crucial. :)
* bored at work. There's stuff to do, but it's not challenging. Sigh.
* studying psychology/linguistics at UWA. Yes, I'm a second year (i.e. not a first year). I gave up studying CompSci - first and second year stuff just frustrates me and I don't need frustrating things whilst I'm working full time.

What I'm currently working on:

* I'm a programmer for an online MUD. No, I won't say which. It's nifty though - it uses a proprietary engine and a crazy syntax which is the bastard child of C and BASIC. Very good for writing MUD code in.
* I'm still tinkering with fast network application frameworks: check my homepage for the CVS repo. "projects/col" has what I'm working on.

Life:

I finally got paid for a couple of things this week. I burnt $3k on something I purchased as a middle man - the situation was stupid but now I have well and truly learnt my lesson. My credit card is nearly half of what it was at the beginning of the week.

I have to be careful - I still have a long way to go before it's paid off.

I also have the last few years of tax returns to do. 2001-2002 is really the only one I need some help with since the others are simply blank (I was working overseas, I paid Dutch tax.)

University:

I have the forms I need to re-enrol for next semester. I might finally get around to finishing first year CS.

Geek:

I found a 3rd year CS assignment which I'm doing because it looks interesting. I'm doing it in C rather than Java as, well, I'm still not in third year.

Family:

My brother is going into hospital on Tuesday. He has a lump in his left buttock. Poor guy. It's quite a big lump too. Understandably everyone is a little upset, but things aren't as bad as they were when he found out. If all goes well he'll be out of hospital a couple of days later with some painkillers and instructions to lie down for a month or so.

davidr

Yup, I can sympathise with him. Something similar, albeit with a shorter-term relationship, happened to me last year. My girlfriend at the time, Karen, was quite supportive and I thought she understood and accepted that I worked a lot. After three weeks of working inside radio towers and a couple of weeks being very very ill she left me.

She told me afterwards that she was dropping all kinds of hints which I obviously didn't pick up on. I think she associated "getting hints" with "paying attention" and "caring".

I've now realised that, at least for now, I had to make a choice between a love and a passion. I know a lot of you out there will hmph!, citing that there are plenty of examples of people who keep up active passions and a healthy relationship.

I'll simply note that I am not them. Oh, and I have too much on my plate already.

2 May 2003 (updated 2 May 2003 at 12:20 UTC) »

Music:

Wer Bisto - Twarres
K's Choice - Favourite Adventure

Only one person on the planet will understand why. :)

I sat down at my piano again today. 10 minutes of tinkering with it left me with a warm fuzzy feeling and a big grin.

Life:

Exercise is good. More of it is good.

Geek:

Nothing much on this front, I'm afraid. I'm using a PM3 at work to terminate a few L2TP DSL connections. Damn, Ciscos are expensive, not much else is supported in my neck of the woods, and the open sauce L2TP implementations are quite average. Hm, I might have to look into writing a business case.

