13 Jul 2011 berend   » (Journeyer)

Have been doing some work with the KnowledgeTree document management system. It was about 61 GB of data, so I got the company to send a disk to Amazon for import. As I didn't know you could import to EBS (or maybe they had only just announced that), the data was imported to S3.

It's a pain to get your data out of S3, let me tell you, especially as in this case the files were deeply nested. s3fs can mount the bucket, but I didn't see any files. In the end I used the Perl Amazon::S3 library to list all the files in the bucket and download them one by one.
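For comparison, here is a minimal sketch of the same list-and-download-one-by-one approach in Python with boto3 rather than Perl's Amazon::S3 (the bucket name and destination path are hypothetical; it assumes boto3 is installed and AWS credentials are configured). Recreating each key's directory structure locally is the part that matters for deeply nested files:

```python
import os


def key_to_path(dest_root, key):
    """Map an S3 key like 'a/b/c.doc' to a local path, creating parent dirs."""
    target = os.path.join(dest_root, *key.split("/"))
    os.makedirs(os.path.dirname(target), exist_ok=True)
    return target


def download_bucket(bucket_name, dest_root):
    # Assumption: boto3 is installed and credentials are set up.
    import boto3

    bucket = boto3.resource("s3").Bucket(bucket_name)
    # objects.all() paginates through every key in the bucket.
    for obj in bucket.objects.all():
        bucket.download_file(obj.key, key_to_path(dest_root, obj.key))


# Example (hypothetical names):
# download_bucket("knowledgetree-import", "/srv/kt-data")
```

The same loop structure works in any SDK: enumerate every key, turn the key into a local path, fetch the object.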

Although the data is now on the server, that doesn't mean it's in KnowledgeTree. There is a Best Practice document, but the link in it to the bulk importer is dead. I was able to locate the tool anyway; now seeing if it is going to work.

