Older blog entries for bluefoxicy (starting at number 1)

19 May 2003 (updated 19 May 2003 at 19:04 UTC)

First I'd like to say, I am not affiliated with Xiph or xiph.org at all. That said...

Well, I'm working on an Ogg structure for fcomp2 ( http://fcomp.sourceforge.net/__fcomp2.plan ). I'll have to modify the format a bit. Here's what I'm thinking right now. This is with N frames of compressed data (one for each file) and assuming m Master Dictionaries.

Now I'm starting to see. Streams. heh.

First off, each Stream starts with 4 bytes of data that identify its type. If the Attribute or File List Streams are damaged, other mirror streams can be located in this way.

The Ogg streams for an archive are interlaced. Citing the Ogg RFC:

      -------------------------------------------------------------
      |*A*|*B*|*C*|A|A|C|B|A|B|#A#|C|...|B|C|#B#|#C#|*D*|D|...|#D#|
      -------------------------------------------------------------
       bos bos bos             eos           eos eos bos       eos

Our interlacing scheme is simple: emit a BOS page for each stream, then for each stream IN ORDER place that stream's pages contiguously and finish it with an EOS page. This is a way to "fool" Ogg into doing a sort of self-contained Interleaved-Chained stream (i.e. [A][b][endb][c][endc][ENDA] is illegal, so it becomes [A][b][c][bdata][bdata][ENDb][cdata][cdata][ENDc][ENDA], which is legal). You CAN interlace however you want, but why bother slowing down the decoding process? It would (preferably) look like this:

      ---------------------------------------------------
      |*A*|*B*|*C*|B|B|B|B|B|B|#B#|C|C|C|C|C|C|C|#C#|#A#|
      ---------------------------------------------------
       bos bos bos             eos               eos eos

This is because there is no reason to interleave; we're not streaming video and audio or anything.
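
Roughly, that preferred layout could be produced with libogg along these lines. This is only a sketch, not fcomp2 code: write_page(), the packet arrays, the serial-number setup, and the assumption that every data stream has at least two packets are all mine.

      /* Rough sketch of the preferred layout above, using libogg:
       * BOS pages for every logical stream first, then each data stream
       * laid out contiguously and closed with its EOS page, and the
       * empty Start Stream's EOS page last. */
      #include <ogg/ogg.h>
      #include <stdio.h>

      static void write_page(FILE *out, ogg_page *og)
      {
          fwrite(og->header, 1, og->header_len, out);
          fwrite(og->body,   1, og->body_len,   out);
      }

      static void flush_pages(FILE *out, ogg_stream_state *os)
      {
          ogg_page og;
          while (ogg_stream_flush(os, &og))
              write_page(out, &og);
      }

      /* streams[0] is the empty Start Stream; streams[1..n-1] carry data.
       * Each stream was ogg_stream_init()'d with its own serial number,
       * and each data stream is assumed to have at least two packets. */
      static void layout(FILE *out, ogg_stream_state *streams,
                         ogg_packet **packets, const int *counts, int n)
      {
          ogg_packet empty = {0};
          int i, j;
          empty.packet = (unsigned char *)"";

          /* 1. A BOS page for every stream, in order: |*A*|*B*|*C*| */
          empty.b_o_s = 1;
          ogg_stream_packetin(&streams[0], &empty);
          flush_pages(out, &streams[0]);
          for (i = 1; i < n; i++) {
              packets[i][0].b_o_s = 1;
              ogg_stream_packetin(&streams[i], &packets[i][0]);
              flush_pages(out, &streams[i]);
          }

          /* 2. Each data stream contiguously, ending with its EOS page. */
          for (i = 1; i < n; i++) {
              for (j = 1; j < counts[i]; j++) {
                  if (j == counts[i] - 1)
                      packets[i][j].e_o_s = 1;
                  ogg_stream_packetin(&streams[i], &packets[i][j]);
              }
              flush_pages(out, &streams[i]);
          }

          /* 3. The Start Stream's EOS page closes everything: |#A#| */
          empty.b_o_s = 0;
          empty.e_o_s = 1;
          ogg_stream_packetin(&streams[0], &empty);
          flush_pages(out, &streams[0]);
      }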


Stream 0: Start Stream

This indicates the start of an interlaced Ogg fcomp2 stream. Our magic number is 0x21050E17, or {'!', ('f' - 'a'), ('o' - 'a'), ('x' - 'a')}; so 21 05 0E 17.

It has no data in it, just a BOS and EOS.
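
For reference, the stream-type magic numbers used through the rest of this format could be collected like this. The constant names are my own invention, not fcomp2's; the values are the ones given in the sections below.

      /* Sketch only: names invented here.  The start-stream magic spells
       * "!fox", with the letters taken as offsets from 'a'. */
      #define FCOMP2_MAGIC_START      0x21050E17u  /* '!', 'f'-'a', 'o'-'a', 'x'-'a' */
      #define FCOMP2_MAGIC_ATTRIBUTE  0x00000000u
      #define FCOMP2_MAGIC_FILELIST   0x00000001u
      #define FCOMP2_MAGIC_MASTERDICT 0x00000002u
      #define FCOMP2_MAGIC_FILE       0x00000003u
      #define FCOMP2_GZIP_FLAG        0x80000000u  /* MSB set => stream is gzip'd */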


Stream 1: Attribute Stream

The first and last streams are Attribute Streams. They store the following information:

Magic Number -- 0x00000000

Stream Numbers of File List Streams -- There are usually two File List Streams, sometimes more; they form a set. This field gives the serial number of each of them.

Stream Numbers of Attribute Mirrors -- The serial numbers of the Attribute Mirrors. If this Attribute Stream is corrupted, these are seeked to first and their magic numbers checked, to avoid (where possible) a long-winded scan of the full physical stream.

Master Dictionary Index -- Gives a list of Master Dictionary numbers, and a list of stream serial numbers for each of them.

May itself be gzip'd. If it's gzip'd, its magic number is 0x80000000
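
Parsed out, an Attribute Stream might land in a structure like the one below. The byte layout isn't nailed down above, so every field name and type here is an assumption of mine.

      /* Hypothetical in-memory form of an Attribute Stream after parsing. */
      typedef struct {
          unsigned int  magic;              /* 0x00000000, or 0x80000000 if gzip'd */
          int           n_file_lists;
          unsigned int *file_list_serials;  /* serial numbers of File List Streams */
          int           n_mirrors;
          unsigned int *mirror_serials;     /* serial numbers of Attribute Mirrors */
          int           n_dicts;
          unsigned int *dict_numbers;       /* Master Dictionary numbers... */
          unsigned int *dict_serials;       /* ...and the stream serial for each */
      } fcomp2_attributes;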


Stream 2: File List Stream

The File List Stream stores the path, file name, stream serial number, and Master Dictionary of each file; each file gets one entry in this form.

Magic Number: 0x00000001

May itself be gzip'd. If it's gzip'd, its magic number is 0x80000001
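
One File List entry, once parsed, might look something like this; again, the names and types are guesses of mine, not part of the format.

      /* Hypothetical parsed form of one File List entry. */
      typedef struct {
          char         *path;        /* directory portion */
          char         *name;        /* file name */
          unsigned int  serial;      /* Ogg serial number of the file's stream */
          unsigned int  master_dict; /* Master Dictionary number, if any */
      } fcomp2_file_entry;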


Stream 3..(m+2): Master Dictionaries (originals)

These don't exist if you have no Master Dictionaries. If they do, they contain both the data and the length for each string in the dictionary. This means that the first occurrence of a dictionary entry in the fcomp2 compressed stream CAN be replaced with a dictionary index.

Master Dictionaries store both the data AND the length; the indices are determined while reading the dictionary.

Magic Number: 0x00000002

May itself be gzip'd. If it's gzip'd, its magic number is 0x80000002
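
Reading a Master Dictionary could look roughly like this. The 16-bit little-endian length prefix is an assumption (the text above only says "data and length"); the point is that indices fall straight out of read order.

      #include <stdlib.h>
      #include <string.h>

      typedef struct {
          unsigned char *data;
          size_t         len;
      } dict_entry;

      /* Sketch: load length-prefixed entries from a decoded dictionary
       * buffer; each entry's index is simply the order it was read in. */
      static size_t load_dictionary(const unsigned char *buf, size_t buflen,
                                    dict_entry *entries, size_t max_entries)
      {
          size_t pos = 0, index = 0;

          while (pos + 2 <= buflen && index < max_entries) {
              size_t len = buf[pos] | (buf[pos + 1] << 8);  /* assumed prefix */
              pos += 2;
              if (pos + len > buflen)
                  break;
              entries[index].data = malloc(len);
              memcpy(entries[index].data, buf + pos, len);
              entries[index].len = len;
              pos += len;
              index++;                /* index assigned by read order */
          }
          return index;               /* number of entries loaded */
      }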


Stream X..(X+N-1): Compressed Files

The compressed files can go into any streams. Spewed or interleaved throughout are whatever mirrors of the Master Dictionaries, Attribute Streams, and File List Streams you want for error recovery.

I've got to write out an adapted fcomp2 storage format for this. For now, we'll go with the following:

Magic Number: 0x00000003

Master Dictionaries -- A list of Master Dictionaries. If more than one is specified, each is appended to the end of the working dictionary as it is read. The stream may also have its own dictionary, which is appended after these.

Dictionary -- This dictionary, if it exists at all, is appended to the end of the final dictionary built from the Master Dictionaries.

Huffman Encoding information -- Flags and values describing how to set up the Huffman encoding.

Compressed Stream -- The compressed stream. This uses all of the above to decode to the original stream.

May itself be gzip'd. If it's gzip'd, its magic number is 0x80000003. This is provided for consistency, i.e. the code can just look for the MSB to be on in the Magic Number to see if it's gzip'd. Don't do this. :)
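
That MSB test is just a bit check, e.g. (function names are mine):

      #include <stdint.h>

      /* Top bit of the magic number marks a gzip'd stream; the low
       * 31 bits give the stream type. */
      static int fcomp2_is_gzipped(uint32_t magic)
      {
          return (magic & 0x80000000u) != 0;
      }

      static uint32_t fcomp2_stream_type(uint32_t magic)
      {
          return magic & 0x7FFFFFFFu;  /* 0=Attributes, 1=File List, 2=Dict, 3=File */
      }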

Second to Last Stream -- File List Mirror

Mandatory File List Mirror on the end of the physical stream, for error control.

Final Stream -- Attributes Mirror

Primary Attributes Mirror. If the Attribute Stream is dead, this one at least should exist as a failsafe, even if no other mirrors do.



The above should provide enough to allow seeking through the stream to any file; a type of "Solid" compression (shared dictionary; real "Solid" would be a compressed tarball); and enough to actually stream by sending the Attributes and File List, allowing a client to select a file, and then sending the Master Dictionary and File Streams. This is especially good for compressing entire web sites and letting a web server supply these to a (compatible) browser, as the server can find everything from the browser request.

--Bluefox Icy

1 May 2003 (updated 1 May 2003 at 23:04 UTC)

Heh, no certification to post. Blarg. Check my sourceforge.net user page: http://sourceforge.net/users/bluefoxicy

Right now I'm hoping to find someone who understands trackers (music software) to help me finish the Advanced Tracker pre-planning. Understand: IT WILL HAVE A SEQUENCER BACKEND! The entire playback engine is event-driven. The trick is to convert legacy modules to AT format. It will also be object-oriented C++.

fcomp2 should be pretty nice for compression size. I have someone working on that currently--I think :) He's supposed to look at the doc and do it from scratch on his own, except that I'll give him the fcomp code (which doesn't currently work) so he can rip out things like the fcomp_scan functions.

fdber is what, sleeping?

Sampletune is one I need for Advanced Tracker, to add extended sample control (auto-tuning actually). It'll be cool; no more booting Windoze, opening AnalogX SampleTune, tuning a ton of ripped samples, and then loading modplug under Wine in Linux and fine-tuning. Load AT up right in Linux or Windows or on a Mac or whatever, load the converted SPC or the ripped audio, hit tune, fine-tune. Bingo. No intermediate steps.

And wtf is daedalus? He's supposed to be working on CPD/UPD, which is a database for package management on Linux/Unix designed to replace RPM, DPKG, Ports/Ported/emerge, and whatever Slackware .tgz uses. It is intended to be integrated with all package managers; see http://cpdupd.sourceforge.net/cpm.html for an explanation.

I'll get back to ESIC one day... don't tell NT I said this but I'm really annoyed with him wanting to design it to look like C, i.e. with brackets and C-ish functions. I want to just extend BASIC to have pointers, classes, and to use the struct keyword. Also I want to header-ize it so that everything isn't all crammed together, and make it non-specific to the system.

E-MAIL ADVANCING

The next project I'm starting after all of this is a mailer message control plug-in structure and an e-mail client that uses it, to supply a simple and easy way to pass messages through spam filtering, compression, encryption, and so on. I'm thinking eventually all e-mail clients will support compression, spam filtering, and address-to-address cypher auto-negotiation.

The auto-negotiation requires a database of e-mail addresses that builds itself automatically. First, when sending to an unknown host, you can tell the mailer to try to get a message there querying supported features. The return message gives a response for each. E-mail query header:

      negotiation-query: cipher, compress, gzip, bzip2
      requested-cypher: 00-F9-CC-A7-B9-24-17-FF

The requested-cypher is a number identifying the cypher key that is to be used; it's 64-bit hex, as seen there. If there is a known cypher between the two addresses, the sender will send one of these, and the client on the other side will reply with a cypher-request-response header.
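
Parsing one of those dash-separated key IDs into a 64-bit number is straightforward; a sketch follows. The helper name and the first-byte-is-most-significant byte order are my assumptions.

      #include <stdint.h>
      #include <stdio.h>

      /* Turn "00-F9-CC-A7-B9-24-17-FF" into a 64-bit key ID. */
      static int parse_key_id(const char *s, uint64_t *out)
      {
          unsigned int b[8];
          int i;

          if (sscanf(s, "%x-%x-%x-%x-%x-%x-%x-%x",
                     &b[0], &b[1], &b[2], &b[3],
                     &b[4], &b[5], &b[6], &b[7]) != 8)
              return -1;
          *out = 0;
          for (i = 0; i < 8; i++)
              *out = (*out << 8) | (uint64_t)(b[i] & 0xFF);
          return 0;
      }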

The response has in its header:

      negotiation-response: cipher, gzip, bzip2
      cypher-request-response: accept 00-F9-CC-A7-B9-24-17-FF

In this example, compress (proprietary LZ0) is unsupported by the queried address. Once the response (or a "Mailer error" reply) comes back, the message is dealt with (i.e. compressed/cyphered and sent, or halted with the user informed of the "Mailer Error"). Note that the cypher-request-response is {accept/deny} [key-ID].
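
A receiving client might pull that {accept/deny} [key-ID] value apart like this (a sketch; the struct and function are invented here):

      #include <string.h>

      typedef struct {
          int  accepted;       /* 1 = accept, 0 = deny */
          char key_id[32];     /* e.g. "00-F9-CC-A7-B9-24-17-FF", may be empty */
      } cypher_response;

      static int parse_cypher_response(const char *value, cypher_response *r)
      {
          memset(r, 0, sizeof(*r));
          if (strncmp(value, "accept", 6) == 0)
              r->accepted = 1;
          else if (strncmp(value, "deny", 4) == 0)
              r->accepted = 0;
          else
              return -1;
          /* Optional key ID follows after a space. */
          value = strchr(value, ' ');
          if (value)
              strncpy(r->key_id, value + 1, sizeof(r->key_id) - 1);
          return 0;
      }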

A cyphered message could need a sync up first, if there is no known cypher between the two clients. Header:

      send-features: bzip2
      negotiation-cypher-request: 3DES
      negotiation-cypher-supported: 3DES, blowfish

The response has the cypher key attached to it, if it accepts.

      send-features: bzip2
      negotiation-cypher-response: 3DES, 00-F9-CC-A7-B9-24-17-FF

[Key is attached]

After all these messages go back and forth, the mailer clients finally have a key matched to each other. The negotiation-cypher-response has two parts: the algorithm and the key's ID. Now to send messages back and forth:

      send-features: bzip2 cipher
      send-cypher: 00-F9-CC-A7-B9-24-17-FF

Take note that the send-features are processed in order, left to right. In this example, the message data (not the header) is run through bzip2, then through the cipher keyed by 00-F9-CC-A7-B9-24-17-FF (which is 3DES).
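
The left-to-right rule could be applied with something like the following. do_bzip2() and do_cipher() are hypothetical plug-in hooks, not real library calls; only the ordering is the point.

      #include <string.h>

      typedef int (*feature_fn)(unsigned char **data, size_t *len);

      /* Assumed plug-in hooks, not real library functions. */
      extern int do_bzip2(unsigned char **data, size_t *len);
      extern int do_cipher(unsigned char **data, size_t *len);

      static int apply_send_features(const char *features,
                                     unsigned char **data, size_t *len)
      {
          char buf[256], *tok, *save;

          strncpy(buf, features, sizeof(buf) - 1);
          buf[sizeof(buf) - 1] = '\0';

          /* Walk the header value left to right: "bzip2 cipher" means
           * compress first, then encrypt. */
          for (tok = strtok_r(buf, " ,", &save); tok;
               tok = strtok_r(NULL, " ,", &save)) {
              feature_fn fn = NULL;
              if (strcmp(tok, "bzip2") == 0)  fn = do_bzip2;
              if (strcmp(tok, "cipher") == 0) fn = do_cipher;
              if (fn && fn(data, len) != 0)
                  return -1;
          }
          return 0;
      }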

Now we've negotiated compression and encryption by sending about four messages back and forth. Both sides now CACHE the response values, which means they each know the other's features. This process can be forced to happen again later, e.g. if you add an fzip2 module for fcomp2 compression.
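
The cache entry for one remote address might hold something like this (everything here is my own naming; nothing is a defined format):

      #include <stdint.h>

      /* Hypothetical per-address cache of the negotiated values. */
      typedef struct {
          char     address[256];     /* the remote e-mail address */
          char     features[128];    /* e.g. "cipher, gzip, bzip2" */
          char     cypher_algo[32];  /* e.g. "3DES" */
          uint64_t key_id;           /* e.g. 0x00F9CCA7B92417FF */
      } peer_cache_entry;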
