Older blog entries for ingvar (starting at number 246)

26 Jun 2007 (updated 26 Jun 2007 at 21:08 UTC) »
Snooper now has activity graphs enabled by default (only available for the last two days). I also added some code to expire old reports (currently set to 30 days, with anything older already expired; it might actually be sensible to crank that up a bit).

I'm using the image generation code I wrote for NOCtool (including a rather hideous, but fairly readable, font); it's a slight win over bit-bashing a GIF back-end, though not by much. The currently included basic operations are "draw line", "draw rectangle (optionally filled)" and "draw text". All operations take RGB and alpha values. The labels for the day dividers are centred by drawing the text with an alpha of 0.0, checking the return value (the rightmost pixel drawn), halving that and subtracting it from the X offset of the line. At the moment it only exports to GIF images, via Skippy, but I have considered breaking the code loose and making it into a free-standing library (more so after finding an alternative use for it).
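The centring trick reads more easily in code. A minimal Python sketch (the real code is Common Lisp; `draw_text`, the fixed `char_width` and the unused `image` argument are stand-ins, not the actual API):

```python
def draw_text(image, x, y, text, rgb, alpha, char_width=6):
    """Draw `text` at (x, y); return the rightmost pixel that would be drawn.

    The pixel-writing itself is elided here; with alpha 0.0 nothing is
    visible, but the return value still tells us how wide the text is.
    """
    if alpha > 0.0:
        pass  # actual pixel writing would go here
    return x + char_width * len(text)

def draw_centred_text(image, centre_x, y, text, rgb):
    # "Draw" invisibly first, to measure the width...
    width = draw_text(image, 0, y, text, rgb, 0.0)
    # ...then draw for real, shifted left by half that width.
    return draw_text(image, centre_x - width // 2, y, text, rgb, 1.0)
```

The nice property is that measuring and drawing share one code path, so the measurement can never drift out of sync with the renderer.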

EDIT: Apparently, something, somewhere was subtly wrong and I ended up with broken images. The source of the problem has been tracked down: it was me passing in a zero-length comment and Skippy happily passing that on. Today's graph has been re-built and from tomorrow there should be fresh graphs.

I've been playing with Snooper (well, the analysis bit) and my PCAP library (incidentally, the PCAP library has gained some protective coding; sufficiently malformed packet captures could previously cause unhandled errors).

Contrary to my expectation, it seems as if scanning is mostly not dependent on the time of day (with the majority of scans I see coming from single IPs, I was fully expecting to see time-dependent numbers). As of yet, though, I've only looked at a few runs, TCP only, and there are no graphs on the generated pages; those should be there shortly.

I've also re-visited the graph-drawing substrate of NOCtool, now updated to a recent version of Xach's Skippy library for GIF manipulation.

For a few months now, I have been operating a packet capture on an otherwise unused IP (iptables set to drop any incoming packets and tcpdump capturing them all, then some custom-written code to analyse the incoming data). I also generate reports daily (see The Snooper Project). Hopefully, this will show trends in network scanning and the like, and a few things are starting to be abundantly clear.

  1. Scanning for TCP is more common than scanning for UDP (in the last week, I've seen three times as much incoming TCP as UDP).
  2. There are more TCP-based services scanned for (44 unique TCP destination ports, only 5 UDP destination ports).
  3. Microsoft services are at risk (830 packets to tcp/135 and 9 to udp/137).

I've also seen 810 different hosts probing the snooper IP in that timeframe; 469 of them have tried provoking an ICMP ECHO response.

I'm thinking I probably want graphs of activity-per-hour (or half-hour, I'll have to jigger that around so it looks good, I think).

In cellular automaton news, it seems as if the code that generates transition matrices from the rule description language isn't, quite, doing the right thing. I'll have to make sure that the default rules are run before anything else. Ah, well, any day now.

The Erlang learning progresses at a snail's pace. Just too much else hogging brain space. I'll have to try to figure out how to get the debugger working, since the error messages range between "understandable" and "rather oracular".

Not much new. I've done some uninspired doodling on noctool. I still haven't, fully, integrated MPLS frame handling (the code-as-is decodes the full MPLS tag stack, but then doesn't do any further parsing of the MPLS frame, though I think I know how to solve that gracefully).

I've also started working on a cellular automaton engine (doesn't everyone, now and then?). Spurred mainly from an idea for sparsely-allocated matrices and a semi-elegant way of defining multiple different automata (not to be run in parallel, but at least to be run side-by-side with the same code). There's also a tentative automaton rule description language, that looks roughly like so:


(dead ((sum alive 3) => alive) (* => dead))
(alive ((or (sum alive 2) (sum alive 3)) => alive) (* => dead))

This is then (with some extra info, like names for states and shape/size of neighbourhood) compiled to one class definition and one transition matrix (a 2D matrix, with current state as one index and "sum-of-neighbourhood" as the other; for the classic Game Of Life, this is a 2x256 matrix).
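For the Life rules above, building such a transition table could look like this Python sketch (the real compiler emits a Lisp class and, per the post, indexes the full neighbourhood configuration; indexing by the neighbour sum directly, as here, gives a smaller 2x9 table, which is a simplification of mine):

```python
def life_transition_table():
    """Compile the classic Game of Life rules into a lookup table.

    table[state][neighbour_sum] -> next state (0 = dead, 1 = alive).
    """
    table = [[0] * 9 for _ in range(2)]
    for s in range(9):
        # (dead ((sum alive 3) => alive) (* => dead))
        table[0][s] = 1 if s == 3 else 0
        # (alive ((or (sum alive 2) (sum alive 3)) => alive) (* => dead))
        table[1][s] = 1 if s in (2, 3) else 0
    return table
```

Stepping a cell is then a single table lookup, with all rule interpretation paid for once at compile time.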

The general idea is that the sparse matrix guides where we need to compute new states, while minimizing memory usage for each generation. This way, we can have larger fields for the automata, while still being efficient. There's also a layer on top of the sparse arrays that handles wrapping at the edge. This way, one can explore whether the classic torus geometry gives rise to different behaviour than moebius-like tubes (along either the X or the Y axis) or klein-bottle geometries. I'll have to consider a UI at some point, and it's not immediately clear whether doing the usual thing of building something from CLX, or trying to get a CLIM interface going, is the best option.
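The edge-wrapping layer can be sketched in a few lines of Python (the topology names and exact flip rules here are my own illustrative assumptions, not the engine's actual API):

```python
def wrap(x, y, width, height, topology="torus"):
    """Map an out-of-range (x, y) back onto the field.

    "torus": both axes wrap plainly.  "moebius-x": wrapping across
    the X edge mirrors the Y coordinate, as on a moebius strip.
    """
    if topology == "moebius-x":
        crossings = x // width        # how many times we crossed the X edge
        if crossings % 2 != 0:        # odd number of crossings: mirror Y
            y = height - 1 - y
    return x % width, y % height
```

A klein-bottle geometry falls out of the same scheme by wrapping one axis plainly and the other with a mirror.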

31 May 2007 (updated 31 May 2007 at 14:13 UTC) »

Looking at adding MPLS parsing to my PCAP library. In doing so, I noticed an interesting shortcoming of most (though possibly not all) MPLS info available on the net. Namely, is there any indication in the MPLS payload of what sort of data is carried? Can one, just by looking at the label stack, say "the encapsulated data is IP", "the encapsulated data is Flubbo" and so on? Or does this rely on configuration on the edge devices? Edit: The label stack goes before the network layer headers, but after the data link headers. Excellent! Now I just need to re-factor the relevant parsing code (essentially, abstract it into another function, so I can use identical code for parsing ethernet payload and MPLS payload).
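The label-stack entry layout itself is standard (RFC 3032: 20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, 8-bit TTL). A Python sketch of walking the stack, which also illustrates the complaint above: nothing in the entries names the payload protocol.

```python
def parse_mpls_stack(payload):
    """Parse MPLS label-stack entries from the start of `payload`.

    Returns (entries, offset): a list of (label, tc, bottom, ttl)
    tuples, and the offset where the encapsulated payload begins.
    """
    entries = []
    offset = 0
    while offset + 4 <= len(payload):
        word = int.from_bytes(payload[offset:offset + 4], "big")
        label = word >> 12
        tc = (word >> 9) & 0x7
        bottom = (word >> 8) & 0x1
        ttl = word & 0xFF
        entries.append((label, tc, bottom, ttl))
        offset += 4
        if bottom:          # bottom-of-stack: what follows is the payload
            break
    return entries, offset
```

What that payload *is* has to come from elsewhere (in practice, edge-device configuration or reserved label values), which matches the refactoring plan: the same payload parser gets called for both ethernet and MPLS, with the caller deciding what it holds.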

Not much new, alas. Went to the Cambridge beer festival, started working Tuesday, and we got some fry in a shipment of aquarium plants (that was, actually, quite cool). Oh, yes, I've started looking at Erlang. Still wrapping my head around it (there are some odds and ends I think should work, but seemingly do not). Currently building a very simple model of a NAS pool, looking to expand it in the not-too-distant future, so I can play around with how various re-use policies interact with failing modems (something that has fascinated me since I was responsible for around 20k modems in multiple POPs, back in the dark mists of last century).

13 May 2007 (updated 13 May 2007 at 15:35 UTC) »

New (minor) feature added to build-asdf-package. If you add -t to the other options as you build, it will proceed to build a tar-ball named "package-version.tar.gz".

In time, once all packages have been rebuilt that way, I am considering adding symlink maintenance (checking whether there is a symlink called package.tar.gz and, if so, updating it to point at the newly-built tar-ball).
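The symlink maintenance could be as small as this Python sketch (the package and version names are hypothetical; `os.replace` makes the swap atomic on POSIX, so a downloader never sees a missing link):

```python
import os

def update_symlink(tarball, link):
    """Point `link` at the freshly built tar-ball, replacing any old link."""
    tmp = link + ".tmp"
    os.symlink(tarball, tmp)   # build the new link under a temporary name
    os.replace(tmp, link)      # then atomically swap it into place
```

Calling `update_symlink("package-1.2.3.tar.gz", "package.tar.gz")` after each -t build would keep the unversioned name current.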

Edit: URL typo, now fixed.

7 May 2007 (updated 7 May 2007 at 11:23 UTC) »

Lots of non-coding stuff has happened (including a lightning visit to Stockholm, mostly spent looking at touristy things, since (a) my wife wanted to see them and (b) I wanted to show them to her).

I have been battling with identification problems. How, with a distributed network management platform, do you assure that the remote end is what you believe the remote end to be? Tricky, isn't it? I looked around (though probably not diligently enough) for assorted means of solving this problem. I didn't, quite, find anything I liked. So I made the classic mistake of embarking on cryptographic protocol design.

First, I had a somewhat promising (though abysmally asymmetric) design, where we can assume that there are a few pre-shared secrets (at least one, ideally one per end of the connection) and we do a two-way challenge-response (send a nonce, get another nonce and an HMAC digest of the first nonce back, send an HMAC digest of the second nonce back).
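The exchange is short enough to show in full. A Python sketch of the two-way challenge-response (the key value is hypothetical; HMAC-SHA1 is used as in the scheme described below):

```python
import hashlib
import hmac
import os

def digest(key, data):
    return hmac.new(key, data, hashlib.sha1).digest()

# One pre-shared secret, known to both ends (hypothetical value).
key = b"pre-shared secret"

# A -> B: a fresh nonce
nonce_a = os.urandom(16)

# B -> A: its own nonce, plus proof that B knows the key
nonce_b = os.urandom(16)
b_proof = digest(key, nonce_a)

# A checks B's proof, then returns proof of its own
assert hmac.compare_digest(b_proof, digest(key, nonce_a))
a_proof = digest(key, nonce_b)

# B checks A's proof; both ends are now identified
assert hmac.compare_digest(a_proof, digest(key, nonce_b))
```

Note that this only authenticates the endpoints at setup time, which is exactly the session-hijacking weakness mentioned next.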

But this, while neat, is not nearly as symmetric as I would like an identification protocol to be. It's also vulnerable to future TCP session hi-jacking.

Secondly, I thought of something that, at a closer look, had all the weaknesses of the first, with a replay attack as an added bonus.

Currently, I am thinking about (and implementing) a scheme where all "protocol messages" are embedded in another container, where we have (essentially) (message protocol-data HMAC-SHA1-DIGEST(key, protocol-data)) and we verify the digest for each request. We also require an initial (iam node-name) before we try to do anything else. Once we hit something that is sufficiently fishy, we terminate the connection.
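A Python sketch of that container, with tuples standing in for the Lisp sexps:

```python
import hashlib
import hmac

def wrap_message(key, protocol_data):
    """Embed protocol-data in a (message data digest) container."""
    mac = hmac.new(key, protocol_data, hashlib.sha1).hexdigest()
    return ("message", protocol_data, mac)

def verify_message(key, msg):
    """Return the protocol data, or raise if the message is fishy."""
    tag, protocol_data, mac = msg
    expected = hmac.new(key, protocol_data, hashlib.sha1).hexdigest()
    if tag != "message" or not hmac.compare_digest(mac, expected):
        raise ValueError("fishy message; terminate the connection")
    return protocol_data
```

Unlike the challenge-response above, this authenticates every message, so a hijacked TCP session without the key can't inject anything useful.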

I'll get the network connection side of things in place, then it's time to consider more tests and (possibly) some way of incorporating network architecture into the whole thing (as-is, we check that the latest ping test was successful before doing any more intensive testing; one shouldn't block ICMP anyway...).

In memoriam, Martin "rydis" Rydström

I've known rydis for a bunch of years. I think I've even met him face-to-face (that would've been back in 1998 or so, if (as I seem to recall) he was tagging along with cd and peeps to a pub-meet in Gothenburg). We've swapped stories and jived on #lisp and in LysKOM.

I blame rydis for doing more than just a proof-of-concept of an elisp implementation for Hemlock (and Portable Hemlock). Others spurred the initial creation, but rydis made me continue. Thanks for that.

The finding of his body brings sorrow and relief. Sorrow, because he's no longer with us, relief because we know now. It is never easy, these things. My thoughts go to the rest of his friends and to his family. Remember the good things.

Working on noctool, currently prodding the inter-node networking (the idea is to make monitoring scale as far as possible, using separate display nodes and monitoring nodes, with each display node connecting to one or more monitoring nodes for its status feed).

This, of course, ends up with something amusingly close to a reader, as far as the protocol handling goes.

The way I've done it, there is a home-rolled object, with a stream, a peer, a buffer, a read function and assorted other things. The read functions are specialised for each task (find a #\(, read a string, skip whitespace and so on) and simply fill the buffer with characters (tracking parenthesis nesting levels as they go).

Once it has at least one character in the buffer and the nesting level is 0, we use READ-FROM-STRING to manifest a sexp that can then be passed to the protocol handler (we also reset the connection data object, so we're ready for some more action).
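The buffering logic can be sketched in Python (the real version is Lisp and hands the finished string to READ-FROM-STRING; this simplified one just returns complete sexp strings, and ignores the complication of parens inside strings):

```python
class Connection:
    """Buffer incoming characters until whole sexps have arrived."""

    def __init__(self):
        self.buffer = []
        self.nesting = 0

    def feed(self, chunk):
        """Consume a chunk of input; return any completed sexp strings."""
        complete = []
        for ch in chunk:
            if not self.buffer and ch.isspace():
                continue                 # skip whitespace between sexps
            self.buffer.append(ch)
            if ch == "(":
                self.nesting += 1
            elif ch == ")":
                self.nesting -= 1
            if self.buffer and self.nesting == 0:
                complete.append("".join(self.buffer))
                self.buffer = []         # reset, ready for more action
        return complete
```

The point is that partial reads off the network are harmless: an incomplete sexp just stays in the buffer until the rest arrives.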

I must say that I really miss VECTOR-PUSH and VECTOR-PUSH-EXTEND when writing similar things in C. The analogous hack I wrote in Python works, though it's somewhat hampered by the equality of single-element strings and characters.

Next, prod the BESK emulator until there's (at least) a function plotter and a working "quit" button, then package up for a release.

Doodling with the PCAP library, I have now created some filtering of a PCAP file.

Basic workings:

  1. Compose a filter.
  2. Create a filtered stream.
  3. Use NEXT and PREV on the filtered stream, to only get frames that match the filter.

Creating a filter is done by composing filters using FILTER-AND, FILTER-OR and FILTER-NOT on "basic filters" created by (FILTER frame-type slot-name value). There are some convenience functions for creating match-values (FILTER-HOST, FILTER-IPV4RANGE and FILTER-RANGE): FILTER-HOST takes one or more octet values and creates a vector with those values, FILTER-IPV4RANGE takes a base IP and a netmask (optional; if none is provided a classful mask is created) and FILTER-RANGE takes a low and a high number and matches anything in [low, high].
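The composition style translates directly into predicates. A Python sketch of the idea (the names mirror the Lisp functions above but are my own analogues, with frames as plain dicts rather than PCAP frame objects):

```python
def filter_eq(slot, value):
    """Basic filter: slot equals value (analogue of FILTER)."""
    return lambda frame: frame.get(slot) == value

def filter_and(*filters):
    return lambda frame: all(f(frame) for f in filters)

def filter_or(*filters):
    return lambda frame: any(f(frame) for f in filters)

def filter_not(f):
    return lambda frame: not f(frame)

def filter_range(slot, low, high):
    """Match any frame whose slot value lies in [low, high]."""
    return lambda frame: slot in frame and low <= frame[slot] <= high
```

A filtered stream's NEXT is then just "advance until the composed predicate says yes", and PREV the same thing backwards.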

Package is available.

