Older blog entries for whatever (starting at number 5)

csed/cgrep was coming along quite well, until the last stages, where it was taking far too much effort to convert the output of the earlier lexical parsing stages into more complicated structures like tables of variable types. There were problems with how to accommodate incremental changes in the source code being analysed, how to work out which caches should be updated, which buffers needed to be flushed, and so on.

I hammered away at this for weeks, trying to come up with an insight that would stem the accelerating tide of exceptions going into the parser with the addition of every new syntax element. Was there some elegant recursive algorithm I could use? Was it better to visualise cache elements as a list of states, a tree of states, or a nested structure of states? For every algorithm I could think of that satisfied the most common case, the remaining list of exceptions still grew exponentially with every new facility I added.

I just couldn't figure out how to add all the features I wanted without reaching impossible levels of complicated code! I downloaded as many parsers as I could and examined them. It appeared to be a problem that hadn't been solved, as they were all huge, complicated, and hairy. There were some very nice small ones (eg, Lua), but I really want my parser to comprehend full ANSI C with GNU extensions.

I was starting to think that perhaps I had bitten off more than I could chew, even though the goal is pretty simple: embed sed and grep inside a parser, with the ability to handle on-the-fly code changes. How could something so simple-sounding be such a problem?

Since I wasn't getting anywhere with my attempts to save the code, I decided to throw it all away and start again. Without the blinkers imposed by the objective of "avoid rewriting! save the code!", I had this incredible few hours where I suddenly realised exactly where I had gone wrong.

In my quest to keep things simple, I had over-simplified the core of my design to the point where it wasn't sufficient for the task. The rest of the program was difficult to write because it was trying to make up for that deficiency at the core of the design.

By making the core design more complicated, the rest of the program was simplified so dramatically that the final amount of code I expect to write has been halved.

The learning process sucks when I just want to accomplish a task!

On another topic... thanks to darkewolf for certifying me! It's always nice to receive positive feedback. :)

I've been hassling NetworkSolutions (NotwerkProblems) for about 3 months now, trying to get my domain transferred from my old ISP to my new ISP. The problem originated with my old ISP registering the domain on my behalf, so I didn't have the password, despite paying directly and being listed as the Admin/Tech/Billing contact.

I got nothing back but endless automated emails generated by their fucked up systems, despite putting in my emails, over and over again, "I WANT TO TALK TO A HUMAN BEING!!!".

Gave up and applied for a domain transfer to MelbourneIT (www.inww.com.au), and the difference is amazing! Humans answer my emails within minutes! I actually feel like a customer, not a taxpayer.

cgrep is coming along well. It now processes post-fix syntax (eg, the difference between a function definition and a prototype is determined by a ";" or a "{}" at the end) and arbitrary syntax extensions (eg, it's possible to easily add in that "fred" after "int" has a different meaning than "fred" after "float"). It also picks up variables and stores them into namespaces. It can pick out comments, function names, simple variable types, and so on. It's almost useful as it is now, but not quite.
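Roughly, the post-fix trick is just deferred classification: hold the decision until the deciding character turns up. A toy sketch in C (the names are made up for illustration, nothing like the real code):

    #include <stdio.h>

    /* After an identifier followed by "(...)", the decision between
     * "prototype" and "function definition" is deferred until the next
     * significant character shows up. */
    typedef enum { SEEN_PARAMS, IS_PROTOTYPE, IS_DEFINITION } FnState;

    static FnState classify(FnState s, char c)
    {
        if (s != SEEN_PARAMS)
            return s;             /* already decided */
        if (c == ';')
            return IS_PROTOTYPE;  /* "int fred(void);" */
        if (c == '{')
            return IS_DEFINITION; /* "int fred(void) { ... }" */
        return SEEN_PARAMS;       /* whitespace, comments, etc. */
    }

    int main(void)
    {
        const char *tail = " \n{"; /* what follows the closing ")" */
        const char *p;
        FnState s = SEEN_PARAMS;

        for (p = tail; *p != '\0' && s == SEEN_PARAMS; p++)
            s = classify(s, *p);
        printf("%s\n", s == IS_DEFINITION ? "definition" : "prototype");
        return 0;
    }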

Debugging is a bit painful since I put a multi-state machine into this, so there are about 50 pattern-matching threads running concurrently on the input stream. I'll have to beef up the inter-state communications to add some snooping and debugging facilities.
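For a rough idea of what those "threads" are (state machines multiplexed over the stream, not OS threads), here's a toy version. Real patterns aren't plain literals, and a real matcher would respect token boundaries:

    #include <stdio.h>

    /* Each pattern is a little state machine; every input character is
     * pushed into all of them. Naive restart on mismatch, and "int"
     * inside "point" counts as a match - good enough for a sketch. */
    typedef struct {
        const char *pattern; /* literal pattern, for simplicity */
        int         pos;     /* how much of it has matched so far */
    } Matcher;

    static void feed(Matcher *m, char c, long offset)
    {
        if (m->pattern[m->pos] == c) {
            m->pos++;
            if (m->pattern[m->pos] == '\0') {
                printf("matched \"%s\" ending at offset %ld\n",
                       m->pattern, offset);
                m->pos = 0; /* reset and keep scanning */
            }
        } else {
            m->pos = (m->pattern[0] == c) ? 1 : 0;
        }
    }

    int main(void)
    {
        Matcher matchers[] = { { "int", 0 }, { "typedef", 0 }, { "struct", 0 } };
        const char *input = "typedef struct point Point; int x;";
        long i;
        int j;

        for (i = 0; input[i] != '\0'; i++)
            for (j = 0; j < 3; j++)
                feed(&matchers[j], input[i], i);
        return 0;
    }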

Still pondering how to handle typedefs, structs and other compound types. I'd like this to understand Gtk signals and types, but I'm not sure how to handle that other than putting Gtk-specific names into the parser itself. Otherwise the parser will just run through merrily extracting all the base types, which is very nice and accurate, but not at the most useful conceptual level unless you're actually writing Gtk.

Seems to me that a generic way of setting view levels would be more useful than a hardcoded solution, but still dunno how. Needs more thought.
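For the typedef part at least, the classic trick in C parsers is a table of typedef names that the lexer consults, so a name only becomes a type once its typedef has been seen. Something like this (made-up names, and a real table would be per-scope):

    #include <stdio.h>
    #include <string.h>

    /* Table of names introduced by typedefs; the lexer asks it whether
     * an identifier should be treated as a type name. */
    #define MAX_TYPEDEFS 256

    static const char *typedef_names[MAX_TYPEDEFS];
    static int typedef_count;

    static void typedef_add(const char *name)
    {
        if (typedef_count < MAX_TYPEDEFS)
            typedef_names[typedef_count++] = name;
    }

    static int is_typedef_name(const char *name)
    {
        int i;
        for (i = 0; i < typedef_count; i++)
            if (strcmp(typedef_names[i], name) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        typedef_add("Point"); /* recorded when "typedef ... Point;" is parsed */
        printf("Point: %s\n", is_typedef_name("Point") ? "type" : "identifier");
        printf("fred: %s\n", is_typedef_name("fred") ? "type" : "identifier");
        return 0;
    }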

Like I said at the beginning - this project is boring the shit out of me, but my nose is being kept to the grindstone by the consideration of how useful this will be when it's finished. The vision of being able to upgrade programs to new libraries (eg, Glib 1.2 to Glib 1.4) semi-automatically is just too compelling to give up.

I've been going through some of the levels and certificates. It's interesting to see the groupings within the Advogato community, with separate clusters producing "Master" levels. This has been my strongest confirmation that the Advogato model is capable of expressing complex behaviour. Independent communities are able to form, rather than being mushed into the site owner's personal preferences. I don't think the current presentation will scale well, but the core concept appears more than sound.

On a different topic, I have enjoyed using free software and it's made my computing experience worthwhile through the Microsoft years, so I had already decided to write and release code under the GPL as a way of contributing back some small part of what I have received.

However, I was more than a little nervous about doing so, as I imagined my inbox flooded with various complaints, insults, and ungrateful abuse from the typical Anonymous Coward. It's hard to look forward to being on the wrong end of the shit stick, but I'm used to that sort of thing, so it didn't worry me too much. Just not that much of an incentive, really.

I'm also so sick of hearing people write their opinions about Open Source or Free Software or whatever political issue is the hot topic of the day. I realise that there is a need for these issues to be rehashed over and over again, but I'd really rather just exist within the community without having to talk about it all the time. Nobody talks about air and breathing all the time - we just do it! Instead of a flow of assertions like "Code Must Be Free!", I'd really much rather read about personal experiences.

For this reason, I certified rakholh as a Master when I read his article The Thrill of OpenSource Programming, which gave me a positive insight into what it is like to participate in this adventure. After reading his account, I want to be a part of this experience, rather than just trying to satisfy obligations. I dunno about his code, but it was one of the best reasons to be here that I've read.

In a way, our greatest scientists have all been documenters. They explore their surroundings and write documentation about their understanding of nature. Einstein looked at light and wrote the Special Theory of Relativity. What is that famous body of text, other than documentation?

The Gnome developers are creating a new universe of code, and the documenters are our scientists, providing the footholds the rest of us need to understand this strange new world. There's really no such thing as "only the guy who does the documents".

Updated: Changed my certification of rakholh to Journeyer to match his expectations. Original comments still stand though. :)

OK, so I lied. I'll write about code visualisation when I can be bothered. Besides, when I read my previous diary entry, I sounded like Steve Jobs's pet gerbil on speed. I'm writing code, not marketing!

Speaking of Apple... I've been looking at those liquid gel buttons for ages, admiring just how beautiful they are. I'd love to have that under Linux, perhaps with some modelling of ink-drops, so this delicious red mist can swirl around inside a translucent blue button. Moving a window could shake the buttons inside, changing the swirl patterns. I'd buy another CPU to handle that sort of user interface! :)

I've written most of the parser for csed/cgrep. The parser is able to comprehend and step through itself, extract comments, functions, strings and variables (basic types only). Next step is to add in structs and typedefs so the parser can understand more complicated variable declarations. I'll know I've done a good job here when it understands Gtk+ types.

However, I've run into the same old brick wall. I've written the parser two ways and I don't know which one to keep.

The first is to have a central routine which feeds data character by character into state machines. This has the advantage of allowing me to feed in data from anywhere, so it would be easy to hook an editor into it.

The second method allows the state machines to retrieve information themselves, using callbacks to inform the main routines of any changes. This has the advantage of being able to embed any kind of callback, not just state changes, but would require an editor to have knowledge of the parser.
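To make the contrast concrete, here's a toy of both shapes. The names are invented and the "parser" just counts semicolons, but who owns the read loop is the point:

    #include <stdio.h>

    typedef struct { long semicolons; } Parser;

    /* Method 1: push. The caller owns the read loop and feeds
     * characters in, so an editor buffer can drive it directly. */
    static void parser_feed(Parser *p, char c)
    {
        if (c == ';')
            p->semicolons++;
    }

    /* Method 2: pull. The parser owns the loop and fetches its own
     * input through a callback supplied by the caller. */
    typedef int (*GetCharFn)(void *source); /* returns EOF when done */

    static void parser_run(Parser *p, GetCharFn get, void *source)
    {
        int c;
        while ((c = get(source)) != EOF)
            parser_feed(p, (char)c);
    }

    /* A string-backed data source for the pull model. */
    static int string_getc(void *source)
    {
        const char **s = (const char **)source;
        return **s != '\0' ? (unsigned char)*(*s)++ : EOF;
    }

    int main(void)
    {
        const char *code = "int a; int b;";
        const char *cursor = code;
        Parser push = { 0 };
        Parser pull = { 0 };
        long i;

        for (i = 0; code[i] != '\0'; i++)        /* push: caller drives */
            parser_feed(&push, code[i]);
        parser_run(&pull, string_getc, &cursor); /* pull: parser drives */

        printf("push saw %ld, pull saw %ld\n",
               push.semicolons, pull.semicolons);
        return 0;
    }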

I could just flip a coin and pick one at random, but it would be a lot of work to back out one method and insert the other if I discover my chosen method won't expand far enough to do what I want. The hard part is that I want this to be totally compatible with existing code and methods. Eg, this ain't gonna be another IDE, but it should be easy to integrate into existing environments.

I'm ignoring this problem for the time being and have started writing a multi-state pattern matcher for the variable namespaces. When that's mostly complete, I'll look into how the parser and namespace engines can best hook into each other. I'm not using lex/yacc or any external libraries at this point, just ANSI C, so I have more leeway in choosing how to fit things together.

On the bright side, it should be pretty easy to change things around once I have these things working, so maybe I shouldn't sweat so much.

A major design feature of the world's most advanced transportation system, the capacity of the engines on the Space Shuttle, is determined by the size of a horse's arse. In the same way, the world's most advanced computers are limited by the size of our naughty bits.

How are we supposed to be perfect when Nature only gave us 7 registers? By the time the initial solution is half coded, programmers usually wish they'd written the code another way.

The problem is that it's hard to go back and update existing code.

As a trivial example, if you want to rename a function, you have to change the name in the code, the header, and everywhere the function is called. So much time is wasted doing this, yet it's something the computer could be doing. It could nearly be done with a for loop over "*.c *.h" and sed.

The problem is that sed doesn't understand C code. It wouldn't have a clue whether the "fred(" in its buffer is the start of a function declaration or a function call. So Step 1 is to write a sed that does understand C code. Tedious but not hard.
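Even something as crude as tracking brace depth recovers the context sed lacks: at file scope "fred(" is a declaration or definition, inside a function body it's almost certainly a call. A toy sketch (a real version would also skip strings and comments, and check identifier boundaries):

    #include <stdio.h>

    int main(void)
    {
        const char *src =
            "int fred(int x);\n"
            "int main(void) { return fred(1); }\n";
        const char *p;
        int depth = 0;

        for (p = src; *p != '\0'; p++) {
            if (*p == '{')
                depth++;
            else if (*p == '}')
                depth--;
            else if (p[0] == 'f' && p[1] == 'r' && p[2] == 'e' &&
                     p[3] == 'd' && p[4] == '(')
                printf("fred( at %s scope: %s\n",
                       depth == 0 ? "file" : "block",
                       depth == 0 ? "declaration" : "call");
        }
        return 0;
    }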

Now we can change function names with a "csed --funcname s/oldname/newname/". We can now also change global variable names without touching local variables with "csed --global s/oldname/newname/". And update function parameters. And insert function calls in the right places. And delete function calls. Lots of things like that.

If you're modifying library code that other projects use, you can generate csed commands to go along with your updates. The development teams on the other projects can then run those csed commands on their own code (with the "--ask" option of course!) to do the finding, tagging and most of the conversion for the tweaks and updates to your new API.

Sure, it can't do everything automatically, but you can tag csed changes with explanations, possibly even with URLs to relevant documentation, which are inserted into the code above the changed functions to indicate which parts need more attention to complete the conversion.

You can also use this same code to grep for what you want. Looking for a macro definition? Type "cgrep --macdef MACRONAME *.h". Want to know where a function is called? How does "cgrep --calls funcname *.c" look? Want to know how much code depends on a function when planning a change? Try "cgrep --calls --descend funcname *.c".

That's quite a lot of useful functionality for embedding a simple C parser into grep and sed. It also allows for a few useful extensions.

I'm in the middle of coding this now. It's boring, but it's one of the tools I need for what I REALLY want to do. The visualisation part.

More on that topic next entry.

I've been around for a long time, since the earliest days of Linux. I started with 0.11a and enjoyed hacking the kernel to get it to do what I wanted. I was one of the first developers in the Debian project, but I got burned out by all the ego-wars and left.

10 years later, Gtk+ and Gnome have caught my attention and I'm in the process of learning how it's all put together. I'm interested in writing code visualisation and modification tools; eg, being able to automate things like moving the GtkObject code to Glib, rather than having to spend a lot of time doing this manually.

I'm a drivers and low-level kind of programmer, so this is my first foray into the world of writing programs that actually interact with the user. The hardest part is coming to grips with OOP and other stuff that I bypassed for all these years.
