Older blog entries for Chicago (starting at number 76)

Debugging & Logging

I just fell out of the zone, which means I'm going to update my journal. Firstly, I'd like to point people at the Apache Logging group. They have some logging tools which are actually really well put together - I'm now using their Log4Net in my code and it's saved me a lot of effort in providing a suitable logging service.

The only strange thing is the lack of error reporting from its constructor - I had a massive problem getting it to find my XML file (partly because I forgot that I wasn't compiling into, and running from, the same directory I was programming in). However, providing it with a dodgy path to a non-existent file didn't throw any errors - presumably so that logging can never interrupt the execution of the program. I will have to look into how to detect whether the logging has actually succeeded (at least on start-up).

Its ability to watch the configuration file is also quite nice, and it doesn't seem to have much impact on performance when you start using (and ignoring) its debug() functionality. Its main power, however, is the ability to use the logger in one environment with one configuration, but then (for instance when using the code in a program as opposed to the test harness) have the logging go to different places. The ability to direct the logs to different streams, emails or files is just awesome. I mean, I am really, really impressed with this software.

And it's not too annoying in the code either, which makes me even happier - at the top of each of my classes I just put

private ILog myLog = LogManager.GetLogger(typeof(CurrentClass));

then I just put

myLog.Debug("Some Debug Stuff");
or whatever I want inline, replacing all my Console.WriteLine()s. It's great. Go use it!
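
For my own future reference, here's roughly how it hangs together - a minimal sketch assuming log4net's XmlConfigurator, a config file called log4net.config, and (if I've read the docs right) the repository's Configured flag as a start-up check; the class name is just for illustration:

using System;
using System.IO;
using log4net;
using log4net.Config;

public class LoggingExample
{
    // One logger per class, named after the class itself
    private ILog myLog = LogManager.GetLogger(typeof(LoggingExample));

    public static void Main()
    {
        // Watch the config file so changes are picked up at runtime
        XmlConfigurator.ConfigureAndWatch(new FileInfo("log4net.config"));

        // A dodgy path fails silently, so check whether configuration actually took hold
        if (!LogManager.GetRepository().Configured)
        {
            Console.WriteLine("Warning: log4net never configured itself");
        }

        new LoggingExample().myLog.Debug("Some Debug Stuff");
    }
}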

Human Interfaces

Prozac: I use a tiny mouse - I mean a really, really tiny mouse. This gives some idea as to how small my mouse is relative to my hand (also shown). I hold it in my fingers, and I actually have no problem with the straight-down menu styles, because my fingers just move when my hand doesn't.

Plus I generally click as little as possible. I'm annoyed that I haven't found a suitable way in Windows to get the sloppy focus of Fluxbox that I love and enjoy - just casting my mouse to a location.

But then again I also have a good memory - or rather, not necessarily memory, but my hands know the shortcuts. I mean they know the keys off by heart (I have a Das Keyboard - did I mention that in here yet? I ought to have - it's very cool).

7 Jul 2006 (updated 7 Jul 2006 at 12:26 UTC) »
Doppleganger: I can't work out if you are a bot or a real person. Where are the links to your `posting'? Are you actually some kind of MegaHAL bot that's been fed a data structures and algorithms textbook and is just preparing an account that can later be used for link spamming?

Edit: Thanks bi. That looks very much like an implementation of a trie. As such, it would be very efficient (in search speed), but I fear the amount of memory used up would be *fearsome*, and creating it in the first place may also be very slow.
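
To illustrate why the memory worries me, here's a minimal hypothetical sketch of a trie in C# (not bi's actual code): lookups only walk one node per character, but every node carries its own child table, and that is exactly where the memory goes.

using System.Collections.Generic;

public class TrieNode
{
    // Each node owns a child table - fast to search, costly to store
    private readonly Dictionary<char, TrieNode> children = new Dictionary<char, TrieNode>();
    private bool isWord;

    public void Insert(string word)
    {
        TrieNode node = this;
        foreach (char c in word)
        {
            TrieNode next;
            if (!node.children.TryGetValue(c, out next))
            {
                next = new TrieNode();
                node.children[c] = next;  // one allocation per new prefix character
            }
            node = next;
        }
        node.isWord = true;
    }

    public bool Contains(string word)
    {
        TrieNode node = this;
        foreach (char c in word)
        {
            if (!node.children.TryGetValue(c, out node))
                return false;
        }
        return node.isWord;
    }
}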

Stuck with the unending task of applying resources to fit the deadlines of a project, what can you do other than beg for more resources (which will invariably be met with a negative response) or drop features from the project? Whilst some more resources have been granted (two guys are starting within the next couple of weeks), one of the deadlines has been hastily moved forwards by the combined demands of customers. The result: a hastily redrawn map of features and requirements diagrams, allowing us to cancel features whilst still leaving us with enough product to move to the beta demoing stage.

The only thing to do now is move from the feature-based diagrams to the implementation diagrams. These cause me headaches - I know that no single diagram can successfully describe the code that will be written. Class diagrams get horrendously complicated when you start using more and more generic systems, and sequence diagrams get horrendous when you have one per 'use' in the use case diagrams. With seventy user features on the project, seventy diagrams is overkill, and it would be easier just to write the thing and document how it was written afterwards - but that would be bad now.

15 Jun 2006 (updated 15 Jun 2006 at 12:45 UTC) »

I am unsure what to think about the Unified Modeling Language. Projects of the size we are about to embark on would be difficult to maintain due to the nature of the program, and with our new product, sales and upper management are in danger of following the Dilbert 'Name' syndrome. My main question about UML is whether we (the technical team) will be given enough freedom to have a good software development process whilst remaining flexible enough to meet the demands of prototypes and last-minute, deal-clinching feature requests. This is really a case of whether the tools available to us will be easy to use and actually aid application development, or hinder it. I don't want to go down any route which will create barriers to developers' enthusiasm for building cool new products.

Then there is, of course, the inevitable battle between the technical and design sides when it comes to good template-based design. Perhaps I am being too harsh in expecting CSS-based HTML templates to look the same in IE and in the alternative browsers. I also know that in this area this is tantamount to preaching to the converted, and that you shouldn't back down by putting 'best viewed in...' at the bottom of the website. I know of many users who do not have the ability to use IE - not just Linux computer geeks, but Apple Mac users, for whom IE has not been available for a few years now (see the Microsoft for Apple downloads), although using Virtual PC for Mac might enable it. These include several non-geek friends of mine of varying ages (including a retired lady who is very good with her Mac) who have no desire to perform complicated manipulations of their operating systems to view a single web page - they would just go elsewhere, to a site which does display correctly. There are also certain companies which use Firefox as their main web browsing application by default (whether or not the employees want it, like it, or even know).

Edit: Issues with the RSS feed of this post. The RSS feed of this post (here) has some issues. Primarily, only the Dilbert link shows correctly - the other links all seem to point back to the RSS feed itself. The only difference I can tell is that the Dilbert link uses double quote marks where the others use single quotes. I will change them to double to see if that resolves it. Looking at the RSS feed, it seems to produce:

<a href="" 'http://www.mozilla.com/firefox/'>
from

<a href='http://www.mozilla.com/firefox/'>

Is there any reason for that?

Interviews

Interviews at work start tomorrow. My technical test has been written, and this time it doesn't include the difficult database question that stumped so many previous candidates for thirty minutes. I am considering writing a simpler database question, but what would be the point? They have to be challenging questions...

My current set of questions now consists of:

  • Set Theory This presents a few simple maths sets and then asks for different combinations of them - AND, NOT, OR and XOR, expressed in different ways (a quick sketch of these operations follows this list). The idea is to see if they can handle sets of data, which translates to their ability to understand things like sets of customers, or more abstract things like problems (where symptoms may fall into various sets).
  • Web Programming This aims to see if they understand the principles behind reasonable web programming, security issues, interface issues, and the technology behind the simple request-response, request-response style of programming.
  • OO Programming Possibly my favorite question, which aims to see if they understand references properly.
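
For the curious, the combinations in the set theory question map neatly onto the built-in set operations in newer versions of .NET - a purely illustrative sketch:

using System;
using System.Collections.Generic;

class SetCombinations
{
    static void Main()
    {
        // Two simple maths sets, as in the question
        HashSet<int> a = new HashSet<int> { 1, 2, 3, 4 };
        HashSet<int> b = new HashSet<int> { 3, 4, 5, 6 };

        HashSet<int> intersection = new HashSet<int>(a);
        intersection.IntersectWith(b);        // AND: { 3, 4 }

        HashSet<int> union = new HashSet<int>(a);
        union.UnionWith(b);                   // OR: { 1, 2, 3, 4, 5, 6 }

        HashSet<int> difference = new HashSet<int>(a);
        difference.ExceptWith(b);             // A NOT B: { 1, 2 }

        HashSet<int> symmetricDiff = new HashSet<int>(a);
        symmetricDiff.SymmetricExceptWith(b); // XOR: { 1, 2, 5, 6 }

        Console.WriteLine(string.Join(", ", symmetricDiff));
    }
}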

Elvaston

Was great fun - official webpage here. But it was not as good as previous years. I last visited three years ago, and there were at least two more large tents then. Everything seemed scaled down - even the Icom tent, which last time bristled with cool equipment, was now a shared tent with Kenwood, each having perhaps only two tables of their gear.

The most interesting thing there has to be the WiNRADiO, which is a receiver that does its signal processing on the computer. The basic premise is that it provides much better control of the listening range than any of the hardware solutions currently out there.

This product, and several others on display, all seemed to point at a trend of moving back to having dual sets - one to listen and one to transmit. While that of course brings the danger of having them set on different frequencies, it does mean that you can have specialist gear for each one.

The WiNRADiO was seductive though - it's got a really nice USB interface and a nice plastic shielded case. Its specs look great, and it has 'alternative' interfaces for selecting listening frequencies. One of these (as well as the traditional dial) was a graph of the entire frequency range, with a selection over the frequencies being amplified. The user can spot which frequencies are in use by seeing where the spikes are, and then just drag their selection over the top.

Seductive, yes. Expensive, yes - remember, this is only the receiver. With prices at £400 (£450 for the better demodulator), you also need to have a (Windows) PC already (which, let's face it, I do). And if you want to transceive, you need another piece of gear to transmit from...

However, if you wanted to do something like... ooh, I don't know, amateur radio astronomy, it might provide an excellent base to start from.

Trust Metric issues in Advogato

salmoni, lkcl: Erm, no, not quite - I think there actually has been a bug in Advogato. I created my account a few years ago, before moving to LiveJournal as I changed what my journalling was about. I know I ought to separate my computer geek stuff from my personal stuff, so I have come back to Advogato. Unfortunately, in the meantime something must have happened and I lost my ranking.

Now, the way I understand the trust metric, that just can't happen unless someone does something in the database: you can't fully delete a user, and you can't seem to delete your ranking of someone (change, yes; delete, no), so any links I had originally should still be there.

I wasn't too worried by any of this - I was presuming that the moment someone ranked or re-ranked me, the system would reset my ranking to whatever it should be. Mind you, things change over that period of time. I know I've matured a lot, and even if I lost all my previous rankings, that may not be such a bad thing, as I don't believe my posts from then reflect who I am now, in either my personal or my professional [geek] world.

Progress on Photo Blogging

Sending an MMS with the word 'blog' in the message to 447921505050 will now make it appear on this photo blog, which is cool. (MMSes cost the normal rate for your MMS provider - this is not a premium rate service - and it might even work internationally, though I don't know what that might cost.) If you do test it, please remember to put the word 'blog' in there... otherwise it goes elsewhere (and might incur charges).

I've been working hard on our new photo blogging solution. The specification is for branded community photo blogging, whilst still being maintainable, and blah, blah... and blah. (Standard Software Engineering for Mission Critical Stuff applies.) The solution we have (currently the only working demo is sitting here) has some cool ideas behind it.

The first is that the basics of each page in the blog are powered by XML snippets controlling it - the template to be used, and then the customisable elements inside it, mainly the widths and heights of the thumbnails and the number of columns and rows in the body of the template. Adding and removing pages becomes simple, and these configurations can optionally be overridden with arguments in the URL.
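
To make that concrete, a page snippet and the handful of lines needed to read it might look something like this - the element and attribute names here are purely hypothetical, not our actual schema:

using System;
using System.Xml;

class PageConfig
{
    static void Main()
    {
        // Hypothetical page snippet: which template to use, plus thumbnail and grid sizes
        string snippet =
            "<page template='gallery' thumbWidth='120' thumbHeight='90' columns='4' rows='3' />";

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(snippet);
        XmlElement page = doc.DocumentElement;

        string template = page.GetAttribute("template");
        int columns = int.Parse(page.GetAttribute("columns"));  // could be overridden from the URL
        int rows = int.Parse(page.GetAttribute("rows"));

        Console.WriteLine("{0}: {1} x {2}", template, columns, rows);
    }
}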

This system is currently using CSES as the basis for getting images into the system, but at the moment I'm having problems receiving every message - whilst certain messages work, it seems some emails are getting lost. After dwelling on the patterns of success and failure, I have an idea that it might be my implementation of the message spool, which I will have to look at in the morning.

Apart from that, I'm quite happy with the progress this templating system is making - this is the first outing for my WAG project, the Web Active Generator, which aims to provide a Smarty-style interface for use in .NET applications. It's had to come through a couple of major rethinks to get this far, but it generally seems OK now. It's slightly recursive, but that does allow me to deal with any kind of sectioning tags that anyone may want to throw at it later...
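
To give a flavour of the Smarty-style idea, here is a toy sketch (not WAG's actual code - WAG copes with nested sectioning tags, which this deliberately doesn't): placeholders in the template get swapped for values supplied at render time.

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class TinyTemplate
{
    // Swap {name}-style placeholders for values supplied at render time
    static string Render(string template, IDictionary<string, string> values)
    {
        return Regex.Replace(template, @"\{(\w+)\}", delegate(Match m)
        {
            string value;
            return values.TryGetValue(m.Groups[1].Value, out value) ? value : m.Value;
        });
    }

    static void Main()
    {
        Dictionary<string, string> vars = new Dictionary<string, string>();
        vars["title"] = "Photo Blog";
        vars["cols"] = "4";
        Console.WriteLine(Render("<h1>{title}</h1> ({cols} columns)", vars));
    }
}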

I've just come out of the Google Code Jam 2006 event and I'm sweating like a pig. Two questions in one hour, and I had about thirty seconds left on the harder one, which still wouldn't work correctly. I guess I'll have to wait until tomorrow to give more information on why I'm feeling disappointed with myself, but let's just say for the moment that I'm not the happiest person in the world.

I got 153.59 on the 250-point question, mainly because I misread the question *slightly* and took nearly ten minutes extra to rewrite some of my code, and I only got 224.62 on the 500-point one because I just couldn't get the right answer. What I submitted seemed to give the right answer on the examples, but I know inside that's only because I mangled it. So I guess I'll score 153.59 / 750 then...

17 May 2006 (updated 17 May 2006 at 20:31 UTC) »

I am beginning to believe that the mass of badly designed XML data out there is due to the difficulty of making an XSL transform stylesheet to go with it. Someone gets a working stylesheet and then shapes the XML data to fit it, usually giving a very flat document structure (maybe only one or two levels deep). Perhaps this is just the case with XML documents created by people with a limited understanding of XML - perhaps only after a quick tutorial in what it does and how to use it.

In more programming news, I have been writing singleton C# classes. With C#'s get and set accessors, the method described here gives not only safe code but code that is nicer to look at - my classes using the singleton object merely need to go:

Singleton a = Singleton.Instance;
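
For reference, the pattern itself boils down to a private constructor plus a static property. This is one common C# variant (a sketch, not necessarily the exact version from the article linked above):

public sealed class Singleton
{
    // The runtime creates this once, in a thread-safe way, on first use of the class
    private static readonly Singleton instance = new Singleton();

    // A private constructor stops anyone else newing one up
    private Singleton() { }

    public static Singleton Instance
    {
        get { return instance; }
    }
}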

The other problems I have been hitting recently have to do with the MySQL .NET database connector and fast concurrent queries using the same connection pool. Even though pooling was specified, and the pool was big enough, the database would seem to disappear. Redesigning the data flow slightly seems to give a much more reliable system, which also seems to have better query speed at the expense of very slightly more memory use.
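
The gist of the redesign is simply keeping each connection short-lived and letting the pool do its job - roughly this shape, sketched with the MySql.Data connector (the connection string and class names are illustrative only):

using MySql.Data.MySqlClient;

public static class Db
{
    // Pooling is controlled by the connection string; open late, dispose early
    private const string ConnString =
        "Server=localhost;Database=photoblog;Uid=user;Pwd=secret;Pooling=true;";

    public static object QueryScalar(string sql)
    {
        using (MySqlConnection conn = new MySqlConnection(ConnString))
        using (MySqlCommand cmd = new MySqlCommand(sql, conn))
        {
            conn.Open();
            return cmd.ExecuteScalar();  // disposing returns the connection to the pool
        }
    }
}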

I would also like to take this opportunity to congratulate the developers of ZedGraph, who have managed to make a very good open source graphing product. Their graph generation software can easily be embedded into any .NET application to generate professional-looking graphs that rival other applications.

More on my documentation ranting - C# has a couple of nicely overloaded types, for instance its DateTime class and TimeSpan class.

As far as I can tell, the TimeSpan class is essentially a DateTime but with its reference point at zero instead of an epoch date (1/1/0001). ToString() is also overloaded, as timespans usually deal in hours, minutes and seconds, where DateTimes generally need the entire date.

The only problem comes in the documentation of the individual properties - the TimeSpan documentation for its public Milliseconds property reads:

Gets the number of whole milliseconds represented by the current System.TimeSpan structure

where the DateTime class documentation for the equivalent property reads:

Gets the milliseconds component of the date represented by this instance.

Now, from this I took it that the TimeSpan property would return the number of milliseconds in the whole time span, so if the time span was 1.5 seconds long it would return 1500 (as that is the number of whole milliseconds represented by the structure), whereas the DateTime equivalent would only return 500, with the 1 going into the seconds part, since 500 is the milliseconds component.

But of course it doesn't. They both operate in the same (more sensible) way: Milliseconds returns just the milliseconds component. That, of course, I found out only after failing to understand the volatile nature of the graph shown in the following examples:

Without seconds being displayed (i.e. the millisecond component only): Milliseconds Only, and after the bug fix, Seconds and Milliseconds. Slightly different, huh?
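
For the record, the behaviour boils down to this (TotalMilliseconds is the property that gives the whole span):

using System;

class MillisecondsDemo
{
    static void Main()
    {
        TimeSpan span = TimeSpan.FromSeconds(1.5);
        Console.WriteLine(span.Milliseconds);       // 500  - the component only
        Console.WriteLine(span.TotalMilliseconds);  // 1500 - the whole span

        DateTime time = new DateTime(2006, 5, 17, 0, 0, 1, 500);
        Console.WriteLine(time.Millisecond);        // 500  - same idea: component only
    }
}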

