Older blog entries for salmoni (starting at number 596)

Still working on Salstat from time to time. Latest work involves charting and importing from spreadsheets using xlrd (for Excel files) and ezodf (for Libre Office Calc files). Both libraries had similar interfaces so I cobbled together a lot of common code for both rather than having 2 separate routines.
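
For the curious, the shared import code boils down to something like the sketch below. This is a simplification rather than the actual Salstat routine (the function name and the "first sheet only" behaviour are just mine for illustration):

import xlrd
import ezodf

def read_first_sheet(path):
    # Return the first sheet of a spreadsheet as a list of row-value lists.
    if path.lower().endswith(('.xls', '.xlsx')):
        book = xlrd.open_workbook(path)
        sheet = book.sheet_by_index(0)
        return [sheet.row_values(idx) for idx in range(sheet.nrows)]
    elif path.lower().endswith('.ods'):
        doc = ezodf.opendoc(path)
        sheet = doc.sheets[0]
        return [[cell.value for cell in row] for row in sheet.rows()]
    raise ValueError("Unrecognised spreadsheet format: %s" % path)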

I've also coded a CSV importer. Python's csv module only seems to allow a single delimiter but my users sometimes need to handle multiple ones (particularly with files composed of several files from different sources). I wrote my own CSV parser that handles multiple delimiters and respects key characters within quotes too. The core routine is here as a Gist (heavily commented too for when I have to trudge my lonely way back to the code to change it). It's not the fastest importer but it does the job accurately with some of the gnarly test data I threw at it.
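
The heart of the multiple-delimiter idea is simple enough to show here. This is a much-simplified sketch of one line's worth of parsing (the real routine in the Gist handles escaped quotes, mixed quoting and other gnarly cases):

def parse_line(line, delimiters=(',', ';', '\t'), quote='"'):
    # Split one line on any of several delimiters, ignoring delimiters inside quotes.
    fields, field, in_quotes = [], [], False
    for char in line:
        if char == quote:
            in_quotes = not in_quotes
        elif char in delimiters and not in_quotes:
            fields.append(''.join(field))
            field = []
        else:
            field.append(char)
    fields.append(''.join(field))
    return fields

So parse_line('a;b,"c,d"') gives ['a', 'b', 'c,d'].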

Salstat code at GitHub

In latest developments, Salstat now displays results nicely, the clipboard functions work well, charts are coming along and bugs have been squashed.

Output display

The full-featured HTML display means that Salstat can do good things when displaying results. It now incorporates jQuery and Twitter Bootstrap to form the output display, which means that tables actually look nice now.
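
To give a flavour, building the output amounts to generating table markup with Bootstrap's classes, roughly like the sketch below (a simplification, not the actual Salstat code):

def results_table(headers, rows):
    # Build a Bootstrap-styled HTML table for the output window.
    head = ''.join('<th>%s</th>' % h for h in headers)
    body = ''.join('<tr>%s</tr>' % ''.join('<td>%s</td>' % v for v in row) for row in rows)
    return ('<table class="table table-striped">'
            '<thead><tr>%s</tr></thead><tbody>%s</tbody></table>') % (head, body)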

Clipboard

Clipboard functions work across the application (data entry and output) which means the above can be edited (if necessary) and copied into a spreadsheet.

Charts

Salstat also pulls in the Highcharts libraries, which will be used for charts. Currently, I'm working on a chart window which allows us to generate a chart and edit it to perfection before it gets put into the results. This should help take the guesswork out of charts. They will also be exportable to PNG, JPG, PDF and SVG formats directly. This is not yet working but I hope it will be fairly soon.
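
Driving Highcharts from Python is mostly a matter of writing a JSON options object into an HTML page. Here's a stripped-down sketch of the idea (the script filenames are placeholders and the real chart window will expose far more of the options):

import json

def chart_html(title, values):
    # A minimal Highcharts page: one line series rendered into a div called 'chart'.
    options = {
        "chart": {"renderTo": "chart", "type": "line"},
        "title": {"text": title},
        "series": [{"data": values}],
    }
    return """<html><head>
<script src="jquery.min.js"></script>
<script src="highcharts.js"></script>
</head><body>
<div id="chart"></div>
<script>new Highcharts.Chart(%s);</script>
</body></html>""" % json.dumps(options)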

Bugs

A lot of bugs have been squashed too. Salstat used to freak out (or rather: refuse to do anything) when anything other than a number was entered into the data grid. Now it's more relaxed and will try to deal with things intelligently downstream.

Other bugs such as putting data into the first, third and fourth columns have been squashed. Some other bugs with tests have also been squished.

Future plans

* Proper data formatting (variable names, data formatting, specifying missing data and marking it visually with a different background colour)
* Charts – Salstat has got to have these and they are coming!
* Databases – input from and output to databases. Salstat will abstract the interface (using something like SQLAlchemy) in order to tackle a range of databases and dialects. Having said that, the requirements will be fairly simple (retrieve, write and commit) so fairly vanilla SQL will suffice. This, however, is tricky because I want a data browser whereby tables and some content can be browsed easily and data selected for import. This needs to work for remote and local databases as well as SQLite. (There's a rough sketch of the idea just after this list.)
* Bring in my custom statistics modules (properly unit tested!) from my forthcoming book, "Computational Statistics".
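
The database browsing part is roughly this shape in SQLAlchemy (only a sketch of what I have in mind; the connection string and the "results" table name are made up for the example):

from sqlalchemy import create_engine, MetaData

engine = create_engine("sqlite:///example.db")   # could just as easily be a remote database URL
metadata = MetaData()
metadata.reflect(bind=engine)

# A data browser would list the available tables first...
print(list(metadata.tables))

# ...then pull a handful of rows from a chosen table for preview and import.
table = metadata.tables["results"]
with engine.connect() as conn:
    for row in conn.execute(table.select()).fetchmany(20):
        print(row)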

So lots to do yet, but lots done already over the last fortnight or so. I hope to make a new release on 22 October 2013 – 10 years to the day since the last proper release was made!

Long time no write.

Ten years after making the last release of Salstat, I've decided to continue with it. The project is on Github now (https://github.com/salmoni/Salstat).

Today's release utilises the excellent xlrd module, which has allowed Salstat to read Excel files (xls and xlsx). Many people have asked for this. For now, the basic "happy days" workflow is fine but error handling is still poor.

The next one will have database access. This is a more complex workflow. I also need to harden the Excel and CSV import routines.

Mozilla are looking for a Quantitative User Researcher, which sounds cool. The emphasis on user research sounds right up my street, particularly the need for mastery of experimental design and statistical analysis. It kind of takes me back to my PhD and work on SalStat (still going strong).

The problem is my covering letter. Can anyone here tell me what style of covering letter is preferred? Long and detailed, explaining why I meet each of the requirements? The standard 3 paragraphs ["intro", "I'm cool", "thanks"]? Or some combination in between?

In the meantime, I've released Roistr which does some basic semantic analysis / text analytics stuff. I put up some demos but it's hard to really show how useful this thing is. It's based on the open source Gensim toolkit along with numpy and scipy.

Scipy sounds like it's going places. Travis Oliphant recently announced an initiative to bring it to big data properly. I have an idea of what he means and it would be very cool.

Does anyone have any Google Plus invites that they could send (one) to me?

In other news, wife, daughter and I are off to the Philippines for 5 weeks and hoping to get some start-up work moving over there. UX is in demand at the moment so it's a good time to be around.

I've also been looking up versions of principal components analysis in Python and found a few implementations.

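The simplest version is only a few lines with numpy's SVD. A bare-bones sketch (nothing like a full implementation, and the function name is just mine):

import numpy

def pca(data, n_components=2):
    # Rows are observations, columns are variables.
    centred = data - data.mean(axis=0)
    # Singular value decomposition of the centred data.
    u, s, vt = numpy.linalg.svd(centred, full_matrices=False)
    # Project the data onto the first n principal components.
    return numpy.dot(centred, vt[:n_components].T)
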
All the linguistic stuff I've been doing lately is making my head spin but it's coming together.

Lots happening: I've been building a semantic relevance engine - something that can accurately determine the semantic similarity of 2 text documents - and it's working reasonably well. Completely untrained, it's getting accuracies of well above 0.8 and often above 0.9. Obviously 1.0 is the ideal but even human judgements rarely get above 0.9 with the corpora I've been using for this.
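
The measure at the heart of this kind of vector-space comparison is nothing exotic - essentially the cosine between two document vectors. A minimal numpy version (assuming the vectors come from whatever model is in use):

import numpy

def cosine_similarity(vec1, vec2):
    # Cosine of the angle between two document vectors; 1.0 means identical direction.
    return numpy.dot(vec1, vec2) / (numpy.linalg.norm(vec1) * numpy.linalg.norm(vec2))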

The good thing is that I appear to be discovering new stuff almost every day about how documents are understood. There are some approaches I've used that I've not read about in the literature so there might be some useful stuff for the world here.

However, my aim is to make a web service around this. And it's all based on open source software (Python, numpy, Scipy, Gensim etc.), which is perfect. There is proprietary knowledge involved, however: the corpora, how they're prepared and the architecture of the engine; but all of that will come out publicly soon enough.

Log Entropy models

I had problems when I last upgraded to 0.7.8 of Gensim. The main issue was that the package I imported wasn't necessarily the one used: quite often, it seemed as though the top level would be from one install whereas another import would be from somewhere else. The net result was that parts of my software were looking for an id2word method in a dictionary where there was none before.

However, I still wanted to try 0.7.8 if I could, and I found a way. I downloaded and untarred it, and renamed it 'gensim078'. Then, I went and changed each 'from gensim import *' statement to 'from gensim078 import *', which seems to be doing the trick. I'm sure there are better ways to do it but this is working for me so I'm happy.

The advantages are that a) it's faster particularly for similarity calculations, and b) I now have access to the Log Entropy model which I'm building for G1750.

Later tonight, I'll adjust the dictionary and begin pruning words that appear across lots of documents to see if that improves the focus. The program does seem a little 'fuzzy' as it is but that is quite a human characteristic so I'm not too worried. However, it will help me explore vector models and understand them better myself.
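
For the record, the pruning plus the Log Entropy step is only a few lines in Gensim. Something like this sketch (using the method names from a more recent Gensim; the variable names and the 0.5 document-frequency cut-off are just illustrative):

from gensim import corpora, models

# texts: the corpus as a list of tokenised documents (lists of words)
dictionary = corpora.Dictionary(texts)
# Prune words that appear in more than half of all documents.
dictionary.filter_extremes(no_below=1, no_above=0.5)
corpus = [dictionary.doc2bow(text) for text in texts]
# Build the Log Entropy model and transform the corpus with it.
logent = models.LogEntropyModel(corpus)
corpus_logent = logent[corpus]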

Although the results of the word-pair semantic association task were poor, I'm not dismayed (too much!) because my whole construction is not perfect and there is lots of room for improvement. The task is also useful as it gives me an indication of accuracy by a different means from the 20NG categorisation task. When I create a new corpus, I should ideally subject it to a battery of tests designed to probe different things. With the results of these, I can work out whether the corpus is heading in the right direction or not. It's all good to have these tools even if (initially) they're not going how I wanted them to.

I'm turning into a perfectionist. I really need to release something useful before I refine... Release early, release often...

I've been having lots of fun lately with Gensim, a Python framework for vector space modelling. It includes fun stuff like latent semantic analysis, latent Dirichlet allocation and other goodies. Allied with NLTK, this makes a very formidable Python-based NLP framework.

My task is sorting newsgroup posts into their correct groups and I've achieved a reasonable level of accuracy (0.92), which isn't bad given that it's entirely dependent upon content. However, most analyses are showing lower accuracies (0.70+), which isn't bad but not far enough above chance performance to be taken seriously. However, there are a few ways to improve this and I'm conducting an enormous number of experiments to get an effective mental model of how vector space models work.
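
The core of these experiments is not much more than the standard Gensim pipeline. Roughly like this (a simplified sketch with made-up variable names and an illustrative topic count, not the actual experiment code, and using the parameter names from a more recent Gensim):

from gensim import corpora, models, similarities

# train_texts: tokenised posts; train_labels: their newsgroup names
dictionary = corpora.Dictionary(train_texts)
corpus = [dictionary.doc2bow(text) for text in train_texts]
tfidf = models.TfidfModel(corpus)
lsi = models.LsiModel(tfidf[corpus], id2word=dictionary, num_topics=200)
index = similarities.MatrixSimilarity(lsi[tfidf[corpus]])

def classify(tokens):
    # Guess the newsgroup of a new post from its most similar training post.
    bow = dictionary.doc2bow(tokens)
    sims = index[lsi[tfidf[bow]]]
    return train_labels[sims.argmax()]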

This is all the beginning of constructing a relevance engine which I'm sure will be useful to some people.

Great fun!

This is a list of things that have to be done to get Infomap working on a modern Linux distribution (tried on Ubuntu 10.10).

* BLOCKSIZE in preprocessing/preprocessing_env.h: needs to be set to at least the highest number of words any document in the corpus has. If a document has more words than BLOCKSIZE, building the model will hang.

* Install libgdbm-dev with Synaptic or apt-get. Infomap needs a header file from it; without it, Infomap will not compile (will not pass ./configure).

* Not finding ndbm.h: this all happens in /usr/include. Run 'ln -s gdbm-ndbm.h ndbm.h' there, or just copy gdbm-ndbm.h to /usr/include/ndbm.h. Infomap will not compile (will not pass ./configure) without this.

After that, it should go through configure, make and make install without problems.

This is the code for CompareTerms, tidied up into a runnable function (it assumes associate prints the term's vector as whitespace-separated numbers):

import subprocess
import numpy

def CompareTerms(term1, term2):
    # term1, term2 - terms to be compared.
    # Call Infomap's associate for each term; the command is passed as a
    # single string (see the note below about why).
    out1 = subprocess.Popen("associate -q " + term1, shell=True,
                            stdout=subprocess.PIPE).communicate()[0]
    out2 = subprocess.Popen("associate -q " + term2, shell=True,
                            stdout=subprocess.PIPE).communicate()[0]
    # Parse the whitespace-separated output into numpy vectors.
    vec1 = numpy.array([float(x) for x in out1.split()])
    vec2 = numpy.array([float(x) for x in out2.split()])
    # The dot product of the two vectors is the association score.
    product = numpy.sum(vec1 * vec2)
    return product

This produces an association between 2 terms.

When calling this, the command string that runs associate must be passed to Popen as a single string rather than as a list of arguments for Popen to assemble. This is important when sending more than 1 term: if not, associate will treat the terms as a quoted search rather than an AND search.

Long time no post! I've been very busy with family and work and not had much time to do stuff. If there are no objections, I was thinking of reposting some of my UX stuff here. It's not commercial but informational and might be of use.

As for open source, I've been working a lot on Infomap lately for natural language processing. I had some problems using Semantic Vectors, chiefly the speed at which it does comparisons between terms. I had an idea for an automated information architecture creator but the speed was too slow. Infomap is much faster so I will try to use that - even though I know it's been superseded by Semantic Vectors.

Plus, Infomap being written in C means that it is accessible from Python, whereas Semantic Vectors being in Java means going through Jython (and learning lots of new things which I don't have time for) or a very awkward translation process.

With my first run using SV, I generated an information map much like that resulting from a card sort. The card sort took weeks to prepare, perform and analyse - and a lot of staff time. Mine ran in a few hours and got results that weren't entirely dissimilar to the human version. There were some odd surprises but that was because of the corpus (Wikipedia was what I used at the time) which by nature has a focus on particular topics as opposed to general language. This meant that the results were generally quite good but with one or two startling exceptions.

But integrating it with a Python backend is too hard, so back to Infomap. I just need to figure out how to do semantic comparisons of terms in Infomap.

It was a job to get going. The first problem was not having the appropriate symlink to a DB library and a header file. Once that was rectified, I had to ensure the BLOCKSIZE constant was set to a figure larger than the highest number of words in any document. It defaults to 1 million but the longest document in the corpus was 1.25 million words. Without doing this, I had no warning and left the program building its model for over a week before finding the problem. Once done, the model was analysed and built in under 2 hours on an Asus 701 netbook!

I remember when LSA used to take days...

So in the spirit of openness, and since this endeavour is based on open source software, I will publish results here to ensure everyone is totally bored.
