Older blog entries for rufius (starting at number 47)

To Do for December 2008

This is my brief todo list for December 2008, also known as the first vacation I’ve had since I started college.

  1. Build the new PC I bought and get Windows Vista Ultimate (Super Fantastic) Media Center running.
  2. Turn 21 and become a drunkard overnight (hah).
  3. Reinstall the laptop to repartition the operating systems. In the process of this, also install Ubuntu 8.10.
  4. Play a lot of video games.
  5. Sleep… this hasn’t been consistently done in a long time.
  6. Learn C++ better, especially proper template design.
  7. Finish leftover things for my bioinformatics research work. That is, build a database for the organisms for doing phylogenetic classification. Maybe play more with SVM’s… 
  8. Sleep more.
  9. Play more games.
  10. Set up the new Roku SoundBridge M1001 I bought for my parents for Christmas.

Syndicated 2008-12-05 16:36:48 from Zac Brown

Archaea Classification Continued

After examining the code thoroughly for a couple of days and testing it with fragments sampled with replacement, I’ve convinced myself that the code is correct. After thinking about it, it occurred to me that the relative k-mer distribution profiles for larger k-mers (7, 8, 9) might be skewed by even very small amounts of sampling without replacement.

I went ahead and took the difference between the relative distributions for Pyrobaculum calidifontis for 4 different cases:

  • 8-mers - 100% genome vs 99.5% genome
  • 8-mers - 100% genome vs 67% genome
  • 4-mers - 100% genome vs 99.5% genome
  • 4-mers - 100% genome vs 67% genome
Since 4-mers showed little variation between training and full genomes, I felt that was a good baseline for “lack of difference” in the distributions. Here’s the data:

As can be seen, the variation in relative distributions for the 4-mers is very small, generally no larger than +/- 0.002, and that’s with training on 67% of the genome. Meanwhile, the 8-mers show significant variation: with training on 67% of the genome there is variation of up to nearly +/- 0.2, which entirely changes a profile. Even with 99.5% training, there is variation in the hundredths place, which is enough to skew the profile. This was tested on several organisms; Pyrobaculum calidifontis just happens to be my pick.
That, to me, explains why this technique might not be applicable as it’s currently designed, since the profiles for the organisms don’t match as well. Of course, the other side of this is that since every one of the genomes’ profiles would be skewed, wouldn’t that even things out? Without some serious statistical analysis (and time), I can’t say for sure.
Here also is a comparison of distributions:
From this, it can be seen that sampling with replacement (100 pieces) is pretty close to sampling 95% of the genome with replacement. Those are two separate pieces of software, which is what leads me to believe the software is written correctly.
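For reference, the profile comparison described above can be sketched roughly like this. This is only an illustration in Java, not the actual analysis code; the class and method names and the toy genome are mine.

```java
import java.util.HashMap;
import java.util.Map;

public class KmerDiff {
    // Relative k-mer distribution: count of each k-mer divided by
    // the total number of k-mers in the sequence.
    static Map<String, Double> profile(String seq, int k) {
        Map<String, Long> counts = new HashMap<>();
        long total = 0;
        for (int i = 0; i + k <= seq.length(); i++) {
            counts.merge(seq.substring(i, i + k), 1L, Long::sum);
            total++;
        }
        Map<String, Double> rel = new HashMap<>();
        for (Map.Entry<String, Long> e : counts.entrySet())
            rel.put(e.getKey(), e.getValue() / (double) total);
        return rel;
    }

    // Largest absolute difference between two profiles, over all k-mers
    // seen in either one (a k-mer missing from a profile counts as 0).
    static double maxAbsDiff(Map<String, Double> a, Map<String, Double> b) {
        double max = 0;
        for (String kmer : a.keySet())
            max = Math.max(max, Math.abs(a.get(kmer) - b.getOrDefault(kmer, 0.0)));
        for (String kmer : b.keySet())
            max = Math.max(max, Math.abs(b.get(kmer) - a.getOrDefault(kmer, 0.0)));
        return max;
    }

    public static void main(String[] args) {
        String genome = "ACGTACGTGGCCAATTACGTACGA";
        // "Training on 67%" here is simply the first two thirds of the string.
        String partial = genome.substring(0, genome.length() * 2 / 3);
        System.out.println(maxAbsDiff(profile(genome, 4), profile(partial, 4)));
    }
}
```

On a real genome the difference between full and 99.5% training would be tiny for 4-mers and much larger for 8-mers, which is exactly the skew discussed above.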

Syndicated 2008-12-05 15:21:55 from Zac Brown

Genend Update 2.33421

Still having problems loading the full data sets into memory for the Bacteria + Archaea genomes. I need to come up with a good way to do this for the 67/80/90% runs. Right now, I can only do it with Archaea.

The results for the run strike me as being somewhat odd. You’ll see below…

Despite having gone over the algorithm repeatedly, I’ve been unable to find a fault in it. As near as I can tell, it’s doing exactly what I thought it should. I thought it was odd that the results for 3- to 6-mers are about the same regardless of how much training was done (training on 50% showed almost identical results as well). The oddest thing is that the results drop off after peaking at either 6-mers or 7-mers. That’s the part that makes no sense to me. I’m not sure what to make of it.

Maybe I’m missing something obvious. I’ll switch to something else for a bit and come back to it.

Syndicated 2008-11-20 18:06:10 from Zac Brown

WEX: Devices and Media - SDET

The title of this blog post is the official team I’ll be joining next May at Microsoft as an intern (and hopefully full time after that). It turned out that after my interviews at the beginning of November, each team expressed interest in having me join them (I didn’t really think they all would).

The teams I’d chosen to interview with before I flew up were FNO (Find and Organize), CoreUX, and DNM (Devices & Media). Originally my interest in each group was roughly in that order; that is, I was most interested in working for FNO and least interested in DNM. As I spent more time learning about DNM and what they do, it became apparent that I’d learn more there than I would in any other group.

Each group was interesting in its own way. FNO has a very young, very high-energy group of developers. They own the Explorer and Desktop interfaces, including anything that has to do with file manipulation. They also own the indexing service used for desktop search. Had I chosen to work with that group, I probably would have tried to get in on the indexing side of things. It’s a lot of coding (which I like), and that’s at the core of my interest in that group.

CoreUX, on the other hand, owns the Start menu/taskbar, the window framing, the sidebar, and so on. Things that make Windows look like… well, Windows. The team members I met with were all very encouraging and a group of really interesting individuals. Their manager, John Cable, was the guy I interviewed with during my first-round interviews, and he was indispensable to me through the whole process in helping me make decisions about my time with Microsoft.

Finally, DNM manages the pipelines that serve up audio/video to the screen and speakers, interfacing with devices like the Zune, cellphones, Bluetooth devices, and things like the Roku (look it up, it’s sweet). They are a “foundation” team, meaning that a lot of other groups in WEX build on top of what they provide. For example, CoreUX is in charge of Windows Media Player, which has to use the media technologies supported/owned by DNM. This kind of exposure to different technologies inside Microsoft as well as outside (like the Roku) is what attracts me to the team. They get a lot of face time with a lot of products, which means there will never be too little for me to learn.

Since I will be at Microsoft to learn, I figure picking a group like DNM is a good way to learn a lot. That’s not to say I wouldn’t learn anything in the other groups. I just feel that, at this point in my education, my weakest areas are the ones DNM focuses on, so in the end it would provide the most “bang for my/their buck” during my time at Microsoft. Hopefully that time will be a long one, as the culture is very attractive.

Syndicated 2008-11-19 19:20:15 from Zac Brown

Things I Learned Today…

I’ve been writing a bioinformatics program to test naive Bayesian classification of k-mers/oligonucleotides. I started with some Perl code I was given, wrote some in Python, and then moved to Java. Along the way I learned a lot about optimizing string manipulation in Python and Java.

Today I was working on a program to build k-mer distributions in a format that an SVM (Support Vector Machine) can read and process. This requires building huge strings and writing them to a file line by line. The files are usually upwards of 50 MB, so they’re fairly sizable.

This process was fine as long as I was using k-mers of 6 or smaller (4^6 = 4096), so lines no longer than 4096 entries. I noticed a fair slowdown when I built a data set with 7-mers but didn’t think much of it. When I tried 8-mers a little while ago, it was painfully slow. It turns out that doing the following with really big strings is bad joojoo:

String line = "";
for (int i = 0; i < 20000000; i++) {
    for (int k = 0; k < i; k++) {
        line = line + i;
    }
    line = line + " | ";
}
Obviously I’m not doing exactly that, but you get the idea. Basically, the string concatenation starts off really fast, but as the string gets bigger and bigger, it gets slower and slower. Though I don’t claim to know the inner workings of the String class, my best guess is that since Strings are immutable, every time you concatenate one string onto another the JVM has to allocate new memory (like realloc in C) and copy over the combined contents. I may not be right, but from just thinking about it halfway, that’s the best I’ve got.

To alleviate this, here is my solution:

String line = buildLine();

static String buildLine() {
    StringBuilder str_bldr = new StringBuilder();
    for (int i = 0; i < 20000000; i++) {
        for (int k = 0; k < i; k++) {
            str_bldr.append(i);
        }
        str_bldr.append(" | ");
    }
    return str_bldr.toString();
}

As you can see above, I’m using a class called StringBuilder. Again, no claim of deep knowledge, but it probably acts like a Vector/ArrayList (not sure if it’s synchronized): you just append items, and toString iterates the buffer and returns one big string.
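Out of curiosity, here’s a quick (and unscientific) way to see the difference for yourself. The method names and sizes are mine; 50,000 characters is small enough to finish quickly, and the gap widens rapidly as n grows.

```java
public class ConcatTiming {
    // Naive version: each + allocates a new String and copies everything
    // built so far, so producing n characters is O(n^2) work overall.
    static String buildWithConcat(int n) {
        String s = "";
        for (int i = 0; i < n; i++)
            s = s + "x";
        return s;
    }

    // StringBuilder grows an internal buffer, so each append is amortized
    // O(1) and the whole build is O(n).
    static String buildWithBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++)
            sb.append("x");
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 50000;

        long t0 = System.nanoTime();
        buildWithConcat(n);
        long concatMs = (System.nanoTime() - t0) / 1000000;

        t0 = System.nanoTime();
        buildWithBuilder(n);
        long builderMs = (System.nanoTime() - t0) / 1000000;

        System.out.println("concat: " + concatMs + " ms, builder: " + builderMs + " ms");
    }
}
```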

To most, this is probably amateur business, but I figure it’s useful for others to know in case they ever wondered. Even if I am a fairly seasoned programmer, I’ve still got new things to learn, and so does everyone else.

Syndicated 2008-11-11 21:39:22 from blog.zacbrown.org - just run away, now.

Live Mesh

I’ve been exploring Windows again, partly from a need to refresh my brain on the inner workings of the operating system, as well as to take a little time to test out Vista, something I hadn’t really done since it was first released. It’s certainly better than it was when it came out, but definitely not the best I’ve ever seen.

I think the most positive I’ve ever been towards an operating system when it first came out was either Ubuntu 7.10 (Gutsy Gibbon) or Windows XP. Not sure which, but both stick out in my mind as exceptionally good operating systems.

To get back to the topic, I have been working on a group project with a close friend (also a future full-time Microsoft employee) and we needed to share some files. He suggested Live Mesh, which at first glance looks a lot like Groove, except that it’s free and has a bit of a different slant to it. It appears to be designed more for ad hoc sharing than the way Groove works.

However, besides being an easy way to share some files, it actually has a pretty impressive setup. In the future it’ll let you sync not just your Windows desktop/laptop but also Macs and mobile devices. This is all fine and good, but the most impressive things I saw were the clean integration of Live Mesh with the Windows Explorer interface and the conflict resolution. Conflict resolution is something I tend to associate with serious revision control software (i.e. bazaar, git, mercurial), but Live Mesh has a pretty decent system set up for these problems.

The web interface for Live Mesh is fairly decent as well. It’s got a Vista look and feel and is fairly snappy. I don’t envision myself using it much, but it’s worth mentioning that if you ever need to get at files you’ve shared, you can get them that way.

Syndicated 2008-11-10 22:38:01 from blog.zacbrown.org - just run away, now.

Windows Vista and more fun

So recently I’ve managed to pull off getting an internship (and hopefully a job afterwards) with Microsoft in the WEX group. For the uninitiated, WEX is short for Windows Experience and they’re primarily in charge of the “face” of Windows. I’ll be working as an SDET (Software Development Engineer in Test) on Windows 7.

In response to this news, as well as a Windows XP partition that died on me, I installed Windows Vista. I had tried Vista once before during its beta, as well as right after its release. I wasn’t impressed then; a lot of it annoyed me, so I stuck to Linux and Windows XP. Since Microsoft has already made the transition internally to Vista, I figured it’d be a good time to get familiar with it before I get there next summer.

In the past I’ve had problems with not having access to a decent command line interface in Windows, and it also wasn’t an issue Microsoft considered very serious. When PowerShell came out I began to use that, but I still found deficiencies in it: there was no support for tabs, and it’s only an *ok* terminal compared with the options on Linux. So I decided that this time around I would find a terminal emulator that would give me tabs, or I’d write my own. Fortunately, someone else wrote one for me, so I’m off the hook.

A project called “Console” by a guy named Marko Bozikovic is on SourceForge. It provides a tabbed interface with a nice copy/paste setup that’s more in line with the rest of Windows than the goofy setup provided in “cmd” or PowerShell. It even lets you specify the shell you’d like to use, so in my case I use PowerShell. Now, Marko doesn’t provide an actual installer, so I took it upon myself to use a wizard with NSIS to create a very simple one. Hopefully someone will find the installers useful, as I prefer to have an actually installed program when I can on Windows.

There are two installers. The first one includes the MSVCRT DLL, a runtime provided by Visual Studio, so if in doubt, download this link: Console-2.0-beta141-mvscrt-setup.exe. The other does not package the MSVCRT runtime; if you do in fact have Visual Studio (Pro or Free), you should already have it, and here’s the link: Console-2.0-beta141-setup.exe.

Now I won’t be keeping this up to date in any sort of consistent capacity. It’ll basically be updated whenever I update it for my own computer. I’ll probably eventually put up an actual page and/or link on my main site.

Other than looking for a console emulator, Vista has been an OK experience. I’m not overjoyed with it, but there are nice things about it. Things that come to mind include: 1) quick association with access points, rather than 30 seconds of associating in Linux, 2) better battery life (+1 hour or more), 3) cohesiveness in terms of interface and functionality, and 4) a (finally) usable Start menu that allows me to search.

I’ll probably intermittently add things in further posts as I start playing more with the operating system.

Syndicated 2008-11-09 19:51:22 from blog.zacbrown.org - just run away, now.

More results, new and improved software

Switched from Python to Java (sigh) to improve speed. Python (besides being dynamic) has worthless threading in comparison to Java. The Java version is faster, by a lot: runs that took 4 days on just Archaea now finish in the range of 12 hours for Archaea + Bacteria (625 genomes).

Ran into problems with threading but learned a lot while doing it. It will definitely make this process faster. Runs are still slow, though there’s not much room for improvement without moving to a cluster.

Loading full relative distributions for 9-mers is currently not possible without rethinking the program. Maybe switching from HashMap&lt;String, Double&gt; to an array of doubles (double[]) will save some space. Need to investigate that further; we’ll see.
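One way the HashMap-to-double[] switch could work is to encode each base in two bits, giving every k-mer a unique index in [0, 4^k), so the array holds only the frequencies with no string keys at all. A rough sketch of that idea (the encoding and names are my own guess at an approach, not the actual program):

```java
public class KmerIndex {
    // Map A/C/G/T to 0..3; two bits per base gives every k-mer
    // a unique index in [0, 4^k).
    static int baseCode(char c) {
        switch (c) {
            case 'A': return 0;
            case 'C': return 1;
            case 'G': return 2;
            case 'T': return 3;
            default: throw new IllegalArgumentException("bad base: " + c);
        }
    }

    // Pack the k-mer into an int, two bits at a time.
    static int index(String kmer) {
        int idx = 0;
        for (int i = 0; i < kmer.length(); i++)
            idx = (idx << 2) | baseCode(kmer.charAt(i));
        return idx;
    }

    public static void main(String[] args) {
        int k = 9;
        // 4^9 = 262144 doubles, about 2 MB per genome: far less than
        // a HashMap holding 262144 String keys plus boxed Doubles.
        double[] dist = new double[1 << (2 * k)];
        dist[index("ACGTACGTA")] += 1.0;
        System.out.println(dist.length);
    }
}
```

This also makes lookups a couple of array operations instead of a hash of a String, which might help the run time too.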

Results for piece sizes 36, 100, and 200 (excluding 8-mers) for 3- through 8-mers are as follows:

Still waiting on a run to finish for 8-mers with piece size 200, then I’ll run the taxonomic classifier. Unsure of how well that will go as far as matching the data in the db to the data from the files. Not sure the “species” match up between the two in the right way.

Syndicated 2008-11-06 17:24:29 from blog.zacbrown.org - just run away, now.

Old data bad, New data good, Program too slow

So the last set of data I posted is definitely incorrect. I found flaws in the scripts’ function for generating relative distributions. I also modified the original identification script to work for classifying organisms.

The data for correct identification below…

The data for phylogenetic classification below…

The full bacterial and bacterial+archaeal analysis will be harder, as the current program is too slow. Rewriting parts to make the process faster, possibly using OCaml to do this.

Syndicated 2008-10-16 17:43:11 from blog.zacbrown.org - just run away, now.

More genomics…

After some misunderstanding, I now have a program that does what is needed. It seems slow, and memory constraints make loading higher-level distributions (k-mer size > 9) difficult.

Started a run last night (~18:00) on 625 genomes (50 Archaea, 525 Bacteria); it’s still running. Got no significant results from 3- to 5-mers, now running on 6- to 9-mers.

I have a completed run from just Archaea. The results are not so great: around 1.1-1.6% success in identification with 10000 samplings. See the graph below:

Syndicated 2008-10-02 18:00:59 from blog.zacbrown.org - just run away, now.
