Older blog entries for rupert (starting at number 26)

Aaargh! Does anyone know how to configure multiple soundcards under Linux? I have two Creative AWE32 (ISAPnP) cards that I want to use simultaneously, each with a different /dev/dsp (/dev/dsp0, /dev/dsp1, perhaps?) I've looked high and low for this information, and can't find out how to do it. I'm currently running 2.2.14-6.1.1.
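For context, here's the general direction I've been poking in, with no luck so far (writing from memory, so treat the exact io/irq/dma values as placeholders): the ISAPnP cards first need their resources assigned with pnpdump/isapnp, and then the 2.2 sound modules get loaded via the sound-slot aliases in /etc/conf.modules. Whether the stock sb driver will actually drive two cards at once this way, I can't say.

```shell
# Let pnpdump propose resource settings for both AWE32s, then edit the
# result so the two cards get non-conflicting io/irq/dma lines:
pnpdump > /etc/isapnp.conf
isapnp /etc/isapnp.conf

# In /etc/conf.modules, the kernel requests "sound-slot-N" per card:
#   alias sound-slot-0 sb
#   alias sound-slot-1 sb
#   options sb io=0x220 irq=5 dma=1 dma16=5
```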

Help!

--- Rupert

A very sad day for me. A favourite great uncle of mine, Bill Scammell (obituary), died on September 5th in Adelaide, Australia. He was a great guy, a fine businessman and educator (former Chancellor of Adelaide University), and loved car racing. I have fond memories of driving around Adelaide in his Porsche, and managing to accidentally lock him out of the car radio. He'll be very much missed, and I'm sad that he passed on a mere two weeks before I was going to be in Adelaide to visit him and other relatives.

Many condolences to Pat Scammell, his wife, and my Dad, who was close to him also (though they don't read Advogato).

Rest in peace, Bill.

--- Rupert

I'm hoping that the readers of this entry can provide me with an opinion, regarding a design choice that I have to make before the first release of my car computer software (see previous entries).

I'm torn between providing (1) a limited range of high quality synthesized static voices, or (2) using a much lower quality (on par with Macintalk) dynamically synthesized voice that will permit a far wider range of responses to be made. This isn't a rhetorical question, and I'd really appreciate hearing opinions (even a couple of sentences), if you could send them to rupe@metro.yak.net.

So close to a first release of this software... it's very exciting, and I'm happy that my Blade Runner car draws closer :-)

Thanks!

--- Rupert

mikeszcz was kind enough to make some comments on my last diary entry, concerning my car computer software, which I'll attempt to address here. I think it'll be at least a week, if not two before I do the first release of the software. It's at the point where it's relatively stable, but has quite a few performance issues. I also want to clean up the code a bit before releasing it into the wild, as it were. The app has its share of unused functions, inscrutable comments, and blocks of commented out code which should die a quick and peaceful death so as to not cause the potential user to lose all confidence in the app or its author... I also want to write up some decent documentation and installation instructions, since getting everything loaded is a bit hairy.

A little background on the project, and in the process, myself. I think that I was born about 150 years too early. I spent quite a bit of time jealously watching the Star Trek crew seamlessly interact with their highly intelligent ship computer. I've also always dreamed of having a vehicle with even a fraction of the capabilities of those seen in movies and TV shows like Blade Runner, Knightrider, Batman, The Fifth Element, and so on. I think every geek has at some point. I also work as a military aircraft restorer for the March Field Air Museum, and have spent quite a bit of time working on the SR-71A Blackbird (#975) there. The desire to incorporate elements of this futuristic technology (yes, I know that the SR-71 was created in the 50's) into a working vehicle seemed like a fun project, so I started working on this voice controlled computer system. I'm happy to see that other folks are pursuing the hardware side of things. Mark's Custom Kits makes beautiful functional Knightrider instrumentation and dash panels.

Having said this, you'll find the first version of software released to be disappointingly sparse. The design is highly modular, however, so adding new functionality should be surprisingly simple to do, and I expect new releases regularly.

If the software is successful (and stable!) enough, I'd like to investigate the possibility of starting a small business, writing custom modules and adding additional AI and personalization functionality for people who want to use and enjoy the system without spending months having to dig through the codebase themselves. The core system would always stay free and open, however.

Comments? Questions? Let me know...

--- Rupert

Today concludes three weeks of intensive late night hacking with the computer that I've installed in my car. When I removed it from the trunk a month ago, its control interface was a miniature VT100 terminal. When it's installed next week, control will be solely via speech recognition. In three weeks, I've spent days at a time pounding my head against the monitor, working through tough design problems. I've also learnt a ton about multi-threaded server / client design, data routing, sockets, lock objects, TCP, and the horrors of termios :-)

It's a great stage to be at, and I'm really glad that for the first time, I didn't just come up with losing hacks to get everything working. Having a solid core architecture to build this set of computing services on is going to make expanding it further so much easier!

I've named the project since I wrote last. The system is named Alice (and recognizes her name when spoken). The name was chosen partly because it's short, easily pronounced, and continues the long standing tradition of 'Alice Bots', a popular set of AI chatbots. Finally, Alice as in Alice in Wonderland, since I've been chasing down rabbit holes lately, and finding new worlds within :-)

The architecture I've built has an EDS (event distribution server) at its core. Clients (the speech processing module, MP3 module, and LCD control module) connect to it and send and receive data; the EDS routes each message to the appropriate connected modules, which then process the data and perform the appropriate I/O.
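To make the routing idea concrete, here's a minimal in-process sketch of what the EDS does (the real thing speaks TCP between separate processes, and the event names below are made up for illustration):

```python
# A stripped-down sketch of the event distribution idea: modules register
# for the event types they care about, and the EDS fans each event out.

class EventDistributionServer:
    def __init__(self):
        self.subscribers = {}   # event type -> list of handler callables

    def register(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def dispatch(self, event_type, payload):
        # Route the event to every module that asked for this type.
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

played = []
eds = EventDistributionServer()
eds.register('play_track', played.append)   # stand-in for the MP3 module
eds.dispatch('play_track', 'track01.mp3')
```

In the real system each handler lives at the far end of a socket, but the dispatch logic is the same shape.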

Arguably the coolest piece of the system is the little speech processor that I've written. The system relies on an external application (CMU's Sphinx) to convert the speech waveform to a text string. The speech processor takes in the text string, and using a simple weighted network algorithm, causes the output of the module (which is passed to the EDS) to become progressively more accurate over time.
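I won't go into the full algorithm here, but one way to picture the weighting is this hypothetical sketch: each time a recognized string is confirmed to mean a particular command, that association's weight grows, so the module's output gets progressively better with use (the class and method names are invented for the example):

```python
# Hypothetical sketch of a simple weighted scheme: confirmed
# (heard string -> command) pairs accumulate weight, and interpretation
# picks the heaviest edge, falling back to the raw string when unseen.

from collections import defaultdict

class SpeechProcessor:
    def __init__(self):
        self.weights = defaultdict(lambda: defaultdict(float))

    def reinforce(self, heard, command):
        # A confirmed interpretation strengthens this association.
        self.weights[heard][command] += 1.0

    def interpret(self, heard):
        candidates = self.weights.get(heard)
        if not candidates:
            return heard            # no history yet; pass it through
        return max(candidates, key=candidates.get)
```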

Everything's been written in Python so far, apart from the speech synthesizer and recognition systems, which are external (GPLed) applications. I've been programming in Python for about two years, but it took a big project like this to really drive home what an amazingly versatile and capable language it is. The structure of the language really made it easy to experiment, make rapid changes, and keep the code easy to read.

Now to put the system into the car, and drive with it for a few weeks. It'll be exciting to see how well it works, and I look forward to posting more regular diary entries with progress reports.

--- Rupert

26 Jun 2001 (updated 26 Jun 2001 at 08:10 UTC) »

Today was a slow day at work, so I started working on a couple of single frame comic strips, using the Windows paint tool. Slowly, Vinnie the Vulture, and his trusty pet, Dun, a TCP packet, emerged from the chaos, and set out on the first of their adventures. All of the strips are available on the embryonic Vinnie the Vulture page. Just pretend to laugh... the term "comic" is used loosely here, and most of the comic aspects consist of obscure inside jokes, and bad rhyming schemes. Fortunately, the quality of the art makes up for all of that.

Apart from this, I spent the day hunting down bugs, and trying to triage a departing coworker's bug queue into something slightly more manageable. Nothing very exciting, however I did have a nice Indian dinner with my friends Greg and Tague.

In hacking news, the speech recognition component of my car project is almost ready for installation. Most of the Python glue code to interface Sphinx with the LCD server and MP3 player applications is done, so I'm getting excited about the project again. What I really need to be doing is fixing bugs in the current installation (and there are many), but adding new features is much more fun!

First diary entry in a while... I've spent the last two weekends on the road, which has been pretty fun. On June 8, I went to Reno, Nevada for the 14th Blackbird Reunion, and this weekend, ventured down to San Diego to visit an old friend from school. Standing in the line to get my boarding pass for the flight back, I heard my name called, and looked up to see someone who I momentarily didn't recognize. It was my old geometry teacher from high school! We ended up sitting in the same row on the flight home, and had a nice time swapping stories and reminiscing about 'the old days'. He retired in 1997 after 32 years of teaching at the school, and seemed to be enjoying his retirement.

San Diego was great fun. I took Monday off work to spend an additional day down there, which turned out to be a good strategy (taking time off from work is never bad). Mike took me to see the downtown area, and we walked around by Horton Plaza and the El Cortez, before taking a Red Line tram back into the Old Town area. Stopped for a very welcome margarita and snack at a local TGI Friday's, then headed back to his home.

Monday, we drove around some more, and visited the San Diego Aerospace Museum, the Mission, La Jolla, and UCSD. It was fun seeing my brother's alma mater, and we wandered around above Black's Beach for a while, watching the gliders take off over the cliffs.

I don't have any new hacking news to report, apart from the fact that I finally got my 40 GB disk working, but I don't believe that really counts :-P

Humor me while I rant about telecommunications again for a second...

I fail to understand why people think that war-dialing my work and cellphone numbers is going to encourage me to talk with them, or put me in any better a mood. Leaving a voicemail and waiting for it to be returned isn't good enough for them, apparently. They war-dial your work and cell numbers for a couple of minutes (every 10 seconds or so), then finally work out that *gasp* they can leave voicemail! So they leave voicemail. Most sane people stop at this point, and wait for their call to be returned. However, they're not sane (or polite), so they continue war-dialing, alternating between numbers.

Eventually I just pulled my work phone out of the PBX jack, and powered down my cellphone. I'm half tempted to leave them both this way.

If there are any Nokia engineers out there reading this, why can't you guys build a simple call blocking feature into the firmware of your phones?!

It doesn't seem like it'd be that hard to do... (some pseudo-code to illustrate my point)..

#!/usr/bin/python

display_call = 1
ring_style = 'ring'

def ring_phone(display, style):
    pass    # firmware hook: actually ring the handset

# Caller-ID database: one entry per known caller.
cidbank = {}

cidbank['goodperson'] = {}
cidbank['goodperson']['number'] = 4159240024
cidbank['goodperson']['block_call'] = 0

cidbank['badperson'] = {}
cidbank['badperson']['number'] = 9095551234
cidbank['badperson']['block_call'] = 1

def process_call(name):

    # If they're blocked, don't do anything...
    # go back to the event handler.
    if name in cidbank and cidbank[name]['block_call'] == 1:
        return

    # If we don't know the number, or it's in the db
    # (and not blocked), ring the phone..
    ring_phone(display_call, ring_style)



That's all my ranting for the day. Thanks for listening.

30 May 2001 (updated 30 May 2001 at 08:31 UTC) »

Read A.C. Weisbecker's Cosmic Banditos cover to cover tonight, and it's my new favourite book*. Banditos roaming in the realm of the Subatomic, with all of your favourite federal agencies thrown in for good measure :-). A great read!


* I have many favourite books, actually.

Finally got crystal out the door. It's a small server application that allows control of a CrystalFontz LCD screen via a TCP connection.

It's based on the very nice pyCFontz module by Ben Wilson, and uses a simple plaintext protocol for issuing commands (e.g. out 'foo', crlf, cls, etc..). It's a released pre-release version <grin>, so the full suite of control functions isn't in there just yet, but that should be resolved by the time 1.0 comes out. I'm just using the socket module for now, but I'd like to transition over to using the much nicer SocketServer class, which will allow me to handle multiple simultaneous connections, and be easier to maintain.
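For a feel of how little there is to the server side of a protocol like this, here's a sketch of a line dispatcher, assuming a command-per-line form like "out Hello", "crlf", "cls" (the exact wire syntax in crystal may differ, and the screen object's method names are invented for the example):

```python
# Sketch of a dispatcher for a plaintext LCD protocol: one command per
# line, with "out" taking the rest of the line as its argument. The
# screen argument is any object with write/newline/clear methods.

def handle_line(line, screen):
    parts = line.strip().split(None, 1)
    if not parts:
        return                      # blank line: nothing to do
    cmd = parts[0]
    if cmd == 'out' and len(parts) == 2:
        screen.write(parts[1])      # print text at the cursor
    elif cmd == 'crlf':
        screen.newline()            # move to the start of the next row
    elif cmd == 'cls':
        screen.clear()              # wipe the display
```

Wiring this into a SocketServer handler later should just mean calling handle_line on each line read from the connection.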

Eventually I'd like to add in some advanced features such as multiple LCD screen support, the ability to have a client program 'lock' a region on a screen for exclusive use, and have a set of templates in place that'll allow the submission of a list of data objects to the server that'll get pre-formatted and displayed. The protocol will probably change in the next couple of versions, and use enaml style commands, because they provide a better structure.

Not much else going on. Robey was kind enough to contribute some additional MP3View code that permits the sequential play of songs, so this should be making its way into 2.2, along with LCD display support.

My brother finished Army basic training yesterday, and did amazingly well. Top of his platoon in marksmanship, and got several other distinctions. Congratulations! (although he's not much of a net person, and will probably never read this) :-P

Have a nice Memorial Day everyone!
