Older blog entries for gnutizen (starting at number 16)

Debian on my desktop

As I said in my last entry, although I've had a Debian on my home LAN for a while, I switched from Windows to Debian for my desktop recently because among other things Debian had better support for my Linksys 802.11b USB wireless adapter. I have learned and realized a lot of things in a short time. This will probably be a long entry.

One general thing I am unhappy with is that I just want things to work without much hassle, like they do on Windows. (Sometimes some things are better on (Debian) GNU/Linux than on Windows, like the Linksys, or not needing the crappy Windows/OEM CDs to "repair" or "reinstall".) But a lot of things that just work easily on Windows don't on Linux, which is a bit of a hassle.

What I'm thinking of right now is support for my old 3dfx Voodoo 3 graphics card, so that I can get more than one frame per second in Tux Racer, but that's one I haven't completely tackled yet. I do know it will take me a while to figure out whether the 2.4 kernel supports 3dfx cards decently (or whether I can get drivers somewhere that do), and to implement it if I can. In Windows these things just work without so much hassle. Well, that's one I have only started looking at; let me go over the things I have looked at.

I am really more of a Solaris person than a Linux person, so I have not done millions of kernel compiles like some Linux people. Thus, every time I have changed .config, I've been recompiling the entire kernel, although it occurred to me recently I could just do an "M" instead of "Y" and add new features as modules. Actually, modules were causing me headaches when I was first trying to get my 802.11b working with Debian (I was attempting to use the horrible atmelwlandriver instead of the beautiful at76c503a driver), so that kind of turned me off modules for a little while. It's kind of like chess, where some bad experience sets off Pavlovian bells in the head when considering certain moves.

So anyhow, I've been recompiling the kernel for every small change instead of building modules, but that's how it is; I'll probably be doing the module thing more in the future. I prefer make xconfig to the other makes, especially the very helpful help button which pops up (menuconfig has this as well, although it doesn't pop up, of course). Actually, in my opinion, I could use even more user-friendliness than make xconfig offers - maybe something that looks through my dmesg and compiles based on that somewhat automagically? I keep getting SMP in my kernel automatically, which I don't need, yet I had to instruct the config to put a filesystem in for my internal DVD drive, not to mention the USB (and SCSI) stuff I needed to put in (none of my devices being supported within the kernel, I used outside modules for all of them). So the most advanced config that comes with the kernel installs SMP by default, which I don't need, yet is blind to my DVD player's need for the UDF filesystem.

Which brings me to "make xconfig"... OK, I have been using Linux on and off since before 1.0 was released, although anyone reading this is obviously aware I am not a hardcore user. But if these things bug, or worse, confuse me, imagine what they will do to Joe Linux Desktop User or even junior sysadmins (or even senior sysadmins, for some of the kludgier Linux problems). Anyhow, I ran "make xconfig" the first time, and I forget what the hell the error message was that was spit out all those days ago. Eventually I realized I needed Tcl/Tk and apt-get installed it. OK, after doing that, let me show you what it said when I ran "make xconfig" for a second try:

Application initialization failed: no display name and no $DISPLAY environment variable
Error in startup script: invalid command name "button"
    while executing
"button .ref"
    (file "scripts/kconfig.tk" line 51)
make: *** [xconfig] Error 1
OK, now what is Joe User, who is trying to get his DVD player to work, thinking right now, after by some miracle he correctly figured out to apt-get install Tcl/Tk from whatever the hell make xconfig spit out prior to this? OK, so what do I do now? "DISPLAY=0:0 ; export DISPLAY" of course. Let's try it again (I have trimmed the error message for space considerations):
Application initialization failed: couldn't connect to display "0:0"
Huh? This has worked for me in various situations for the past decade or so. At this point I have to go to Google. I learn that I have to do "DISPLAY=:0 ; export DISPLAY" because, for some reason, the first 0 which I have been putting there for a decade in various UNIX and X environments now won't work. Fine, I go back and make xconfig again. (errors trimmed)
Xlib: connection to ":0.0" refused by server
Xlib: Invalid MIT-MAGIC-COOKIE-1 key
So I open up another user and type "xhost +" (come on in, boys), try it again and Calloo! Callay! xconfig pops up.

Now you can have two perspectives on this: 1) I am an idiot because I don't fully understand (or remember) the entire X protocol and why I have to remove that 0 nowadays, or 2) even someone with as much UNIX and sysadmin experience as I have still had to go to Google to learn that this wouldn't work without removing the first zero in "DISPLAY=0:0 ; export DISPLAY", never mind needing to know enough to apt-get install Tcl/Tk, or that a root X client needs an xhost command run (as the normal user), and that sort of thing. Windows Control Panel it is not.

This is not the main thing that bugged me, just one of the many small things that did. I borrowed an HP printer/scanner recently, and went out and got the drivers (of whose freedom I'm not sure; I may have sinned against my Debian system). Anyhow, I set it up with the HP drivers, got printing working, got SANE (a scanner interface), and got that working. Then I came back in a few days and scanning wasn't working. Eventually I realized that I have to run "/etc/init.d/hplip start" manually, and then xsane will recognize the scanner and I can scan; I guess I forgot this, or it had started on installation last time or something. Did I mention xsane's penchant for crashing after every third scan with a segmentation fault? And that all the GPL OCR programs suck (I guess this is true on Windows as well, though; I wonder if the commercial packages for Linux are decent)?

OK, now to my DVD (and CD) player. As I said before, the kernel default configuration gives me SMP support I don't need, but misses the fact that my dmesg shows I have a DVD player. Anyhow, I compiled a kernel with UDF. I forget if I had ISO9660 in my initial kernel or not; it's turned on now anyhow. I never really thought about audio CDs in my CD/DVD player - the first time I put one in, without thinking, I tried to mount it as an ISO9660 filesystem. I went to Google and saw cdplay would play my CDs. I'm not really sure how this works in Linux - on Windows my Winamp automagically plays CDs and MP3s; right now I have XMMS playing MP3s and am using cdplay to play my audio CDs. I've asked XMMS to "play" /dev files instead of MP3s, but it has not complied. Well, the less convenient XMMS/cdplay split *works*, even though Windows was better in this respect, and I have bigger fish to fry (getting more than 1 frame per second in Tux Racer), so this goes to the bottom of my list - MP3s can play, audio CDs can play; once I get graphics, my Zip drive and so forth working, then I can go back and bother to see if XMMS can play audio CDs or if some other package does.

Oh, another thing. Debian was launching into GNOME when GNOME was not working initially. I would boot up, gdm would launch, I'd get the selector, but nothing would work - not even the xterm. Very, very annoying. I don't even remember what I did; I think I went into emergency boot mode on my install CD and linked my gdm init.d script to /dev/null or something like that. When I was ready to deal with GNOME and GDM, I apt-get installed xterm so I could escape from GDM, and eventually got GNOME working.

I have been getting educated on GNOME's recent developments (or non-developments, more like it). My first indication of how far behind woody is came when I was trying to install the stuff to get my HP to work. Debian woody used Python 2.1.3, which was released almost three years ago. HP wanted something a little more modern, like 2.2 at least. I had no idea how whacked Debian's releases had become when I installed it. I'm not even sure how much of a problem it is - I'm happy with sarge ("testing") - I just wish Debian would be more forthcoming about what is very obviously a problem on its web page. Its front page says "Getting Started - The latest stable release of Debian is 3.0. The last update to this release was made on January 1st, 2005." Which is a link. Which links me to how to download woody. The page does say "Debian GNU/Linux 3.0 (a.k.a. woody) was released on 19th of July, 2002.", which perhaps should have set off alarm bells, but as the front page had the date 1/1/05 as the last update, I had assumed some kind of Debian stable "base" was from 2002, and that things like post-2.1 Python would be in the updates; I didn't know the updates were just security patches and that sort of thing.

Which brings me to the general Debian craziness. Honestly, I am very happy with their position regarding free software, although one can reach a point where one is too zealous - I understand some people want to remove the choice of non-free, which I think is ridiculous. Theodore Ts'o has talked about splitting Debian; I think what should happen is that the 100% purist, zealous Debian developers should hopefully continue to contribute to Debian, but also start their own completely pure and free distribution, "Free Debian" or something like that. Debian needs to navigate between the Scylla of needing to be a "mass movement" and the Charybdis of needing to be and promote free software. It needs to be *both*, and I hope it can be both, although at certain points it will have to give up users to remain free, and at certain points it will have to make some compromises so it doesn't become an OS no one uses except a handful of die-hards.

I don't want to go too hard on the Lunar Knight free software zealots. I'm glad they exist and that *they* do not run a system that runs unfree software. It gives me great satisfaction knowing Richard Stallman will not run non-free software on his machine. For myself, I want to run free software whenever possible. And I like knowing when the software I'm running, the drivers I'm running and so forth are not free. But I want to have the option, the choice, of running them. I run Blackdown on my Debian, not Kaffe. When Kaffe is in decent shape, I'll run Kaffe. If I could program better, I would contribute to Kaffe.


Which brings me to my project, Gnutizen. For some reason, although it worked well on my other Debian machine, and my Windows machine, now it is breaking all the time. I also just realized I haven't been doing mutex locking on global variables. I guess those bugs are just popping up now for whatever reason. Blimey! Gnutizen is written in C, it is a peer-to-peer application running on the Gnutella protocol which (used to) compile on Linux and Windows (as well as at times past FreeBSD, OpenBSD and Solaris). I am not a good C programmer, especially considering I have spent the past year mostly fixing bugs in the program (doing the minimal possible to keep up with the evolving Gnutella protocol). I can connect to the Gnutella network (initial bootstrapping is very manual, although now that I have leaf support somewhat, it is better), search and download. Or I could when the program didn't crash as often on my last Linux, before the mutex locking bugs showed up on my new system.

Fixing bugs mostly for a year, doing the minimum to keep up with the evolving protocol and so forth is not fun. And now all this thread locking stuff. I would rewrite it all, but my C skill is so poor I don't know if I should do so.

In chess, when two grandmasters play a game, other grandmasters go over the game and note the brilliant moves, mistakes and so forth. Grandmasters will never go over my games like that (maybe if I go to the Village and beg them, some would), just as great coders will not be going over my current code and telling me what I'm doing wrong (unless I beg them). But is there anything out there that exists like that on the web? There really should be. Like "OK, we're going to take a look at less 2.90 - what it does right, what it does wrong, etc." I suppose what I'm looking for is code that is super-commented and meant for someone who doesn't know C that well (as in: we put extern here because of so-and-so, we have this in a header file instead of the .c file because of so-and-so).

I suppose what I have to do is just look at well-coded C packages - not that large, well-used and well-coded - take advice from people like Linus like "functions should be short" (something I initially had a problem with), and then just sink or swim. I went to NANOG a number of years ago and was impressed - well, a number of people impressed me, and Van Jacobson was certainly one of them. So when I first started coding my big C package, I said, I'll use traceroute.c as a guide of sorts, since it is networking code, and because Van Jacobson is a god - I mean, he wrote traceroute, for heaven's sake (along with TCP/IP header compression and a lot of other stuff). And when I had settled on using traceroute.c as a guide, here is what Van Jacobson somehow had the foresight to tell me:

* Don't use this as a coding example.  I was trying to find a
* routing problem and this code sort-of popped out after 48 hours
* without sleep.  I was amazed it ever compiled, much less ran.

I'm sure he meant that in a flip manner, and I'm sure code he pops out after 48 hours without sleep is better than my best code, but it's funny how he put a warning in the code 15 years ago not to use it as a coding example, when that was exactly what I was going to do. Then again, if you listen to the old-timers, they all say that a lot of the code we use, and assume does weird things for a good reason, actually has things implemented in an odd way because it was written on a PDP or something like that.

Anyway, if anyone has any ideas on how I (or anyone) can improve their C skill, let me know. I guess practice is one way - in retrospect I now realize the decision to add threading was a major one (I just copied gnut, a model for Gnutizen, in that decision, although of course they locked their mutexes for global variables). I don't even know what the hell gtk-gnutella does; someone on #gtk-gnutella says they use "magic()". I would love to contribute to GTK-Gnutella and perhaps learn from better programmers in the process, but I think my ability is not up to their standards.

Anyhow, I definitely am a believer in do it yourself, sink or swim learning for things like learning C and programming an application, but helpful pointers from smarter people once in a while saves me a lot of time and frustration. People on IRC have been really helpful, actually. I try not to be *too* bothersome.

Anyhow, I know enough C to write a program that connects to Gnutella and can download a file (borrowing a lot of the command-line interface from the GPL'd gnut, as well as some other minor things). I also think I made some good design decisions, like trying to be cross-compatible, and trying to allow flexibility in the UI. But my code has been very buggy, which is no fun, so if anyone knows of any good things on the web on how to program C better, or good examples of C programming (of a reasonable size - Linux and emacs are huge), perhaps you can respond here. I bought The Art of Computer Programming, which is very math-heavy, dense, and concentrates on assembler. It seems very hardcore to me, and while I hope to read through all three of the published books at some point, perhaps I need something not as intense as that - like Linus's coding style document, where he talks about how he feels about comments, variables, functions and so forth. Or I've read Bram Cohen's articles here on some of his ideas. Good example programs are appreciated, as well as articles with advice that addresses the whole shebang, not just how a function should work and so forth.

I have two Intel boxes at home - an old one, and a newer one which has more stuff (DVD player, Zip drive, sound card). The old one ran Debian and the newer one ran Windows 98. Recently, I had a lot of trouble getting my Linksys 802.11b USB wireless adapter to work with Windows 98, while it worked with my Debian 2.4.20 just fine. I had to reinstall Windows from these crappy Compaq Windows 98 recovery CDs, and it wiped my C drive (which I expected) and my D drive (to put 3-4 small files there). Plus it suddenly didn't recognize the ethernet card. So I said screw it and installed Debian on the newer, nicer machine. As long as I can write a resume in Microsoft Word format, which of course I can do in GNU/Linux, I'll be fine. Besides, my roommate has a Windows box if I need one.

I've been working on Gnutizen, my Gnutella p2p servent. I recently put leaf functionality into it and committed that to CVS, and have added even more, although that is not in CVS yet.

I've also been checking out GTK-Gnutella's source code. It makes me realize how much of the C language I have yet to learn. I would like to help them out if I can. I know the Gnutella protocol fairly well, which may even me out on some level with better C programmers who don't know the protocol.

8 Nov 2004 (updated 8 Nov 2004 at 06:52 UTC) »

So, I finally got my 802.11b wireless USB network adapter working on my Debian. I had made the mistake of trying to use the Atmelwlandriver from Sourceforge. I recompiled my kernel, I did this, I did that, the program wouldn't compile automagically, the instructions were confusing and so on. I then tried at76c503a by Joerg Albert and Oliver Kurth, and everything worked right away. So anyhow, it works. And right after that, I tried to make a minor change and the networking for it broke on my Windows box. This thing has the worst drivers for Windows; I have no idea how I ever got it working on either of my Windows boxes.

After a few months of being busy, I'm checking back on Gnutizen. I had to get myself a bootstrap list of good hosts, which didn't take too long, although some of the web bootstrap hosts going down didn't help. Unfortunately my last CVS up was after I broke the program, so I've been looking to see what I did to break it. I seem to have narrowed it down to my attempt to break a 324-line function into smaller parts. Of course, a function of 324 lines is far longer than the 48-line maximum that Mr. Torvalds recommends - which is why I was (and still am) trying to shrink it. I just have to shrink it without breaking things.

9 Jul 2004 (updated 9 Jul 2004 at 05:21 UTC) »

Ah, so Advogato is back up.

I did get my hands on a Linux - I brought back one of my machines and installed Debian on it via 6 or 7 3.5" floppies and an ethernet connection to ADSL. And I ran Gnutizen on it. No major changes for Linux other than an #include <byteswap.h> for endian stuff.

I was reading the Linux Coding Style document that Linus was just mentioning on the Linux kernel mailing list. Gnutizen is my first big C project so I find such things helpful. He says comments should be put at the head of a function explaining what it does (not how it does it - he says that should be obvious from the code). I went through my code and removed most comments inside functions and put them at the head where I described what the function did.

The really big impact was his advice on functions - he says

Functions should be short and sweet, and do just one thing. They should fit on one or two screenfuls of text (the ISO/ANSI screen size is 80x24, as we all know), and do one thing and do that well.
Some of my functions were longer than 48 lines - a lot longer, and unnecessarily so. I went through and broke up many of my longer functions after reading this, which took a while. In the section on functions he also talks about variables:

Another measure of the function is the number of local variables. They shouldn't exceed 5-10, or you're doing something wrong. Re-think the function, and split it into smaller pieces. A human brain can generally easily keep track of about 7 different things, anything more and it gets confused. You know you're brilliant, but maybe you'd like to understand what you did 2 weeks from now.

Breaking down the functions helped me lose some of these variables, but I am still trying to trim the functions down to 7, or at least 10, local variables - and to not have global variables unless they're necessary.

Well, I now have a cache for Gnutizen that is not out of date for personal use during development. I ran some tests and it looks like many modern servents are rejecting me, probably because of my lack of QRP. But that is not a problem for me now, and I can deal with other things, like bugs, which are more important currently.

Amazingly, I can compile the program with Microsoft Visual C++ now. The main changes needed: for the Sleep() call to work, I had to include the windows.h header file; I changed a variable definition from "char variable[0]" to "char variable[1]"; and I also changed the filename of the cache file.

Also, I fixed two threading problems. In both cases I had a single variable keeping track of multiple threads, which I changed to a variable for each thread. One was minor and just threw off download progress. The other was more serious, as it controlled connections and was causing segmentation violations. I found that with gdb, and it is good that it has been fixed.

Different operating systems bring out different errors. I've been working on FreeBSD errors lately. I am also on an OpenBSD system with few open files allowed, which shows me how my constant accessing of the disk is probably not a good thing - a lot of it can be moved to memory. I lost access to my Linux box; I have to get that back, as I want to work on the Linux problems. And Windows has had problems too, although I suspect they are similar to the UNIX problems.

Anyhow, there is some stuff I am cleaning up, like the cached queries, and making sure Gnutizen can open without hitting a broken pipe (this seems like a new error). I want to get it so that Gnutizen can run on most OSes without encountering any errors. Things seem fine on Windows until I start downloading files. Anyhow, I should clean these things up before doing big expansions, because the bigger it gets, the harder it may become to discover bugs.

My development of Gnutizen has been off again, on again. Actually, the only real CVS updates I made since September 2002, when I put it on Sourceforge, were in April/May of last year (2003), when I ported it to Linux and made a hack for the htons() call, ignored SIGPIPE/EPIPE signals (which I believe is normal to do), and also made changes so Microsoft's C/C++ compiler would compile it under Windows, and not just gcc/mingw compilers.

Since then I have mostly been focusing on fixing problems in the existing application before adding new features.

However, getting other servents to connect to has become more of a problem as time goes on. My servent starts with a cache (an out-of-date one nowadays), and finds hosts only by pong (not 0.6 protocol methods like X-Try or X-Try-Ultrapeers, or web caching, or other methods). It also seems to be rejected by LimeWire version 3 servents and BearShare servents - perhaps because it does not know QRP or something. I've spent the time I've put into it in the past weeks just finding some servents to connect to. That's coming along; I will see how it goes. Then I will try to fix bugs. Hopefully the big clients' phasing-out of older protocol stuff is not at the point where I have to add QRP and the like, as I'd prefer to try and get the bugs out first.

My friend and I are writing a commercial software product (mostly in C); this is the end of week 3 of our venture. Week 1 I spent mostly setting up my network connection and computer. Week 2 I wrote a small program. Week 3 (this week) I wrote a small program as well.

My work desktop is a FreeBSD box. It has two 9 GB disks, but I want to use the second disk for a dual-boot Linux, so I am cramped for space. I like to use bash, so I used the existing one - /usr/compat/linux/bin/bash. This did some weird stuff, like making my uname say "Linux" instead of "FreeBSD", among other things, so I installed a new bash from FreeBSD ports. Maybe this was why Mozilla and GNOME wouldn't compile (or maybe not). I'm currently using Epiphany and Wmaker instead.

I only seriously started programming in C last year... my partner knows a fair bit about C, so I have been dealing with some new things these past two weeks, like linked libraries (static versus dynamic), linked lists and so forth. In terms of my code's readability, he prefers that gindent use K&R indentation as opposed to GNU indentation, and he also says I need better variable and function names (which is true).

Argh. Sourceforge's CVS is barfing again.

can't create temporary directory /tmp/cvs-serv27134
No space left on device

And no, that's not on my machine. The last time I tried to CVS my project from Sourceforge, months ago, their whole server was down. Then there's the ssh change they made from ssh1 to ssh2 or whatever, which took me a day to work around. Actually, I see that popular projects like GAIM are having the same problem as well, so. Bleh. I have a version of the code, but the recent changes are all out of sync, and this is a pain.


I've never really MUDed much, but a few weeks ago I went back to a MUD I was on many years ago, Arctic, and now have a level 8 mage. When I went on years ago I wasn't really interested in playing; I knew one of the "immortals" and just went on to chat with her. I was at a bookstore a few weeks ago, though, reading through a book called something like "How to be a game programmer: interviews with all the awesomest game designers", or something along those lines. I've never played EverQuest - I don't even have a credit card to pay however much a month, even if I wanted to - but it interests me, so I read the interview with the EQ designer. He said they did a few things to prepare for their MMORPG, including playing a lot of MUDs and writing down what they liked and what they didn't like. Like many people, I have a ton of ideas I'd like to pursue, but a definite lack of capital, free time, and programming skill - in fact I'd say those things are intertwined somewhat. I do have an idea for an MMORPG, though - to save on capital, among other reasons, it would try to be more p2p-based than server-based, and it would have a definite game theme which I won't go into currently.

So one thing I've become interested in is MUD clients. I'm interested in ones that are free - free in both meanings. What I'm interested in are Windows clients, and to some extent command-line UNIX clients, and less so graphical UNIX ones. The main one I've been using is TinyFUGUE, a command-line UNIX client; it's decent. I just tried JamochaMUD, a Java one that runs on Windows (and I guess UNIX). The display often becomes scrambled, though, so I'm not going to use it. So my search goes on for a good free (in both meanings) MUD client.


Well, I guess there are groups using W.A.S.T.E., but it hasn't taken off like I'd expected, and AOL coming down on it didn't help. I'm sure small groups of people are finding it useful, and they're out there. I got somewhere down the list on my client, then got busy, then when I didn't see momentum I tossed in the towel. That "awesomest and successful game designers" book I mentioned before had several of the people interviewed say that what they really looked for in a designer was someone who didn't just start projects but completed them. Of course, that's not an absolute rule - it makes sense to abandon a project when it no longer has any utility. But it has stuck in my mind and is motivation to look at my Gnutella clone project, Gnutizen, again. It is true that I have very little free time, but if I can get around that I'll be looking at it again. I have to weed the bugs out, and then add the non-leech functionality. I did get it to the point where it can download files, though, so it has reached the first step of acceptability; now I just have to get it to the specs of an old Gnutella document which I'm sure no one but me cares about any more.


Well, I keep hearing about Knuth, Knuth, Knuth, so I finally went out and plunked down $150 or whatever for the three books he's written so far. A lot of math. A lot. As I said before, I really have no free time, especially to sit down and concentrate intently for a few hours each day, every day. So Knuth will probably have to stay on my shelf for a while.

Economy/Jobs etc.

Well, as I said in diary entries before, I am mainly a systems administrator who just got seriously interested in programming, and I've been learning mostly by doing. I mean, I already know languages like Perl somewhat, but I've been writing stuff in C. Later on I might pick up C++ and Java. But anyhow, I am mainly a systems administrator.
Anyhow, I will not go into giving my opinion or trying to convince people, or get into a discussion or argument - especially an argument with someone here who is a manager and has not only a different point of view, but different interests, often diametrically opposed to mine. So I'm just speaking now to people who are workers - programmers, admins, and whatnot - who are tired of the rising unemployment rates, and tired of falling industry wages and so forth. And my message is: I'm with you. And also: we can't fight this as individuals. The IT employers - Microsoft, Intel, IBM and so forth - have been well-organized and funding things like the evil ITAA for years, and the ITAA has taken those millions in funding, gone to Washington DC, and screwed us over. And not just DC - they commissioned phony-baloney reports about a lack of IT workers and got it all out in the press to fool the public, to "fool" Washington DC (although I'm sure campaign contributions made that easier), and even to fool some of us. Anyhow, they're organized, and now that we're under attack, now that our industry wage is falling - and of course I'm only talking to fellow workers (excluding any managers or whoever who may disagree) - what we have to do is organize and fight back. You've been mad about this, you've been wondering what you can do as an individual, and what I'm saying is: the employers and the ITAA are not doing it as individuals, they're not fighting alone, so why should we? So check out the Programmers Guild, or even some of the more union-associated groups like WashTech. There's an old slogan: "educate, agitate, organize". Before you do that you have to get educated, get agitated and get organized, though.
And the least important thing is the possibility of some AFL-CIO union coming in and getting collective bargaining contracts for X% of the industry (actually, surprisingly enough, they probably already do have 1-2% of the industry, whether you know it or not). What's important is that people wake up, organize together, and educate themselves on issues that affect us and our wallets. The employers are organized; why aren't we? Doctors and lawyers are organized in the AMA and ABA; if we're "professionals", why aren't we? And please don't mention the IEEE or any junk like that. The IEEE is corporate-sponsored, for one thing (can you imagine the AMA being corporate-sponsored?), and for a second thing, the leaders have killed anything good that members have proposed over the past few years. Associations like the IEEE see their job beginning and ending with making sure your skill level increases, and any other interest you have goes out the window. So check out the Guild and WashTech, watch Usenet groups like alt.computer.consultants for discussions, and get involved. Believe me, most engineers have the same kinds of ideas and concerns as you do - and the way they would like to see things go is the same as yours.

It shows how screwed up American society is that this is often a "debate". Could you imagine IBM, Intel and Microsoft management having debates over whether to fund the ITAA, with arguments like "if we're such a good company, we don't need someone in Washington DC looking out for our interests"? There's a reason these people are running the world, while Farscape-watching dorks full of hubris about their own ability are getting the shaft. But as I said several times, my message is not directed to those people but to those who already agree, and hopefully I will be a little nudge that helps push them along; sometimes people want to be invited to the party before coming.

9 Jun 2003 (updated 28 Sep 2003 at 23:33 UTC) »

Well, those guys at Nullsoft have released another p2p application, W.A.S.T.E., and I've been checking it out. They released the C++ code for it, but I don't know C++, so I've begun hacking out the protocol in C.

The documentation says that, when you have a potential host to connect to, WASTE connects and sends 40 bytes to the other host, which consist of (if there is no network name) 16 random bytes, plus 24 more bytes, which are a Blowfish encryption of the random bytes with a 20-byte SHA-1 of my public key, plus 4 pad bytes. If the remote host knows my public key, it sends a 40-byte response.

I used Ethereal to sniff a connection to a host running WASTE that has my key, to capture a 40-byte send. I sent those bytes to the host and also got a 40-byte response. In the C++ code, I see that g_pubkeyhash is the 20-byte public key hash, encrypted, and that WASTE uses (a possibly modified version of) Paul Kocher's Blowfish code. So I'm messing around with this for now. The source I'm reading is in C++, a language I don't know, so I guess I can be forgiven for not totally understanding it. I often wonder how much faster I would be at writing code if I had years of experience under my belt. Much faster, I guess.
