- options have to be implemented carefully.
I spent an hour on the phone doing support for a project i hadn't worked on in more than a year. I still use that software, but i don't feel good about how things are looking.
- too much old cruft working around stupidities in other software, most of which were fixed a long time ago.
- too many configuration variables. I didn't count them, but the number may be near 100. If i add the possible command line switches and the configuration options in secondary configuration files, the number might be near 200.
Needless to say: i don't know them all. And i'm not sure i still know what the switches i do remember actually do.
- too many hidden dependencies.
- bloat is evil
Speaking of old software: i made the same mistake at least twice back in the BBS times, in the BBS software and in the message gateway. Inexperience? Featuritis (the disease of adding features which aren't really needed)? Possibly both.
It's astonishingly hard to bring the code back into shape. Removing a feature someone might depend upon is not an easy decision.
Bloat also tends to make the source nearly unmaintainable.
- too much is too much
i've never stopped supporting one of my projects before, but the time will come when i'll have to. The BBS software, the gateway, lrzsz, utftpd, ftpcopy, plus a number of libraries, plus running a BBS, plus a new project, a news server, plus some minor things.
We officially started working on the BBS around 1992-05-27 (we did some tools before that time). Developing and running it has been fun. There are still a few BBSes running this software, and quite a lot of users. But it isn't fun anymore. It used to be teamwork, but now i'm doing it alone. And it seems people just want to use the software and talk about development, but don't help. Oh yes, they all know enough to decide which language has to be used, because every other one is just broken.
But anyway, it's not an easy decision: if i want to continue running the BBS, i'll have to either keep doing at least some software maintenance or switch to the only alternative solution for that BBS network (which needs even more development, even more urgently). Or i could give up both, and then i could also stop working on the gateway software. Oh well, working on that one is still fun, but i don't feel it will get any priority soon.
- Speaking of TFTP: let's suppose inetd sees 5 or 10 requests coming in every second. It forks a tftpd, then has to wait until that one has read the packet, forked again, and exited (the freshly forked process continues on another port). Then inetd forks the next tftpd. Nothing really problematic? I thought so, too.
There actually is a problem: tftpd does a getpwnam(). The machine i ran some tests on today had a few thousand users in /etc/passwd, and the high-numbered accounts, like nobody, come at the end. The getpwnam() took quite a bit of time, about 0.1 seconds. The whole select / recvfrom(PEEK) / fork / hosts.allow / exec / recvfrom / getpwnam / fork / exit() cycle took almost 0.2 seconds.
This got interesting when a whole room full of equipment needing TFTP access was booted: 15 machines requested images or configuration files within the same second. And quite a number of them seem to have a timeout of one or two seconds (which is stupid). That was not all: some machines took longer to boot, and about 50 machines requested their configuration within 10 seconds. At almost 0.2 seconds per serialized request, those 50 requests alone are 10 seconds of work, far past a one-or-two-second client timeout.
Temporary "solution": set[ug]id(hard-coded-number), IP filter instead of hosts.allow.
Long-term solution: inetd's "wait" mode has to die. inetd should read the packet itself, create a pipe, and feed the packet to its child through that pipe (it could also set some environment variables containing IP addresses, port numbers and host names).
btw: utftpd supports a "user.group" notation, meaning it does an extra getgrnam() if that is used. No wonder it was even slower and caused even more problems. It also supports plain numbers, but they weren't used.
btw: it wasn't much fun to debug that problem on a machine in Japan. Doing useful work over a 200 ms turnaround time can't be called recovery.
In addition i played around with xmodem today. I'm trying to sketch a design for an X/Y/Zmodem library which does the state machine stuff internally. I want the library to do one piece of I/O and return to the caller as soon as that is done. The caller then does whatever it wants to, finally calls select() or poll(), and returns into the library as soon as there is something happening on one of the file descriptors.
I'd bet a euro that the state machine for zmodem will be very "interesting", so i played with xmodem first.
One question remains: why? The answer may be pretty simple: "because it can be done" (is that really a good answer?).