The patch is now where it should have been:
klogd_lose.patch.
Sorry.
I looked at a number of trouble ticket and bug tracking systems in the last two days. To put it mildly: they are all lacking.
I'm astonished that the majority of them, especially the newer ones, are web-based. Hey, people, get a clue:
And all i've seen seem to be a bit inflexible. I'm not going to introduce different task systems for the company hotline (non-software customers), software customer support and the developers' internal stuff. My cow-orkers would kill me before i finished the sentence, i think.
In other words: the basic design has to be very, very simple, but extensible.
There is nothing wrong with having a web interface (aside from the simple fact that i will not use it if i don't need to), but being forced to use one? Heck, no.
You can't easily put something else on top of a web interface, but you can create some kind of web interface for almost everything without too much work.
I also went through my personal todo list:
    ftpcopy ftp://re.mo.te/ ./re.mo.te/
    (cd re.mo.te ; find . -type f -print0) | xargs -0 ftpdelete ftp://re.mo.te/

Not using -print0 is also possible; files with \n inside the file name are then just not deleted (i suspect they are impossible to download anyway).
    1. update table where key = x;
    2. if (ok)
    3.     delete from table where key = x;

Bug? No, a delete trigger. Not a single word about it in the code, of course. Ugh.
itp: I think you'll be surprised by the number of people who neither use nor like GNOME (or KDE). Not everybody needs the value added by those toolkits.
mjs: You forgot to take some things into account, too. A better procedure may be to:
Finished the xmodem sending. It works. It actually is even a little bit faster than lsx from the lrzsz suite. Why? I didn't think about performance ...
Error handling is quite simple: count the number of errors per block. If it reaches a fixed limit then abort. Good enough? At least no worse than lsx.
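A minimal sketch of that logic, with made-up names (MAXERRORS, send_block() and friends are illustrations, not the prototype's code):

    #define MAXERRORS 10

    enum xm_result { XM_ACK, XM_NAK, XM_CAN };

    /* placeholder for the real transmitter: send one 128- or 1024-byte
       block and return the receiver's reaction. */
    static enum xm_result send_block(void)
    {
        return XM_ACK;
    }

    /* count errors per block; abort once the fixed limit is reached. */
    static int send_one_block_with_retries(void)
    {
        int errors = 0;
        for (;;) {
            switch (send_block()) {
            case XM_ACK:
                return 0;                /* block accepted, go on */
            case XM_NAK:
                if (++errors >= MAXERRORS)
                    return -1;           /* too many errors: abort */
                break;                   /* retransmit the same block */
            case XM_CAN:
                return -1;               /* receiver cancelled */
            }
        }
    }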
Thought about doing a fallback from 1024 to 128 byte blocks just a little bit too late. Well, no other xmodem implementation i'm aware of can do it either, but that is no excuse. I feel a bit stupid. But on the other hand: it's a prototype, and if there is a reason to prototype something then it's to show stupidities early, right?
Anyway, it's time to stop dealing with computers for today.
There actually is a problem. tftpd does a getpwnam(). The machine i did some tests on today had a few thousand users in /etc/passwd, and the entries with high user IDs, like nobody, come at the end. The getpwnam() took quite a bit of time, about 0.1 seconds. The whole select / recvfrom(PEEK) / fork / hosts.allow / exec / recvfrom / getpwnam / fork / exit() cycle took almost 0.2 seconds.
This got interesting when a whole room full of equipment needing TFTP access was booted: 15 machines requested images or configuration files in the same second. And quite a number of them seem to have a timeout of one or two seconds (which is stupid). That was not all: some machines took longer to boot. About 50 machines requested their configuration within 10 seconds.
Temporary "solution": set[ug]id(hard-coded-number), IP
filter
instead of hosts.allow.
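In code the workaround amounts to something like this (a sketch with made-up numbers; 65534 is only an assumption about where nobody lives on that box):

    #include <unistd.h>

    /* skip the getpwnam() lookup entirely and drop privileges
       to hard-coded ids. */
    static void drop_privileges(void)
    {
        if (setgid(65534) == -1) _exit(111);   /* assumed gid */
        if (setuid(65534) == -1) _exit(111);   /* assumed uid; setuid() last */
    }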
Long-term solution: inetd "wait" mode has to die. inetd should read the packet, create a pipe and feed the packet to its child through the pipe (it can also set some environment variables containing IP addresses, port numbers and host names).
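A sketch of what i mean, with most error handling left out (serve_one_datagram and the REMOTE_ADDR / REMOTE_PORT variable names are made up for illustration, not an existing interface):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* inetd side of the idea, for a UDP service in "wait" mode */
    static void serve_one_datagram(int sock, const char *prog)
    {
        char buf[65536];
        struct sockaddr_in peer;
        socklen_t peerlen = sizeof(peer);
        ssize_t len = recvfrom(sock, buf, sizeof(buf), 0,
                               (struct sockaddr *)&peer, &peerlen);
        if (len < 0)
            return;

        int fds[2];
        if (pipe(fds) == -1)
            return;

        if (fork() == 0) {                     /* child */
            char ip[INET_ADDRSTRLEN], port[16];
            inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
            snprintf(port, sizeof(port), "%u", (unsigned)ntohs(peer.sin_port));
            setenv("REMOTE_ADDR", ip, 1);      /* names made up */
            setenv("REMOTE_PORT", port, 1);
            dup2(fds[0], 0);                   /* the packet arrives on stdin */
            close(fds[0]);
            close(fds[1]);
            execlp(prog, prog, (char *)0);
            _exit(111);
        }
        close(fds[0]);
        write(fds[1], buf, len);               /* feed the packet to the child */
        close(fds[1]);
    }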
btw: utftpd supports a "user.group" notation, meaning it does an extra getgrnam() if this is used. No wonder that it was even slower and had more problems. It also supports numbers, but they weren't used.
btw: it wasn't much fun to debug that problem on a machine in Japan. Doing useful work with 200 ms turnaround time can't be called recovery.
In addition i played around with xmodem today. I'm trying to find some kind of design for an X/Y/Zmodem library which does the state machine stuff internally. I want the library to just do I/O, and return to the caller as soon as this is done. The caller then does whatever it wants to, finally calls select or poll, and returns into the library as soon as there is something happening on one of the file descriptors.
I'd bet a euro that the state machine for zmodem will be very "interesting", so i played with xmodem. One question remains: why? The answer may be pretty simple: "because it can be done" (is this really a good answer?).
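To make the intended calling convention concrete, a sketch (every identifier here is made up, there is no such library yet):

    #include <poll.h>

    /* possible shape of the interface */
    enum xm_want { XM_WANT_READ, XM_WANT_WRITE, XM_DONE, XM_FAILED };

    struct xm_session;                          /* opaque state machine */

    /* does as much I/O as currently possible, then says what to wait for */
    extern enum xm_want xm_step(struct xm_session *s, int fd);

    /* the caller owns the event loop */
    static int drive_transfer(struct xm_session *s, int fd)
    {
        struct pollfd pfd = { .fd = fd, .events = 0, .revents = 0 };
        for (;;) {
            enum xm_want w = xm_step(s, fd);
            if (w == XM_DONE)
                return 0;
            if (w == XM_FAILED)
                return -1;
            pfd.events = (w == XM_WANT_READ) ? POLLIN : POLLOUT;
            poll(&pfd, 1, -1);                  /* wait, then re-enter the library */
        }
    }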
Things got worse. Really worse. Three more problems and not even one solved. I left for the weekend after only ten hours, took Tronn with me and brought him to the railway station, which really wasn't a bright idea: he couldn't stop talking about work.
I then went to a lake, to swim a round. This time something was OK: i expected and found the water to be cold, but not too cold. Fine: i'm feeling well now.
I'll have to decide whether ftpcopy shall learn to delete files it copied from remote. I dislike doing this since ftp is fundamentally insecure, but it is the most often requested feature. Which is quite strange.
lrzsz hurts (?) again: in the last months i got a bunch of reports that it doesn't seem to work over telnet connections anymore (it used to work for exactly the same people it doesn't work for anymore). I'm quite sure that people just need to tell their telnet clients to properly disable certain misbehaviours, but maybe i'm wrong.
I also got reasonable requests for more useful logging and status display. The problem here is that i don't want to break things / change behaviour before i do the rewrite (which may never happen. I have wanted to start it for more than a year now, and i don't see that i'll get the time soon). I think i'd better keep it stable ...
This needs more thinking.
Well, i suppose the answer is that i need the time for the other thing i'm working on. And that i need to remember that ftpcopy is meant to mirror something and not meant to be the last word in overfeatured FTP clients. What about another client with the ability to delete a file if it has a certain size and modification time? Some shell script could then decide to delete files after ftpcopy ran ... and the script might be even smarter than ftpcopy ever could be. I guess that's more like UNIX.
This diary might even turn out to be useful. Interesting
idea, indeed.
Thanks, Raph.
Two horrible days at work. I found a problem 4 days ago, and i thought it would be a major one. But i found a different thing 36 hours ago, and the first one now looks very small in comparison. Two quite simple mistakes, but the software can't recover from them automatically ...
I'm tired - two long days, and last night i only slept for about 4 hours, then thought about code for a few additional hours, which didn't make me feel good (especially as it was that code). Let's guess: at the end of the day i'll be even more tired, and will have taken a large step towards a coke addiction.
On the bright side: the code in question is not my code. I just happen to not be on vacation. Bad luck, i guess. Or is that piece of software really so bad that it needs hand-holding and band-aiding at all times, not only during the developers' vacations? [i'll possibly think differently as soon as i'm feeling better again, but i must admit that a bit of flaming makes me feel better now]
In summary: this might cost me more than a week of time. It will delay the update/rewrite of my software by about two weeks. I'm behind schedule anyway (schedule? Forget about it, there is no timetable with the slightest connection to reality for that project). Take version 1, using developers 1, 2, 3, protocol X, languages A, B, C. Make version 2, doing everything right this time, using developers 1, 2, 3, 4, 5 plus a few people with additional ideas and opinions, protocol Y and languages A, B, D. Change Y to Z long months after the project should have been finished. Oh yes, 1, 2, 3 are quite busy doing other stuff. 4 and 5 are quite new, and at least 4 of the 5 aren't good teamworkers. Second-system syndrome alert. Death march alert.
And i don't even get the time i need to work on that! It's time to ask for my interim report, i think.
I'm really tired.
Yesterday i somehow managed to get about an hour to work on the string hashing library. Surprisingly enough one hour was enough to do a few space optimizations and package the whole thing. I sent Fefe a notice about it, he might be interested in including it in his libdjb project.
Today, on my way to work: well, i now have the dynamic string hashing library i wanted to write for about a year. Fine, but what about a fixed-size record hashing library? The string length makes for about 8 bytes of overhead per record now. Let's see whether there is a year between idea and realization this time.
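To illustrate where those 8 bytes come from (the field layout is an assumption for illustration, not the actual library's):

    #include <stddef.h>
    #include <stdint.h>

    /* variable-length keys: every entry carries its own hash and length,
       roughly the 8 bytes mentioned above */
    struct str_entry {
        uint32_t hash;
        uint32_t len;
        /* len key bytes follow directly */
    };

    /* fixed-size records: the length is a property of the table,
       stored exactly once */
    struct fixrec_table {
        size_t recsize;          /* one length for all records */
        size_t slots;
        unsigned char *data;     /* slots * recsize bytes */
    };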
Later. Fefe answered; he is interested, and is also interested in the getopt clone. This means i have to clean it up ... and update the documentation, which might be a good thing.
A completely unrelated note: advogato lacks a spell checker. Ok, that's not fair: improper spelling is my fault.
While i admit that the integrated development approach has its value (especially for learners - the Pure-C IDE, back in the "old days" ;-) was quite helpful): i haven't seen a single one which doesn't limit you in some way. In almost every larger project you need to do something special which doesn't fit the rules or hits some internal limit of the IDE.
With the toolbox approach you just exchange the one tool which doesn't fit (obvious example: there are quite a number of make implementations and replacements). This doesn't really hurt. In the integrated world you have to bite the bullet: either use a workaround or replace the whole suite. The latter is quite expensive since you may not only have to learn a new editor but may also lose version control history or other valuable features of the old development environment. The former gets expensive over time.
In the business world the decision is easy: an integrated suite costs 1000$, plus 1000$ per year of maintenance, plus say 3000$ for training. That's it. (yeah, i know, that's not the full truth).
The toolbox? 10 different programs? Can you calculate the cost and maintenance of that? And wait - parts of the system may be exchanged? We need a policy against that ...
Management thinks differently, and management decides. In the open source world there's a different kind of management, and decisions are based on technical reasons or experience instead of policy. Not to mention platform independence, of course.