A Modest Proposal, or not
Posted 18 Nov 2003 at 22:45 UTC by ncm
With spammers now installing backdoor mail-transfer agents and
DDOS apparatus on thousands of compromised machines around the
world, any hope of staunching spam at the source, by identifying
spammers' ISPs or IP addresses, is gone. Perhaps the only remaining
technical solution is to disable the compromised hosts --
as well as those soon to be compromised.
Must we become part of the larger network security problem if we
are to control spam?
Arguably, putting a vulnerable host on the internet is itself a
hostile act. When anyone can take control of a machine
and use it to attack other hosts on the net, it should be the
owner's responsibility to keep it off the net, keep it firewalled,
or fix it. When the owner is demonstrably lax in that responsibility,
and cannot be contacted, is it not society's right and responsibility
to act to mitigate the threat? Most owners, even if contacted,
will not act as long as their own internet usage is not too badly
affected.

Unfortunately, just disabling the already-compromised hosts
does not suffice. Once spammers recognize that this is going on,
they will begin to close the security holes that they used to get
control of the hosts they use. It will be necessary to strike
first, to deny spammers access to vulnerable hosts.
Also unfortunately, it does not suffice to close the
vulnerability that the spammer would have used.
Vulnerable hosts tend to have multiple vulnerabilities, and
spammers are likely to use other ports of entry. Furthermore,
owners re-install their operating system frequently (in response
to virus load or corruption by proprietary installers), wiping out security improvements.
Sadly, what would be necessary is to identify all storage devices and
wipe them all clean. Many owners would re-install from backups, to
the extent they have them, but if they put the hosts on the
internet they would find them wiped clean again. People would
eventually become reluctant to put a vulnerable host on the open
internet.

Can this approach really work? It would certainly get the
vulnerable hosts tucked behind firewalls, in short order, and
add pressure to switch to more secure software for internet
hosts. That would be a good thing, albeit at a high cost.
The cost could only be justified if it sufficed to deny
spammers access to mail-transfer-agent farms. But would it?
Sadly, no. While the White Hats can, like the spammers, scan
the net for vulnerable hosts, and install their own code to
clean up, spammers can also install their backdoors by other
means, such as via booby-trapped e-mail and web-pages
exploiting holes in common e-mail clients and web browsers.
They can install their backdoors in (apparently) regular programs,
and persuade the gullible to install them normally. They can break
into web sites where people download firmware updates and
"freeware", and backdoor the installers. (Indeed, the hosts that
spammers control now might have been taken over by these
means, and not by exploiting internet-service vulnerabilities.)
While remotely erasing a host that is an attractive nuisance may be
ethically defensible, spamming to distribute booby-trapped binaries,
or links to booby-trapped web sites, clearly is not. One might
argue that anyone using vulnerable client software or installing a
suspect program is just as culpable as those putting vulnerable
hosts directly on the net, but installing the fix this way requires a
plainly deceptive act -- persuading a person to do what they would
not do if they knew your intent.
The end result of this approach, therefore, would be that the
spammers would use their unethical means to infect hosts that are
(otherwise) safely firewalled off from the wild net, but which
nonetheless continue to distribute spam on their behalf. While
worms would find the net a less hospitable place, and many more
people would find enough reason to switch to secure software, the
spammers would hardly be inconvenienced.
* * *
This exercise is not meant to advocate any action. Rather, it
establishes an outpost. If even this extreme measure -- actively
wiping out vulnerable hosts -- cannot slow spam, what lesser
(technical) measure can we hope will succeed?
Preventing Receipt, posted 19 Nov 2003 at 15:30 UTC by ncm »
(Joao Miguel Neves, jneves, wrote this:)
Spam exists because of three things:
1) There are people willing to send it.
2) There is a way to distribute it.
3) People receive it.
The reason for 1) is money. And that incentive doesn't disappear as long
as even one person in 10 000 falls for the spam...
Stopping 2) doesn't work, as you point out. And legislation is useless
here because of the use of illegal practices...
So the only workable solution seems to be attacking 3). The solution,
from what I've seen, seems to be something like TMDA with its
challenge/response mechanisms. Ways to circumvent these challenges might
be devised, but I think we can learn how to avoid that from
registration processes like msn or yahoo.
This will only work at large scale if ISPs start using these mechanisms
as a standard. But with spam becoming as large a problem as it is, I'm
betting that spam-free ISPs will start appearing until the big ones are
forced to implement the same mechanisms.
Challenge/response doesn't work, however, because the only way to stop automatic challenge responses is to make the challenge something that can't be programmatically interpreted. In most cases, this means making an image that can't be easily scanned/parsed. However, not everyone uses a visual display - many users *can't* use a visual display. You can't just include an audio version, because some of those users may not be able to use that, either.

In the end, the only thing that can always be given to the legitimate user, no matter their needs, is something programmatically interpreted, like text, which can be sent to the appropriate output device (speech synthesizer, braille device, video display, whatever is appropriate). But then it can be interpreted by scripts and such, so spammers can handle it.
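The point about machine-readable challenges can be made concrete. The sketch below (a toy arithmetic challenge, not any real TMDA mechanism) shows how trivially a script answers an accessible, text-only challenge:

```python
import re

def solve_text_challenge(challenge: str) -> str:
    """Answer a simple accessible text challenge automatically.

    A challenge that can reach a speech synthesizer or braille
    device must be machine-readable -- and anything machine-readable
    can be parsed by a spammer's script just as easily.
    """
    m = re.search(r"(\d+)\s*(?:plus|\+)\s*(\d+)", challenge)
    if m:
        return str(int(m.group(1)) + int(m.group(2)))
    raise ValueError("unrecognized challenge")

print(solve_text_challenge("To confirm you are human, what is 3 plus 4?"))  # 7
```

Any challenge simple enough for a speech synthesizer or braille device to render is simple enough for a spammer's script to parse.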
Probably the only thing that *will* work, and is still programmatically interpreted, is cryptographically signed messages, with the signatures matched against a "real person" database. I.e., the user must register their personal, verifiable information with some (trusted) central authority, who then allows them to generate/revoke signatures, which are attached to mail. Spammers could only ever use their own personal signatures, which they wouldn't want to do, of course. If these signatures are stored in secure ways (TCPA chips, for example), we wouldn't even have to worry about clueless casual users and their lack of security knowledge and/or insecure OSes letting their private keys get stolen, so spammers shouldn't even be able to forge a signature.
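As a rough illustration of that registry idea -- using a symmetric HMAC as a stand-in for the public-key (e.g. GPG) signatures a real system would use, with invented names throughout -- verification and revocation might look like:

```python
import hashlib
import hmac

# Hypothetical central registry: maps a sender's identity to their
# signing key and revocation status.  A real deployment would hold
# public keys and verify GPG-style signatures; the HMAC here is only
# a stand-in to keep the sketch self-contained.
REGISTRY = {
    "alice@example.org": {"key": b"alice-signing-key", "revoked": False},
}

def sign(sender: str, body: bytes) -> str:
    key = REGISTRY[sender]["key"]
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify(sender: str, body: bytes, signature: str) -> bool:
    entry = REGISTRY.get(sender)
    if entry is None or entry["revoked"]:
        return False  # unknown sender, or signature revoked for spamming
    expected = hmac.new(entry["key"], body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

sig = sign("alice@example.org", b"hello")
print(verify("alice@example.org", b"hello", sig))   # True: accepted
REGISTRY["alice@example.org"]["revoked"] = True
print(verify("alice@example.org", b"hello", sig))   # False: revoked
```

Revoking the key in the registry immediately invalidates every signature it produced, which is how reported spammers would be cut off.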
The problem with this system, of course, is that it destroys anonymity. There are lots of truly legitimate uses for anonymous email (political dissent in certain states, as one example), so a good many users may find they can't just filter out unsigned mail (or, at least, they'll keep a copy of unsigned mail in their SPAM folder or something), and spammers will still have a medium to send mail. The only good thing is we'd probably have no false negatives, just the false positives from anonymous mails.
The system also requires that a *trusted* central authority be built, that all mail clients support GPG signing (or whatever other method), including (where possible) TCPA support for security, that mail clients by default check the trusted central authority's keyring for new mails, and so on. It probably wouldn't be hard at all to build Open Source/Free Software client utilities - it's just that central host that's hard to manage. And no, a community-run "web of trust" does *not* work - look at how many people on Advogato get ratings they don't at all deserve, for example. (Why the heck am I at Journeyer level? I'm not part of any important Free Software project...) It might work for a small number of users (especially when you can report spam, and the signature used in it is then revoked), but it still would not be ideal. The central authority would have to be similar to how you get SSL certificates now, but it would also have to manage to be very cheap (ideally, free), since everyone and their cousin would need a key just to use email at all.
TCPA, posted 21 Nov 2003 at 17:25 UTC by werner »
For ages, people have been talking about creating databases of real users, commonly referred to as PKI, and have never succeeded.
A Fritz chip won't help either (it is not useful for any ethical purpose): if a spammer can compromise a machine, he will also be able to compromise one of the applications making use of the TPM, and thus take over the mail-sending facility. Signing a program won't help against vulnerabilities in the signed code.
As long as we're referring to well known, strongly worded tracts...
Why do you care about some centralized database of people? All you have to care about is your own personal database. If you want to share that database with a few trusted friends, or the whole world, do so. If they find it useful, they can use your database too.
The problem is not PK, it's PKI.
Such a database is not particularly invasive of privacy because you can create a brand new id on the fly. It won't have the same reputation as a previous ID, but it also won't be attached to all the same things, and so won't be considered the same person in the world.
These ideas and principles are what I'm adhering to in my CAKE project.
the measures being taken to stop spam in razor and pyzor, which are automatically used by spamassassin,
are pretty good. the integration between exim4-daemon-heavy (see packages.debian.org) and spamassassin is also pretty good, even though the maintainer of exim4 makes life difficult for people wishing to install and integrate these two packages.
spamassassin also integrates well with other mail transports.
razor and pyzor are basically real-time distributed spam detection systems. they receive reports from trusted individuals who distinguish spam from non-spam and the information is collated and then handed out to anyone who asks, in order to classify messages. all very cool.
it's also the only way.
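a toy sketch of the report / lookup cycle (exact sha-256 digests stand in for razor's fuzzy signatures here; names are illustrative, not the real razor or pyzor protocol):

```python
import hashlib
from collections import defaultdict

# toy stand-in for a razor/pyzor server: trusted reporters submit
# digests of messages they judge to be spam; clients then ask
# whether a given message's digest has been reported.  real razor
# uses fuzzy signatures so near-duplicates still match; an exact
# sha-256 digest is used here only to keep the sketch short.
reports = defaultdict(set)      # digest -> set of reporters

def digest(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

def report_spam(reporter: str, message: bytes) -> None:
    reports[digest(message)].add(reporter)

def is_reported(message: bytes) -> bool:
    return bool(reports.get(digest(message)))

report_spam("trusted-reporter-1", b"BUY NOW!!!")
print(is_reported(b"BUY NOW!!!"))   # True
print(is_reported(b"hi mum"))       # False
```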
the author of the exim4 / spamassassin integration also recommends that you consider installing spamassassin so that it detects at SMTP time rather than after the message has been delivered. the reason for this is that typically spam is delivered in batches to hundreds of recipients. it is unlikely that legitimate messages are being sent to you if there are hundreds of recipients, therefore the decision can be taken to tell the SMTP sender to ONLY submit one message at a time.
and also to "hang" the TCP connection for one minute, two minutes, three minutes etc. etc. so that they have to make more and more TCP connections, waste more and more resources and, hopefully, the compromised host (particularly if it's a windows machine) will run out of resources and keel over.
in other words, having identified that a host is being used to send spam (without permission) it becomes possible to slow the machine down and/or report it via the razor / pyzor system etc. etc.
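the one-recipient-and-tarpit policy above can be sketched like this (illustrative names only, not the actual exim4 / spamassassin configuration):

```python
# sketch of the SMTP-time policy described above: once a sender
# looks like a bulk spammer, force one recipient per message and
# stall each successive connection a little longer (a "tarpit").
MAX_RCPT = 1            # recipients allowed per suspect message
TARPIT_STEP = 60        # extra seconds of delay per repeat connection

connection_counts = {}  # peer IP -> how many times it has connected

def tarpit_delay(peer_ip: str) -> int:
    """seconds to stall this connection before answering."""
    n = connection_counts.get(peer_ip, 0)
    connection_counts[peer_ip] = n + 1
    return n * TARPIT_STEP      # 0s, 60s, 120s, ...

def accept_rcpt(recipients_so_far: int) -> bool:
    """refuse additional RCPT TO commands beyond the limit."""
    return recipients_so_far < MAX_RCPT

print(tarpit_delay("192.0.2.7"))        # 0  - first connection passes freely
print(tarpit_delay("192.0.2.7"))        # 60 - each retry waits a minute longer
print(accept_rcpt(0), accept_rcpt(1))   # True False
```

in a real MTA the stall would be applied inside the SMTP ACLs; the point is just that each retry costs the spammer more resources than it costs you.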
regarding classification of users / hosts, we know that PKI is seriously stretched beyond acceptable bounds into paranoia where the PKI trusted server needs to be video-taped 24 hours a day, locked in a vault, etc. etc. in order to be trusted: who's going to pay for such _just_ in order to guard against spam?
it just so happens that there is a distributable trust system already around, it's called trust metrics (advogato.org is running a centralised trust metric system but it's easily distributable) and there's also something called keynote which is a way of evaluating digitally signed permissions heuristics. the pieces of the puzzle needed are already there :)
it's not so much the _people_ that need to be classified as the hosts themselves and also the distributed spam servers running pyzor and razor. basically, what we have at the moment works very well (it's just that a lot of people don't _use_ spamassassin with pyzor and razor). however, as the usage expands, the load on the pyzor and razor servers will increase, consequently the number of pyzor / razor servers will also need to increase. the vetting of people _running_ such systems will end up becoming lax, and at _that_ point it will become necessary to run a distributed trust metric / keynote system, put some heuristics in that will better verify the spam reports coming in [e.g. only accept reports that come in from two or more pyzor/razor reporters].
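the two-or-more-reporters heuristic mentioned above is tiny to implement; a sketch (hypothetical names):

```python
from collections import defaultdict

# the heuristic suggested above: only accept a spam report as
# confirmed when two or more independent reporters agree, so a
# single rogue or lax reporter cannot poison the database.
MIN_REPORTERS = 2
votes = defaultdict(set)        # message digest -> reporters who flagged it

def add_report(digest: str, reporter: str) -> None:
    votes[digest].add(reporter)

def confirmed_spam(digest: str) -> bool:
    return len(votes[digest]) >= MIN_REPORTERS

add_report("msg-digest-1", "reporter-a")
print(confirmed_spam("msg-digest-1"))   # False - one report is not enough
add_report("msg-digest-1", "reporter-b")
print(confirmed_spam("msg-digest-1"))   # True
```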
so, the system that is in place at present is, in my opinion, the only sensible approach to take, and it also works. i've yet to see an alternative approach that isn't fraught with legal or impractical issues.
re: real-time, posted 25 Nov 2003 at 22:56 UTC by robocoder »
razor/pyzor doesn't necessarily need human reporters ... a honeypot email address could suffice. (In fact, the same address could also be used to automatically collect samples for bayesian-filter training, or assembling some corpus for testing content-based filtering rules.)
I'm surprised no one has said this yet, but why not just disconnect all Windows machines? I'm not anti-Windows here, but Microsoft isn't known for its security-minded way of thinking; the majority of the crap that is ejected onto the internet from compromised hosts comes from Windows machines, and the majority of total crap (from compromised hosts or not) comes from Windows machines too.
If you really want to stop spam, though, convince the Geneva War Crimes Board to make it a war crime to bulk-send spam (>100 = bulk?). Yes, I know it's an impossible task, but it at least would be entertaining to try.