Virus Design
Posted 25 Mar 2002 at 13:28 UTC by Fare
What would it take to develop a really nasty virus that could take over
the world of Free Software based systems? I tried to answer that
question in the following article: Design Ideas
for a Future Computer Virus. In short -- a lot. We're not going to
have the same problems as proprietary systems anytime soon.
I had previously written an article (in French)
"Les virus
informatiques comme sous-produits des logiciels exclusifs"
("Computer viruses as by-products of proprietary software") about
why proprietary software is so much more prone to virus
infection than free software (I'm looking for a translator to English -
wink wink) - because of software distribution problems directly related
to the proprietary software model.
Now, I wanted to explore what it would take to make a really successful
virus for systems without proprietary barriers to secure software
distribution. Channeling my hidden Evil Overlord personality,
I thus tried to lay down the general design of an efficient virus.
Problem is, implementing this design is no easy task:
anyone who ever had the means to implement it
could be much more successful (and safer)
pursuing honest, constructive activities.
Hum. Maybe I should have tried Badvogato.
hmm, posted 25 Mar 2002 at 16:49 UTC by timcw »
(Apprentice)
I think it's somewhat of a stretch to say free software based systems
are better at deterring viruses than proprietary ones. I think it is
basically a matter of complexity. When you say proprietary I am sure
you mean "Windows" (or maybe MacOS, though I know nothing of Apple
computers). Do SunOS, Solaris, and other *ix OSes have viruses? I
don't believe it is impossible to have a virus adapt to many different
environments (hard, yes--impossible, no). Take the original internet
worm, for example. IIRC, it used a number of different systems and had
knowledge of quite a few different exploits.
I believe there are no viruses written for free software systems because
1.) complexity, and 2.) user base. The differences between various
Linux distributions alone make it hard for even vendors to
coordinate software. Virus writers must not only decide which
distributions/systems to target, but also how to break the traditional
Unix security mechanisms (none of which are a direct result of the "free
software method of producing software"). The fragmented user base of
the free software world is not really a good target when so many people
still use Windows, which stays relatively the same across versions.
One reason I believe Windows gets the most viruses (besides the number
of users it has) is its struggle with complexity. How can
Windows be secure while at the same time remaining simple to use for
just about anyone? Does greater security = less simplicity? I tend to
think so. There is only so much you can hide from the user until system
security becomes a matter of which assumptions the system holds (i.e. it
is a safe assumption from a cracker's perspective that on a *ix system
the password hashes sit in world-readable /etc/passwd, unless the
administrator installs something like shadow passwords). It is the
differences between systems that make them secure.
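The assumption above is easy to test from code. The following is a minimal sketch (the function name and file paths are illustrative, assuming the classic colon-separated passwd format): if every password field in /etc/passwd is a placeholder like "x", the real hashes have been moved to a root-only shadow file, and the cracker's assumption fails.

```python
# Sketch: detect whether a Unix-like system uses shadowed passwords.
# Assumes the classic /etc/passwd layout: name:password:uid:gid:gecos:dir:shell

def passwd_is_shadowed(passwd_path="/etc/passwd"):
    """Return True if every password field is a placeholder ('x', '*', '!'),
    meaning the real hashes live in a root-only shadow file."""
    with open(passwd_path) as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue
            fields = line.split(":")
            if len(fields) > 1 and fields[1] not in ("x", "*", "!"):
                # A hash (or empty password) sits in the world-readable file.
                return False
    return True
```

On a shadowed system this returns True; on an old-style system, where hashes are exposed to any local user, it returns False.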
Hopefully I am making sense, but if not I present you this example: I
have a hard drive which contains a Red Hat Linux from '97 or so (my old
system I used). It still boots fine. My "new" system is currently
running a Slackware. Both of these systems have heavy user
modifications. One day I accidentally booted my Red Hat drive. What
happened? I could not use it. I remember my passwords, but I
did not remember how my text editors were set up, or my other
modifications. I hit backspace and got weird characters on the screen;
not what I was expecting. All of these subtle differences create a
barrier to entry. The fewer the differences, the lower the barrier. If you
see any Windows machine, more likely than not you will know how to use
it instantly.
If by "original internet worm" you mean the Morris Worm, it
only had knowledge of a single hole in sendmail. This was still enough
to bring a large portion of the internet at the time to its knees.
Back on topic, the problem with virus spread in any population is that
of sameness. In the same way that a pig virus (for example) often won't
infect humans, a computer virus targeted at windows will not normally
infect a unix box. That doesn't mean that there aren't unix viruses.
Look at the lion worm,
which was targeted at Linux installations with certain versions of
bind. While I haven't studied this matter in detail, it seems that it
isn't so much about being proprietary or open, but rather how common
your system is. (In 1988, I'm guessing that 99% of servers on the
internet were running a similar version of sendmail, so sendmail is a
likely target for spreading a virus all over the internet.)
Sorry, I wrote that too quickly. Morris propagated through a hole in
sendmail, a stack overflow in fingerd, and by cracking passwords and
using rsh. I must have confused my incidents... :)
(The code was also portable across VAX and BSD 4.2/4.3.)
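The password-cracking leg of that propagation was a plain dictionary attack: hash each candidate word and compare against the stored hash. A minimal sketch follows; note that SHA-256 here is only a stand-in for the salted crypt(3) DES scheme the Morris worm actually attacked, and the wordlist is illustrative (the real worm used a small built-in dictionary plus account names and GECOS fields).

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Return the first word whose hash matches target_hash, else None.
    SHA-256 stands in for the salted crypt(3) DES hashes of 1988."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None
```

The defense is the same now as then: shadowed hashes (so there is nothing world-readable to attack offline) and passwords that no wordlist contains.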
Epidemiology, posted 25 Mar 2002 at 23:40 UTC by thorfinn »
(Journeyer)
It's well known amongst biologists that monocultures (eg, wheat fields) are far more vulnerable to plagues that affect a large proportion of the population than diverse ecosystems (eg, a permaculture farm, or a temperate rainforest).
timcw is correct in that the real factor that's going on is the complexity... this is also made fairly clear in fare's article, that it's the complexity of the environment that makes the task of the virus writer much more difficult.
One of the strengths of the free software/open source world is that there is much diversity... just because a particular system is running bind 8, for example, is no guarantee that it is infectable in the same way, since it might be running under FreeBSD, NetBSD, OpenBSD, Linux, BeOS, MacOSX, Solaris, Irix, SunOS (unlikely, but possible), AIX, hell, I suspect even Win32, and plenty plenty more.
This is also a unix vs win32 philosophical difference... Unix applications are typically small, and tasked with a simple job or two, and the behaviour of the system as a whole is an emergent phenomenon, arising "as if by magic" out of the interaction between these individually simple parts.
One of the classic indicators of emergence and complexity is that the system is robust by nature, because it is much harder to perturb a whole lot of different things than it is to perturb one big thing.
Virus writers have to target a few things at most. To write a virus with as much embedded "world knowledge" as fare is suggesting would entail writing a virus with so many side-effects that it would be quickly detectable in the complex ecosystem that makes up any unix system, no matter how cleverly it tried to "hide".
And that's the real reason that viruses are far less common in the free-software/unix world... the complexity and lack of monoculture means that viruses cannot infect a wide population of systems, because they cannot adapt to every different combination of circumstance.
The Morris worm attacked Sun-3 boxes as well as Vaxes, and in fact
it delivered a specialized payload to infect each one.
Nonsense, posted 28 Mar 2002 at 17:58 UTC by ncm »
(Master)
It would be quite easy to come up with a devastating virus/worm
for Unix systems, and the Unix security features wouldn't help much.
(Same applies to NT, of course.) What protects us is mostly that
most script kiddies who might attack are not mentally equipped, the
target is less attractive, and Microsoft and the NSA haven't yet
bothered to put the resources into it.
Pick any hole in some package shipped on Red Hat 7.2; that gives
you a big enough target population, and only a fraction of RH users
have patched known holes. Crack into a web site that is popular
among the target population. Using doctored output from the web
server, exploit a proprietary browser or browser-plugin hole (e.g.
Netscape 4 and Flash) to install a worm. It can query your web
server to discover other targets, and it can scan any LAN it
finds itself on. Have your worm crack into all the target machines.
Given a population of compromised accounts and machines, you can
use the access you gain by sniffing passwords to place trojan versions
of RPMs, which are always installed with root privileges, on popular
repositories for download by others. Since none of this need
attract any attention, you could end up in control of millions of
machines, with no one the wiser. Who knows, maybe it's already been
done!
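The LAN-scanning step in ncm's scenario is nothing exotic: it is a plain TCP connect sweep. A minimal sketch, with illustrative names (a real worm would sweep whole subnets and fingerprint what it finds):

```python
import socket

def scan_host(host, ports, timeout=0.3):
    """Return the subset of `ports` accepting TCP connections on `host`.
    A worm that finds itself on a LAN can sweep nearby addresses this
    way, looking for reachable services to probe for known holes."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
        finally:
            s.close()
    return open_ports
```

That such a sweep needs only a dozen lines of standard-library code underlines ncm's point: none of this requires unusual skill, only intent and a population of unpatched targets.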
Sure, this isn't much like your typical MSWord macro virus, but it's
still easier than working. There is no justification for complacency
about Unix security.
Not Nonsense..., posted 2 Apr 2002 at 07:46 UTC by thorfinn »
(Journeyer)
I do agree with ncm, that we shouldn't get complacent. However, that's orthogonal to the point.
The "Open Source"/"Free Software" community isn't "RedHat 7.2". Yes, RedHat 7.2 is pretty damn popular, and there isn't anything magic stopping someone from writing a virus/worm/thingy that exploits RedHat security holes (or Debian security holes, or Slackware security holes, or SuSE holes, or FreeBSD holes...) and does something nasty. Not at all!
All that's being said is that if someone does that, so what? It's not going to affect all those other systems out there... and it is a lot harder for a virus to do something like infect the whole system simply because a random user is reading their email.
Nobody's saying it's impossible. What's being said is that the Free Software/Open Source environment is more hostile to viruses than the Corporate Monopoly environment. That's all. And that's definitely true.
That's not a reason for individuals to become complacent - far from it - the diversity present in the Free Software/Open Source world is maintained by individuals who aren't complacent! Proportionally, though, there are far more non-complacent individuals in the FS/OS world than in the non-free world.