Trojans: No More

Posted 1 Aug 2002 at 18:26 UTC by Capzilla

I am sick of trojan horses. Recently there seems to be a wave of trojans getting into open source projects, and while they are often discovered and resolved quickly, there is still a significant window of compromise.

That's why I intend to write a tool that will automate auditing of changes to source repositories such as CVS, as well as release tarballs. This will aid system administrators, packagers and end users who are not satisfied with any of the existing tools.

While you can generally trust official releases and package/installer tools that work with distributed signatures, you cannot guarantee there is no compromise.

The only way to be 100% safe is to verify and audit every line of code, on your own machine. This might be extreme and will still not be the ultimate tool, but it should provide significant aid to end users and packagers to automate audits.

Read more on the Capistro web page.


Refinement and Thoughts, posted 1 Aug 2002 at 19:49 UTC by neil » (Master)

Well, after a good IRC discussion with Capzilla on this idea, here is my analysis: the premise is that nobody can be trusted, so the actual data you receive must be examined. Personally auditing a large release like KDE 3.0.2 line by line would be impossible, so what we should be able to do is run an automated check. Further, people working on a large organized project like KDE should be able to regularly monitor their latest sources for bad stuff.

What bad stuff? That's the trick. :-) Capistro will need a database of possibly bad things, like a spam database.

Where bad stuff? Malicious things can be put into data files, configure checks, headers, source files, interfaces, anything. So everything needs to be checked. How do we know what to check? Only automake knows every file that gets installed, and every dependency thereof. So Capistro must be integrated into automake itself, checking its own inputs for bad code, then checking the dependency trees.

How do we catch the bad stuff? Capistro will spit out warning logs, and the user browses the log after running it. Then the user goes and checks every warning, and decides which are OK. Innocent uses of sockets or whatever can be put into a suppressions file, a concept borrowed from Valgrind.
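A minimal sketch of that warn-and-suppress loop, assuming nothing about Capistro's real design — the pattern list, the suppressions format, and the function name are all invented for illustration:

```shell
#!/bin/sh
# capistro-lite: warn about suspicious-looking constructs under a
# directory, skipping hits listed in a suppressions file whose lines
# have the form "file:matched-text". The pattern list here is a tiny
# stand-in for the real "database of possibly bad things".
scan_tree() {
    dir=$1
    supp=${2:-/dev/null}    # no suppressions file given: suppress nothing
    grep -rnE '(socket|execvp?|system|popen) *\(|/dev/tcp' "$dir" |
    while IFS=: read -r file line match; do
        # Hits the user has already reviewed go into the suppressions
        # file and are silently skipped on later runs.
        grep -qF "$file:$match" "$supp" ||
            echo "WARNING: $file:$line:$match"
    done
}
```

Running `scan_tree src/ capistro.supp` prints one warning per unreviewed hit; hits the user accepts can be appended to `capistro.supp` so later runs stay quiet, mirroring Valgrind's suppressions idea.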

This could be a potent tool for monitoring one set of people - the people who write the code. For single-developer projects, this lets the users watch the cvs or releases for problems. For projects with multiple developers, this allows developers to watch each other, too.

This would not be a tool to directly attack the problem of vulnerable download servers and mirrors, and it would not guard against malicious creators of source and binary packages. As long as Capistro stays focused enough to remember that, I think it could be quite successful.

If admins can do it, so can users, posted 1 Aug 2002 at 19:52 UTC by Capzilla » (Master)

Of course, if developers/admins can automate auditing of CVS code and tarballs (pre-release), there should be no implementation limitation on allowing end users of source code to do the same. Binary packages are indeed out of the scope of this.

Users, posted 1 Aug 2002 at 20:11 UTC by neil » (Master)

And users can use anoncvs. Look - you can either do the checks all at once, or you can break it up. It should be a lot easier to just have all the checks together, and that means automake. You can't put this off until make... a recent trojan lived in configure! So by the time you ran make, it was all over already.

Perhaps a community-based Capistro server/audit reporting site would be useful, posted 1 Aug 2002 at 21:26 UTC by mirwin » (Master)

It seems to me that auditing every line of source code on your machines personally is way too time intensive even for most developers. Perhaps I am biased by having more experience (professionally, not in open source) as a project manager than as a core developer. Effort hours add up fast in redundant or parallel activities.

IMHO you are correct that this trojan risk is a large and growing problem. If your project included a way to report, receive and apply audit results from a community of interested auditors, it would be of even greater use to everybody.

For example: I am currently in the newbie learning phase as a potential or future developer. I have plenty of time-intensive things to do just getting open source software tools installed on a system, learning new languages, some computer science, and beginning to read the source of projects I find interesting. This prerequisite to effective development, along with some open content contribution at wikipedia.com, absorbs all the time I currently choose to apply.

So I am at large risk from trojan problems.

One way to approach this would be for me to stick with major distributions and then use Capistro or the equivalent to audit code of projects I actively get involved with. The audit would tie in nicely with reading and understanding the core code prior to attempting development of useful incremental capability, but would increase the time necessary due to the need to examine every line of every file instead of just the interfaces and files perceived as impinging on my intended efforts.

I suggest an alternate approach for consideration: some sort of automated tie-in from Capistro to a central audit results database that is community driven/supported. It seems to me that if Capistro makes it easy for audit results, along with the source file(s) audited, to be registered with a community site, many of the best auditors would likely contribute their results. The trust metric could be seeded initially by the community's mutual self-assessment of code auditing skills and then be driven by a feedback loop structured around the results submitted. When a sixth auditor discovers a trojan in code already passed by five previous auditors, then clearly the skill levels of the five auditors who missed the trojan should be updated according to some automated algorithm. Some sort of confidence index, dependent on the reported skills of all the auditors, could be developed for each file, package, and application for which auditors have reported results.
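To make the confidence index concrete, here is a toy model: treat each auditor's skill score as an independent probability of catching a planted trojan, so a file's confidence is one minus the probability that every auditor missed it. The independence assumption and the scoring scale are illustrative only, not part of the proposal above.

```shell
#!/bin/sh
# Toy confidence index: given one skill score per auditor (each in
# [0,1], the estimated chance that auditor catches a planted trojan),
# report 1 - P(all auditors miss it). Assumes auditors err
# independently, which real reviewers certainly do not.
confidence() {
    echo "$@" | LC_ALL=C awk '{
        miss = 1
        for (i = 1; i <= NF; i++)
            miss *= (1 - $i)
        printf "%.4f\n", 1 - miss
    }'
}
```

Under this model three mediocre auditors beat one good one: `confidence 0.6 0.5 0.7` gives 0.9400, while `confidence 0.9` gives only 0.9000.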

Personally I would not consider code audited by 4 or 5 randomly self-selected Advogato Masters absolutely foolproof, but I know that it would be much, much safer than any code I "audited" myself.

If such a tool and centralized audit results existed as an independent site(s), then a new opportunity for neophytes to begin contributing would emerge. Upon joining a project, one could conduct an audit of the existing stable code (following specific published methods and/or looking for specific known problems as well as innovations) and report the methods used and the results to Capistro Central (or Mirror n) in a standardized submission form. The comparison with other reported results would begin building an audit skills rating relative to other community participants. Code known to be audited by tens or hundreds of other neophytes would also seem fairly safe compared to my own personal auditing skills. The knowledge that checking in audit results places my public rating as an auditor at some risk should also encourage rigour and a disciplined approach in any personal audits for which I intend to report back results.

If such a community proved useful in distributing the workload of audits and propagating the benefits of avoiding problems by running predominantly audited code, then it could save a lot of time for individuals who would otherwise have to audit all the code on their machines. Perhaps the community server assets and bandwidth could be paid for by the sale of certified CDs as audits are completed on popular distributions.

Regarding the problem of trusting only the code audited at the receiving machine: perhaps not all the risk of subversion of trusted code could be eliminated by publishing checksums/signatures, or by multiple downloads from independent trusted mirrors for local comparison (split the audit community into 3 to 5 segments, each responsible for one mirror and source set, and install after a local automated bitwise compare of the certified downloads). However, if it eliminates much (90%) or most (99.99%) of the risk, increases the speed of propagation of corrections, and takes only a fraction of the effort (to participate in the community's distributed workload) of personal audits, then it could be of great utility to users/developers who simply do not have enough time to audit all of the code they use.

..., posted 2 Aug 2002 at 01:29 UTC by tk » (Observer)

It is argued that no one has time to keep track of all CVS commits to a repository [...] But I don't understand. Don't the people who have CVS commit access try to look through the patches they get before checking them in?

Commits are potentially a single point of failure, posted 2 Aug 2002 at 08:14 UTC by mirwin » (Master)

I am sure most people who have CVS commit access review most of the code in patches before they check them in. But how easy is it for a script kiddie or cracker to slip something past a busy expert hacker integrating complex software? Personally I assume that Capistro can be designed to effectively catch most script kiddies using known approaches. IMHO it will not slow down serious innovative crackers much; surely they will have their own latest copy of Capistro on their development machines for beta testing prior to field tests of up-and-coming exploits. How tough is it for a cracker to establish an internet handle sufficiently to gain commit access to a potential vector package?

Consider for example that Debian (the distribution I intend to try out next) has over 8,000 packages. How many tools, packages, applications, etc. does an active user or developer require on their desktop for optimum productivity? A trojan requires only one effective vector to be catastrophic. Therefore each of these sets of source files is a potential problem.

How many of these can be personally audited effectively on the receiving machine and still have time to accomplish useful new work? How many of these packages have an expert hacker maintaining the package?

How many expert hackers does it take to reliably detect an innovative expert cracker?

Currently many eyeballs are typically applied to widely distributed packages after they are installed and in use on many production/development desktops! Debian at least has a stable branch and a test branch. Presumably most stable packages have seen many desktops even if the source has not been seen by many eyeballs in its most recent version. The commercial distributions may be a higher risk due to fewer eyeballs.

As I understand it, Capistro is proposed to semi-automate detection of potential problems prior to installation and subsequent activation on the receiving desktop. If an effective way could be implemented of using the proposed tool to distribute security audit workloads and to reliably document that many eyeballs have indeed reviewed the code at appropriate points in the development process, then it might accomplish several things:

1. Increase the effectiveness of the many eyeballs argument with prospective new users and organizations. Nothing like specific numbers to solidify an argument in a presentation to uncommitted or neutral evaluators.

2. Lower the bar for productive participation for newcomers learning the culture, process, tools, etc. No doubt it is a lot of fun submitting patches that are discarded until suitable quality code is discovered. The rejection email undoubtedly has useful comments and constructive criticism for the struggling neophyte coder.

Personally I find it intensely satisfying to make a few tweaks at Wikipedia, The Free Encyclopedia, in an idle hour or two, because I know it is tracked and counted. Any colleague who chooses to can see my meager but growing contributions. Even if no colleague ever sees it, I review it once in a while for the personal satisfaction accrued from productive accomplishment. Perhaps by reporting and tracking audits and results, code reading and auditing can be incrementally increased and encouraged in community neophytes. Essentially this provides a method of contribution that is guaranteed to be accepted: the neophyte submits the audit results and is automatically graded against all other auditors and the real-world environment as information accrues regarding the audited code. Success or failure in detecting unacceptable code or trojans is entirely in the auditor's hands and skills. Contrast this with submitting patches to benevolent visionaries. Empowerment can be extremely motivating. A place for struggling neophytes to get a little self and community recognition might go a long way towards growing free/open communities a bit faster.

3. Increase the reliability of even already reliable distributions. Much harder to get from excellent to outstanding than from bad to poor.

4. Provide a formal feedback loop from the many eyeballs to the original coder. If the code has been read well enough to check for trojans then very little extra effort would be involved in a quick assessment of style, readability, etc. Suggestions or annotations could be automatically routed back to the originator of the package, the coders, or merely placed in online public records as part of the audit results. Perhaps automated community support for code reviews, optimization, or correct path tracing would soon follow. Alternatively private email could be utilized if a feedback email address is included in the comments or check in form.

Monitoring CVS, posted 2 Aug 2002 at 14:09 UTC by neil » (Master)

Yes, people who make commits to cvs know what they're putting in, but you guys seem to assume that everyone on a large project trusts everyone else. Think of KDE with hundreds of committers, or even a shop like Microsoft, which must have hundreds of people with write access to its repositories. A way to regularly scan the repository for suspicious things is valuable in those cases.

I thought we were talking about trojans here, posted 2 Aug 2002 at 15:13 UTC by hacker » (Master)

I'm seeing discussion here about automagically auditing code resident in a project's CVS repository. This is not the problem with the recent trojans. As a project maintainer on several projects, a contributor to others, and an anonymous user on yet others, I can say that most of this is never going to fly.

  • Developers always audit code, even their own. If they don't, it's their own fault for allowing broken/bad code to slip into releases. I know, I've done it, and it's caused me to be even more meticulous about auditing.

    Each developer should be trusted enough to make the right decisions about their commits, and not to inject trojanesque code into the cvs repository where it can leak into a release. If you don't trust your developers, don't give them commit access. Have them feed all patches through you (the maintainer).

    NOTE: The recent OpenSSH trojan was not put there by any of the OpenSSH developers, maintainers, or people with CVS commit access to the OpenSSH repository. It was a modified tarball of sources which was then uploaded to the OpenSSH mirror servers, but the trojaned code did not come from the OpenSSH cvs repository.

  • If you allow anonymous commits and do not audit your code, that's not something an automated script can fix (i.e. it's a carbon problem, not a silicon problem). Besides, if I know how the script's heuristics find the "bad code", I can work around them. What about inline-asm functions? Obfuscation? It's 100% possible to fool scanners intended to do automated detection.
  • What about using proper debugging techniques? valgrind, splint/lclint, and others can help here.

  • Securing your release server from potential attack is also important, and out of scope of an automated "checker-of-bad-code". Again, we're into carbon problems and not silicon problems. The same goes for the CVS repository. Secure that, and even if someone were to get a commit into your repository, you can always revert it before releasing.

  • Sign and md5 your files, and make sure that they are checked again after they reach the webserver/ftp site. You can also automate a process to validate that the file is never changed, and if it is, the owner gets an email about it: "File foo.tar.gz has changed and no longer matches the signature foo.tar.gz.sign."
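That "owner gets an email" monitor can be sketched in a few lines of shell, run from cron; the file names and the address are placeholders, not anything from the article:

```shell
#!/bin/sh
# Re-verify a release file against its recorded md5 list and mail the
# owner when it no longer matches. Meant to run periodically on the
# webserver/ftp host; file names and the address are examples only.
check_release() {
    file=$1
    sumfile=$2      # previously recorded: md5sum "$file" > "$sumfile"
    owner=$3
    if ! md5sum -c "$sumfile" >/dev/null 2>&1; then
        echo "File $file has changed and no longer matches $sumfile" |
            mail -s "checksum mismatch: $file" "$owner"
    fi
}
# hypothetical crontab entry:
# 0 * * * * cd /var/ftp/pub && check_release foo.tar.gz foo.tar.gz.md5 owner@example.org
```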

This is not rocket science. These incidents have brought awareness to the fact that many people don't check their own code, don't check patches, and don't sufficiently secure their services and servers.

Has anyone realized that the person(s) who perpetrated the trojan could have made it much worse for users? They intentionally did not update the md5 checksum on the trojaned file. If they had, it might have been weeks or months before it was caught.

Just to sum up, there is not one single case that I'm aware of where a trojan was perpetrated by inserting code into the project's CVS repository, or where trojaned code was put there by developers of the project itself. Please refrain from the FUD when dealing with these assertions.

Trojans, posted 2 Aug 2002 at 19:18 UTC by neil » (Master)

No, Capistro won't solve the problem of download servers and mirrors being attacked. But Capistro allows one to just grab code from a cvs directly, scan it, and not have to worry about packagers and download servers. Why worry about securing against coders *and* packagers *and* mirrors when you can just go straight to the original and scan there?

pointless, impractical, posted 2 Aug 2002 at 21:47 UTC by habes » (Journeyer)

This is an extremely vaporous article. You begin by claiming there is a growing problem of trojans in open source projects without giving a single example of it happening. You then propose a tool that will supposedly solve this supposedly growing problem without providing any analysis of what the problem is or how this tool can solve it.

The recent OpenSSH trojan is literally the only example of a trojan attacking an open-source project I've ever heard of. Given 20 years of BSD, 20 years of GNU, 10 years of Linux, and 5 years of an extremely active open-source/free-software community, I'd say that's a pretty good track record. I am not at all convinced that trojans in free software are a growing problem. The details of the OpenSSH trojan are not even available yet, so this incident cannot yet provide any useful information about the problem. It's impossible to design a tool to solve a problem you don't understand.

It's futile to start throwing around solutions until you've analyzed the problem. At what stage have trojans been introduced? Who has introduced them? What kind of access did they have to the project's servers? So your tool will allow people to monitor every change to CVS and keep track of what has been approved. If you're a project leader and you don't trust your developers not to inject trojans, why do they have commit access? If you're an end user and you don't trust the authors of the software you are using, why are you using it?

Additionally, the idea of verifying every line of code is not remotely practical. If we're talking about huge projects, there is no way one person is going to understand every line thoroughly enough to rule out the possibility of a trojan, let alone an end-user. Seriously, do a "test run" of your idea by generating a diff between two consecutive versions of OpenSSH and tell me if you trust the changes.

A much better idea that would be useful for more than the fringe case of trojan horses would be a mechanism integrated into CVS (or subversion) that wouldn't actually commit changes unless they were reviewed by at least one person (other than the committer). This would also allow the reviewer to catch bugs or badly written excerpts, and would guarantee that at least two people understand every part of the codebase.
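The review-before-commit gate could be approximated with CVS's CVSROOT/verifymsg hook, which can reject a commit based on its log message. A sketch follows; the "Reviewed-by:" trailer convention is an assumption for illustration, not a CVS feature:

```shell
#!/bin/sh
# Sketch of a CVSROOT/verifymsg-style check: refuse any commit whose
# log message lacks a Reviewed-by: line naming someone other than the
# committer. CVS invokes verifymsg with the log-message file as $1;
# here the check is wrapped as a function so it can be tested directly.
verify_log() {
    logfile=$1
    committer=$2
    reviewer=$(sed -n 's/^Reviewed-by: *//p' "$logfile" | head -n 1)
    if [ -z "$reviewer" ]; then
        echo "commit rejected: no Reviewed-by: line in log message" >&2
        return 1
    fi
    if [ "$reviewer" = "$committer" ]; then
        echo "commit rejected: you cannot review your own change" >&2
        return 1
    fi
    return 0
}
```

This only documents that a second person claims to have reviewed the change; it cannot verify that the review actually happened, which is exactly the carbon-versus-silicon distinction hacker draws above.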

Audited Source != Perfect Security, posted 3 Aug 2002 at 19:29 UTC by bgeiger » (Journeyer)

I admit I'm just nitpicking here, but...

Capzilla wrote: The only way to be 100% safe is to verify and audit every line of code, on your own machine.

Even this drastic step can't guarantee 100% security. Case in point: Ken Thompson's classic back door using login and cc.

For those of you who haven't heard of this (and don't care to read the articles), here's the Cliffs Notes version. Ken Thompson added some code to the compiler that recognized when it was compiling login, and would add code to allow a certain login/password. He also added code to let the compiler recognize when it was compiling itself, and add the new code in. Once he had compiled the compiler from the original source, the back door remained in place, with absolutely nothing in the source to suggest its existence. No matter how well the code was audited, the back door would remain undetected. As Ken Thompson himself noted in his classic article, "Reflections on Trusting Trust":

The moral is obvious. You can't trust code that you did not totally create yourself. No amount of source-level verification or scrutiny will protect you from using untrusted code.

Anyway, getting back to the issue at hand, hacker speaks the truth: it wasn't the code in the CVS repository that was trojaned, but the released tarballs. Things could have been a lot worse if the MD5 checksum lists had been updated as well. I'm personally partial to using public-key signatures for verification. It's simply too easy to replace the MD5 checksum of the original with that of the new, crocked file; with a system like PGP, you'd need the secret key to do so.

Basically, instead of focusing exclusively on the source code, we need to also look at our release procedures. Tarballs and MD5 just don't cut it anymore.

Not the perfect solution, I know that, posted 3 Aug 2002 at 22:06 UTC by Capzilla » (Master)

I think I even stated that once or twice myself in the page I setup. No, a tool like Capistro won't fix all the problems in the world. Never claimed it would. But I maintain it would be a valuable aid in auditing code and could be used as one of many quality assurance steps. I'm heading to San Francisco tomorrow, so if you feel like it, come to LWCE next week and flame me in person at the KDE booth. ;-)

Bah, posted 3 Aug 2002 at 22:09 UTC by Capzilla » (Master)

Why is Advogato so damn slow lately? Getting browser timeouts and all, sorry for the double post.

Is it a solution in the first place?, posted 4 Aug 2002 at 04:48 UTC by tk » (Observer)

The issue about Capistro is not whether it's a perfect solution, but whether it's a solution in the first place.

No, it doesn't solve the problem in the news, posted 4 Aug 2002 at 04:58 UTC by neil » (Master)

Capistro doesn't prevent people from tampering with packages, no. The fact that Capistro didn't solve the previously hyped problem doesn't make it useless, though. "Security" is a process of identifying threats and dealing with them, not a process of reaction. If you wish to address the issue of people tampering with packages, do so. But don't just dismiss Rob's plan because he's not addressing your pet concern. Capistro *could* let people look for trojans put into code by anyone - whether a malicious third party or a malicious author of the software. Authors can also use it to check for malicious fellow authors. Some people here seem worried about packages being modified; these people trust the authors but not anyone else. Rob is proposing something that would be a practical line of defense for people who don't even want to trust the authors! That's the point here.

I wish I could edit replies..., posted 4 Aug 2002 at 04:59 UTC by neil » (Master)

I forget to add paragraph markup on this site about 75% of the time...

I still think some are missing the point..., posted 4 Aug 2002 at 06:12 UTC by hacker » (Master)

Capistro *could* let people look for trojans put into code by anyone - whether a malicious third party or a malicious author of the software.

neil, there is not one single case in the history of Linux or Open Source that I can think of where the source to a package was trojaned by its own developers (if there is, feel free to point it out and correct me).

Now, you'll quote me the recent OpenSSH trojan, and that is completely different, because that was additional source put there by external people. Are you saying that Capistro should be run hourly on released tarballs of sources available for download? Surely you can't mean running Capistro on sources not yet released, or sources sitting in a developer's cvs repository.

I still fail to see the point of Capistro. Another "hole" tracking tool? Something to check for "bad voodoo" code?

Developers aren't putting the trojans into the releases; they are getting there externally. As I've said before, nothing can automatically detect and solve that, unless you check the state of every file every second against a known, locked-down, read-only, internal-only master file, and then remove/delete/quarantine the public copy when it is seen to have changed (someone uploads a new (trojaned?) version).

Authors can also use it to check for malicious fellow authors.

Again, not a silicon problem. This is called Project Management, and if your developers are putting in bad code, cut them off, plain and simple. If it's a reputable project (Apache, SSH, PHP), then you can bet that the developer who put the trojaned code in there (if it was deemed to be intentional) will be ostracized by the community at large.

Rob is proposing something that would be a practical line of defense for people who don't even want to trust the authors! That's the point here.

Again, as habes pointed out, if you don't trust the author, why are you using his software?

..., posted 4 Aug 2002 at 06:16 UTC by tk » (Observer)

A "pet concern" is a concern which matters greatly to certain people, even if it doesn't have much relation with the real world. I don't have any vested interests in this issue, either way.

Maybe there are some authors who shouldn't be trusted. Maybe Linus Torvalds is really aiming at World Domination in the literal sense. But if that's the case, I'm more inclined to bet my money on a project like Proof-Carrying Code. PCC is at least based on a sound theoretical foundation, while Capistro doesn't seem to be based on anything -- not even empirical experience.

I have to agree with habes, this is starting to look really vaporous.

Some links regarding md5 checksums and misc. perceived security problems or risks, posted 4 Aug 2002 at 07:11 UTC by mirwin » (Master)

Apparently a few years back, trojans were a growing problem for some proprietary versions of unix.

md5: RSA Data Security reference source code and its use

This link provides a pretty clear procedure for manually checking source files using available tools.

Apparently there has been some previous interest in the types of services or tools under discussion, as per this thread. In the thread it is asked whether there are public databases available with md5 checksums for various packages; a respondent answers "tripwire" and mentions the possibility of pooling databases. From later research: tripwire is a commercial package cited as expensive in some of the articles below.

A layman's explanation here provides a link to the original publication of the algorithm and an explanation from its inventor in RFC 1321.

Regarding vulnerability, this newsgroup FAQ, http://www.linuxsecurity.com/docs/colsfaq.html#7.1, does not seem to consider it negligible. It provides links to tripwire and two free equivalents, citing them as useful for checking binaries. I suppose lurking at comp.os.linux.security for a while, reviewing the archives, and then asking a few judicious questions might provide useful information for anyone considering participation in Capistro or development/use of an equivalent or related tool. One might also get a feel for whether there is any demand in the broader user community for these types of services or tools applied to source (in addition to binary).

This site, http://www.insecure.org/sploits_linux.html, lists some things that sound like they would be compiled into the system from source code. It has not been updated for a couple of years.

OpenPGP, posted 4 Aug 2002 at 07:13 UTC by adulau » (Journeyer)

A major part of the problem could be solved if, for example, the MD5/SHA1 hashes of the distributed files were signed by the developer (or team) itself. This is not a panacea, but with good management of OpenPGP[1] signatures and extensive use of the "web of trust", this could prevent trivial file replacement (including hash replacement).

What does it mean for the developer?

  • To extensively use GnuPG[2]/OpenPGP (and to propagate fingerprints on mailing lists).
  • To have his key signed by other people.
  • To create, with each distribution, a clearsigned copy of the MD5 file:

    gpg --output signed-MD5.files --clearsign MD5.files
What does it mean for the user?

  • To check the key of the developer/team (cf. the fingerprints propagated on mailing lists):

    gpg --verify signed-MD5.files

It's time for extensive use of cryptography (do you remember the dream of John Gilmore?) and good key management.

[1] http://www.ietf.org/rfc/rfc2440.txt
[2] http://www.gnupg.org/

Some questions begged, posted 4 Aug 2002 at 07:46 UTC by mirwin » (Master)

The discussion has been quite informative for me, so it may be useful to some others not familiar with these issues as well.

To me it seems one question begged of Capistro based upon the information provided in the discussion (and links) so far is:

How can I trust Capistro (to improve my local security and not be installing trojans) unless I first write (or audit) my own trusted compiler, assembler, bootloader, and secure minimal kernel, and then use the securely compiled Capistro to audit the original Capistro source?

If I am too lazy, ignorant, or busy to accomplish the above to secure my Capistro environment, then how do I establish that use of Capistro provides a net reduced risk vs. a net increased risk for various types of applications such as:

1. A hobby machine.
2. My development work environment.
3. My employer's/clients' private production processes.
4. Embedded traffic light controllers and networks.
5. A Boeing 747 or a train switchyard panel.
6. Nuclear reactor control panels.

as appropriate?

In other words, skipping the FUD issue and the matter of obviously trustworthy developers, how can it be shown (hopefully quantitatively for my boss, clients, public regulators, juries, etc.) that use of Capistro is better due diligence than remaining fat, dumb and happy (FDH), ideologically confident that multiple eyeballs will eventually spot and announce any problems to the world? (Simultaneously to the boss, clients, and ambulance chasers ... Ouch!)

The dilemma is obvious. Every day, in every way, everyone on this planet is exposed to ("trusts") software that they did not personally write. It is impractical to tell people to get off the planet if they are concerned about being injured by embedded software they did not personally develop. A better basis for risk management is necessary and appropriate.

tk, do you agree with habes that the warm fuzzy of "many eyeballs" could be usefully documented by requiring sign-off from two maintainers prior to commit? Would this improve (i.e., reduce the risks of) the source distributed, or merely document typical current practice?

If substantial improvement is possible via this simple change in commit procedure, what is the implication for the "many eyeballs" relative-quality (closed proprietary vs. open/free source) argument that has been in use for many moons and extensively hyped to the public?

Using software from someone you don't trust, posted 5 Aug 2002 at 05:22 UTC by neil » (Master)

A person might write useful software, but later decide to use that software as a launching pad for some other purpose, either on his own or under pressure from someone else (possibly an employer or a government).

the value of trust, posted 5 Aug 2002 at 11:13 UTC by jul » (Master)

At first glance, I think that if Capistro existed it would be interesting, though I have no idea of the ratio of the effort it would cost to the number of true alarms it would raise.

Then, if I try to summarize part of what has been said so far, it would be:

  • this is not a new problem, and a known/accepted method for lowering the number of trojans is to systematically use MD5 checksums and GPG authentication
  • most of us would trust code coming from well known members of the community
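The first point can be exercised mechanically: the checksum half is a few lines of code. Here is a minimal sketch in Python of the verification step such a tool would run before unpacking a tarball; the filename and digests are stand-ins for illustration (a real tool would also run `gpg --verify` on a detached signature):

```python
# Sketch: automated checksum verification of a downloaded release.
# The "tarball" written below is a stand-in for a real download.
import hashlib

def md5_of(path):
    """Compute the MD5 hex digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, published_digest):
    """Compare a local file against the digest the project published."""
    return md5_of(path) == published_digest

# Demo with a stand-in "tarball":
with open("foo-1.0.tar.gz", "wb") as f:
    f.write(b"pretend release contents\n")

good = md5_of("foo-1.0.tar.gz")            # what the project would publish
print(verify("foo-1.0.tar.gz", good))      # True: file is intact
print(verify("foo-1.0.tar.gz", "0" * 32))  # False: tampered or corrupted
```

Of course this only pushes the trust question onto wherever the published digest came from, which is exactly why the GPG half matters.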

There is a way to have both in a single tool.

In fact, MD5+GPG offers a way to relate everyone's contribution to a well-known ID. It is as if we were rating the trust we have in members of our community higher than any purely numerical solution. That is what I call, for fun, "the value of trust" (ironic, since it is the motto of Verisign, which I do not trust). So maybe we should also verify and adjust the level of trust given to a key. Just think of trojaned code introduced under a known GPG signature: maybe then it would be a good idea for people to revoke their trust in this key (either the owner is a trojan writer, or he did not protect his key properly, which is equally dangerous).

In fact, through the level of confidence in GPG signatures we can create a trust metric like Advogato's, and could use it to give a death penalty to everyone who is known to have tried to compromise our source code.
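The shape of such a metric is easy to sketch. The real Advogato metric is a network-flow computation; even a naive breadth-first walk over a hypothetical web of key signatures (all names below are invented for illustration) shows the idea of trust decaying with distance from a seed, and of a "death penalty" being the removal of an edge:

```python
from collections import deque

# Hypothetical web of trust: who has signed whose GPG key.
signed = {
    "seed": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol", "mallory"],
    "carol": ["dave"],
}

def trusted(seed, max_depth=3):
    """Everyone reachable from the seed within max_depth signatures.
    Trust decays with distance; beyond max_depth nobody is trusted."""
    seen = {seed: 0}
    queue = deque([seed])
    while queue:
        person = queue.popleft()
        depth = seen[person]
        if depth == max_depth:
            continue
        for other in signed.get(person, []):
            if other not in seen:
                seen[other] = depth + 1
                queue.append(other)
    return seen

print(trusted("seed"))
# Revoking trust in a compromised key ("death penalty") would mean
# deleting its edges from `signed` and recomputing.
```

This is only a sketch of the graph walk, not of the harder problems: key distribution, and deciding who the seed should be.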

If giving a trust metric to GPG signatures can help us improve the trust in our code, it must not make us reluctant to accept code from new contributors, and it will not prevent another trick like Ken Thompson's cc/login. However, we can use it to relate someone's ID to each of his contributions and, if he pulled a trick, to track down his other contributions and check them. Thus we could bar him from contributing to any open source project if this turned out to be a repeated behaviour.

There is a concern with this method: what about anonymity? We would be tracking ourselves better than any commercial company would.

Other trojans, posted 5 Aug 2002 at 22:34 UTC by imp » (Master)

The recent OpenSSH trojan is literally the only example of a trojan attacking an open-source project I've ever heard of.

You are wrong. There have been many, many other trojans over the years. They don't get the press this one does, but they have happened before. Often something like the FreeBSD ports system finds them, because an MD5 checksum is stored independently of the site the attacker compromised.

I don't recall the exact details of the past ones, so rather than post possibly inaccurate claims, I'll leave it at this: they have happened. I was the FreeBSD security officer for a while, and in that time there were a couple of incidents where things were trojaned at the FTP site.

Also, given the number of bugs in host operating systems, I'm surprised that we've not seen more of these stealth attempts. I guess the script kiddies have so far been unmotivated to cause havoc this way.

Still no details, imp?, posted 9 Aug 2002 at 05:56 UTC by nelsonrn » (Master)

I still haven't seen *any* details of these trojans. Not one. How is it possible to trojan code stored in CVS? Obviously you can either break into CVS or use existing permission. If you break in, then obviously CVS is insecure and needs to be replaced by something secure. If you use existing permission, then someone's name is going onto the trojan. That someone can be barred from having write access. Problem solved. I don't see any route for major trojans, nor do I see any lack of control over a problem, should it actually exist.

Yer solving a non-problem, Capzilla. -russ

i get spam from myself these days, posted 11 Aug 2002 at 15:16 UTC by bdodson » (Journeyer)

it's true; i get spam emails from myself. i think this is at least in part because spam filters often let emails through to my mailbox if they appear to be from me. in general, i think spam filters have not helped me; they have just forced the spammers to get more innovative to the point where some once-decent spam blocking services have become completely ineffectual.

are script kiddies capable of getting more innovative? well, some script kiddies are just crackers in training; those ones are capable, and would welcome the challenge. it would certainly help them to obfuscate their efforts if they knew exactly what makes those efforts easily detectable.

i would rather distributors just employ proper security measures, and package maintainers just continue to take proper precautions when they commit changes (reviewing patches is still easier than writing them yourself), and use checksums to help you confirm that what they packaged is what you got. but users need to be educated to check the checksums; otherwise they're useless.

more constructive approaches, posted 13 Aug 2002 at 05:14 UTC by mbp » (Master)

I am sure the proposal is well meant, but I think there are some damning problems:

  1. It is far from clear that there is a "typical" trojan. Viruses and worms can be easily detected by signatures because the same worm occurs in many emails, but each trojan can be carefully written from scratch for each penetration.

  2. Auditing to find unintentional security holes is really hard. (Have you ever found one? You should try.) Finding holes that have been intentionally hidden is much harder again. Finding holes inserted by the program author would be extremely difficult: you would have to be as smart as them, and know the program equally well.

  3. I'm sure trojans are annoying, but they're vastly outnumbered by regular security holes.

  4. Most trojans to date have been caused by people breaking into a central distribution machine. Since your scheme has a central point of failure it's not much better.
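Point 1 is easy to demonstrate. A signature scanner in the spam-database style is a few lines of code, and evading it is just as easy, because the attacker writes fresh code for each penetration. A minimal sketch (the signature list is invented for illustration, not a real trojan database):

```python
import re

# A toy "bad stuff" database, in the spirit of a spam signature list.
SIGNATURES = [
    re.compile(rb"rm\s+-rf\s+/"),   # destructive shell command
    re.compile(rb"/bin/sh\s+-i"),   # classic connect-back shell
    re.compile(rb"\b31337\b"),      # a port associated with backdoors
]

def scan(data):
    """Return the signature patterns that match a blob of source text."""
    return [sig.pattern for sig in SIGNATURES if sig.search(data)]

print(scan(b'system("rm -rf /");'))  # caught: matches a known pattern
print(scan(b'port = 31336 + 1'))     # missed: same effect, no signature
```

The second line is the whole problem: a one-character rewrite of a "known bad" construct produces code the database has never seen.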

I know of a few open source programs that have hidden features that have apparently never been found by outside auditors. The ones I have in mind are harmless, but they could equally well be malicious. At least one was put in as an experiment to see how well audits were done.

Here are some ideas that are more likely to be useful:

  1. Get RPM or dpkg to a state where they will only install properly-signed packages. At the moment there are several problems: for example, last time I checked, RPM would happily install unsigned packages. You also need a reasonable way to get keys to users.

  2. Use privilege restriction (SELinux or GRSecurity) to make fewer packages security-critical.

  3. Use something like Aegis that can enforce code reviews.

  4. Start a project to audit the code of an existing project. At least this will give you an indication of how hard it is.

Seriously, I think you will both produce something less vaporous, and also understand the problem better, if you try to find some problems by hand first before you write a tool.

Heuristic scanners are probably not 'good enough', posted 27 Aug 2002 at 05:18 UTC by Mysidia » (Journeyer)

Solely heuristic-based scanners won't be a remotely reliable means of detecting trojans in the long run (except for finding previously _known_ trojans). They may actually be harmful in the long run, by widening the gap between how secure users think they are and how sound their practices actually are.

That is, in the worst case, users employ scanners that are not particularly reliable at detecting trojans and, feeling safe with their 'protection', fail to follow ordinary security procedures such as verifying package signatures and carefully examining a package's contents before configuring it.

The reason heuristic scans are likely to be ineffective is that those who would plant trojans can get a copy of the scanner just as easily as anyone else can: through trial and error they will find ways around the limited capacities of a simple scanner.

There are methods that could likely be used to hide trojans in ways a scanner wouldn't be able to pick up, at least not without generating frequent false positives on legitimate scripts or actually executing part of the possible trojan code. For example, a trojan's payload could be stored encoded in a custom manner, to be parsed and run by a stub somewhere (e.g. an eval of `rot13 < asdf` in a Makefile, or a yacc parser implementing a custom "TrojanScript" language) that to a scanner would be indistinguishable from legitimate code.
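The evasion is easy to demonstrate with a harmless stand-in payload. Below, a naive scanner flags any script that touches `os.environ` (an invented toy signature); a rot13-encoded copy of the very same code sails past it, and a one-line stub runs it anyway:

```python
import codecs
import os

# Toy signature: the naive scanner flags any script touching os.environ.
SUSPICIOUS = b"os.environ"

plain = "import os\nos.environ['PWNED'] = 'yes'"  # harmless stand-in payload
encoded = codecs.encode(plain, "rot13")           # what would ship instead

def naive_scan(text):
    return SUSPICIOUS in text.encode()

print(naive_scan(plain))    # True: literal match, the scanner catches it
print(naive_scan(encoded))  # False: same code, invisible to the scanner

# The stub hidden in a Makefile or build script decodes and runs it:
exec(codecs.decode(encoded, "rot13"))
print(os.environ.get("PWNED"))  # yes
```

Any custom encoding works equally well; rot13 is just the simplest, and a scanner cannot flag every eval/exec over opaque data without drowning in false positives.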

Perhaps a scanner could be more effective if combined with some sort of "safe execution environment" to test the software in: an extension of file permissions and POSIX capabilities. That is, basically a sandbox of some sort (not quite as isolated as a chroot) where untrusted programs live, and where the user whose id they run as has full control over what each program can see and do. File reads and writes (a program could only see machine-wide public files and the files the user designated for it), syscalls, etc. would be controlled by a "supervising script", and trojan-like patterns (connecting to a remote host without being asked to, listening on a port, spawning other programs) would be intercepted based on "trust parameters" set by the user for each program, constructed in a manner similar to firewall rules.
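The rule-matching half of such a supervisor is straightforward to sketch; the hard half (actually intercepting file access and syscalls, via ptrace or similar) is not shown. All paths, actions, and rules below are invented for illustration:

```python
# Sketch of the "trust parameters" idea: firewall-style rules that a
# supervising process consults for each intercepted action.
RULES = [
    # (action, detail_prefix, verdict) -- first match wins, like a firewall.
    ("read",    "/usr/share/",    "allow"),  # machine-wide public files
    ("read",    "/home/me/gimp/", "allow"),  # files designated for this program
    ("connect", "",               "deny"),   # no network without being asked
    ("listen",  "",               "deny"),
    ("spawn",   "",               "ask"),    # let the user decide
]

def verdict(action, detail):
    """Return allow/deny/ask for an intercepted action, firewall-style."""
    for rule_action, prefix, result in RULES:
        if action == rule_action and detail.startswith(prefix):
            return result
    return "deny"  # default-deny for anything with no matching rule

print(verdict("read", "/usr/share/locale/en"))    # allow
print(verdict("connect", "evil.example.org:80"))  # deny
print(verdict("spawn", "/bin/sh"))                # ask
print(verdict("write", "/etc/passwd"))            # deny (no rule: default)
```

First-match-wins with a default-deny tail mirrors how packet filters are usually written, which is presumably what users setting these parameters would already know.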
