Keeping users safe from themselves

Posted 9 Nov 2000 at 13:46 UTC by matt

The Register today has a bit on Microsoft introducing a configurable trusted execution environment in Whistler, including the capability to restrict the user from running any application that is not digitally signed. Does this help with worms and viruses? How could free software do it better?

To be truthful, I don't really care to go spelunking through Microsoft's site or whatnot for more details about this feature, so readers will have to settle for my uninformed interpretation. Obviously there is a lot that the Register article leaves out. I'm not trying to debate whether or not Whistler will stop or strongly mitigate the threat of worms and viruses -- this is entirely the wrong forum for that.

What I do want to talk about is ways to protect users from themselves. Let's face it -- as GNOME gains acceptance (especially in the corporate world through Solaris), administrators are going to need to be able to keep the great unwashed masses from hurting themselves. Having an operating system that does not require the user to have unrestricted rights to the hardware is a great first step: it can prevent malicious code from destroying the installation. But the most destructive worms don't need that kind of access. The administrator grants the user permission to read and write important data, and by extension grants it to any malicious code the user runs (be it binary, script, or whatever).

Helix Code claims in their Evolution FAQ not to have this problem, because the user would have to save the attachment, mark it executable, and then run it. This is definitely a step in the right direction; I can think of several Windows-based worms that would never have been more than a minor inconvenience if only the document-view interface were divorced from the application-run interface. But does Evolution even address the problem of a scriptable application used to view an untrusted document? The Word macro viruses spread this way. Even the digital-signature requirement wouldn't stop them -- I'm sure Word would be digitally signed.

My solution to this problem (well, the one that doesn't sacrifice years of operating system design and associated code, at least) would be to implement a document view interface that had both "trusted" and "untrusted" actions for each file type. A simple, unscriptable file like a JPEG image would have only one action. A document that would run in an application that could potentially permit it to do anything to any system resource except itself would have different trusted and untrusted actions. Any application author who even dreamed of scripting would need to take care to implement an untrusted execution option. When said application was installed, it would register both actions.
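As a rough illustration of the registration idea, here is a minimal sketch in C. Everything here is hypothetical -- the type names, the handlers, and the MIME types are invented for the example -- but it shows the core point: every scriptable type registers two actions, and the document-view interface always dispatches the untrusted one for documents from an untrusted source.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: each registered file type carries separate
 * "trusted" and "untrusted" actions.  Types with no scripting ability
 * (e.g. image/jpeg) register the same handler for both. */
typedef void (*action_fn)(const char *path);

static void view_image(const char *path)    { printf("viewing %s\n", path); }
static void open_doc(const char *path)      { printf("opening %s, macros on\n", path); }
static void open_doc_safe(const char *path) { printf("opening %s, macros off\n", path); }

struct action_entry {
    const char *mime_type;
    action_fn   trusted;
    action_fn   untrusted;
};

static const struct action_entry registry[] = {
    { "image/jpeg",         view_image, view_image },    /* unscriptable: one action */
    { "application/msword", open_doc,   open_doc_safe }, /* scriptable: two actions  */
};

/* The document-view interface consults the registry and, for documents
 * from an untrusted source, always dispatches the untrusted action. */
static action_fn lookup_action(const char *mime_type, int doc_is_trusted)
{
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].mime_type, mime_type) == 0)
            return doc_is_trusted ? registry[i].trusted : registry[i].untrusted;
    return NULL; /* unknown type: refuse rather than guess */
}
```

The interesting property is that the choice of trusted vs. untrusted is made by the viewing interface, not by the application -- the application merely has to provide both entry points when it registers.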

If there were ever a bug in the application that allowed traditionally trusted actions in an untrusted environment, it would be clear who needed to fix it, and the fix would be straightforward. Here free software can really shine as well -- if an author won't fix his code, someone else can in short order. Compare this to waiting for a non-free software vendor to analyze your bug and then decide that it really isn't a problem after all, or that it can't be fixed "because it will break legacy use". Then it's your decision which fix you want to implement.

Free software has a great chance here to carry the torch for high standards in security. Let's not drop it on the ground!

file types, posted 9 Nov 2000 at 15:55 UTC by lkcl » (Master)

security by file type is insufficient. one of the main recent viruses was called "virusname.txt.exe" [or .vbs - whatever, i don't really care], which of course hit anyone who left the default "hide known file extension types" setting enabled. security by dynamically examining the contents of the file is also insufficient.

the funny thing is, i can think of no sane reason why any ordinary user would want to have remotely downloaded code [scripts or otherwise] executed directly from their email program. any user that wants [or any developer that thinks that users want] to do this is just asking for trouble.

i think the helixcode people have the right idea: nothing is executable until the user says it is. only the people with the intelligence to work out how, or to follow instructions, to make a program executable deserve to execute remotely downloaded code. so that sounds biased or something: i am reminded of dirty rotten scoundrels: the jackal says, "why is there a wine cork on the end of his fork?", said just as steve martin attempts to stab himself in the eyepatch.

more soundbites: if you write a program to be idiot-proof, only idiots will use it...

Re: file types, posted 9 Nov 2000 at 16:32 UTC by matt » (Journeyer)

Well, script.txt.vbs appearing as a text file was a "fool the user" proposition, and didn't take advantage of any technical flaw per se. The application-run interface, which Windows uses for everything from running FORMAT.EXE to viewing a text file, knew that it was supposed to be executed by the Windows scripting engine.

I think what you're missing here is that the distinction between executable code and content is far from black-and-white. Right now, in the Windows world, the delineation is very fuzzy. Files that are supposed to be documents are loaded with executable scripts. I have not really looked much at the latest round of offerings for UNIX-based systems, but I've heard rumblings of at least plans (if not implementations) to make documents scriptable and embeddable and the whole nine yards. (Actually, embeddable documents open up a whole new can of worms -- how do we know whether an embedded document is trustworthy or not?) There is obviously utility in it, or it wouldn't be implemented. There should be a simple way to tell the application that the code that comes with a given document should not be trusted. The application can run it restricted, or not run it at all. Those with "intelligence", as you put it, could theoretically override this if they needed to.

None of what I have said should be construed as saying that chmod +x is a "bad" idea; it certainly is not. Executable code and scripts that are separate from a document (read: those that can be executed with a simple chmod +x) can be protected as safely as Helix Code proposes. OTOH, I think someone who declares that to be the be-all and end-all of preventing the spread of worms is guilty of tunnel vision.
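The execute-bit protection Helix Code relies on is easy to demonstrate. This is a small sketch using plain POSIX calls (it assumes a Unix-like system with a writable /tmp): a saved attachment arrives with no execute bit, and the kernel will refuse to exec() it until the user takes the deliberate chmod step.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static int is_executable(const char *path)
{
    return access(path, X_OK) == 0;
}

/* Simulates the life of a saved attachment; returns 1 if the kernel's
 * behaviour matches the Helix Code argument above. */
static int demo_saved_attachment(void)
{
    char path[] = "/tmp/attachXXXXXX";
    int fd = mkstemp(path);               /* created mode 0600: rw, no x */
    if (fd < 0)
        return 0;
    dprintf(fd, "#!/bin/sh\necho gotcha\n");

    int ok = !is_executable(path);        /* not executable as saved     */
    ok = ok && chmod(path, 0700) == 0;    /* the explicit user step      */
    ok = ok && is_executable(path);       /* only now will exec() work   */

    close(fd);
    unlink(path);
    return ok;
}
```

Note what this does and does not protect: the explicit chmod is a real barrier for standalone executables and scripts, but it never comes into play for macros that ride inside a document the user merely views.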

The idiot soundbite is an oft-repeated one. I think it's irresponsible in the context of good software design.

People seem to miss the point...., posted 9 Nov 2000 at 19:04 UTC by danwang » (Master)

This feature of the new MS OS will do more than increase security. It will give Microsoft better control over what software runs on the OS, which will let them fight piracy by refusing to run copies of MS Office that have been modified by hackers to foil copy protection and licensing restrictions associated with the software.

Would you run an OS that wouldn't run just any program? This feature will protect MS's profit margins against large-scale piracy. It is a "security feature" in that it protects the profit margins of MS. The real security holes are macro viruses, and I don't see how this approach really solves that problem, and there are less restrictive approaches like "sandboxing" that would not require signing of binaries.

Uninformed banter, posted 9 Nov 2000 at 19:19 UTC by yakk » (Master)

I was about to flame the posters above for their uninformed banter, till I realised that I was probably no better informed. My understanding (from discussions with some friends of mine who follow Microsoft's developments more closely than I do) is that the feature described here is in fact a sandbox - implemented at the OS level. Kind of like chroot + capabilities + more. It's a very, very sensible feature. I believe even FreeBSD has a similar feature. I don't doubt that we'll see it in Linux before very long.

The use of code signing is quite cool too - it's the neatest, most secure way for trusted apps to break out of the sandbox. And to those who're complaining that it might stop users from pirating Microsoft software - why do you care? As far as I'm concerned, pirating Microsoft's software is worse than buying it. Fight the good cause - write some free software :-)

In terms of GNOME's security model, as danwang pointed out, this won't help with macro viruses. Most of the recent IE exploits have been due to holes in components that can be embedded in IE. As Bonobo takes hold (with the release of Nautilus and Evolution) we're going to have to be very careful and do audits on all our components.

code signing is goofy, posted 10 Nov 2000 at 16:35 UTC by graydon » (Master)

knowing something came from someone you "trust" is meaningless unless you "trust" their ability as a security analyst.

there are 2 perfectly acceptable technologies for solving this: capabilities and proof-carrying code. both have been implemented, both work, both are bulletproof if implemented correctly. neither has had widespread adoption of correct implementations, and even if they did we have no reason to suspect marketing and product development would not circumvent them, or even that users would employ them.

users all choose the same password. they disable security features because it seems convenient to turn them off. consider if you will the amount of email you get as a result of worms, vs. the amount of email you get as a result of humans forwarding you chain mail. I don't really think it's possible or desirable to protect users from themselves. they want to hose their machine, let them. what we need to focus on is keeping everything else safe from users: everything whose failure affects the public (or an organization).

the RBL was a brilliant example of this: a collection of smart people concerned for their common good banding together to seal the common area off from the bozotic nature of the larger world. we should work on more things of this sort. if it takes redesigning some protocols, so be it.

Keep users safe from themselves - bad idea, posted 10 Nov 2000 at 16:45 UTC by strlen » (Journeyer)

The reason I use UNIX is that it's an OS I wish to use; I can do whatever I wish with it. Users need to have a clue large enough to decide what they trust and what they don't trust. And this is changing in the UNIX world. For example, I was told that Red Hat's rm has a built-in, hard-wired feature that prevents it from removing /. But what if I wanted to wipe a disk when selling a machine? Let me decide what I think is safe, myself, as root of course. That's what user levels are good for: I can always protect myself from script kiddies, trojan horses and viruses by doing most work as a regular user, but if I know what I want to do with the system, I can always su or log in as root. Please, let's not go further.

LD_PRELOAD is one answer..., posted 10 Nov 2000 at 19:09 UTC by Ricdude » (Journeyer)

By using LD_PRELOAD hackery, one can supply additional checks to the standard system calls. This overridden execution environment can check for events like "opening files in current directory", "opening files in parent directories", "opening files in system directories (/etc, /usr, /tmp)", "opening network sockets" and override the standard behaviour to provide options like "allow open read-only", "disallow any attempt to open", "confirm all file access operations". This would allow "toys" like ELFBOWL.EXE to execute in a well-confined environment that would not allow them the ability to access anything they shouldn't, for example: email contact lists, /etc/passwd, ~/.someprogrc, etc.
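For the curious, here is a minimal sketch of the technique on Linux/glibc. The deny list and its entries are illustrative only. Compiled as a shared object (roughly `gcc -shared -fPIC -o sandbox.so sandbox.c -ldl`) and loaded with `LD_PRELOAD=./sandbox.so`, the interposed open() runs instead of the C library's version and can veto a call before forwarding it via dlsym(RTLD_NEXT).

```c
#define _GNU_SOURCE
#include <assert.h>
#include <dlfcn.h>
#include <errno.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>

/* Hypothetical policy: is this a path the sandboxed toy has no
 * business touching? */
static int path_is_forbidden(const char *path)
{
    static const char *deny[] = { "/etc/", "/usr/", ".someprogrc" };
    for (size_t i = 0; i < sizeof deny / sizeof deny[0]; i++)
        if (strstr(path, deny[i]) != NULL)
            return 1;
    return 0;
}

/* Interposed open(): with LD_PRELOAD this is found before libc's open,
 * so every open() in the confined program passes through the policy. */
int open(const char *path, int flags, ...)
{
    if (path_is_forbidden(path)) {
        errno = EACCES;                  /* "disallow any attempt to open" */
        return -1;
    }
    int (*real_open)(const char *, int, ...) =
        (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
    if (real_open == NULL) {
        errno = ENOSYS;
        return -1;
    }
    mode_t mode = 0;
    if (flags & O_CREAT) {               /* open() is variadic when creating */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}
```

A real confinement layer would of course have to cover the whole family (open64, fopen, creat, openat, connect, ...), not just one entry point.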

My day job, posted 10 Nov 2000 at 22:55 UTC by kroah » (Master)

I have been working on things like this for Linux at my day job. Check out SubDomain and Cryptomark.

Both of these run on Linux and sound just like what Microsoft is proposing.

SubDomain is available right now (well, it's going to be on CDs that will be passed out at Comdex next week; the source should be up on the web site in a few days). Cryptomark, on the other hand, will be a while before it escapes our labs (fun DARPA rules, combined with loads of other company mess...)

LD_PRELOAD is not an option for sandboxing, posted 12 Nov 2000 at 04:17 UTC by rbrady » (Journeyer)

Additional checks in the C runtime hooked in with LD_PRELOAD can be trivially overcome by programs just making the syscalls directly (load up the registers as appropriate, call int 0x80 or whatever it is on your platform).

The only place this can be implemented securely is the kernel.
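The bypass rbrady describes doesn't even need assembly on Linux: glibc exposes a syscall() wrapper that traps straight into the kernel, so any open() interposed with LD_PRELOAD never sees the call. A sketch (Linux-specific; uses the openat system call, which modern open() is built on):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* A hostile program need not go through the C library's open() at all:
 * this traps directly into the kernel, sailing past anything that
 * LD_PRELOAD put in front of libc. */
static int sneaky_open(const char *path, int flags)
{
    return (int)syscall(SYS_openat, AT_FDCWD, path, flags, 0);
}
```

Which is exactly why the checks belong in the kernel: the system-call boundary is the only chokepoint user code cannot route around.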

Real capabilities, posted 12 Nov 2000 at 14:56 UTC by listen » (Journeyer)

This kind of stuff needs to be done with real capabilities, e.g. those used in EROS.

Unix and its imitators (NT) all have an inherently flawed model : Every program you run is trusted as much as YOU! The assumption should be: Every program you run is as trusted as the person who wrote it. You might not even trust them at all!

This means the following bits of unix are evil: The file system (or at least the global name space). Most programs don't need this. They need just one file, so you pass a cap (read: an fd) to this file. Other progs should be given customised namespaces.

Sockets. The arbitrary rule that anyone can bind to ports above 1024 is a bit mad, and anyone can connect to any port on another machine...

Signals. You can communicate with any other process that the user is running...

Etc, etc. List goes on...
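The "pass a cap (read: an fd) to this file" idea can be sketched even in ordinary Unix C, since a file descriptor already behaves like a weak capability once it crosses a process or privilege boundary. The names below are illustrative: the trusted side resolves the path and opens the file, and the untrusted routine receives only the descriptor -- no path, no right to open anything further.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* The untrusted side: holds one capability (the fd) and nothing else.
 * It can read that file, but has no name for it and no ambient
 * authority to reach the rest of the filesystem. */
static ssize_t untrusted_word_count(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n < 0)
        return -1;
    buf[n] = '\0';

    ssize_t words = 0;
    for (char *tok = strtok(buf, " \t\n"); tok; tok = strtok(NULL, " \t\n"))
        words++;
    return words;
}

/* The trusted side: resolves the name, opens the file, and passes only
 * the descriptor across the trust boundary. */
static ssize_t count_words_in(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t words = untrusted_word_count(fd);
    close(fd);
    return words;
}
```

In a real capability system the untrusted code would run in a separate, authority-free process and receive the fd over something like SCM_RIGHTS; within one Unix process the discipline is only a convention, which is listen's point about needing to throw the ambient filesystem namespace away.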

Lots of bandaids for this general rubbishness have been made, eg tripwire, privilege bits (wrongly called capabilities by POSIX), plan 9 separated namespaces (though these are useful for other stuff), chroot, weirdo ptrace things that check system calls.

Anyway, the only real solution is to throw it all away and start again. Argh! That's going to take a long time... don't look at me! Don't get me wrong, I love unix, but there are some fundamentally flawed assumptions that were made (that everyone trusts any code they run). Ah well, I imagine we will be stuck making bandaids for this for quite a while. It does mean abandoning POSIX for stuff we want to be really secure... though it should be possible to emulate a POSIX env on top of a cap system.


right on, posted 18 Nov 2000 at 13:26 UTC by etoffi » (Apprentice)

i actually "designed" a system like this a couple of years ago. it seems that microshaft steals all of my ideas. i had the same idea as .NET two years back, and i had the application sandboxing idea also about two years back (although i did get it from eros and keykos, i didnt understand them as completely as i should have).

also i agree with listen in that the only solution is a complete rewrite. i havent used it yet, but i believe that plan9 has the capability <snicker> to be better than unix.

my basic point is that all applications should be limited to only the capabilities and files(!!) allotted to them. i think the unix way might be a little off in respect to this tactic, at least in comparison to the macosx way. an app on mac10 is a directory grouping resources (graphics and menus, i guess), meta-info and the actual binary. this type of meta-info could be expanded to include the virtual namespace information and capability descriptors/keys that a user allots to a program.

Re: etoffi, posted 21 Nov 2000 at 19:02 UTC by listen » (Journeyer)

Hm, files should not be a separate kind of object, just a different thing you have a cap to. I.e. everything you use, you have a cap to. There does not need to be a concept of file in the kernel, though it might be found convenient for speed.

Plan9 does allow you to make independent namespaces, but this doesn't fix the big issue. Unless you move everything in the POSIX system into that namespace. (This is partially true in Plan9. eg sockets etc are in the namespace). Linux is getting plan9 namespaces in 2.5 anyway.

Also, MacOSX doesn't do anything better than Unix here. The stuff you point out is just their package management system; it provides no extra security whatsoever.
