NCAP for Software

Posted 4 Jan 2004 at 13:58 UTC by kilmo

For years the car industry enjoyed easy-to-pass crash tests, which led to cars that were "secure" mostly on paper. In recent years the NCAP (New Car Assessment Program) has forced the car industry to produce genuinely secure cars. Maybe it is time our community created a similar unbiased software assessment program.

For years our cars were claimed to be secure by the car industry, and those claims were based on tests done by the manufacturers themselves. In response, the NCAP (New Car Assessment Program) started running unbiased crash tests. At first the car industry dismissed those tests as fictional, unprofessional, and all the other mumbo jumbo you hear whenever you tell the truth.

Now let us examine the software industry. We have "secure" programs and applications. We have "secure" operating systems. "How do you know it is secure?" you ask them, and they answer: "We did internal checks, code reviews, etc.". See the similarity?

That is why we (the consumers) must build a software NCAP for our purposes. We would award stars of security based on various criteria (see how NCAP awards its stars) that we need to decide upon. For example, we could give half a star if the program's memory handling was checked by some tool (for example, valgrind), another half star if it is open source, etc.
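
To make this concrete, here is a minimal sketch of such a rubric in C++. Every criterion, weight, and name below is a placeholder for illustration, not a proposed standard:

    // Hypothetical scoring rubric -- all criteria and weights here are
    // placeholders, not a proposed standard.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Criterion {
        std::string name;
        double      stars;   // fraction of a star awarded if the check passes
        bool        passed;  // result of the (external) check
    };

    int main() {
        std::vector<Criterion> rubric = {
            { "memory checked with valgrind", 0.5, true  },
            { "source code is open",          0.5, true  },
            { "independent security audit",   1.0, false },
        };
        double total = 0.0;
        for (const auto& c : rubric)
            if (c.passed) total += c.stars;
        std::cout << "stars awarded: " << total << "\n";
    }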

At the beginning, I believe the software industry will reject the findings. After a while, it will have to conform to the new software NCAP method, just like the car industry today. And just as the NCAP brought much more secure cars out of the car industry, I hope this initiative would bring out much more secure software.


I like it, posted 4 Jan 2004 at 16:27 UTC by Omnifarious » (Journeyer)

I like that much better than attempting to use the liability system to punish people who make bad software.

The liability system is problematic because it's hard to define what is 'bad'. At one time, having Internet daemons with buffer overflow exploits would've been an understandable mistake. It isn't anymore. The liability system doesn't have a really good way of dealing with this reality.

This system does. It's much more of a carrot approach. It encourages a steady increase in the security of systems. Once everybody starts getting at least 4 stars, it's time to raise the bar some so the metric still has discriminatory power.

Sounds easy on the surface..., posted 4 Jan 2004 at 19:02 UTC by apenwarr » (Master)

I think this is a good idea, but it would probably be a ton of work and you'd have to slog through a lot of flamewars to do it. Let's see, I think there should be points for having automated unit testing, and more points if those unit tests are checked with 'gcov' to make sure the test coverage is reasonably high (or points could simply vary with the overall coverage).
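
For instance, a toy test like the following (a sketch; the function and file names are invented) shows how a coverage criterion would catch an untested branch:

    // Toy unit test to illustrate measuring coverage with gcov.
    // Build and run (assuming g++):
    //   g++ --coverage -O0 test_clamp.cpp -o test_clamp
    //   ./test_clamp
    //   gcov test_clamp.cpp    # reports what fraction of lines ran
    #include <cassert>

    static int clamp(int v, int lo, int hi) {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main() {
        assert(clamp(5, 0, 10) == 5);
        assert(clamp(-1, 0, 10) == 0);
        // clamp(99, 0, 10) is never tested, so gcov will report the
        // "v > hi" branch as unexecuted -- exactly the kind of gap a
        // coverage-based criterion would penalize.
        return 0;
    }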

But... aren't some programming languages inherently less good/secure/etc? Maybe or maybe not, but someone will bring it up, and then you'll have to argue. And how many points is each thing worth? Etc.

On the other hand, any objective measurement system like this can only help, even if every "point" is just a single star: if stuff is easy, everyone will just do it. The harder "stars" will mostly only be reached after the easy ones have been tackled for a particular project.

So, off you go then... :)

ISO 9001, posted 5 Jan 2004 at 11:28 UTC by lkcl » (Master)

hm, it sounds like a feasible way to achieve this would be to make it part of the ISO 9001 software / tickit standards.

now, i realise that some people absolutely hate iso 9001: i believe that those companies who have seen it fail to produce better quality software have missed the point entirely because they don't have the procedures in place that iso 9001 is there to verify.

plus, it's perfectly acceptable under iso 9001 to say "this project started before we began iso 9001, therefore we're not going to apply iso 9001 standards to it except for this statement of course".

the whole point is to say what you are going to do, do it, and then prove that you've done it. if that involves saying "we're going to do this as a botch-job rush project", then you do a botch-job rush project, THAT'S OKAY!!!

... just don't expect your customers to buy that product if it requires iso 9001 certification :)

so, with that out of the way, i believe that it would be good to have security auditing as part of iso 9001.

then it would be possible for customers to approach suppliers and say "please show me your security audit reports, and if you haven't got any, i don't want to buy your product; i will go to someone else who has".

this threat alone will make companies think twice about not getting security audit compliance.

About the Stars, posted 5 Jan 2004 at 11:28 UTC by kilmo » (Journeyer)

The concept is to use something like the NCAP: each program must pass various tests to get those stars.

Now I am not suggesting that I will establish such a thing (sorry, no can do... have too much work as is), but as a general concept, "someone should do that".

The problem with ISO 9001, posted 5 Jan 2004 at 19:20 UTC by jbuck » (Master)

ISO 9001 requires that the organization follow documented procedures and that it can demonstrate that it does, but the organization gets to write its own procedures, which can be as strong or as weak as desired. For this reason, simply saying "My organization is ISO 9001 certified" is almost meaningless. If the procedures you were audited on are well designed, it could mean quite a lot. If, however, the organization is only ISO certified because customers demanded it, and top management asked middle management for the cheapest possible implementation, it's worthless.

An ISO 9001-style security audit requirement would probably let the organization write its own rules for security audits and write them to be as loose and as easily met as possible. There would need to be minimum standards.

some points, posted 6 Jan 2004 at 12:37 UTC by nixnut » (Journeyer)

Cars are very much a standard product. That is, pretty much every car that you can drive on the road is built pretty much the same way and used for the same things, so the safety requirements can be easily standardized. This does not hold for software. The design and purpose of diverse pieces of software differ considerably, and therefore so do the requirements for security. For coding practices it is possible to define standards that (when followed) lead to a defined level of code quality. But even then it's very, very hard to guarantee the absence of logical errors (the bugs that arise from choosing the wrong algorithm).

Furthermore, and this is a very important point, security is much more than just high-quality software. For a business, the business processes are what count. Security, for businesses, means securing those processes (in order of importance for the organization). Properly defining the tasks and responsibilities of employees, and implementing and exercising control over this, usually has much more impact than software quality.

My guess is that different programming paradigms will develop. One is verifiable code, for software that is used for highly critical tasks, where being able to formally verify code, to mathematically prove its correctness, is of paramount importance.
Another is that some software will be more organic, meaning that it does not always behave correctly, but does so often enough to satisfy the need. Systems of this kind are programmed on the assumption that they may, and probably will, fail, but they are designed in such a way that they fail gracefully and that the systems they interact with can deal with that.

perhaps test based on standard interfaces, posted 8 Jan 2004 at 00:43 UTC by BitchKapoor » (Apprentice)

Cars are very much a standard product... This does not hold for software. The design and purpose of diverse pieces of software differ considerably, and therefore so do the requirements for security.
While I fully agree with what you say, I think this reflects part of the problem with a lot of software. All too often, standard interfaces are not used, either because (1) there aren't any (known) for the application domain, or (2) those which exist are either too complicated or too simple to be useful. This leads to a class of bugs which arise from trying to connect programs which make different interface assumptions. So while it would be impossible to specify one static standard to test all software against, requiring standard interfaces which have desirable safety properties and application testing capabilities would be a step in the right direction. Well-examined reference code could be provided with a very liberal license for commonly used interfaces, along with test suites. Note that by this I mean semantic interfaces, not just the syntactic notion of interfaces as in Java or CORBA.
For example, an HTTP parser module might do whatever is necessary to strip out meta-characters which might be misinterpreted in certain contexts, but additionally provide test probes to trace where user-provided data goes, e.g. to validate that if the application asks the parser not to perform certain safety checks, the application does not then use the data in a dangerous way. If the application writer does not do anything useful with these test probes, then a potential customer might automatically conclude that they get "zero stars," or try to improve the software so that it is meaningfully testable against the suite for that interface.
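
A rough sketch of what such a probe hook might look like (the whole interface here is invented for illustration):

    // Illustrative only: a made-up parser interface with a "taint probe"
    // so a test suite can trace where user-provided data ends up.
    #include <functional>
    #include <iostream>
    #include <string>

    using Probe = std::function<void(const std::string& value,
                                     const std::string& context)>;

    class HttpQueryParser {
    public:
        HttpQueryParser(bool strip_meta, Probe probe)
            : strip_meta_(strip_meta), probe_(std::move(probe)) {}

        std::string value(const std::string& raw, const std::string& context) {
            std::string v;
            for (char c : raw) {
                // drop characters that could be misinterpreted downstream
                if (strip_meta_ && (c == '<' || c == '>' || c == '"' || c == '\''))
                    continue;
                v += c;
            }
            if (probe_) probe_(v, context);  // let the test suite see the flow
            return v;
        }

    private:
        bool  strip_meta_;
        Probe probe_;
    };

    int main() {
        // a test harness installs a probe; production code could pass nullptr
        HttpQueryParser p(/*strip_meta=*/false,
            [](const std::string& v, const std::string& ctx) {
                std::cout << "probe: unchecked value \"" << v
                          << "\" flowing into " << ctx << "\n";
            });
        p.value("<script>alert(1)</script>", "html-output");
    }
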
As time goes on and vulnerabilities are discovered when multiple interfaces are used in conjunction with each other, (1) these interfaces will need a way to evolve, including when already integrated into existing software, and (2) test suites will need to cover cases of multiple interfaces occurring in the same component (where a component may be defined as either atomic or the composition of multiple components).

Library interface problems, posted 8 Jan 2004 at 15:39 UTC by Omnifarious » (Journeyer)

Actually, though this is a bit off topic, my biggest beef with various libraries is this:

They demand to be attached directly to some source of IO that they will not let go of until the library state is at one of a very small number of well defined points. There is no way to tell these libraries "Here, have a little bit of data, do what you can with it, then save your state so you can get more data later.". It's harder to write a library that does that, so nobody does it. It's so incredibly irritating.

I end up discarding about 70% to 80% of the libraries I might otherwise consider using due to this one single problem.

I avoid writing libraries that do that. If there's a chance that the library will have to wait on some external source for a piece of data, the library has a way of getting out, and being re-entered when that piece of data is available. It's just good design to do it that way.
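
A sketch of the style I mean (the names are invented): the caller owns the I/O and hands the library whatever bytes happen to be available; the library saves its own state between calls and never blocks:

    // Push-style incremental parsing: feed() accepts partial input and
    // carries its state across calls, so the caller controls all I/O.
    #include <cstddef>
    #include <string>

    class LineParser {
    public:
        // Consumes bytes up to and including the first '\n', if any.
        // Returns how many bytes were consumed and sets *line_ready when
        // a complete line has accumulated; call again with the rest.
        std::size_t feed(const char* data, std::size_t len, bool* line_ready) {
            std::size_t i = 0;
            *line_ready = false;
            for (; i < len; ++i) {
                if (data[i] == '\n') { *line_ready = true; ++i; break; }
                line_ += data[i];
            }
            return i;
        }

        // Hand the finished line to the caller and reset for the next one.
        std::string take_line() {
            std::string out;
            out.swap(line_);
            return out;
        }

    private:
        std::string line_;  // partial line carried across feed() calls
    };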

I should flesh this out and make it a top-level article.

library pre-emption, posted 8 Jan 2004 at 23:32 UTC by BitchKapoor » (Apprentice)

They demand to be attached directly to some source of IO that they will not let go of until the library state is at one of a very small number of well defined points.

I think I know what you mean. I believe this is directly correlated with the overall rather weak support for multithreading in prevalent programming languages and programming styles. If the library operated as an autonomous thread which interacted with you and the data source/sink asynchronously, I think most of the problems you describe would go away. The library's inputs and outputs should probably also be parameterized, allowing "anything that respects this (semantic) interface" to be plugged in as a source or sink, rather than being hard-wired, e.g. to interact with the local filesystem (I suppose this would be a limited form of black-box reflection).
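
As a sketch of the parameterization I have in mind (names invented), the library would write to an abstract sink rather than straight to a file descriptor:

    // The library depends only on this small interface; tests can plug
    // in an in-memory sink and assert on exactly what was emitted.
    #include <cstddef>
    #include <cstdio>

    struct Sink {
        virtual ~Sink() = default;
        virtual void write(const char* data, std::size_t len) = 0;
    };

    struct StdoutSink : Sink {
        void write(const char* data, std::size_t len) override {
            std::fwrite(data, 1, len, stdout);
        }
    };

    // Library code never hard-wires the filesystem; any Sink will do.
    void emit_report(Sink& out) {
        const char msg[] = "report\n";
        out.write(msg, sizeof(msg) - 1);
    }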

Congrats kilmo!, posted 9 Jan 2004 at 19:23 UTC by shlomif » (Master)

kilmo: congrats on your first (IIRC) Advogato editorial. It is a bit thought-provoking, but I don't have a firm opinion on it yet. As a general rule, I think methodologies for more secure software (and higher-quality software in general) can be applied regardless of whether there are security verification tests. Plus, such tests can usually be pretty hard to develop correctly.

Re: library pre-emption, posted 13 Jan 2004 at 15:40 UTC by pphaneuf » (Journeyer)

My multithreading alarm is going off... :-)

I don't think multithreading would save us here. Multithreading is like a big power tool, very useful and indispensable when you need it, but you don't want to do everything (like filing your nails) with one!

See Xlib or libpq (PostgreSQL client library) for examples of how it can be done without threads. You can get the file descriptor easily, put it in a select() (or other event notification mechanism) and react accordingly.
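
With libpq, the pattern looks roughly like this (a sketch using libpq's asynchronous calls, with error handling trimmed for brevity):

    // select()-driven use of libpq: the application, not the library,
    // decides when to block.
    #include <sys/select.h>
    #include <libpq-fe.h>

    void run_query(PGconn* conn) {
        PQsendQuery(conn, "SELECT 1");      // submit without blocking
        int fd = PQsocket(conn);            // the fd we can select() on

        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(fd, &readable);
            select(fd + 1, &readable, nullptr, nullptr, nullptr);

            PQconsumeInput(conn);           // read whatever has arrived
            if (!PQisBusy(conn)) break;     // a full result is now ready
            // otherwise loop back to select(); we never block in libpq
        }
        while (PGresult* res = PQgetResult(conn))
            PQclear(res);                   // drain the results
    }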

As much as it pains me to say this, Windows has something in the right direction, IMHO: the message pump. They managed to include big mistakes (there is a function pointer in a timer event message, which will be followed immediately, but the message can come from any process!) and bugs (they have a fantastic DNS resolving API, but it has several internal limitations that make it suck), but it's still a pretty good idea. You can register interest in events and have a function called "secretly" when they arrive, no matter who called the GetMessage() function (or whatever it is). That function just has to be called "once in a while" and stuff happens, with the application in control of its blockingness.
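
For reference, the classic pump looks roughly like this; the application decides when (and whether) to sit in GetMessage():

    // The standard Windows message pump: GetMessage() blocks until a
    // message arrives, then DispatchMessage() invokes the registered
    // window procedure for it.
    #include <windows.h>

    int pump_messages() {
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);  // turn key presses into characters
            DispatchMessage(&msg);   // call the window procedure
        }
        return (int)msg.wParam;
    }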

But don't despair, being Microsoft, they are managing to mess it up and are multithreading everywhere, so that Intel will stay nice and happy.

real-time message pump graphical "OS" thing, posted 13 Jan 2004 at 22:23 UTC by lkcl » (Master)

i worked for Cedar Audio 7 years ago. at the time, nt was way too overboard, and windows sucked the life out of a machine, taking control of the hard drive (and all interrupts) at inconvenient moments when the DSP board needed to say "hey, got some data here!"

so the programs were effectively written as a windows-like OS, taking control of interrupts, having its own stack allocator, and doing its own VGA 640x480 screen driver. all in c++ and assembler.

the core of the OS was a message pump - like windows. the key difference was that we added a "priority" to the message queue.

this made a critical difference in the usability of the OS: it allowed the screen-writing part to prioritise window drawing by putting the background at a higher priority than the border, and the border higher than the text; and it allowed mouse/keyboard events to be handled at a much higher priority than drawing, with the sending of events to the DSP board in the middle.

in this way, wiggling the mouse changed the volume instantly; the drawing on the screen lagged, but didn't interfere. something very important when running on 486 dx 25 systems.
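
a rough sketch of the idea in modern c++ (the priorities and handlers below are invented for illustration; the original was custom c++ and assembler):

    // A prioritised message pump: the most urgent pending message
    // always runs first.
    #include <functional>
    #include <queue>
    #include <vector>

    struct Message {
        int priority;                  // higher runs first
        std::function<void()> handle;
    };

    struct ByPriority {
        bool operator()(const Message& a, const Message& b) const {
            return a.priority < b.priority;
        }
    };

    int main() {
        std::priority_queue<Message, std::vector<Message>, ByPriority> pump;

        // input far outranks repainting, so the mouse stays responsive
        // even when the screen lags behind
        pump.push({ 30, []{ /* handle mouse/keyboard event */ } });
        pump.push({ 20, []{ /* send event to DSP board */ } });
        pump.push({ 10, []{ /* redraw window background */ } });

        while (!pump.empty()) {
            pump.top().handle();
            pump.pop();
        }
    }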

utterly cool: absolutely loved it.

i wonder if i can ask them if they could release it as open source: the code was retired 5 years ago.

anyway, the point is that it made writing applications - and using the class libraries - absolutely trivial. dials, sliders, ganging locks and switches, graphs, dialog boxes, all possible to plug together and still get an application rolling in at under 400k.

no darn threads, either.

Threads as security hazard, posted 14 Jan 2004 at 18:42 UTC by kilmo » (Journeyer)

I suspect that threads are almost a disaster waiting to happen (speaking from a security point of view). Having various processes ensures that if one is broken, the rest are unaffected (up to their interaction with it). Multithreading is different: break one thread, and you break them all.

I once heard Moshe Zadka claim that threads are not necessary for performance, and that the way Python deals with multitasking is by emulating multithreading in one process. It might be me misunderstanding him, or not remembering it perfectly. Still, there should be no large efficiency losses (beyond the cost of respawning and a bit of interaction), and in many cases those losses are negligible compared to the security that separate processes offer. My 2 centavos.
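
A minimal POSIX illustration of the isolation argument (a sketch; the "worker" crash is simulated with abort()):

    // A crash in a forked worker process leaves the parent untouched;
    // the same fault in a thread would take the whole program down.
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        pid_t pid = fork();
        if (pid == 0)
            std::abort();              // the worker "breaks"

        int status;
        waitpid(pid, &status, 0);      // the parent merely observes it
        std::printf("worker died (signal %d), parent carries on\n",
                    WIFSIGNALED(status) ? WTERMSIG(status) : 0);
        return 0;
    }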

Definition of "thread", posted 15 Jan 2004 at 01:35 UTC by BitchKapoor » (Apprentice)

I think we are having a misunderstanding over the definition of the word "thread." In my work, a thread does not have to be allowed to write, in an unsynchronized fashion, to arbitrary data shared with another thread. Nor does it have to be a heavyweight OS thread. So my definition is inclusive of multiple processes, multitasking emulation, message pumps, and select(). Perhaps I should rephrase this as simply concurrency.

Software peer review, posted 15 Jan 2004 at 21:09 UTC by slamb » (Journeyer)

For open source software, I think a lot of this can be helped by peer review. Large, well-known projects get a lot of it, but there are a lot of smaller projects that just aren't reviewed by anyone but the author and a few users, who aren't necessarily qualified to evaluate them.

The Sardonix project attempts to do this for security. It would be nice if more people knew about it, and if people audited some smaller projects. Particularly web stuff (PHP, CGI/mod_(perl|python), servlets, etc.), where there's a lot of insecure code out there and an audit is almost certain to turn up a vulnerability to be fixed.

I don't think there's any site out there to encourage general peer review of smaller projects, though. It would be nice if there were. How often have you looked at a dozen projects, then realized 11 of them had made a bad fundamental design decision or used poor coding throughout the whole project? And then not said anything because you have no idea if the coder wants to hear that kind of criticism or not? And then...wondered how many people had thought the same thing about your code.

It would be nice if there were a well-known site where (brave) coders could put their projects up for review, essentially saying "I want to hear whatever constructive criticism you have, even if you think there are very basic flaws." I think it'd help me be a better programmer, at any rate.

Absolutely., posted 16 Jan 2004 at 11:07 UTC by Stevey » (Master)

slamb - Peer review of software is a great thing; I've not seen the Sardonix link for a while.

I've been auditing some Debian software for a while now and finding security bugs.

It has been my intention to open this up more in the future and try to invite others to join in, but it seems to be a subject which doesn't interest many people despite the benefits being obvious to me at least.

