Why is so much software so bloated?
Posted 17 Dec 2000 at 21:23 UTC by Fefe
To me, it is sickening to see how software gets slower faster than
hardware gets faster. And nobody appears to have any feeling of guilt
about it. People are happy to use bloat technologies, and no one knows how
to write lean and fast software any more. Why is that so?
I don't even have to point at Mozilla, GNOME or KDE, everyone who tested
those knows how excruciatingly bloated those are. Holy wars abound over
which one is less bug-ridden and causes fewer crashes. Sometimes I wish
we were back in the times when people had to work with a few kilobytes of RAM.
Back then, software bloat simply caused applications not to work.
Today, we have workarounds like good virtual memory systems where unused
bloat is not loaded from disk, but that just treats the symptom, not the cause.
Why is today's software quality so bad? Why do users accept software
when using it is like wading through a tar pit? And if someone sets out
to write a new piece of software, the first thing they do is reuse
bloat-ridden monster components from others. "Hey, the bloat is not
coming from me", I hear them say. People even get away with calling
something like gtkhtml or galeon "light-weight". Aren't they seeing the
megs upon megs of Xlib, Xt and Gtk bloat those apps are carrying around?
I don't get it.
Does anyone have an idea why this is so and what to do about it? The
humble beginnings of my attempt to fix the situation can be found at /proj/diet libc and
utils. I am not convinced this will help anyone but me, though.
People are actually spending money to get even more bloat on their new
hard disks they bought because the old one couldn't hold the old bloat.
The typical Linux distribution today eats much more disk space than the
typical Windows installation. Sigh.
Get over it, posted 17 Dec 2000 at 22:10 UTC by egnor »
You're on some kind of holy crusade. Why? Consider the tradeoffs:
- Developer time packing more features into less space. (Make sure to
include the cost of ongoing maintenance.)
- User convenience from having more features available.
- User time spent waiting for slow software.
- User cost and hassle upgrading hardware.
You would trade off #1 and #2 in exchange for #3 and #4. Other people
make different choices. This isn't a black and white issue. The way to
resolve these tradeoffs is exactly the way we're resolving them now, by
offering a spectrum of choices and letting the user community at large
"vote with their feet". Are you saying there aren't enough choices? (I
disagree.) Are you saying developers aren't responding to the way users
are "voting"? (Again, I disagree.)
Personally, I find Mozilla too slow, so I don't use it. (The Mozilla
team is hardly unconcerned with performance; reducing bloat is their
primary objective now.) Netscape 4 is "fast enough" for me. If I wanted
something that was even faster and had even fewer features, I could use
"links". The spectrum is fully populated.
(Oh, you want software that's fully functional *and* fast *and* small
*and* available right now? I see.)
You apparently prefer software that's faster but less featureful, and
that's your choice to make. For example, you choose to remove I18N from
"diet libc". International users would probably make a different
choice. Are you complaining that users at large prefer a different tradeoff?
Arguably, the time of skilled developers is one of the scarcest
resources in computing right now. Hardware is cheap, software is often
free, but developers to work on that software are few and far between.
In that context, bloat makes sense. If a good VM can save the
programmer the few hours spent shaving bytes off every last data structure
in their program, those are hours that can be spent on something more productive.
Your argument seems to hinge on esthetics. That's great for stirring up
flamewars, but it's not an argument anyone can resolve. Get pragmatic.
Yeah, I had an 8-bit computer once, too. It's a different world now.
Tradeoffs, posted 18 Dec 2000 at 03:48 UTC by hp »
AOL to egnor. Absolutely. There are tradeoffs involved,
and developer time is usually the thing in shortest supply, so you can't
trade it off for other benefits. When I look at code written by big
anti-bloat advocates, I usually see code that no one but the author can
work on, that takes forever to get the bugs out of, and is just in
general painful to maintain and fragile to use. And that's why you don't
see those people writing huge complex software systems such as a
desktop. Because this approach to code does not scale up.
The bottlenecks for software today are almost always developer time
and bugs. Both of those are greatly reduced by programming at a higher
level. As long as software is fast enough to be nicely usable, end users
do not care what top has to say. Nor should they care.
I'm sure as soon as someone writes a significant, full-featured end
user desktop with the low-level coding style of a C library,
and it is robust and unbuggy, then everyone will be impressed and users
will flock to it. But, it hasn't happened yet. And there are good
reasons why it hasn't.
You would probably get a kick out of the thread I started on Slashdot
with my post Programmers Make Computers Slower Year by Year in
the article Netscape 6
vs. 4.7.x. (The discussion has been archived so I can't link to my
original post or the following thread, but use your browser's Find
command to look for the subject or my slashdot username "goingware").
The comparison review that was the subject of the article found that the
latest netscape was a lot bigger and slower than, not just Netscape
4.7.x, but the Mozilla on which the latest Netscape was based.
Even while our friends at Intel, Motorola and IBM do the most amazing
things to speed up computer hardware (and don't forget our
friends at Adaptec with the blazing 29160 SCSI Ultra160 Host
Bus Adapter), programmers consistently work harder year after year
to steal from the end user the gains that they might
otherwise have from purchasing new hardware.
This leads to the ridiculous situation that an old computer
runs slower and slower as new software is loaded on it, until you
have to buy a new one just to run at all.
It's not just that you perceive your old computer
as running slower than the new machines because it was less
zippy when you bought it; the regressive
performance dehancements of operating systems and bloated applications
really do make your computers run slower...
There is no excuse for this. New features should not come at the expense
of performance, and each new release of both operating
systems and applications should be both faster and take up
less space, not more. If substantial new features have been added then
there may be cause for a little more code size but certainly
not what we see in practice, such as what was listed in the Netscape 6 review.
My post was quite controversial and a lot of people thought that it was
meant as a troll - it was moderated down several times as such - but I
meant it in all seriousness and I've been going around saying this
widely on the net and in person (advocating for lean code within
companies) for years (and it was moderated up several times as well).
Fast performance and small code size should always be a design
objective in any software project.
A professor, whose name I forget, said something like:
Automation is a way to do a task almost correctly, but faster and cheaper.
Maybe bloat is a way to make software almost as usable, but with more features.
I think you make some good points, and I think that people (non-software
people in particular) tend to underestimate the costs of bloat. In the
end, though, our job is to weigh the various competing factors and
strike a good balance. Personally, I try to write lean code and fight
hard against unneeded features, but remember: as simple as reasonable
and no simpler. (to misquote that what's-his-name :-)
P.S. No one really runs static binaries anymore, though. A null,
dynamically linked program is under 3KB.
It's often said that it is preferable to have bloated code than obscure,
tightly written code that no one but the author can maintain.
But I assert that the most beautifully architected code is
lean, fast, and easily maintainable. People who write obfuscated code
other than to win contests just aren't very good programmers, whether or
not their code runs fast.
I think what is important is to design in your leanness at the
architectural level, and code with efficiency in your consciousness, but
not in minute detail.
For one thing, I think there is not enough use made of libraries in most software.
Libraries should be used both at the system level and
within a development organization, not just to save code space and
development time, but to provide a focal point for optimization. If you
code your application to a library (whether the library be a class
library, traditional subroutine library, or C++ generic programming
template library), improvements made to the library steadily improve all
applications, either when they are recompiled (for templates), relinked
(for static libs), or the libraries recompiled and redistributed
independently of the apps (for shared libraries).
Also if multiple, independent application developers are making use of
libraries they will tend towards more general architecture and get more
eyes on them than the code internal to a single application will.
An important principle here, whether in libraries or user (application)
code is the Extreme
Programming practice called Refactor Mercilessly.
Simply put, this is recognizing when common code is repeated in two
different places and putting it in a single place by making a subroutine
out of it. Sometimes it is well to refactor at a higher level by
breaking a large monolithic class into several smaller ones, usually
through composition but sometimes through inheritance too.
What I'd like to suggest to anyone who's written a software product is
that you don't stop at getting all the features implemented and working
correctly, but examine your code globally to determine how it could be
written better. Consider, when you ship a bug-fix release, spending as
much time refactoring your program as actually fixing bugs - you're
likely to find that your refactoring fixes a lot of problems for you as
well, as was my
experience with refactoring some code that interfaced with an XML
library in a recent project.
And one last note about libraries - when you're writing a new
application, it is very valuable to try to isolate parts of the program
into self-contained modules and to package these modules into libraries
which are built separately from your application. What you want is to
write a lot of modules which do not depend on any other modules other
than the standard libraries, or at the next level, on any modules but
other modules in the same library.
This kind of thing is discussed in John Lakos' book Large
Scale Software Design. It depends not just on how your code calls
subroutines in other modules (which will force those modules to be
linked in) but also how they include header files from other modules
(this forces other modules to be provided for compilation - you have to
be concerned about the physical design of your source code.)
If you're writing a closed-source app, this will make your program
easier to debug and test when you consider it entirely by itself. If
you write more than one application, you can rapidly build a reusable
technology base for use within your company - of code that is tested and
reusable. And if you're writing Free Software, you can provide the
source to the library for others to consider, and they can either use
your library as-is, or combine it with their own libraries, and we'll
all be better off for it.
Stroustrup discusses the importance of libraries in his chapters on software design
in part IV of The C++
Programming Language. (Table of
Contents) As I recall the first of the design chapters (chapter 23)
would be good reading for anyone coding in any language, and the next
would be pretty reasonable for anyone using any object-oriented language.
Basically what Stroustrup says is that when you are considering how to
implement a program, you should:
- Find existing code that will suit your purpose
- Modify existing code slightly so that the new code suits your purpose
- To the extent you can't find reusable code, write new reusable code
that meets your needs
- Only when it is not possible to use or write reusable code do you
write code specific to the task at hand
Stroustrup observes (and so do I) that this is not the way things are
commonly done in most software development organizations. This is a
management problem - programmers are rewarded for the new code they
write, rather than the money they save their organization by avoiding
having to write code at all, and programmers themselves are typically
more interested in writing code that visibly does something that
you can demonstrate to any user rather than writing utility libraries
where your face may not show up in lights for having written it.
Now, we all know that the whole point of Free Software is to enable us
to have the source code to modify and customize to our own needs. But
actually most free software does not serve this end very well at all; at
best you can
make small modifications to an application to make a special version of
it, or to port it to a new platform, but can you lift out big chunks of
the source code wholesale and retarget it to an entirely unforeseen purpose?
Yes, there are free software libraries - look at what's provided with
GTK+ and the Gnome libraries for example, and the ZooLib cross-platform
application framework which I contributed to bringing to open source
release - but most of the source code written in application form is not
really reusable outside of the application itself.
You could, but this can only be done easily if the original author
architected and implemented well to make the classes or subroutines
reusable in themselves without requiring the application as a whole.
The shining exception to this, really, is GTK+. As I understand it, this
was originally written to be an application framework for use by one
program, the GIMP, but it
was architected in such a way that it can be used outside of the
original program, and is cross-platform besides as evidenced by GTK+ for BeOS, GTK+ for MacOS and wxWindows/GTK.
Cleverer than I am, posted 18 Dec 2000 at 13:25 UTC by dan »
To the extent you can't find reusable code, write new reusable code that
meets your needs
If you can write new reusable code and see it actually reused often
enough for the extra development time to pay off, you're a better
programmer than I am. Probably a practising clairvoyant, too.
"Reusable" code shouldn't be considered reusable until it's been reused.
What happened to "plan to throw one away"?
Here are some of my hypotheses about why I think software is so
huge and complicated:
- Languages were used to create frameworks that isolate the
environment, rather than extending it.
- Inability of a language to extend itself dynamically at the syntax
and semantics level.
- Lack of stack-based objects capable of lexing and parsing textual input.
- Languages were used independently or singularly without regard to
the machine and operating system.
- Runaway abstraction.
- Uncontrolled coupling.
- Low-tech library management.
Though I have no proof to back them up, I'll just leave them as-is for now.
That's my two cents. Thanks.
If you can write new reusable code and see it actually reused often
enough for the
extra development time to pay off, you're a better
programmer than I am.
Probably a practising clairvoyant, too.
It's really not so difficult to write code that is at least moderately
reusable. The main thing is that you have to be aware of some basic
principles and to actually be conscious of them while you're working.
It helps to try over some period of time and in fact I've been writing
little homemade libraries everywhere I've worked since Working Software in 1990 - and I admit
the libraries I wrote back then were pretty cheesy compared to what I do
now, and what I do now pales in comparison to ZooLib, which Andy Green
spent about the same amount of time creating. It is partially
something you must learn from experience, but what I'm saying is that
it's worth trying, and you can get rewarding results.
Yes, it is very difficult to design a really good library, one that
is well-architected enough to serve a diverse range of purposes and is
also implemented well enough so that it is interrupt-safe and does not
leak resources or cause deadlocks (if your language supports exceptions
and your program is threaded). For a discussion of the difficulty of
writing exception-safe templates, there's a good chapter in More
C++ Gems edited by Robert C. Martin. (The review I linked to isn't
all that positive but I found the book very worthwhile myself,
particularly the chapter on Large Scale Software Design by John Lakos,
who also wrote a book on the same subject).
A problem with templates in particular, which are always meant to be
reusable, is that while the template itself might not throw an
exception, you have no way of knowing whether a function in a type that
it's instantiated with will throw. You have the same problem using
callbacks from libraries.
But I digress. Writing reusable code at the simplest level is often a
matter of ensuring that a single subroutine can run on its own
without having to link in the whole rest of the program. Scan through
the source of your favorite program and find a single subroutine that
looks like it will serve a useful purpose. Now link that routine's
source file into a new program and try calling it. How hard is it to
get the subroutine to compile and link - do you need just that one
source file from the original program or hundreds of them?
There's a couple of simple principles. One is to parameterize things in
such a way that a given routine will be of more general purpose. This
might make it a little less efficient than a hard-coded specialty
function but may increase overall efficiency because you will need fewer
functions overall in your program and by making better use of both the
cache and virtual memory your program will load and execute faster.
It's hard to think of a really good example, but consider this cheezy one:

	long AddFour( long toWhat )
	{
		return toWhat + 4;
	}

	long AddFive( long toWhat )
	{
		return toWhat + 5;
	}
Now those two functions look really lame sitting right next to each
other, but in a big program they may be independently written in widely
separated source files, perhaps by different people who were unaware of
each other's work. It's not as stupid as you might think to have such a
function as things like this get used for stuff like accessors into
packed data blocks like SCSI scanner commands.
But later someone takes a global look at the code and refactors it into
the obvious single function:
	long AddOffset( long toWhat, long offset )
	{
		return toWhat + offset;
	}
Cheezy examples aside, this first general principle in reusability is,
rather than hardcoding a function to serve one's single immediate need,
think about how it can be parameterized to serve several users'
needs. What's most important is that you must consider this at the
time the function (or class) is originally designed, as it is hard
to go back and restructure code to take out specialization later.
Don't get hung up on it, just have an awareness of it; most of us have
some sense of what constitutes good code and this kind of reusability
should be part of that sense.
The next is to avoid hardcoding types that are likely to only be used in
one program into the parameters or local variables of a function (or
member variables of a class). One really good way to do this is to make
good choices about what should be in base vs. concrete derived classes
and always refer to base classes when you can.
This usually means, in a language like C++, that you cannot hold an
object by value when you possess one but instead must use a pointer or
reference (other languages don't have a choice). In C++ if you hold the
value of a base class and copy or initialize it from a derived class,
you'll "slice" off the derived class personality - not just the member
variables but the virtual functions. Thus one gotcha is that you must
make the right choices about how you store your data - I've been slowly
working on an article to address this called Pointers, References
Another good idea is to avoid having a library routine make subroutine
calls into specific named functions that are not part of the same
library. What you want to do is have a layering of your source code
structure where the lowest layers depend only on the standard library,
then the next layers depend only on the next layer down, and so on, with
the functions at the top having the most dependence (all the way down)
but are the fewest in number.
One way to do this in a normal C subroutine that needs to call another
function is to pass in a pointer to a function as a parameter. That way
the client code can determine what code is called by the library
routine. This is used to great effect by the standard C library routine
qsort(), which quicksorts an array based on a comparison function. It's
simple enough to hardcode a quicksort for an integer array, or a float
array, but to quicksort anything you have to pass in a pointer to a
comparison function. It makes it a little harder to use the
function but it saves you having to rewrite it all the time. You could
even make this more general and have pointers to functions to access the
elements by index and swap them, and then you can sort any data
structure, not just an array.
In object oriented programming this is handled again by using base
classes and calling their member functions. If you pass in a derived
class you can override a member function and change its behaviour to
whatever you desire (only if they're virtual in C++). But if you pass
in a class that is pretty hardwired to be of use only to the one given
application, then your routine is not likely to be reusable.
There's more to it but I think if you simply maintain an awareness of
these simple practices and try to do them from time to time you'll save
time in the long run in writing your programs. You'll also find it
easier to design and implement your programs as you'll be concentrating
on whatever is abstractly essential to the problem at hand when you're
writing a function.
I have a friend who codes pretty much the exact opposite of what I've
suggested here. Every function is hardwired to serve one purpose and
one purpose only. If he has a need that is similar to, but different
from a previously coded function, he copies and pastes the original
source to a new location and modifies it until it suits that new, slightly different purpose.
It happens that this same guy took all the headers for one large
commercial product and copied and pasted them into about three humongous
header files "so they'd be all in one place and it'd be easy to find the
definitions for things". No accident that when I ported his product to
a new platform I broke his headers up into many little headers, often
with only one struct declaration or prototype per header file, because
the big headers had portable and platform-specific code all mixed up.
This guy's code is a nightmare to maintain. I think he shows genius in
what he's managed to get running - he's got a lot of shipping products
to his credit - but what pain he subjects himself to because he won't go
to a little extra trouble to structure his code with good style. A lot
of the work I've done for him has been to do the kinds of things I
describe above to his work, parameterization and such.
A problem with templates in particular, which are always
meant to be reusable, is that while the template itself might not throw
an exception, you have no way of knowing whether a function in a type
that it's instantiated with will throw. You have the same problem using
callbacks from libraries.
I solved a similar problem when I wrote a Visitor framework for a
class library of mine. One problem with a Visitor is that it may need
to stop suddenly and return immediately when it encounters a problem, or
discovers it's done and wants to not visit anything else for efficiency
reasons. I solved this by having a placeholder exception the Visitor
could throw. I made all the functions involved in implementing the
Visitor specify this placeholder exception in their throw clause.
I would imagine a similar thing could be implemented for a template
library. Perhaps a single placeholder exception for the entire library.
The main problem this causes is that things that use the library need
to be adapted to throw this new exception instead of what they threw before.
Another thing you could do is have a standard exception that was
thrown by the template library whenever it caught an exception in a
catch (...) clause. I can't remember if you can do this or
not, but it might be possible to make the new exception contain the
original exception. If not, at least the new exception provides a way
to catch the error without it turning into an uncaught exception.
I don't have C++ Gems, so maybe these techniques were discussed there. I
think what you propose is a good idea - that is, have all the routines
in a library catch all exceptions from functions they call and throw
only standardized exceptions that are declared in a throws clause.
I'm not entirely sure why, but More C++ Gems would seem to discourage
this kind of thing, though. In that article on template exception
safety, it states that one of the design principles is that libraries
should not impose error handling policies, and so what they guarantee is
that if an exception is thrown within a library, it won't be caught at
all (or will be rethrown), it's just that there will be no resource
leaks and all of the objects will remain in an error-free state (can
your member functions throw exceptions anywhere, with the classes
remaining usable and not causing bugs?)
I can see the point of libraries not imposing policies to some extent,
for example some libraries display error message alerts and this makes
them pretty useless for automated processing or makes localization
difficult, and sometimes makes the program unusable if you get cascading
error alerts in really badly designed error handling. But I don't
personally see anything wrong with catching an exception and handling it
in some graceful way - maybe one of the features of the library is that
you can be sure it throws no exceptions at all, for use by code that
doesn't want to handle exceptions, or to make exception handling less of a burden.
Using throws in C++ is tricky because you have to be really sure you do
catch all the exceptions that might be thrown inside you, or else your
program will terminate. This is really a drag because lots of legacy
code was written before exceptions were provided in C++ but is used in
code with exceptions; also it's generally the case that lots of C++
programmers aren't real careful about exceptions - I'm just beginning to
get a grasp of them.
The one thing I really do like about Java is that exceptions were
designed in from the beginning and functions must either catch all
checked exceptions from functions they call, or declare that they throw
them. You can't have the case where a function neither catches an
exception nor declares it, and it makes it much easier to keep things straight.
In C++ the situation really is a mess in general so about the best you
can do is use catch(...) and just deal with it in some generic (and
probably not very helpful) way.
Yes, you can include an exception as a member variable of another
exception. I've done this. An exception is just an object that is a
class instance like any other - the magic is happening to the object
that is actually being thrown, which is handled behind the scenes by the C++ runtime.
There's a couple of gotchas. When a C++ exception is thrown, it is
copied and destroyed "by value" - that is, you want to say "throw foo()"
rather than "throw new foo", because in the second case you throw a
pointer that will never be passed to delete.
In general you want to catch a C++ exception by reference so you get any
derived class behaviour without slicing. But if you keep it as a member
variable in an exception you throw on again, you want to copy it by
value rather than keeping a reference to the original because the
original will be destroyed once you throw again. Since you use the copy
constructor of the type you declared in the catch clause, you'll slice off any
derived class behavior and member variables if the object that was
actually thrown was a derived class - and you have no way of knowing that in general.
The only way around that is to have exceptions with a clone() method in
their base class; then you can clone the real derived class, but
this limits the kinds of exceptions you can do this with.
This is less of a problem in Java because you prevent the exception from
being garbage collected by keeping a reference to it. Another win for
Java I suppose.
When we say "exceptions", we mean two different animals - there is the
behaviour of the exception, which is an abnormal return, unwinding the
stack until you find an exception handler, and then there is the data
item of the exception itself. I think it is really useful to explore
exceptions as data objects.
For example, they can have member variables. Commonly this is used for
little more than storing an error code or an error message string or a
reference to another data object that caused the exception. But they
can be arbitrarily complex data structures - can you think of a way it
would be useful to throw the root of a binary tree? (I can't offhand,
but just to stimulate your imagination)
Exceptions don't have to be created right when they are thrown. You
could make them up ahead of time and store them in a pool and pick them
out and throw them. You could store an array with one each of every
different type of exception your program might throw and when it comes
time to throw something, use a random index into this array and throw
what you find there, just to be weird.
Those two might not be very realistic examples but I have used
exceptions as data items to very good effect, to simulate the behaviour
of throwing exceptions between threads in a Java program.
In this case I wrote a communications program in which the low-level
reads and writes were handled by separate threads processing a queue.
The clients to these processes were running in another thread, so for
example communication wouldn't block the UI. When communication
completed a function would be called in the client object, much like an
event handler in AWT or Swing.
In normal processing, a write would result in an event that simply
reported success, while a read would report success and include a vector
with the bytes that were read in.
If an exception was thrown during the read or write (this could be
caused by a timeout, protocol error, or data being passed into the
communications processor that was in the wrong format, as well as any
kind of exception caused by normal Java functions), the exception would
be caught and then an error event handler would be called, with the
exception itself being passed as a parameter.
I figured, hey, all the information you know about the error is in the
exception, why not report the error by providing the exception as a parameter?
This worked really well, because the class with the event handlers would
generally be able to deal with policy decisions like what to do in the
event of a timeout, one just had to handle the case that exceptions
can't be thrown outside of a thread - they can't, but they can be caught
and passed to another thread.
Well this has maybe strayed a little off the original topic.
I generally prefer working in C++, but there are some things that I
think Java has definitely done better, exceptions being primary among them.
Hey, you're not the only person who can write to provoke a reaction :-)
I thought I'd get a response like that. I know that. I still
think you're understating the amount of work involved in design,
coding for all anticipated requirements, testing, and documenting the
API to the point that anybody else (or yourself, six months later) can
come along and use it. My criterion for "reusable" is that somebody
else can save time by using it. How much of the stuff in, say, CPAN,
does that actually hold for?
If what you meant was "software which may some day become the basis
for reusable code", then I agree. If what you meant was "software
which I have already written three variations on and can see how to
usefully abstract" then again I agree. But writing new
"reusable" code without knowing several situations in which it will be
used is just like launching a company without any idea who the
customers will be. It's possible, but it's not a risk-minimising strategy.
Incidentally, if you want to see how well-acquainted Lisp programmers
are with abstracting and parametrizing their programs, run don't walk
to the nearest copy of "On Lisp" by Paul Graham, and read chapter 16
(and, to be honest, most of the rest of the book too). If you thought
that passing function pointers to qsort was a neat idea, this
stuff will make your head spin.