Inheritance: A friend?

Posted 10 Jan 2001 at 03:46 UTC by nymia

In programming language terms, inheritance provides a way of grouping methods and properties together into classes, making the work of the architect/designer a lot simpler because it allows the architect to see the application in an ordered arrangement. Its use also helps the programmer, since it provides a way of breaking complexity into pieces. These pieces then form an order, each piece having relation and meaning to some of the other pieces. Moreover, it allows the programmer to work as closely as possible to the business problem itself, leaving all the non-business problems the responsibility of the language.

On the other hand, from my point of view, inheritance doesn't seem to be the solution all architects and programmers want it to be; it can become a problem in itself when used the wrong way.

Here are five allegations explaining why I think inheritance is flawed:

Allegation #1: Solidifier, Hardener

Inheritance is good for building hierarchies and relations, but it doesn't lend a helping hand when major features need to be inserted into the structure, simply because it has a solidifying or hardening effect on classes: they tend to adhere to one another as more classes are added into the fold.

Allegation #2: Distribution Nightmare

It could turn out that inheritance is not that effective after all, because it only aids syntax, semantics and quality assurance, to the detriment of maintenance and software configuration. Moreover, inheritance is only effective for an application or set of applications up to a certain point, beyond which adding features causes more harm than good.

Inheritance wasn't designed to handle issues of distribution, and it doesn't behave very well when a base class gets modified, resulting in a modified memory layout. As future updates are released containing the modification, errors appear when old binaries jump to what they previously thought was the entry point for a given method.
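To make this concrete, here is a minimal C++ sketch of the failure mode (the Shape classes are hypothetical, and the vtable slot numbering is an assumption about a typical compiler ABI, not something the language guarantees):

namespace v1 {
struct Shape {
    virtual void draw() {}    // vtable slot 0
    virtual void resize() {}  // vtable slot 1
    virtual ~Shape() {}
};
}

namespace v2 {
struct Shape {
    virtual void draw() {}    // vtable slot 0
    virtual void rotate() {}  // vtable slot 1 -- new in v2, takes resize()'s old slot
    virtual void resize() {}  // vtable slot 2 -- resize() has moved
    virtual ~Shape() {}
};
}

// An application compiled against v1 calls resize() through slot 1.
// Run against a v2 library without recompiling, slot 1 now holds
// rotate(), so the call silently dispatches to the wrong function.
int main() {
    v1::Shape a; a.resize();
    v2::Shape b; b.resize();
}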

Allegation #3: Locked-In Thinking

At the outset, inheritance gives the designer perfect control as more classes are continually added into the set, eventually molding a hierarchy or web of related classes. It is only once the application reaches its first release that the locked-in thinking kicks in: the designers and programmers must now think within the structure they created.

Allegation #4: Destructive Takeover

As more and more applications are written, the base classes become critical, too important to be left out in the cold. They then begin to mandate themselves upward, placing themselves alongside the base services of the operating system. As their influence increases, they overshadow the system services, effectively clobbering some into oblivion and taking other OS services as their prize.

In Unix, this is a gross violation of the philosophy that no small program should take over another small program.

Allegation #5: Clannish, tribal behavior

Inheritance has an outright negative effect: it makes applications think and behave as if they're part of a clan or tribe. Anything outside its perimeter is considered a heathen, and these outsiders are forced to bow down and worship its base classes or suffer a destructive takeover.

This behavior again violates the sound principles of Unix. In Unix, all small programs cooperate with one another; each thinks and behaves as a distinct entity, entirely free to relate with other small programs.

Is it really worth it?

Inheritance seems to favor only the language, and not the configuration aspect of the application. I still have reservations about its implementation, and about why factoring methods out into groups of classes is considered ``good-tasting'' for modern-day programming tasks.

It is my intention to draw out your responses and criticisms to my allegations against inheritance. Are these five items sound, or flawed? If flawed, why?


Flawed, posted 10 Jan 2001 at 04:42 UTC by dancer » (Journeyer)

Inheritance is not a solution. Inheritance is a method, a spice, a flavour, a tool, a description.

Treating inheritance as a unilateral solution leads to the cases you list above. So, don't do it. Inheritance, like a hammer, is a good tool when it's the tool you need, and a bad tool when you need something else (eg: a screwdriver or pliers).

And besides, if we judge something by how badly it is misused, well, hell would be perl, right? General note: I am not singling out or bagging perl here. I am instead using a particular category of humour in a particular role to provide underscoring to a part of my proposition. Are we all good with that?

Therefore, it would be foolish to say that inheritance is bad or wrong. Sometimes it is, sometimes it isn't. You could say the same about a lot of things. Being wrong when misapplied does not impugn that which is applied, but only those who misapply it.

inheritance is a conflation, posted 10 Jan 2001 at 06:07 UTC by graydon » (Master)

inheritance confuses OO programming by conflating subtyping and implementation reuse. you can untangle them if you give it a good solid theoretical backing, and then you're fine (OCaml, O'Haskell, Eiffel, Sather, PolyTOIL, Theta, Omega, Cecil). you can use both tools as appropriate. inheritance (a la Java / C++) messes things up by forcing you to use both when you only want one, like a spork.

if you stick to public inheritance of pure virtual classes, compiler firewalls and delegation for your reuse, you can untangle the issues in C++. Java objects are all boxed anyway, but stick to interface implementation.
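a rough sketch of that discipline in C++ (all the names here are invented for illustration): the pure virtual class carries the subtyping, and the reused behavior is held as a member and delegated to, not inherited:

#include <iostream>

// subtyping: a pure virtual class defining only an interface.
class Logger {
public:
    virtual void log(const char* msg) = 0;
    virtual ~Logger() {}
};

// implementation reuse lives in an ordinary, uninherited class.
class TimestampFormatter {
public:
    void write(std::ostream& out, const char* msg) {
        out << "[ts] " << msg << "\n";
    }
};

// the concrete class inherits the *interface* and delegates for its
// behavior -- reuse without an implementation-inheritance relationship.
class ConsoleLogger : public Logger {
public:
    void log(const char* msg) { fmt_.write(std::cout, msg); }
private:
    TimestampFormatter fmt_;  // delegation, not inheritance
};

int main() {
    ConsoleLogger c;
    Logger& l = c;   // usable wherever the subtype is expected
    l.log("hello");
}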

Inheritance: My Friend., posted 10 Jan 2001 at 09:52 UTC by ali » (Apprentice)

Most stuff I write somehow deals with mathematical structures. If I imagine I'd have to write it without using inheritance, I really have to shudder... :) No, seriously: some of your points are essentially true. As I'm used to C++, I of course think that it's all the fault of other things - the OS, the developers who wrote some library I want to use in C, or whatever. But, true, there are problems. It's just not as simple as you make it out to be.

Allegation #1: Solidifier, Hardener

You can avoid the effect, but it may well happen anyway. Regardless of what you program and what language you use, as the application grows there will be a lot of dependencies between things you never intended to rely on each other. But that has nothing to do with inheritance (or OO in general); it's rather a question of good program design.

Any program that has the urge to grow must divide its task into small components and provide a good, controlled interface between them. Then the effect won't happen. But, of course, if a programmer derives his "coffee" class from "water bottle" because he wants to re-use the code for "drink", "refill" and "buyNew", he's doomed. This frequently happens when, for example, C programmers try to use C++ for something but don't spend the time to learn what they're doing - they think of objects as "code groups" and derive stuff without any sense of OO. Perhaps they'll invent the class "Drinkable" and derive "coffee" and "water" from it. But when they need "Car", which also needs code for "refill" and "buyNew", they'll probably derive it somehow from those, too.
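To illustrate (a hedged sketch using ali's hypothetical class names): the hierarchy below is chosen purely for code sharing, and it collapses the moment something that is not a drink needs the same code; composition keeps the shared code without the bogus is-a claim.

// The misuse: deriving for code reuse, not because Car *is* Drinkable.
class Drinkable {
public:
    void drink()  {}
    void refill() {}
    void buyNew() {}
};

class Coffee : public Drinkable {};
class Water  : public Drinkable {};

// Car also wants refill() and buyNew(), so it gets "derived somehow" --
// and now a Car can be drunk. The hierarchy encodes code groups,
// not any meaningful is-a relation.
class Car : public Drinkable {};

// The alternative: hold the shared behavior as a member and delegate.
class Refillable {
public:
    void refill() {}
    void buyNew() {}
};

class BetterCar {
public:
    void refill() { tank_.refill(); }  // delegate, don't inherit
private:
    Refillable tank_;
};

int main() {
    Car c;
    c.drink();   // compiles -- exactly the nonsense described above
    BetterCar b;
    b.refill();
}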

Allegation #2: Distribution Nightmare

As a new version of the base class library arrives, you have to recompile your application (given C++, not Java), so it's basically true. But what's the point? That always has to be done when something you use changes its memory layout, even in C. It happens in C when some structure gets new fields, because you compiled fixed sizeof() values into your code. It also happens when functions get new arguments. Now, with C++, it additionally happens when base classes (which are memory structures) get new virtual functions. It does not happen when base classes get new non-virtual functions. So the situation is exactly the same as in other languages.
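A small sketch of that distinction (the sizes printed are an assumption about a typical ABI of the time, not a language guarantee):

#include <iostream>

struct PlainBase {
    int x;
    void helper() {}         // non-virtual: no effect on object layout
};

struct VirtualBase {
    int x;
    virtual void helper() {} // virtual: a vptr is added to every object
    virtual ~VirtualBase() {}
};

int main() {
    // On a typical 32-bit compiler of the era this prints 4 and 8:
    // the first virtual function changed the memory layout, which is
    // why clients must be recompiled -- and why adding a non-virtual
    // function would not require it.
    std::cout << sizeof(PlainBase) << " " << sizeof(VirtualBase) << "\n";
}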

Allegation #3: Locked-In Thinking

True. Whenever you release something, you have to think within the released structure, and a major redesign causes problems. That is true for inheritance, GUI design, wearing clothes, legislation, and having a girlfriend. I can't see the point.

Allegation #4: Destructive Takeover

This is true for Unix, for example. You will always see such a takeover when programming languages have to interact, like when your C++ application runs on a C operating system. It has nothing to do with inheritance. Writing C++ code under BeOS or Qt (which nowadays is like a complete OS) doesn't lead to that. It's just the fact that you have to use wrappers to keep your design ideas intact, and the more you use, the more has to be wrapped. You could try to write a C program under BeOS, and sooner or later you'd start to wrap the OS.

And it's no "gross violation of (Unix's) philosophy" - it's the same as fopen() wrapping open() and shadowing the ability to set file permissions.

Allegation #5: Clannish, tribal behavior

This, again, is true for all languages. It's the same way perl re-implements everything. C is the most frequently used language, so it needs to "take over" only a few things, but every other language has to take over everything that doesn't fit its design concepts.

This behavior does not violate the sound principles of Unix. Okay, well, it does. But that happens all the time. It happened when the FSF rewrote all the shell tools (for the existing greps didn't fit the "software must be free!" idea and were considered heathens), and when Gnome and KDE were invented (for the existing applications didn't fit the "all software must look the same!" idea).

Is it really worth it?

IMHO it is. There are a lot of situations where I just need to use inheritance. To give a real-life example: just think of any game where there is a "world" and stuff in this world. All such stuff is somewhere; it has a position. And there is a display module somewhere in the game which needs to show everything. So far, this applies to 80% of all games. Now, you have two choices:

You can write it without using inheritance. Now your display code has to deal with all things separately. So you have 200 lists of all thing types: one list for persons, one for rabbits, and so on. You cannot use a single list of structures that contain positions for all thing types, since then you would have reinvented inheritance. (If you doubt it, I would encourage you to look into the Descent source code. They partly used inheritance in C, and partly used this 200-lists approach.)

You can write it using inheritance. Suddenly everything is easy. Everything in the game world is derived from a base class that offers positions. The display module is simple, the thing classes are simple - you just need to implement a non-movable thing once, and all other such things don't even need to care about it. (A minimal sketch of this follows.)
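A minimal sketch of that second choice (the class names are hypothetical): one base class carries the position, and the display module iterates a single list instead of 200 type-specific ones.

#include <iostream>
#include <vector>

// Everything in the world has a position; derived classes add only
// what is specific to them.
class Thing {
public:
    Thing(double x, double y) : x_(x), y_(y) {}
    virtual ~Thing() {}
    double x() const { return x_; }
    double y() const { return y_; }
    virtual const char* name() const = 0;
private:
    double x_, y_;
};

class Person : public Thing {
public:
    Person(double x, double y) : Thing(x, y) {}
    const char* name() const { return "person"; }
};

class Rabbit : public Thing {
public:
    Rabbit(double x, double y) : Thing(x, y) {}
    const char* name() const { return "rabbit"; }
};

// The display module needs exactly one list, not one per thing type.
void display(const std::vector<Thing*>& world) {
    for (size_t i = 0; i < world.size(); ++i)
        std::cout << world[i]->name() << " at ("
                  << world[i]->x() << ", " << world[i]->y() << ")\n";
}

int main() {
    std::vector<Thing*> world;
    world.push_back(new Person(1, 2));
    world.push_back(new Rabbit(3, 4));
    display(world);
    for (size_t i = 0; i < world.size(); ++i) delete world[i];
}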

But this only works if you have a relatively "closed" application. If you write a program that operates close to the OS, you should use the OS's language and features. So, for Unix, you wouldn't use C++. You wouldn't try to write a Linux kernel module in perl either (would you? :)).

Inheritance: a tool., posted 10 Jan 2001 at 14:59 UTC by duff » (Journeyer)

dancer and ali hit the nail square on the head. Inheritance is useful but only when used appropriately. I think part of the problem with inheritance is that people try to use it for everything. Like when you learn a new programming language and you try to use it to do everything under the sun. Eventually you realize what things it's good at and what things it sucks at.

Perhaps the real problem lies with education. When I was learning about object oriented programming in the late 1980s, I essentially had to struggle through trial and error to learn the concepts because there were few references then. Now there are lots of OOP references, but it's hard to pick out the good from the bad when you don't know what "good" is. So how is the average programmer to know when to use inheritance when his reference books don't help him figure out when it is appropriate?

Anyway ... there are several books out there that explain and give good examples of when to use which OOP tool (like inheritance). The only one that springs to mind right off is "Design Patterns". Perhaps other Advogates will chime in with some other good references.

Used appropriately, posted 10 Jan 2001 at 17:32 UTC by Stevey » (Master)

 I agree that inheritance is useful, especially when used appropriately.

 One of the interesting distinctions in the use of inheritance is the difference between derived classes, and derived interfaces.

 Having derived interfaces can be a very clean way of using inheritance...

Inheriting interfaces vs inheriting implementation, posted 10 Jan 2001 at 19:13 UTC by pphaneuf » (Journeyer)

<shameless plug>

If what you're interested in is inheriting interfaces only, check out XPLC. ;-)

</shameless plug>

A more serious comment now: inheriting implementation induces a tight coupling between the parent and the child classes. While this might not be a problem, you have to keep this in mind when you are about to do it. Again, it's a matter of using these appropriately, as others have said.

Desire, posted 10 Jan 2001 at 19:43 UTC by hanwen » (Journeyer)

inheritance doesn't seem to be the solution all architects and programmers want it to be

In general, things are not what people want them to be. They just are what they are. That's a general property of things, and it causes lots of trouble for people in all kinds of contexts.

(ok, I'll get off my Zen-soapbox now. Love & Peace to y'all)

Suggest Composition When You Might Use Inheritance, posted 10 Jan 2001 at 20:04 UTC by goingware » (Master)

If you're designing some new object-oriented code and you're contemplating setting up an inheritance hierarchy, I suggest you try playing around with composition instead.

Often what you do want is a base class, so you can have polymorphism, but the base classes hold members that give them their behavior, and these members are assigned either by parameters to the constructor, by setter accessors, or some such thing.

(And note that you can write object-oriented code in almost any language; it's just that some support the concepts in the language, while in C you have to do it manually - but lots of people do it, GTK for example.)

An advantage of design by composition rather than inheritance is that it is more flexible: for example, you can change the behavior of something at runtime by changing one of the members it is composed of (this is one of the patterns in Design Patterns, where they say an object appears to change its class at runtime).
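A minimal sketch of that runtime flexibility (the names are invented; the shape is essentially the Strategy pattern from Design Patterns): swap the composed member and the object's behavior changes without touching its class.

#include <iostream>

// The behavior lives in a small, swappable part.
class Mover {
public:
    virtual void move() = 0;
    virtual ~Mover() {}
};

class Walk : public Mover {
public:
    void move() { std::cout << "walking\n"; }
};

class Fly : public Mover {
public:
    void move() { std::cout << "flying\n"; }
};

// The composed object delegates; replace the member and it "appears
// to change its class at runtime".
class Creature {
public:
    Creature(Mover* m) : mover_(m) {}
    void setMover(Mover* m) { mover_ = m; }
    void move() { mover_->move(); }
private:
    Mover* mover_;
};

int main() {
    Walk walk;
    Fly fly;
    Creature c(&walk);
    c.move();          // walking
    c.setMover(&fly);  // behavior changed at runtime
    c.move();          // flying
}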

Another kind of flexibility is that it is much more extensible - because the base classes are designed in the first place to be constructed from parts, it is much easier for a later programmer, one who is not working with the original designer, to write new classes that extend the behavior of the old ones.

It also makes things less monolithic. If the parts your classes are composed of are well-designed, they can be used in other contexts.

Composition is used to great effect in ZooLib, a cross-platform application framework. The base class of the button widgets is ZUIButton, but I don't think there are any (or not many) concrete subclasses of ZUIButton. Radio buttons, pushbuttons, checkboxes and so on are all determined by members of ZUIButton.

Also, ZUIButtons can be rendered in a platform-appropriate way, and once you've got a pushbutton, you can attach a renderer to it that knows how pushbuttons are rendered for Mac OS, for Windows, for Linux and so on.

I'm afraid I cannot describe in a really clear way what I'm talking about. But if your first inclination is to say "I'll try inheritance", I'll suggest that is a warning sign and say "Why don't you at least consider composition?"

Inheritance good... fire bad., posted 10 Jan 2001 at 21:53 UTC by burtonator » (Master)

Allegation #1: Solidifier, Hardener

Not true. How does an OO application differ from an application written in C or LISP? If you want to add major functionality, you need to rethink your OO design (it's called refactoring). You have to do the same thing in other languages as well: you need to rename functions, move source files around, change data structures, etc. Nothing different here.

Allegation #2: Distribution Nightmare

Again, not true. This is called package management. RPM and DEB solve these problems; it doesn't matter what language, or OO vs. non-OO. If you change something in a base class and a child relies on it, you need to make sure your versions are correct. Package management solves this. If you need to have two versions at the same time, use LD_LIBRARY_PATH.

Allegation #3: Locked-in Thinking

Again, not true. Even non-OO applications have the same problems. There are great books dedicated to application rework - Refactoring by Martin Fowler being a great example. If you needed to rework Linux, you would still need to do the same type of thinking.

The point is that lock-in is something that a human does, and you can be trained to avoid it.

Allegation #4: Destructive Takeover

... not sure what you mean, but I think it is just that base classes become important. Yes. Which is why you design your base classes correctly. Also, just because Project A has a certain set of base classes doesn't mean that Project B has to use them.

Allegation #5: Clannish, tribal behavior

No way. I would actually argue that this clannish/religious behavior (e.g. "Windows sucks... Linux rulez") is actually improved by an OO design. Example: using a good component system like XPCOM (which can be viewed as OO even if the language is not) can actually improve the situation. A C++ developer could use an object written in JavaScript, or a Java developer could use an object written in C++. Each would treat the object as a black box and use it for its features (the language comes into play here because C might be faster and tighter to the kernel, while there might be more Java objects because it is a safer/easier language, etc.).

Is it really worth it?

YES :)

Is it worth it?, posted 11 Jan 2001 at 01:07 UTC by Pseudonym » (Journeyer)

nymia, have you ever read "Design Patterns" by the Gang of Four (Erich Gamma et al)? I used to think inheritance was more trouble than it was worth until I read that book. Other friends and colleagues also describe it as an eye-opening book. The first chapter contained more Zen moments than I've ever had in any other book.

I always realised that the C++ class model was a tool like any other, and had to be used in a disciplined manner or chaos could ensue. My general feeling was that a class was a glorified module or namespace; that is, if Wirth's assertion that data structures + algorithms = programs was true, then classes were a way of putting data structures and algorithms in the same place, generally increasing the level of niceness in the universe but otherwise not really buying you much.

Boy was I wrong.

What inheritance gives your software is flexibility in a way that people hardened by Unix philosophies (as I was) truly cannot comprehend until they've seen it done. To paraphrase an old adage, when all you know is Unix and C, every software flexibility issue looks like a DSO or a pipe. Now I think in terms of strategies, roles, commands, mementos, mediators and abstract factories and my software, IMO, is all the better for it.

Read the book. You'll never think of software design the same way again.

Disclaimer: No, I'm not on the Gang of Four's payroll. I'm just a fan.

While we're all here, posted 11 Jan 2001 at 02:28 UTC by dancer » (Journeyer)

duff: ali I think presented the more cogent representation, out of the two of us.

BTW, I'd like to thank nymia for bringing the discussion up in the first place. While I - personally - disagree with the proposition, it's generated a lot of interesting reading and views. Thanks nymia. Well done. Thankfully - as yet - nobody's really gone off the rails and ranted. If we can just keep it that way....

Re: Inheritance, posted 11 Jan 2001 at 03:02 UTC by nymia » (Master)

Almost all of the replies made sense. However, let us focus the argument about inheritance on how it works and how it is understood and implemented. I'm almost sure that a lot of architects and programmers understand the conditions under which inheritance operates, and those conditions don't show and identify themselves quickly. It's only at a later stage, when most of the code has been committed and released, that things start to become hairy.

As for the rule inheritance tries to impose: it is about permanence. Inheritance is not a very flexible tool, as it forces an architect to see the future and predict what kind of structure the application will have. Moreover, inheritance imposes the rule that once an interface is created, it will stay there for as long as the structure lives. This permanence holds for all types of inheritance, since it is not in the syntax or semantics of the language but is imprinted on the memory layout of the application. Once this permanence is broken, whether concrete or abstract, the cracks show up almost immediately.

My point is that inheritance obligates an architect to come up with a way of dealing with permanence. And dealing with permanence at the groundbreaking of a project looks a bit risky to me.

Would an architect who is aware of how inheritance behaves bet his application on it, knowing that one of the critical success factors lies in a memory layout that doesn't guarantee permanence? IMO, I would be very careful about the use of inheritance.

Read _Refactoring_!, posted 11 Jan 2001 at 18:09 UTC by jmason » (Master)

It's a great book -- and it illustrates good OO design very well. In particular, it deals with the problems bad inheritance usage causes. As another poster said (to paraphrase): "forget inheritance, use interfaces".

Inheritance only works sometimes... and in my experience, quite infrequently. Well-designed abstract interfaces (Java: interfaces, C++: pure virtual classes) work a lot better, IMHO.

Permanence, posted 11 Jan 2001 at 20:33 UTC by DrCode » (Journeyer)

Yes, to some extent, inheritance assumes a large degree of permanence. But at some point in a software project, just as in the design of a building, one has to make some hard decisions that aren't easily undone. It just means you have to design carefully.

And although it's hard to rearrange a hierarchy in the later stages of a project, it isn't hard to add functionality. For example, a compiler might have an Expression class, from which are derived Binary_expression, Unary_expression, Variable_expression and Int_expression. It might also have a Statement class, with descendants If_statement, While_statement, etc.

Suppose later on, you decide that, for debugging, you need to be able to print a list of objects that might be either Statements or Expressions. You could create an interface (a la Java):

class Printable {
public:
    virtual void print(ostream& out) = 0;  // implementors print themselves
    virtual ~Printable() {}                // safe deletion via base pointer
};

Then, derive Expression and Statement from Printable, and you can create a "list<Printable *>" that can hold any of the above objects.
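A self-contained sketch of this (the expression and statement classes are reduced to stubs): the Printable interface is mixed in beside the existing hierarchy, so both kinds of objects can share one debug list.

#include <iostream>
#include <list>
using namespace std;

class Printable {
public:
    virtual void print(ostream& out) = 0;
    virtual ~Printable() {}
};

// The existing hierarchies, reduced to stubs, each derive from
// Printable as well -- the interface is bolted on after the fact.
class Expression : public Printable {};
class Statement  : public Printable {};

class Int_expression : public Expression {
public:
    void print(ostream& out) { out << "int-expr\n"; }
};

class If_statement : public Statement {
public:
    void print(ostream& out) { out << "if-stmt\n"; }
};

int main() {
    // One debug list holds both Statements and Expressions.
    list<Printable*> objects;
    Int_expression e;
    If_statement s;
    objects.push_back(&e);
    objects.push_back(&s);
    for (list<Printable*>::iterator i = objects.begin();
         i != objects.end(); ++i)
        (*i)->print(cout);
}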

Predicting the future, posted 11 Jan 2001 at 22:06 UTC by apenwarr » (Master)

nymia said: Inheritance is not a very flexible tool as it forces an architect to see the future and predict what kind of structure the application will have.

I think that's an interesting comment, because it makes one assumption clear: you assume that the system architect doesn't know what structure the application will have. However, it is easier to predict the future if you're the one making it.

Certainly, it's hard to decide on an architecture and even harder to decide on one that will work, and this is particularly true in "interesting" projects that you haven't done before. But if you really haven't got any idea how the system will look, it's not time to start building it yet.

I think there's a certain amount of confusion in this discussion, since there seem to be only two views: inheritance is bad, or inheritance is good. Actually, inheritance (single and multiple) is a superset of most of the proposed alternatives, both good and bad. For example, Java's "interface inheritance" stuff is really just a special case of multiple inheritance (as far as I can tell, one of the only non-evil cases of multiple inheritance, but still).

The main complaint people have about inheritance seems to be the creation of deep hierarchies. For example, a chicken is a special case of a noun, which is a special case of a word, which is a list of letters, and so on. It gets messy, because there are lots of ways to classify a chicken, and choosing one can be very restrictive. (And using more than one can be really confusing.)

If we "flatten" the hierarchy using interfaces, that problem mostly goes away. (Just say that a chicken can be stored in a list, or you can make a sentence out of it, or a sandwich.)

Certain types of hierarchies do work really well, though, and those are the ones you have to plan. The most common example of this is a GUI system: a graphical button is really just a special Button, which is a Widget (something that does something when clicked on), which is a Window, which is really just a collection of pixels with an Area and an ability to draw itself.

If we tried to redefine this in terms of interfaces, we would get something like: a graphical button is an object that can draw itself, do something when clicked, and has an area. But that doesn't let you easily reuse the implementation of the Button, Widget, or Window classes, most of which will be the same as the implementation of your graphical button.
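A compressed sketch of that reuse (the classes are hypothetical): in the planned hierarchy, each layer inherits the implementation of the one below it, so the leaf class writes almost nothing; an interface-only version would have to re-implement draw() and area() itself.

#include <iostream>

// The planned, deep hierarchy: each layer reuses the one below it.
class Window {                 // pixels, an area, and can draw itself
public:
    Window() : width_(10), height_(4) {}
    virtual void draw() { std::cout << "draw pixels\n"; }
    virtual int  area() { return width_ * height_; }
    virtual ~Window() {}
protected:
    int width_, height_;
};

class Widget : public Window { // a Window that reacts to clicks
public:
    virtual void click() { std::cout << "clicked\n"; }
};

class Button : public Widget { // a Widget drawn as a button
public:
    void draw() { std::cout << "draw button frame\n"; }
};

// The leaf inherits nearly all of its implementation for free.
class GraphicalButton : public Button {};

int main() {
    GraphicalButton b;
    b.draw();                       // from Button
    b.click();                      // from Widget
    std::cout << b.area() << "\n";  // from Window
}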

The real question is how often such a nice, well-defined example appears in real life, and whether the code-sharing between objects is always worth it. In the case of a chicken, I can't see that it really has much in common with most other nouns, so there probably isn't much code sharing. The extra complexity of having the hierarchy (and the need to rearrange everything if you change just one thing) may then not be worth it. In that case, does it actually save time to duplicate the shared code between objects, just to eliminate the dependency?

I don't know.

Refactoring, posted 11 Jan 2001 at 22:36 UTC by Pseudonym » (Journeyer)

I must agree with jmason's advice to read "Refactoring". It addresses nymia's accusation that inheritance locks an application into a certain structure, and that OO architects must therefore be clairvoyant. Refactoring is a disciplined approach to iterative software design which gives you licence to modify class structures as you need to. IMO, some people find this hard to accept because it contradicts the accepted wisdom (Fred Brooks) that you are going to write a throw-away copy. Refactoring argues that your throw-away copy can evolve by mutation into your final copy, if you use object-oriented design in an appropriately disciplined manner.

Design Patterns and Refactoring are in some sense complementary. Design Patterns tell you that if you need to solve this sort of problem, experience shows that this sort of structure will probably reduce your need for redesign later, or make the redesign easier. Refactoring tells you how to go about the redesign.

Replies, posted 14 Jan 2001 at 08:20 UTC by nymia » (Master)

Interesting replies, good points too.

About the misuse of inheritance: yes, I totally agree. One of the things I see a lot, and am also guilty of doing myself, is building tall, narrow base-class hierarchies, just like what ali mentioned. That's why I posted this article: I believe inheritance has caused me a lot of trouble.

Perhaps it is my programming background in C, Pascal, COBOL and Assembly that has led me to dislike inheritance. From my point of view, I see object communication and interfacing in the form of boxes, wires and signals, not in the "is-a" and "has-a" system of organizing classes.

Anyway, it's really nice reading your defenses of inheritance and rebuttals of my allegations. They made a lot of sense, especially Design Patterns, which I think is a better way of showing that OO artifacts are a must for architects and programmers. I will surely buy that book.

I still have a couple of pieces of evidence supporting my allegation that inheritance is really flawed, though. I'm still deciding whether or not to post them. Let me know if that would still be appropriate; otherwise, I'll just leave my allegations unsupported.

It's a powerful tool and with that comes responsibility, posted 15 Jan 2001 at 05:24 UTC by Nelson » (Journeyer)

Thanks, this was an interesting topic to think about.

Allegation #1: Solidifier, Hardener

I disagree with this. I disagree with the idea that inheritance is somehow a bad thing to begin with. Hierarchy is the one and only technique the human mind has for dealing with complexity. Software fundamentally deals with the complex, so logically hierarchy is something it needs; the only issues of debate are where and how. Inheritance is a design and coding tool; perhaps hierarchy needs to stay out of the realm of code? You can't design software without hierarchy, though.

If you pick a bad hierarchy then inheritance may be a solidifier. That's not new, nor a reason not to use it. If you design software poorly then it is hard to extend; that rule applies regardless of the use of inheritance.

Allegation #2: Distribution Nightmare

This is bunk. If you bump a structure (like the Linux kernel did to the TCP/IP sk_buff between 2.2.10 and 2.2.14, which broke some network drivers) then you can break code unless it is recompiled. I can appreciate the attempts not to break existing code; I've done more than a fair share of debugging to determine exactly which versions of MFC and which NT fix-packs my code works with, so I know the headaches. This is solved with proper versioning and distribution. If you have an object framework and you're going to add to classes in a way that may break compatibility, then you should bump the version. As distasteful as it may be to have 2 or 3 copies of the same library on a system, disk space is cheap and you can avoid breaking code by doing it.

To look at Linux again: we went from a.out to ELF, and we changed C libraries, and I happen to think both were steps in the right direction. Things were broken in the process, though. I still think it was the right thing to do. Sometimes you have to break things to fix things; that doesn't mean it shouldn't be done. I think it is worse to try to hammer something into being something it isn't.

If there is anything that makes software difficult and expensive to develop, it is the ``stone software'' philosophy that once something is done, it is done forever and cannot be changed. You would not believe some of the horrendous hacks I have seen from major software companies to avoid producing a new version of a library, just to save a few K of disk space. I'm not in favor of bloat, but when you start cutting corners to favor a legacy, you are fighting a losing battle.

Allegation #3: Locked-In Thinking

This one I'll give you. I think it is very difficult to build a proper hierarchy for a very large project. A good example that comes to mind is the taxonomy of life; I doubt most of us would have come up with it, or anything similar, to describe species if the job had been left up to us. It happens to be good: with a fairly minimal understanding of it, you can pretty readily classify species. It is clear that a lot of thought and effort went into it; it wasn't something drawn up on a napkin over lunch.

Part of this is how object modeling and object-oriented programming have been peddled as the way you should write software. Speed of development is touted as one of the big advantages, which is unfortunate, because that speed doesn't really materialize until the problem is understood intimately and you have designed or built up a good framework for solving it. Many people try to move really quickly without understanding the problem space; they start to build hierarchies that aren't good enough, and then they are trapped. Good OOD is hard work, just like software design in any other paradigm.

Allegation #4: Destructive Takeover

Can you come up with an example of this? I want to see the obituary for a syscall that was replaced by a foundation class. I think this could be spun a few different ways. There are legions of little perl programs that need perl and some perl modules to run, and aren't good little C programs compiled into object code that rely directly on OS services. Is perl usurping power and taking over? If it is, a lot of people might think it's a worthy revolution in some respects.

As your project grows in size, the foundation is naturally going to grow in importance. GNOME uses GTK+, various graphics libraries, glib, libxml and various other libraries; they are critical to GNOME. I don't see those libraries replacing system services so much as abstracting them in a way that is beneficial to GNOME and other applications. The glib gthread isn't going to replace clone() or fork() or even POSIX pthreads. Are you arguing in favor of not making large applications, or large sets of integrated smaller applications? That really doesn't have much to do with inheritance. Software generally grows in complexity.

Allegation #5: Clannish, tribal behavior

I can't say that I've ever noticed this before. Didn't Brooks say something about OOP in "The Silver Bullet Revisited"? Something to the extent that OOP has been taking a bad rap because it is treated as a language thing and not a design thing, and that he still believes it is a silver bullet. (Pretty big thing for him to say.) I agree that inheritance can be a bad thing; I've also seen some completely amazing things done with it. It's a tool for implementing a design: if the design is bad, the tool isn't going to make it good. If unskilled people use the tool, the product may not be very good. And like all other tools, if you start solving the problem with it before you understand the problem, you're not going to have the best solution very often, if you ever actually solve it. Coming up with a good hierarchy is difficult; we were doing OOD/P for years, even decades, before the "Patterns" revolution, and just reading about it isn't going to turn anyone into a top-notch architect.

Inheritance isn't evil. :-), posted 26 Jan 2001 at 04:29 UTC by Omnifarious » (Journeyer)

I love inheritance as a tool. I use it almost exclusively for subtyping relationships, and not for borrowing behavior from another class. Composition is a much better tool for borrowing behavior.

As an example, Unix secretly makes heavy use of inheritance, and here's how:

What do you expect to be able to do with the result of an open call in Unix? Well, it depends on what you opened. All files support being opened and closed. Some support being read from or written to, or both. Some support seeks. Some support a certain ioctl interface.

A file descriptor is like a pointer to a base class. You know you can close it. If it's a character device, you're likely to be able to read and write to it. If it's a pipe, it'll act a lot like a character device. If it's a socket, it'll be a lot like a pipe, except you can also get its 'name' and do a number of other extra things to it. If it's a block device, you can do random seeks, but your reads and writes may be constrained to a particular block size. If it's a file, it'll act a lot like a block device, except the size won't be constrained and you can seek past the end.

If you think about it, it's just a big set of subtyping relationships that progressively refine the meanings of operations and add new operations as you go down the tree. Inheritance. It's a big part of what gives Unix such a powerfully consistent I/O model. It's the reason why you didn't have to modify 'cat' to handle the case when someone was remotely connected over a network.
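A rough transcription of that subtype tree into C++ (the class names are mine, not any real Unix API): each level refines the operations of the one above it, and 'cat' only depends on the level it cares about.

#include <iostream>

// What every descriptor supports: you can close it.
class Descriptor {
public:
    virtual void close() { std::cout << "closed\n"; }
    virtual ~Descriptor() {}
};

// Character devices add read and write.
class CharDevice : public Descriptor {
public:
    virtual int read(char* buf, int n)        { return 0; }
    virtual int write(const char* buf, int n) { return n; }
};

// A pipe acts a lot like a character device.
class Pipe : public CharDevice {};

// A socket is like a pipe, plus a 'name' and other extras.
class Socket : public Pipe {
public:
    const char* name() { return "127.0.0.1:80"; }
};

// Block devices refine things differently: random seeks appear.
class BlockDevice : public Descriptor {
public:
    virtual void seek(long offset) {}
};

// A regular file is like a block device without the size constraints.
class File : public BlockDevice {};

// 'cat' is written against one level of the tree, which is why it
// needs no changes when the descriptor is a socket instead of a pipe.
void cat(CharDevice& in) {
    char buf[512];
    while (in.read(buf, sizeof(buf)) > 0) { /* copy to stdout */ }
}

int main() {
    Socket s;
    cat(s);     // a socket works wherever a character device is expected
    File f;
    f.seek(0);
    f.close();
}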

Yeah, defining a few key concepts at the top of your hierarchy will be constraining, to an extent, but it's also extremely liberating in other, more important ways.

The trick is, I think, to divide your program up by key concepts. For example, in my StreamModule system, the two key concepts are StreamModules and StrChunks. Those two concepts lie at the roots of the bushiest hierarchies. In the UNIEvent subsystem, the two key concepts are Events and Dispatchers. It goes on similarly from there.

Bushiness is a very important quality for a class hierarchy to have. Depth should be viewed with suspicion, but a lack of bushiness is a sure sign of trouble.

Anyway, those are my rambling thoughts on the subject.
