raph: Thanks for your thoughtful response.
Treating the tool as an object does let you track dependencies on the tool (as well as on the particular command). This is useful in some cases, but a hindrance in others. The useful case I know of is for gcc's target libraries: it makes sense to test every compiler change by rebuilding all the target libraries. Right now you have to use make clean in the right directory to accomplish this.
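To make that concrete, here is a minimal sketch, in Python rather than in any real build tool, of what "depending on the tool" could mean: the tool binary's fingerprint is simply one more entry in a target's dependency key, so upgrading the compiler dirties everything it built. All names here are hypothetical.

```python
import hashlib


def file_fingerprint(path):
    """Hash a file's contents so any change to it invalidates dependents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def needs_rebuild(target, sources, tool_path, stamp_db):
    """A target is stale if any source file OR the tool binary changed.

    stamp_db maps target -> the list of fingerprints recorded when the
    target was last built; the tool is just one more entry in that list.
    """
    current = [file_fingerprint(p) for p in sources + [tool_path]]
    return stamp_db.get(target) != current
```

With this shape, running make clean in the right directory becomes unnecessary: installing a new compiler changes its fingerprint, and every target that listed it goes stale automatically.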
However, treating the tool as an object also lets you push intelligence into the build tool and out of the various ad hoc places it now lives. For instance, by integrating the tools into the build tool in a more first-class way, we could eliminate the need for libtool. We could also export multiple methods from the tool objects: with make, all you have is "do the work", when it would be nice to have ways to extract dependency information and perhaps other things. This means we could get rid of depcomp.
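As a sketch of "multiple methods on a tool object" (my own hypothetical Python, not an existing API): besides the classic "do the work" entry point, the object can expose a dependency-extraction method, which is the job depcomp fakes today by post-processing compiler output such as gcc's -M.

```python
import subprocess


def parse_make_deps(make_output):
    """Turn "target: dep1 dep2 \\<newline> dep3" (cc -M style) into a path list."""
    body = make_output.split(":", 1)[1]
    return body.replace("\\\n", " ").split()


class Tool:
    """A build tool as an object: more entry points than just "run it"."""

    def __init__(self, program):
        self.program = program

    def build(self, source, target):
        """The classic "do the work" method."""
        subprocess.run([self.program, "-c", source, "-o", target], check=True)

    def scan_deps(self, source):
        """Ask the tool itself for dependency information."""
        out = subprocess.run([self.program, "-M", source],
                             capture_output=True, text=True, check=True)
        return parse_make_deps(out.stdout)
```

The -M flag follows gcc's convention; a different compiler would get its own subclass, which is exactly the knowledge that now lives scattered across depcomp and friends.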
Finally, this approach lets us more closely model the underlying program we (may) use to implement a given tool. For example, sometimes there are limitations on running two instances of a given tool in a directory at once: yacc and lex write to fixed output file names (y.tab.c, lex.yy.c), so concurrent runs in one directory would clobber each other. In automake we work around this with the ylwrap script, which serializes invocations of yacc and lex. In the next tool, this could be handled internally. This would be faster and more reliable.
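Internally, that serialization could be nothing more than a per-directory lock held by the build tool while the underlying program runs; a hypothetical Python sketch:

```python
import threading


class DirectorySerializer:
    """Allow at most one running instance of a tool per directory.

    Tools like yacc and lex write fixed output names, so two concurrent
    runs in one directory would clobber each other.  ylwrap solves this
    with an external wrapper script; a build tool that owns its tool
    objects can simply hold a lock instead.
    """

    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()  # protects the _locks table itself

    def _lock_for(self, directory):
        with self._guard:
            return self._locks.setdefault(directory, threading.Lock())

    def run_in(self, directory, action):
        """Run action() while holding the lock for `directory`."""
        with self._lock_for(directory):
            return action()
```

Two jobs in different directories still run in parallel; only same-directory invocations queue up, which is the ylwrap behavior minus the extra process.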
So when I say "treat a tool as an object", I'm really commenting on a facet of the implementation.
I'm very interested in nice logging and other debug features in a future build tool. With make one is frequently left wondering, "why did that happen?". I fixed a bug like that in libgcj recently, and in the end I didn't really understand the patch I came up with. I envision simple ways to ask questions of the build tool, and perhaps a full-fledged debugger as well.
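One cheap building block for that kind of debuggability, sketched hypothetically: have the build tool record, for every target it rebuilds, the reason it decided to, so "why did that happen?" becomes a query instead of an archaeology session.

```python
class RebuildLog:
    """Remember why each target was rebuilt so the decision can be queried."""

    def __init__(self):
        self._reasons = {}

    def record(self, target, reason):
        self._reasons.setdefault(target, []).append(reason)

    def explain(self, target):
        """Answer "why did that happen?" for one target."""
        return self._reasons.get(target, ["not rebuilt; considered up to date"])
```

A full-fledged debugger could then walk these records transitively, from the surprising rebuild back to the change that triggered it.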
I'll try to write more about this later. Today it is too hot to really think straight.