Via LtU, I became interested in interval arithmetic once again. I had first looked at this alternative method while struggling with errors in numeric computations in my Virtual Taj demo. If you have never heard of interval arithmetic, I recommend reading Brian Hayes's article "A Lucid Interval" (PDF, 84KB), first published in American Scientist, and an interview with Bill Walster of Sun Microsystems. Essentially, interval arithmetic lets you keep track of the margins of error in your data and gives you guaranteed bounds on the results of the computations you perform with this data.
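To make this concrete, here is a minimal sketch of interval arithmetic in Python (my own illustrative code, not any particular library): each value is an interval [lo, hi] guaranteed to contain the true value, and each operation produces an interval guaranteed to contain the true result. A real implementation would also round the lower endpoint down and the upper endpoint up after every operation; this sketch skips that.

```python
# A minimal sketch of interval arithmetic (illustrative, not a real
# library). A proper implementation would round endpoints outward
# after every operation to stay correct under floating point.

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product's endpoints are the min and max of the four
        # pairwise products of the operands' endpoints.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Two measurements, each known only to within +/- 0.5:
x = Interval(1.5, 2.5)
y = Interval(2.5, 3.5)
print(x + y)   # [4.0, 6.0]  - the true sum is guaranteed to lie here
print(x * y)   # [3.75, 8.75]
```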
The main problem is not only that interval arithmetic is at least twice as slow as ordinary computer arithmetic, but also that the margins of error keep widening over successive computations. Of course, this margin of error exists in your computations whether you use interval arithmetic or not - at least now you have your "known unknowns" - but we humans normally do not like to face it. There are other problems too: division is tricky when an interval contains the number 0, multiplication does not distribute over addition the way it does with ordinary numbers, and you still have to deal with floating-point precision and rounding errors when the endpoints of intervals are themselves floating-point numbers.
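The widening is easy to demonstrate with the Interval sketch above. Interval arithmetic cannot tell that two operands refer to the same quantity (the "dependency problem"), and multiplication is only sub-distributive over addition, so algebraically equivalent expressions can produce intervals of very different widths:

```python
# Continuing with the Interval class sketched above.

x = Interval(1.5, 2.5)

# The dependency problem: the arithmetic does not know both operands
# are the same quantity, so x - x is not [0, 0]:
print(x - x)           # [-1.0, 1.0]

# Sub-distributivity: x*(y + z) is contained in x*y + x*z, but the
# two are not equal, so the expanded form comes out much wider:
x = Interval(-1.0, 1.0)
y = Interval(1.0, 2.0)
z = Interval(-2.0, -1.0)
print(x * (y + z))     # [-1.0, 1.0]
print(x * y + x * z)   # [-4.0, 4.0]
```

Both results in the last pair are correct enclosures of the true value; the factored form is simply much tighter, which is why how you write an expression matters a great deal in interval arithmetic.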
Despite all these problems, interval arithmetic might still be our best bet for performing meaningful numeric computations on computers. Interestingly, Knuth expresses a similar view in TAOCP Volume II ("Seminumerical Algorithms"), but sadly does not expand much on the topic.