Compare the following three C expressions (a and b are doubles and we are on an IEEE-754 compliant system):

double x = ( (a <= b) ? a : b );
double y = ( (a < b)  ? a : b );
double z = ( (a > b)  ? b : a );
Obviously, for the purpose of comparing real numbers (and also infinities), these expressions all perform the same task: compute the minimum of a and b.
Are they equivalent? Not if you insist on strict compliance with the C and IEEE-754 standards.
Consider a "not-a-number" value (NaN): double a = 0./0.; double b = 1;
Those expressions respectively yield 1, 1, NaN.
Consider the different +0 and -0 values: double a = 0.; double b = -0.;
Those expressions respectively yield +0, -0, +0.
A compiler that strives to comply with standards, such as gcc, will emit different code for those three functions.
On AMD64 (or x86 in SSE mode), gcc compiles the expression for x into a sequence involving an SSE comparison instruction and some bit masks (implementing ?: without branches). But (with optimization turned on) it compiles y into a single minsd instruction.
-ffast-math turns on "unsafe" optimizations under which x is compiled to the same code as y.
Now, this may seem innocuous enough. Unfortunately, we had the code for x inside a loop performing matrix computations, in a bottleneck procedure accounting for a significant share of the computation time. Changing it to the y formula yielded a 2.5× speedup in a benchmark.