If you do math on arrays of doubles that contain large numbers of NaNs or infinities, there is an order-of-magnitude performance penalty.
I first noticed the NaN problem with a large array of doubles, most of which were NaN, and a loop that searched for the maximum value. With the array filled with NaNs, the search took around 10 s instead of ~300 ms. When I looked at the disassembly, the JIT was using the FCOMIP instruction rather than FUCOMIP. The first raises the floating-point invalid-operation exception for any NaN operand; the second raises it only for signaling NaNs. I'm not sure why this instruction is chosen, since as far as I know .NET never surfaces floating-point exceptions. I'm not certain that FUCOMIP would fix the problem, but it seems likely.
As a workaround, I added a test for NaN to the search. However, this does not solve the problem, because the .NET implementation of Double.IsNaN appears to be something like:
    return y != y;
This compiles to the same FCOMIP comparison instruction, so even checking for NaN causes a major slowdown when many NaNs are present!
I ended up writing my own check for NaN:
    static public unsafe bool IsNaN(this double value)
    {
        // NaN: exponent bits all ones, mantissa non-zero
        return ((*(ulong*)&value) & 0x7fffffffffffffffUL) > 0x7ff0000000000000UL;
    }
It's not just comparisons that have this performance problem; I've also seen it with other array operations, such as multiplying by a scalar.
Suggested fixes:

1) Make IsNaN() fast, along the lines of the version I show above.
2) Don't use the comparison instructions that raise floating-point exceptions for quiet NaNs.