This question (https://stackoverflow.com/questions/9314534/why-does-changing-0-1f-to-0-slow-down-performance-by-10x) demonstrates a very interesting phenomenon: denormalized floats (http://en.wikipedia.org/wiki/Denormal_number) slow the code down by more than an order of magnitude.
The behavior is well explained in the
accepted answer.
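For reference, here is a minimal, self-contained sketch (not the code from the linked question; the constants and loop count are arbitrary) of the kind of slowdown involved: once the values fall into the denormal range, each operation can become far slower on common x86 CPUs unless flush-to-zero is enabled.

```c
#include <stdio.h>
#include <time.h>

/* Time the same loop twice: once with values that stay in the normal
 * range, once with values that stay in the denormal (subnormal) range. */
static double run(float start_val)
{
    volatile float y = start_val;   /* volatile: keep the compiler from folding the loop away */
    clock_t t0 = clock();
    for (int i = 0; i < 50000000; ++i) {
        y = y * 0.5f;               /* halve... */
        y = y + start_val;          /* ...and pull back, so y stays near start_val */
    }
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("normal range   (start 1.0f):   %.2f s\n", run(1.0f));
    printf("denormal range (start 1e-40f): %.2f s\n", run(1e-40f)); /* 1e-40f is subnormal in binary32 */
    return 0;
}
```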
However, there is one comment, currently with 153 upvotes, to which I cannot find a satisfactory answer:
"Why isn't the compiler just dropping the +/- 0 in this case?!?" – Michael Dorgan
Side note: I have the impression that 0f is (or must be) exactly representable (furthermore, its binary representation must be all zeroes), but I can't find such a claim in the C11 standard. A quote proving this, or an argument disproving it, would be most welcome. Regardless, Michael's question is the main question here.
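For what it's worth, the all-zero bit pattern can at least be observed on an implementation that uses IEEE 754 binary32 for float; this is just an observation on a typical implementation, not a quote from the standard:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float zero = 0.0f;
    uint32_t bits;
    static_assert(sizeof zero == sizeof bits, "sketch assumes 32-bit float");
    memcpy(&bits, &zero, sizeof bits);            /* inspect the object representation */
    printf("0.0f bits: 0x%08" PRIX32 "\n", bits); /* prints 0x00000000 on IEEE 754 binary32 */
    return 0;
}
```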
§5.2.4.2.2 of N1570 (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf):

"An implementation may give zero and values that are not floating-point numbers (such as infinities and NaNs) a sign or may leave them unsigned."
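If I understand IEEE 754 (Annex F) semantics correctly, signed zeros already make x + 0.0f a non-identity for x == -0.0f, which may be relevant to Michael's question; a small sketch under that assumption:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    float neg_zero = -0.0f;
    float sum = neg_zero + 0.0f;   /* under round-to-nearest, (-0.0f) + 0.0f yields +0.0f */

    printf("neg_zero == 0.0f : %d\n", neg_zero == 0.0f);        /* 1: compares equal to zero */
    printf("signbit(neg_zero): %d\n", signbit(neg_zero) != 0);  /* 1: sign bit is set */
    printf("signbit(sum)     : %d\n", signbit(sum) != 0);       /* 0: the sign was lost */
    printf("1.0f / neg_zero  : %f\n", 1.0f / neg_zero);         /* -inf */
    printf("1.0f / sum       : %f\n", 1.0f / sum);              /* +inf */
    return 0;
}
```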