Is there any (non-micro-optimization) performance gain from coding

    float f1 = 200.0f / 2;

in comparison to

    float f2 = 200.0f * 0.5;

A professor of mine told me a few years ago that floating-point divisions were slower than floating-point multiplications, without elaborating on why.

Does this statement hold for modern PC architecture?
Update 1

With respect to a comment, please also consider this case:

    float f1;
    float f2 = 2;
    float f3 = 3;
    for (int i = 0; i < 1e8; i++)
    {
        f1 = (i * f2 + i / f3) * 0.5f; // or divide by 2.0f, respectively
    }
Update 2

Quoting from the comments:

    [I want] to know what are the algorithmic / architectural requirements that cause division to be vastly more complicated in hardware than multiplication