Hacker News

It can. But only for integer data types.

As already pointed out, for floating point data types the two versions could (and probably will) result in slightly different answers due to rounding issues.



It's not just rounding issues: depending on the values involved the order in which you carry out FP operations can lead to radically different results.

In this particular case, the difference due to re-ordering of multiplications is likely to be relatively small, but it doesn't hurt to be aware of the potential problems.


> It's not just rounding issues: depending on the values involved the order in which you carry out FP operations can lead to radically different results.

Different operation orders giving different results is a rounding issue. At the end of each individual operation the intermediate result has to be stored in a floating point type, and at that point it is rounded to fit the limits of that type. Changing the order of the operations in the way being discussed changes the number of intermediate results that get rounded (and so changes the error at the end of the process), and some numbers "suffer" more from the rounding than others (0.5 can be represented exactly in binary, 0.6 cannot), so sometimes the difference between the values you get at the end of operations that are mathematically equivalent (until you consider the accuracy of intermediate results) will be much larger than at other times.

Of course, if all compilers optimised a*a*a*a*a*a to (a*a*a)^2 then it would be OK, as all code using the same data type would give the same result. But since not every compiler does this optimisation, for consistency it is safer for none to do it unless the programmer explicitly codes it that way (and thereby takes responsibility for the fact that the "optimal" version may give different results compared to the unaltered version, rather than the compiler deciding that this is OK).

The same is true for any chained floating point operations. Changing a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a+a to a*20 (if one multiplication is faster than 19 additions) can give different results.


Why does it being different matter? Compilers can do all kinds of things that change answers.


Because consistency matters.

An optimisation that could change the result (even by the smallest possible amount) is not a safe optimisation. As a general rule, unless the build script tells it otherwise, a compiler won't perform an optimisation that could make the optimised version give a different result from the basic one.

Of course you may decide it is safe enough, in which case you might find your compiler has an option to tell it that you would like such optimisations to be considered. Stepping away from arithmetic for a moment, variable aliasing is another example of this. If your code could potentially update a value several ways through pointer jiggery-pokery, a compiler may decide certain register-based optimisations are not safe, and there are often directives you can give to tell the compiler it can consider these optimisations because you are sure the code does nothing that would make them a problem (the compiler can't decide this for itself, as that would potentially take forever due to the halting problem).


They can't do it on code that doesn't invoke undefined behavior. This is not undefined behavior.


Obviously they can't do whatever they want, but they can decide whether to store variables in registers or main memory, they can change the order of a variety of operations, and so on. Does C specify the order of evaluation for multiplication? If so, I wasn't aware of that. But you'll still have some compilers giving you 32-bit precision on intermediate results and some giving you 80 bits.


> they can decide if they want to store variables in registers or main memory, they can change the order of a variety of operations, etc.

As long as they follow the "as if" rule, which states that any transformation applied to code that does not invoke undefined behavior must produce the same observable result as if the transformation were not done. Reordering floating point multiplications does not follow this rule.

(Watching stuff in a debugger doesn't count for this rule)


1. Are you saying that multiplications have a specified order? I asked that but didn't get a response.

2. The compiler is allowed to either leave the float inside the FPU or store it back to main memory after each operation, is it not? These give different results.

Somewhere around here I have a chart of the differences in floating point outputs of different GCC versions. And no, fastmath is not on.


Yes, multiplication is left-associative. I'm not sure whether C defines the precision of floating point operations, although I think it sets an upper bound. I may be wrong, and I think the C99 spec (optionally?) pushes compilers to conform to IEEE 754.



