> True, but it's quite common that floating point ops' throwaway bits (i.e. beyond the epsilon) will be numerically different between vendors.
This is a common sentiment, and perhaps a helpful way to look at things if you don't want to dig into the details, even if it's not quite right.
But it's worth understanding the magnitude of the errors. rcpps is 3-4 orders of magnitude "more wrong" than a typical operation, if you take the "epsilon" of most operations to be the rounding error after a correctly rounded result. Put another way: it would take the cumulative error of many thousands of adds and multiplies to produce the same error as a single rcpps.
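To make the "3-4 orders of magnitude" claim concrete, here's a back-of-the-envelope comparison (a sketch, assuming the documented rcpps error bound of 1.5 * 2^-12 from Intel's ISA reference; the exact figure can vary by microarchitecture):

```python
import math

# A correctly rounded float32 add/multiply has relative error of at
# most half a ulp, i.e. 2**-24.
round_err = 2.0 ** -24

# rcpps: Intel documents a maximum relative error of 1.5 * 2**-12
# (assumed from the ISA reference, not measured here).
rcpps_err = 1.5 * 2.0 ** -12

ratio = rcpps_err / round_err  # 1.5 * 2**12 = 6144
print(f"rcpps is ~{ratio:.0f}x less accurate "
      f"(~{math.log10(ratio):.1f} orders of magnitude)")
# → rcpps is ~6144x less accurate (~3.8 orders of magnitude)
```

So one rcpps can be off by roughly as much as six thousand worst-case rounding errors stacked end to end, which is why it's usually followed by a Newton-Raphson refinement step when accuracy matters.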