
The way you're using "decimal floating point" seems to imply there is a "floating point" that is not "decimal".

My understanding is that "decimal", in the context of programming, is just a less precise term for floating point (i.e. encompassing IEEE 754 floats, double- vs. single-precision, etc.), because they're all binary numbers if you dig deep enough, right?

But apparently I am incorrect/underinformed about this.

What is "decimal", then?



Decimal means base 10. Decem is the Latin word for ten (decimus means tenth), and deci- is the SI prefix for 1/10, e.g. a decimeter is 1/10 of a meter. Decimal floating point is x*10^y whereas binary floating point is x*2^y. Compare hexadecimal, which means base 16.
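
A quick illustration of the difference, using Python's decimal module as a stand-in for base-10 arithmetic (just a sketch, not necessarily the exact format free42 uses internally):

    from decimal import Decimal

    # Binary floating point: 0.1 has no exact base-2 representation
    print(0.1 + 0.2)          # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)   # False

    # Decimal floating point: 0.1 is stored exactly in base 10
    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True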


Derp. I misunderstood.


The IEEE 754 floating point standard also defines decimal formats; decimal128 is the relevant one here, and I believe that is what free42 uses: https://en.wikipedia.org/wiki/Decimal128_floating-point_form...
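
For a rough feel of it: decimal128 carries 34 significant decimal digits, and Python's decimal module can be set to that precision (the in-memory encoding and exponent range of the IEEE interchange format are a separate matter):

    from decimal import Decimal, Context

    ctx = Context(prec=34)   # decimal128 has 34 significant digits
    print(ctx.divide(Decimal(1), Decimal(3)))
    # 0.3333333333333333333333333333333333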



