The way you're using "decimal floating point" seems to imply there is a "floating point" that is not "decimal".
My understanding is that "decimal", in the context of programming, is a less precise term for floating point (i.e. encompassing IEEE 754 floats, double- vs. single-precision, etc.), because they're all binary numbers if you dig deep enough, right?
But apparently I am incorrect/underinformed about this.
What is "decimal", then?
Decimal means base 10. Decem is the Latin word for ten, and deci- is the SI prefix for 1/10, e.g. a decimeter is 1/10 of a meter. Decimal floating point is x*10^y, whereas binary floating point is x*2^y. Compare to hexadecimal, which means base 16.
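A quick sketch in Python makes the difference concrete (using the standard decimal module here as one example of decimal floating point; other languages have their own equivalents):

    from decimal import Decimal

    # Binary floating point: 0.1 has no exact representation as x * 2^y,
    # so the sum picks up a small rounding error.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False

    # Decimal floating point: 0.1 is exactly representable as 1 * 10^-1,
    # so the same arithmetic comes out exact.
    print(Decimal("0.1") + Decimal("0.2"))                    # Decimal('0.3')
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

Same values, same arithmetic; the only thing that changes is the base used to represent the significand and exponent.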