
Because that gives you a Float, not a Decimal.


Sure, I get the difference between float and Decimal, but why can't floats be formatted with precision in the example? Are they using Decimal because it's best practice, even though it adds more code to the string formatting example?


Floats can absolutely be formatted, but the result may be unexpected, because a Python float is basically a (hardware-based) C binary float:

https://en.wikipedia.org/wiki/IEEE_floating_point

So, because users will typically be surprised that 2.2 + 3.1 results in 5.300000000000001, or that round(2.675, 2) gives 2.67 (instead of 2.68), best practice is to use Decimal, which will give users the results they expect:

https://docs.python.org/3/library/decimal.html
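
For instance, a quick illustrative session (the exact digits are what a typical CPython 3 build prints; ROUND_HALF_UP gives the "schoolbook" rounding people expect):

    from decimal import Decimal, ROUND_HALF_UP

    # Binary floats store the nearest base-2 approximation of each literal
    print(2.2 + 3.1)        # 5.300000000000001
    print(round(2.675, 2))  # 2.67, because 2.675 is stored as 2.6749999...

    # Decimals keep the values exactly as written
    print(Decimal("2.2") + Decimal("3.1"))   # 5.3
    print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68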


Makes sense. I guess what I was getting at is that introducing Decimal to the example about f-string interpolation makes the example more complicated than it absolutely needs to be, but I can see why they did it.


I think some of that extra complexity could be papered over with either making decimal() available like int() and float(), which would eliminate the import, and/or implementing a decimal literal format, such as 0d2.675, analogous to the format of Hexadecimal and Octal literals (0xCC00CC and 0o267, respectively).
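
Roughly what that would look like side by side (the decimal() built-in and the 0d... literal are hypothetical; neither exists in any Python version today):

    # today: an import plus a string constructor call
    from decimal import Decimal
    price = Decimal("2.675")

    # with a built-in constructor, the import goes away (hypothetical)
    # price = decimal("2.675")

    # with a literal syntax, the call goes away too (hypothetical)
    # price = 0d2.675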


I like that second idea, perhaps a mention on the python-ideas mailing list?


Perhaps. I need to mull it over a bit before I take that plunge (but thank you for the encouragement!).

One possible objection is that while Decimal is in the standard library, it isn't a built-in type, so either Decimals need to be elevated to built-ins, or some magic needs to happen when importing the decimal module to add the literal.


They used Decimal just to show the nested attribute access


Here's a practical example that comes to mind. I work in film and we deal with frame rates. Often tools use 24 or 30 frames per second; 30fps is a simplification of 60 fields interlaced, which is technically 29.97fps. You would think integers would work, but the main use case we have is storing, comparing, and occasionally flipping between fractions and decimals. We've tried to update our tools to use 29.97 when it is necessary (although this spans Python, databases, and third-party applications). The first attempt used floats, but we'd get scenarios like 24.00001, or errors comparing two values. The second attempt just used strings for frame rate, so comparisons and storage work and you have a lookup table when you need to convert. Decimal sounds like a better solution.

A similar scenario in film (that hasn't caused as much trouble for me) is aspect ratio, the width/height of an image. Fractions and decimals are used interchangeably in conversation.
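
A minimal sketch of the comparison problem described above, assuming one tool stores the rounded 29.97 figure while another computes the exact NTSC ratio 30000/1001:

    from decimal import Decimal

    # Two tools that both "mean 29.97" can end up with floats that never compare equal
    print(30000 / 1001)            # 29.97002997..., not the float 29.97
    print(30000 / 1001 == 29.97)   # False

    # Storing the agreed-upon figure as a Decimal built from a string keeps
    # storage, display, and comparison consistent
    rate = Decimal("29.97")
    print(rate == Decimal("29.97"))   # True
    print(float(rate))                # 29.97, if a float is ever needed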


Besides using decimals instead of floats for your frame-rate use case (definitely a better fit), using a Python fraction may be a more useful representation for aspect ratios:

https://docs.python.org/3/library/fractions.html

Also possibly useful in that context (and bringing us back full circle to the OP's subject), Python 3.6 has added the as_integer_ratio() method to Decimal instances:

https://docs.python.org/3.6/library/decimal.html#decimal.Dec...
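
A short sketch of both suggestions, assuming Python 3.6+ for the Decimal method:

    from decimal import Decimal
    from fractions import Fraction

    # An aspect ratio stored as an exact fraction survives arithmetic intact
    ratio = Fraction(16, 9)
    print(ratio * 2)     # 32/9, no rounding along the way

    # A Decimal frame rate can be flipped to numerator/denominator form
    # when a tool wants a fraction (as_integer_ratio() is new in 3.6)
    print(Decimal("29.97").as_integer_ratio())   # (2997, 100)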


Doesn't seem very Pythonic to me...


Every type-inference language I am aware of assumes that anything that looks like a float is a float. It would be un-Pythonic to assume that everything that looks like a float is a decimal. Imagine having to cast every damn float literal to a float.


In Haskell: `2.3 :: Fractional t => t`

In other words, it can be inferred to be any type that implements the Fractional typeclass, which includes Decimal types. Not that it is relevant here, because you basically need static types for something like this to work: if you have proper type inference, you don't need to assume any type and can just interpret a literal as whatever is needed.


Since when does an exact decimal literal "look like a float" more than it looks like a decimal?


Since about 1972 when C adopted that as a convention.


Tradition. It was easier for computers and the rest is history.


Sure, lots of programming languages, though not all (Scheme doesn't, for instance), treat things that look like decimals as floats, as a popular performance-over-correctness optimization, but I don't think that makes an exact decimal representation look like anything but a decimal.


The Python Decimal class was introduced in version 2.4, about a decade after Python 1.0. It is a standard library module, not a built-in type. This is not the least bit unusual.

Even Swift, as modern as it gets, operates the same way. You are arguing against a convention that is as deep-rooted as CPUs capable of floating-point calculations.

To this day, there ARE no "decimal" CPU operations. There are integer and float operations.


Actually, I am arguing against describing exact decimal representation as something that "looks like a float" (particularly a binary float.)

I'm not arguing against Python's behavior. I do think that, at a minimum, there are major and very common software domains where the common optimization is more harmful than beneficial, and that it's good that some languages buck the trend. But while I don't think it should be an unquestioned tradition in language design, I don't think it's categorically wrong, either.

> To this day, there ARE no "decimal" CPU operations

First, this isn't true; there are CPUs that have decimal operations (still floating point, so only exact within a subset of their full range).

Second, it's irrelevant; many languages have types (including types for common literals) that don't map to a specialized class of operations in the CPU, where the language compiles operations down to aggregates of operations on some lower level type or types.


Yeah, I was going to say, "Since computers."



