> Do you realize that making + do what the ordinary mathematical + does is quite expensive? It requires arbitrary precision arithmetic and hence, potentially, memory allocations.
Yes, I do. It's an acceptable cost in most cases (you will note that much of the world runs on Python these days, a language in which + behaves sensibly), and dangerous optimisations should be opt-in rather than opt-out.
In the worst case you should at least make your language fail-stop by raising an error when the result of + is too large, rather than silently continuing in an invalid state.
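As a concrete illustration (a minimal Java sketch, assuming Java 8 or later), `Math.addExact` gives exactly this fail-stop behaviour: it throws an `ArithmeticException` on overflow instead of silently wrapping the way `+` does on `int`:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int big = Integer.MAX_VALUE;

        // Plain + on int wraps around silently: the program keeps
        // running in an invalid state.
        System.out.println(big + 1); // prints -2147483648

        // Math.addExact is fail-stop: it throws instead of wrapping.
        try {
            Math.addExact(big, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```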
> In the worst case you should at least make your language fail-stop by raising an error when the result of + is too large, rather than silently continuing in an invalid state.
That's what Swift does, FWIW. But there are downsides to that too (although those are also related to Swift not having a good fault-recovery mechanism).
In practice, whenever you know you're going to work with numbers large enough to overflow in Java, you probably want BigInteger or BigDecimal (e.g. in banking). Unfortunately, Java doesn't let you use the + operator with them.
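For illustration, a minimal sketch of what that looks like in practice: since Java has no operator overloading, every arithmetic step goes through explicit method calls such as `add` and `multiply`:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ExactArithmetic {
    public static void main(String[] args) {
        // No + operator: arithmetic goes through explicit method calls.
        BigInteger a = BigInteger.valueOf(Long.MAX_VALUE);
        BigInteger sum = a.add(a); // a + a, never overflows
        System.out.println(sum);   // 18446744073709551614

        // Same story for exact decimal arithmetic, e.g. money amounts.
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal total = price.multiply(BigDecimal.valueOf(3));
        System.out.println(total); // 59.97
    }
}
```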
> whenever you know you're going to work with numbers large enough to overflow in Java, you probably want BigInteger or BigDecimal
When you've made a conscious decision, you can pick the right thing, sure. The problem usually happens when you haven't thought about it at all, which is why the default should be safe and the unsafe optimisation should be opt-in.
Maybe for LOB applications it would be better if languages defaulted to arbitrary-precision arithmetic. But scientific computing is also a huge field that often uses the same languages, and there arbitrary precision is often the completely wrong tool: e.g. it can make certain key algorithms (like Gaussian elimination on exact rationals) exponential, because the intermediate results keep growing in size.
I feel like this is just one of those things developers should know about so they can make the right choice, much like DB indices.
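To make the blow-up concrete (a minimal Java sketch, not tied to any particular linear-algebra library): with exact arithmetic, the *size* of intermediate results can grow with every operation, so even a fixed number of arithmetic steps gets progressively more expensive. Repeated exact squaring is the simplest case, doubling the bit length at each step; the same kind of intermediate-value growth is what bites naive exact Gaussian elimination:

```java
import java.math.BigInteger;

public class ExactBlowup {
    public static void main(String[] args) {
        // Repeated exact squaring: the bit length doubles at every step,
        // so the representation (and the cost of each further operation)
        // grows exponentially in the number of steps.
        BigInteger x = BigInteger.valueOf(3);
        for (int step = 1; step <= 20; step++) {
            x = x.multiply(x);
            System.out.println("step " + step + ": " + x.bitLength() + " bits");
        }
    }
}
```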
> arbitrary precision is often the completely wrong tool: e.g. it can make certain key algorithms (like Gaussian elimination on exact rationals) exponential
Sure, but even in science, taking a long time to deliver a result, or failing to deliver one at all, is much safer than silently delivering the wrong one. If you want a rigorous approach to safety, there's the concept of fail-stop; there's no analogous safety model under which silent overflow is the safe option.