
1. Mathematicians have different priorities than programmers, and they use different tools. Working with an equation on a whiteboard, it's easier to write "a+b+c" and then cancel terms as needed. When writing a formula on my computer, cancelling terms is something I almost never do, so it would be silly to use a notation that's been optimized for that.

When I am doing algebra on my computer, I hope I have a tool like Graphing Calculator (not "Grapher"!) that lets me simply drag a term from here to there, and automatically figures out what needs to happen to keep the equation balanced.

2. They have, except they use Σ for the prefix version. When it's more than a couple terms, and there's a pattern to it, Σ (prefix notation) is far more convenient than + (infix notation).

If programming languages look like they do because they're taking the useful notations from mathematics, why doesn't your favorite programming language have a Σ function? Who's being stubborn here?



Most programming languages do have some variant of `sum(sequence)`. Python certainly does. Or, like, loops, which do the same thing.
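To make that concrete, here's a throwaway Python sketch (the list is just an example):

    values = [3, 1, 4, 1, 5]

    # the built-in prefix-style "sum"
    print(sum(values))    # 14

    # the equivalent explicit loop
    total = 0
    for v in values:
        total += v
    print(total)          # 14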

But they're optimized for different things. Using the same tool for infinite length sequences and fixed length sequences doesn't make a whole lot of sense. We often have different types for them (tuple/record vs. list/array) too.


Having done addition in both infix and prefix varieties on my computer over the past few decades, I don't understand why prefix notation is considered 'optimized' for indefinite-length (not 'infinite') sequences and infix notation is considered optimized for definite-length sequences.

What exactly "doesn't make a whole lot of sense" about (+ a b)? (It doesn't look the same as you wrote it in school? Neither does "3+4j", or "math.sqrt".)

Being able to use the same tool for different types of data is precisely what makes high-level programming so powerful. As Alan Perlis said, "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." Having only one way to add numbers (in languages that do that) is a great feature.

Python's insistence on these pre-selected groupings of functionality has always made it frustrating for me to work with. The two ways to add numbers look and work nothing alike. Does "TOOWTDI" not apply to math?

(Yes, I'm also frustrated and confused that Python has built-in complex numbers, and a standard 'math' module with trigonometric functions, but math.cos(3+4j) is a TypeError. What possible benefit is there of having a full set of complex-compatible functionality, but hiding it in a different module (cmath), with all the same function names?)
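Concretely, in Python 3:

    import cmath
    import math

    z = 3 + 4j
    print(cmath.cos(z))   # works: approximately (-27.035 - 3.851j)

    try:
        math.cos(z)       # math.cos only accepts real numbers
    except TypeError as exc:
        print("math.cos:", exc)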


The zen never says TOOWTDI, it says TO(APOO)OWTDI. (That's "there's one, and preferably only one, obvious way to do it.")

`reduce(op.add, [x, y])` works. Python could remove its infix ops and use only prefix ones. But prefix ones aren't obvious. And as Guido says, readability matters.
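For anyone pasting that into Python 3, it needs a couple of imports; something like:

    from functools import reduce
    import operator as op

    x, y = 2, 3                          # placeholder values for the x and y above
    print(reduce(op.add, [x, y]))        # 5, same as x + y
    print(reduce(op.add, [1, 2, 3, 4]))  # 10, the prefix-style sum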


Σ is sum in Python and many other languages, and it's generalized across all binary operators via reduce.
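Roughly, with functools.reduce (the operands here are just examples):

    from functools import reduce
    import operator as op

    nums = [1, 2, 3, 4]
    print(sum(nums))                             # 10   (Σ)
    print(reduce(op.mul, nums))                  # 24   (Π, the product analogue)
    print(reduce(op.and_, [True, True, False]))  # False ("big and")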



