This line is a straw man. I'm sure code like that exists, but it's hardly the best we can do. The proposed system is good enough that you don't have to compare it against a straw man for it to look good and useful.
force = 6.67*10^-11*mass_1*mass_2/radius^2
First, we can write it like this:
force = 6.67*10^-11 * m1 * m2 / radius^2
And what language doesn't support this?
force = 6.67e-11 * m1 * m2 / radius^2
And also, what about an abstraction, and the ability to write multiplication naturally?
-- a.k.a. G
-- nobody really knows why this is so different from what relativity predicts
const GRAVITATIONAL_CONSTANT = 6.67e-11
fun gravitationalAttraction(m1, m2, radius) =
GRAVITATIONAL_CONSTANT m1 m2 / radius^2
val force = gravitationalAttraction(mass1, mass2, radius)
Now we actually know what it's calculating and why, and what that tiny number is. Would it be better if rendered as a large fraction? Absolutely. But I think it is less necessary if you write code like this well.
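For comparison, here is the same abstraction sketched in Python, a language without juxtaposition-as-multiplication (the snake_case names and the Earth/Moon figures are mine, not from the pseudocode above; the masses and distance are the usual rounded values):

```python
# G, the gravitational constant, in N*m^2/kg^2
GRAVITATIONAL_CONSTANT = 6.67e-11

def gravitational_attraction(m1, m2, radius):
    """Newton's law of universal gravitation: G * m1 * m2 / r^2."""
    return GRAVITATIONAL_CONSTANT * m1 * m2 / radius**2

# Roughly the Earth-Moon attraction: ~6e24 kg, ~7e22 kg, ~4e8 m apart.
force = gravitational_attraction(5.97e24, 7.35e22, 3.84e8)
```

The explicit `*` is noisier than juxtaposition, but the named constant and named function carry the same explanatory weight.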
The suggested system is good but not novel. Rendering maths as you type it has been there in Mathematica, etc. forever.
That syntax doesn't work along with using parentheses for function application unless you do something like making the parser context-sensitive (so function followed by '(' means application but value followed by '(' means multiplication) or treat whitespace as significant (so "f(x+1)" is application but "f (x+1)" is multiplication). This is why Mathematica does Sin[22] instead of Sin(22).
(It used to be that (a,b) in Mathematica was shorthand for Sequence[a,b], which led to silly bugs like accidentally writing f(a,b) instead of f[a,b], giving you Times[f,a,b] instead. This is just to illustrate why it is not obvious to make it so that juxtaposition is multiplication.)
> Rendering maths as you type it has been there in Mathematica, etc. forever.
I know you can type superscripts, subscripts, fractions, etc. using shortcuts. If you meant "as you type it" as in Mathematica will reformat what you type in more traditional notation as you type it, then I am wondering where you can enable that.
>That syntax doesn't work along with using parentheses for function application unless you do something like making the parser context-sensitive (so function followed by '(' means application but value followed by '(' means multiplication) or treat whitespace as significant (so "f(x+1)" is application but "f (x+1)" is multiplication). This is why Mathematica does Sin[22] instead of Sin(22).
I'm not so sure that's true? You can just parse 'a b c d e' as
Apply (Apply (Apply (Apply a b) c) d) e
and then you can just say that Apply is overloaded to mean multiplication for numbers, just like + is overloaded to mean concatenation for strings in most languages, but addition for numbers. It doesn't actually change the parsing, your parsing doesn't become context-sensitive. 'Apply x y' would mean multiplication if the arguments are numbers and application if the left argument is a function.
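A minimal sketch of that idea in Python (the names `juxtapose` and `eval_chain` are mine): the chain parses one way, left-associatively, and the application-versus-multiplication choice is made by inspecting the left operand, just as '+' dispatches on its operands.

```python
import math
from functools import reduce

def juxtapose(left, right):
    """One rule for 'a b': application when the left operand is a
    function, multiplication when it is a number."""
    return left(right) if callable(left) else left * right

def eval_chain(*terms):
    """'a b c d e' parses left-associatively as ((((a b) c) d) e)."""
    return reduce(juxtapose, terms)

product = eval_chain(2, 3, 4)        # all numbers, so: 2 * 3 * 4
applied = eval_chain(math.sin, 0.0)  # left side is a function: sin(0.0)
```

Here the dispatch happens at runtime for brevity; a statically typed language would make the same choice at type-checking time without changing the parse.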
Even if you allow multiplication of functions by scalars (which you probably shouldn't, IMO) you can easily say that 'x f' means 'x times f' while 'f x' means 'f applied to x'.
>I know you can type superscripts, subscripts, fractions, etc. using shortcuts. If you meant "as you type it" as in Mathematica will reformat what you type in more traditional notation as you type it, then I am wondering where you can enable that.
Well you can write :alpha: and it will make it an actual alpha character, you can write superscripts and it will superscript them, it will make fractions readable, etc. It will turn -> into a unicode right arrow, that sort of thing.
That's altering the syntax for function application by allowing f x to mean f(x), which is fine but changing the rules. If we do go with that, then "a b c" should be "Apply a (Apply b c)" because of the associativity of application --- traditional math tends not to have functions which return functions.
There is an ambiguity which comes up when you allow both f x and f(x) syntax, which is you cannot tell the difference between f(x,y) and f((x,y)) if your language has tuples. (One solution: make tuples be like Mathematica's Sequence, thus establishing the associativity of Cartesian products once and for all.)
A gotcha is that you can't write f(x)(y) if your function does return a function, since this will parse as f(x y). You would need to write (f(x))(y) instead. (If you insist the associativity rule should be the other way, then 3f(x) would be (3f)x, which is possibly ok, and sin cos x would be (sin(cos))(x).)
I don't think you can have all of traditional mathematical notation (which is often highly overloaded) in a programming language while only using syntax for disambiguation.
Mathematicians mostly depend on the semantics of an expression to disambiguate it. You could get something close by making use of a type system to filter all possible parses.
For example, sin cos x would parse as sin(cos(x)), (sin(cos))(x), (sin * cos)(x), ... and most of these would be discarded by the type checker. If you do it naively, the complexity will be exponential, but I think you could use dynamic programming to handle that. In my example, it would be more problematic that (sin * cos) might be interpreted as multiplication of functions, but this could be solved by having separate operators for numbers and other types, and only allowing some to be represented by juxtaposition.
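A toy sketch of that filtering step (my own names, and a deliberately restricted type system in which juxtaposition can only mean application or number-times-number, i.e. function-function products are excluded, as suggested above):

```python
def parses(terms):
    """All binary bracketings of a juxtaposition chain (Catalan many,
    hence the exponential blowup if done naively)."""
    if len(terms) == 1:
        yield terms[0]
        return
    for i in range(1, len(terms)):
        for left in parses(terms[:i]):
            for right in parses(terms[i:]):
                yield (left, right)

# sin : Num -> Num (encoded as a pair), cos : Num -> Num, x : Num
TYPES = {"sin": ("Num", "Num"), "cos": ("Num", "Num"), "x": "Num"}

def typecheck(tree):
    """Type of a parse tree, or None if it is ill-typed."""
    if isinstance(tree, str):
        return TYPES[tree]
    lt, rt = typecheck(tree[0]), typecheck(tree[1])
    if lt is None or rt is None:
        return None
    if isinstance(lt, tuple) and lt[0] == rt:   # application
        return lt[1]
    if lt == "Num" and rt == "Num":             # multiplication
        return "Num"
    return None

survivors = [t for t in parses(("sin", "cos", "x"))
             if typecheck(t) is not None]
# only sin(cos(x)) survives; (sin cos)(x) is rejected by this type system
```

Memoizing `parses`/`typecheck` over subranges (CYK-style dynamic programming) is what keeps this polynomial instead of exponential.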
Overall, I'm not convinced that such a highly ambiguous system would be beneficial in a general purpose programming language, but when you just want to type in a specific mathematical formula, disambiguate it once, and then work with the unambiguous representation (maybe using a pretty-printer), I think it would be useful.
> I don't think you can have all of traditional mathematical notation (which is often highly overloaded) in a programming language while only using syntax for disambiguation.
I think it is reasonable to be able to handle certain subsets of mathematical notation, but yeah, in total it seems to be context dependent. (I've tried designing languages to deal with this before, and my previous comments were about actual issues I had encountered making it all consistent.)
> For example, sin cos x would parse as sin(cos(x)), (sin(cos))(x), (sin * cos)(x), ... and most of these would be discarded by the type checker
Unfortunately, all of those make some sense! The first is what anyone who took precalculus would read, but the second could be substituting cos into the power series of sin, and the third could be the pointwise product (sin * cos)(x) := sin(x) * cos(x), which is particularly important for the ring of functions on the real line.
> Overall, I'm not convinced that such a highly ambiguous system would be beneficial in a general purpose programming language, but when you just want to type in a specific mathematical formula, disambiguate it once, and then work with the unambiguous representation (maybe using a pretty-printer), I think it would be useful.
One thing I've been enjoying about Mathematica is the ability to have the source code be written in 2D graphical notation.
I'm not convinced that it would be useful to have a special way to input and disambiguate mathematical formulae for a general purpose programming language because 1) actual formulae are infrequent and 2) it tends not to be too challenging to translate a formula into the syntax of the programming language. I think the main use case for trying to interpret traditional math notation is for exploratory calculating/graphing/programming where you would only ever use the equation at most a few times.
>If you do it naively, the complexity will be exponential
Hindley-Milner type inference is exponential-time in the worst case but is still widely used. If people never actually write the extremely nested expressions that would require this sort of disambiguation in real code, it will probably be completely fine.
>That's altering the syntax for function application by allowing f x to mean f(x), which is fine but changing the rules. If we do go with that, then "a b c" should be "Apply a (Apply b c)" because of the associativity of application --- traditional math tends not to have functions which return functions.
That's the standard way of writing function application in functional programming languages.
And in those languages,
f a b c
==
(((f a) b) c)
because they're curried: a function of 2 arguments (f: a -> b -> c) is a function f from type a to type (b -> c).
And mathematics definitely has functions that return functions. For example, any function where the codomain is a space of sequences is technically a function that returns functions, as a sequence in X is a function from the natural numbers to X.
>There is an ambiguity which comes up when you allow both f x and f(x) syntax, which is you cannot tell the difference between f(x,y) and f((x,y)) if your language has tuples. (One solution: make tuples be like Mathematica's Sequence, thus establishing the associativity of Cartesian products once and for all.)
You don't need to tell the difference. If you have a curried function, then you write 'f x y', which isn't the same as 'f (x y)' (which is like the C-style 'f(x(y))'). If you have a non-curried function, then 'multiple arguments' are just a tuple; at least that's how it works in ML.
A function that takes a tuple and a curried function of multiple arguments are equivalent. That's related to the fact that ((A AND B) IMPLIES C) is equivalent to (A IMPLIES (B IMPLIES C)).
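That equivalence (currying) can be written down directly; `curry2` and `uncurry2` are my names for the two directions:

```python
def curry2(f):
    """From a function on pairs to a function returning a function:
    (A x B -> C)  ~  (A -> (B -> C))."""
    return lambda a: lambda b: f((a, b))

def uncurry2(g):
    """The inverse direction: a curried function becomes one on pairs."""
    return lambda pair: g(pair[0])(pair[1])

add_pair = lambda p: p[0] + p[1]   # takes a single tuple argument
add = curry2(add_pair)             # 'add 2 3' style: add(2)(3)
```

The two maps are inverse to each other, which is why ML can get away with treating "multiple arguments" as a tuple.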
>A gotcha is that you can't write f(x)(y) if your function does return a function, since this will parse as f(x y). You would need to write (f(x))(y) instead. (If you insist the associativity rule should be the other way, then 3f(x) would be (3f)x, which is possibly ok, and sin cos x would be (sin(cos))(x).)
sin cos x is okay notation in mathematics because sin can't take cos as an argument, so it obviously has to mean 'sin(cos x)' and not 'sin(cos)(x)'. But when dealing with computers, I would far rather just write sin (cos x) which is how people write it in Haskell/ML/Lisp.
Not sure if you mean existential quantification or just the number 3 in '3f(x)' but as I said, I don't think that it's a good idea to support 'x f' as meaning '(lambda (a) (* x (f a)))' or '(a => x * f(a))' or '\a -> x * (f a)' or whatever your preferred function notation is for that scaling operation.
Just to keep things in perspective, all I'm saying is that "the ability to write multiplication naturally" comes at a cost. I'm aware of Haskell/ML-like syntax for curried functions, but it is at odds with juxtaposition for multiplication.
I'm speaking from the position of having tried designing languages with these features and not being able to find a way to make everything consistent.
> And mathematics definitely has functions that return functions
Recall, I said "tends not to". My point was that the notation of mathematics is set up to prefer the case of functions returning values.
> Not sure if you mean existential quantification or just the number 3 in '3f(x)'
I meant 3f(x) to mean 3*f(x). I believe you'd want to be able to write 3f(x) since you should be able to have the "ability to write multiplication naturally," in your words.
> sin can't take cos as an argument
But sin can be represented as a power series and cos can be substituted in with cos^n meaning iterating cos n times. This may or may not be reasonable; I don't think it is "obvious."
> Just to keep things in perspective, all I'm saying is that "the ability to write multiplication naturally" comes at a cost. I'm aware of Haskell/ML-like syntax for curried functions, but it is at odds with juxtaposition for multiplication.
But it doesn't. I've already explained why it's not an issue: they associate the same way (so it's not a parse-time issue), and they're never ambiguous (the type of the LHS is a function in one case and a number in the other case, and functions and numbers are never the same, so it's never an issue).
>Recall, I said "tends not to". My point was that the notation of mathematics is set up to prefer the case of functions returning values.
Functions are values...
>I meant 3f(x) to mean 3*f(x). I believe you'd want to be able to write 3f(x) since you should be able to have the "ability to write multiplication naturally," in your words.
3 (f x)
>But sin can be represented as a power series and cos can be substituted in with cos^n meaning iterating cos n times. This may or may not be reasonable; I don't think it is "obvious."
I think it's pretty obviously unreasonable. In any reasonable language:
sin : Number -> Number
cos : Number -> Number
So 'sin cos' is going to be a type error: "type mismatch: expected type 'Number' in argument to 'sin', got 'cos : Number -> Number'".
Ok, so you have indeed changed syntax. Saying that Haskell/ML-style function application syntax is consistent with juxtaposition for multiplication is in no way inconsistent with anything I have said so far. Remember, you used "gravitationalAttraction(mass1, mass2, radius)" in your very first example, and I am speaking to the difficulties getting that example to work.
When I said "it is at odds with juxtaposition for multiplication," I meant "having Haskell/ML-style function application along with classical function application notation", given your original example.
There is nothing controversial with the statement "if you are allowed to change the notation for function application, then you can add juxtaposition for multiplication in a consistent way." But I feel this goes against your design goal of "writ[ing] multiplication naturally." That's not to say we can't come to see Haskell/ML-style function application as natural, but it is not yet the mathematical syntax people learn in school.
>>Recall, I said "tends not to". My point was that the notation of mathematics is set up to prefer the case of functions returning values.
>Functions are values...
Either I'm missing something basic, or what I actually meant was something about how mathematicians think about the objects of their work. It is extremely rare to see the functions returned by functions being used immediately as functions. They'd rather either use subscripts like
F_t(v)
or use a pairing between spaces.
In abstract algebra, it's common to see fgx to mean f(g(x)) when f,g are in a ring R and x is in an R-module. With Rtimes being the multiplication in R and App being application, fgx is interpreted as both App(Rtimes(f,g),x) and App(f,App(g,x)), since they must be equal. Mathematicians would be surprised at the interpretation App(App(f,g),x).
> I think it's pretty obviously unreasonable. In any reasonable language:
Yes, you can make consistent systems where it is unreasonable, but that doesn't mean that there is no system in which it can be a reasonable interpretation. I didn't just make that interpretation up: things like e^A where A is a square matrix come up frequently in the study of differential equations, and it is interpreted by substituting A into a power series for e^x.
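A sketch of that construction for 2x2 matrices in plain Python, truncating the series after a fixed number of terms (the names and the truncation count are mine; in practice you would reach for a library routine such as scipy.linalg.expm):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def expm(A, terms=25):
    """e^A = sum over n of A^n / n!, truncated: literally substituting
    the matrix A into the power series for e^x."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity, the n = 0 term
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(term, A)                     # now A^n * (n-1)!...
        term = [[v / n for v in row] for row in term]  # ...becomes A^n / n!
        result = mat_add(result, term)
    return result

# A generates rotations, so e^A is rotation by 1 radian:
# [[cos 1, -sin 1], [sin 1, cos 1]].
A = [[0.0, -1.0], [1.0, 0.0]]
E = expm(A)
```

The rotation example is the standard sanity check for this series definition from the theory of linear ODEs.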
Does anyone know of any implementation (in some form, maybe unfinished) of this for the web? I'm having a hard time searching for it. NaSC looks awesome but is pretty hard to get running for a one-off lesson in schools (and for use by students at home).