I know a lot of people don't like overloading in general because they've been bitten by people abusing it, but I don't think that's where Zig is coming from. Rather, the project places a high value on being able to understand a snippet of code in isolation, particularly the performance implications of that snippet.
Especially in math-heavy domains, I don't think anyone is arguing that "vec1.mul(vec2.plus(vec3))" is cleaner code in the abstract (I've personally written pre-processors _entirely_ to avoid having to write that kind of garbage when doing math-heavy code in an environment unfriendly to such syntax), but the function calls make it crystal clear that something non-trivial is happening under the hood.
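To make the contrast concrete, here's a rough Python sketch (the Vec3 type and its plus/mul methods are made up purely for illustration): both spellings compute the same thing, but only one of them reads like arithmetic.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    # Method-call style: explicit, but noisy in math-heavy code.
    def plus(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def mul(self, other: "Vec3") -> "Vec3":
        # Component-wise product, purely for illustration.
        return Vec3(self.x * other.x, self.y * other.y, self.z * other.z)

    # Operator-overloading style: terser, but hides the calls above.
    __add__ = plus
    __mul__ = mul

v1, v2, v3 = Vec3(1, 0, 0), Vec3(0, 1, 0), Vec3(0, 0, 1)

# Same computation, two spellings.
assert v1.mul(v2.plus(v3)) == v1 * (v2 + v3)
```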
Do I want to give up operator overloading in Python? Absolutely not, and to be frank I wish that portion of the language were even more dynamic. Do I care about Zig not having operator overloading? Not in the slightest. It sits at a different point in the language design space, and I'm super excited about it.
> the function calls make it crystal clear that something non-trivial is happening under the hood.
Well, it makes it crystal clear that something non-trivial may be happening under the hood. If I have a vector type which is implemented using SIMD intrinsics, I'd still call its addition operation "trivial", even if it hasn't been blessed by the language as such.
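As a rough illustration (assuming NumPy, whose element-wise operations are typically backed by vectorized C loops, often SIMD-accelerated depending on the platform), the overloaded `+` below reads as trivially as scalar addition, and the explicit function-call spelling tells you nothing more about the cost:

```python
import numpy as np

a = np.arange(1024, dtype=np.float32)
b = np.ones(1024, dtype=np.float32)

# Reads like a trivial scalar addition, but dispatches to NumPy's
# vectorized (often SIMD-accelerated) ufunc loop written in C.
c = a + b

# The same operation as an explicit call; no clearer about what it costs.
d = np.add(a, b)

assert np.array_equal(c, d)
```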
The point that we only know something non-trivial _may_ be happening is definitely fair :) I'll leave the original comment as-is since you've already responded.
While we're nit-picking, a wide vector type implemented with SIMD intrinsics would still have a non-trivial addition as far as Zig is concerned; your specific example really only holds for sufficiently primitive vectors.
My beef with operator overloading is having to google symbols and not being able to inspect them with a language server / IDE. In PyCharm/CLion you can jump to the dunder definition of the operator. Haskell lets you get :info on symbols.
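For example (a small sketch using `fractions.Fraction` just because it's a standard-library type that overloads `+`), the operator resolves to an ordinary dunder you can locate and read, which is what the IDE's "jump to definition" is doing for you:

```python
import inspect
from fractions import Fraction

x = Fraction(1, 3)

# The overloaded `+` is just a method lookup, so it stays auditable:
impl = type(x).__add__
print(impl)                           # the function implementing `+` for Fraction
print(inspect.getsourcefile(impl))    # the file where it's defined (fractions.py)
print(inspect.getsource(impl)[:200])  # the first lines of its source
```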
Google has gotten much better at handling symbols.
Basically, custom infix operators are super convenient as long as they are auditable and aren't abused.