
The gap between how fantastic the language is according to Julia fanboys and how annoying my actual experience has been is very hard to take.

Everything needs a package installed, even simple debugging. Graphics are very broken, especially in headless environments. You get incomprehensible, unfixable error messages, since everything is statically linked. But you don't get speed either, since everything has to compile first. Unicode symbols are encouraged (and at minimum it takes an order of magnitude more time to type/paste/convert some Greek symbol than to type an English letter). Throughout the whole experience you can feel a smug attitude coming from the ecosystem: "we make everything perfect, thoughtful, and beautiful for you; if something breaks, it's your fault." But then something actually breaks...

The most beautiful thing about the language is the computational physics community's enthusiasm for it (probably in the hope of finally abandoning Fortran). The JuliaCon conferences are actually good, and you find a lot of interesting computational problems solved in the language.

It's just that the basic unfriendliness and rough experience, coupled with a lot of immature features, really ruin it for me.



I’ve been really enjoying it since 1.6 came out. A lot of the points you raise are perfectly valid, but the speed difference coming from R and Python is real (as long as you’re writing computationally intensive programs and not scripts, for which it is a bad tool at the moment). There’s less friction in getting performant code than in writing supplemental functions in C++.
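
To make that concrete, here's a minimal sketch of my own (all names and numbers purely illustrative): a plain scalar loop with a sequential dependency, the kind of code that crawls in pure Python or R but compiles to fast native code in Julia.

    # Each step depends on the previous one, so this loop can't be
    # vectorized away in numpy/R style.
    function logistic_map(r, x0, n)
        x = x0
        for _ in 1:n
            x = r * x * (1 - x)
        end
        return x
    end

    logistic_map(3.7, 0.5, 10^8)  # the first call pays the compile cost;
                                  # after that the loop runs at native speed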

Also, I have to disagree with the Unicode issue. The target audience is heavily invested in LaTeX, so typing things like \theta is natural. It also results in more terse code compared to writing out all the letters, and at least for me it means somewhat lower cognitive overhead when implementing algorithms and the like.
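
As a toy sketch of what I mean (entirely my own example), a gradient-descent step can be written to mirror the update rule in a paper; θ is typed \theta<TAB>, η as \eta<TAB>, and ∇ as \nabla<TAB>:

    loss(θ) = (θ - 3)^2       # a toy objective
    ∇loss(θ) = 2 * (θ - 3)    # its gradient, named like the math

    function descend(θ; η = 0.1, steps = 100)
        for _ in 1:steps
            θ -= η * ∇loss(θ)  # reads like the update rule on the page
        end
        return θ
    end

    descend(0.0)  # converges to ≈ 3.0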


> Julia fanboys

> smug attitude

This is a bit over the top, don't you think? Julia is an open community with varied contributors. I've not seen any core contributors here claiming Julia is perfect; if anything, they are conscientious about the "time to first plot" problem and numerous other issues. Developers using Julia who comment here offer their unvarnished experience and critique.


Absolutely agree about the disconnect between my own experience and the views coming from the community. It has made an already frustrating experience even more off-putting.


I agree that the experience is extremely off-putting. But having been in the community for almost 5 years now, I have to say there are super awesome, lovely people who are mostly working silently in the background, and very hard at that. The loud minority, unfortunately, is grating, and I do wish the stewards of the language established stricter guidelines on conduct or the philosophy of the language. I don’t want to name names here, but when prominent members of the Julia community bash other programming languages while ignoring the painful friction in Julia, it comes across as very tone deaf and in poor taste, and it sets a bad precedent.

I think having a weekly what’s great about Python/Rust/Go/Zig and how can we port this to Julia would be awesome, instead of the weekly gripes about Python/MATLAB/R.


> the weekly gripes about Python/MATLAB/R

At the cost of repeating the old Bjarne quote for the millionth time, you hear complaints because those are languages that people use in their day-to-day. If only Julia were more popular, you'd definitely see it get caught in the crossfire.

Also worth noting that, in terms of intended audience, Julia is more like MATLAB and R than Python, so most of the more abstract parts of the discussion would likely apply to Julia too, just to a lesser degree.


As a moderator on the primary discussion board — Discourse — please do flag problematic posts and/or DM me. We don't see everything and rely on flags.


It might be because my background is CS and not hard science, but non-ASCII variable names are one of the most baffling decisions I have ever seen in any programming language.

Whatever reading comprehension advantage they may carry is completely negated by the fact that they can't be typed from a regular keyboard.

I hear it's gaining traction, though, so it's likely many of the problems you mention will be fixed or mitigated, eventually.


Supporting the LaTeX expansion is just a necessary feature of any Julia-compatible code editor. It is not then particularly more difficult to type `\sigma` over `sigma`.

Also note that other languages often support non-ASCII identifiers as well --- for example, `σ` is a completely valid identifier name in Python (though Python is more restrictive than Julia: `σ²` is valid in Julia but not in Python). It even works in C, but that might just be a GNU extension.
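
To illustrate the difference, a two-line sketch of my own:

    σ = 2.5      # typed \sigma<TAB>
    σ² = σ^2     # typed \sigma<TAB>\^2<TAB>; a perfectly ordinary variable
                 # name in Julia, but a syntax error in Python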


It's not just identifiers though, it's also operators. To take an example from the OP article, I can't relate to anyone who claims that ∛x is better than cbrt(x). The fact that this is not only allowed but actively encouraged is absolutely baffling to me.
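
For what it's worth, nothing forces the glyph on you: in Julia, ∛ (typed \cbrt<TAB>) is just Base's Unicode alias for cbrt, so the two spellings are interchangeable:

    julia> ∛27.0
    3.0

    julia> cbrt(27.0)
    3.0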


The language of science is mathematics, so code that more closely resembles mathematical notation can be more efficient for practicing scientists to understand. There's a lot of value in code that looks like the expression published in the paper, because when another scientist wants to build upon or modify that code, they'll read and understand the paper first, not the package code.

Also in physics, sometimes you get really large expressions with a lot of Greek letters and operators. In the paper, you make it a double-wide multiline equation with LaTeX. It makes a big difference if that corresponds to a few lines of Greek symbols in your code, and not twenty.


> I can't relate to anyone who claims that ∛x is better than cbrt(x)

I do. It's a simple \cbrt<tab> away in any editor worth their salt, and it greatly improves readability of computation-heavy code – ∛(ϵ² + ξ³) ≤ Φ(π) is much clearer than the ASCII equivalent.


Eh, I have to disagree on that. A mathematician reading ∛x in code will immediately know what it means, but that isn't necessarily true when seeing cbrt(x). Julia was written for mathematicians, not programmers, and takes some getting used to.


The mathematician still has to type \cbrt<TAB> to produce the glyph, so they have to know the name of the function anyway. Not only that, they have to figure out the name of the function from reading the Unicode glyph, or be constantly pasting characters into the REPL. I don't get how that's more efficient in any way.
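
One mitigating detail: you don't have to know the name in advance. Pasting an unfamiliar glyph into the REPL's help mode tells you how to type it (docstring output omitted below):

    help?> ∛
    "∛" can be typed by \cbrt<tab>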


We are in the age of Unicode. The LaTeX entry methods are optional, not required. I rarely use them, and I don’t paste in characters. I hit a modifier key. Typing a Greek letter is no more inconvenient than typing an uppercase letter.


> Whatever reading comprehension advantage they may carry is completely negated by the fact that they can't be typed from a regular keyboard.

There's a curious blind spot among coders when it comes to customizing their main interface with their machines. Keyboards are heavily customizable.

Some wins are extremely easy: if you want instant access to all Greek letters on a Mac, for example, you can have your caps lock key toggle between your standard layout and the Greek layout (no \alpha, just α). The rabbit hole runs deep from there if you are adventurous. Look at all the option-key symbols you don't use; swap them out for nice things like real arrows, ∈, and whatever you fancy.

Maybe you have constraints or biases about work code being all ASCII, but typeability is up to you.


Absolutely. On Linux it’s as simple as defining a “dead Greek” key, and you have access to the whole Greek alphabet. When I hear people complain about the terrible burden of Julia allowing Greek letters, it sounds to me like someone complaining that it allows uppercase letters. Same thing: one modifier key.


Customization is a double-edged sword, though. A keyboard is simple and universal, and it's always the same: my skills transfer between machines, operating systems, languages, software, etc.

Yes, it's just muscle memory and you can always re-train it, but there's very little return for my investment.

> Maybe you have restraints or biases about work code being all ASCII

I'm heavily biased towards all-ASCII for everything except explicitly multilingual contexts (UIs, display formats, browsers, document editing, etc.). As far as I'm concerned, any byte set to any value above 127 in any source file should be a compile-time error. A few reasons why:

- It's basically guaranteed something somewhere will screw up the encoding. ASCII is the safest subset.

- Better ability to quickly, reliably input characters across machines and tech stacks.

- Easy to memorize. Characters are easily and immediately recognizable by everyone worldwide.

- Some fonts might lack support for some non-ASCII characters.

- Many non-ASCII characters are just plain unreadable. On my screen, a lowercase alpha looks like a lowercase Latin "a".

I can see an argument for allowing non-ASCII characters inside string literals and comments, but with non-ASCII identifiers you're just looking for trouble.


> Yes, it's just muscle memory and you can always re-train it, but there's very little return for my investment.

As a data point of one, I've found the return to be enormous and the investment surprisingly small. You have a whole lifetime of typing ahead of you, so what's a small investment of time compared to that? Just something to consider.

> As far as I'm concerned, any byte set to any value above 127 in any source file should be a compile-time error.

People can vote by using the languages and tools they want, but I have been waiting for that limitation to die for quite some time. I also use function names longer than 6 characters. That said, I realize I'm lucky enough to be in charge of my own programming environments and don't have to worry about its adoption in unknown limited scenarios.

> I can see an argument for allowing non-ASCII characters inside string literals and comments, but with non-ASCII identifiers you're just looking for trouble.

Again, just as a data point, I've never found this to be an issue (if one is prudent and not obnoxious with it), though the language support needs to be in place. In my main language these days, Swift, it's fine. Julia and Kotlin too.


> Whatever reading comprehension advantage they may carry is completely negated by the fact that they can't be typed from a regular keyboard.

I'm also not really a fan in many cases, but for Bayesian models it makes so much more sense to write "σ" than "sigma". With these complex models, being able to stay closer to the math reduces complexity for the reader. Also, you don't have to use Unicode in your own Julia codebase; it's an extra feature you can use if you want.
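
For instance, here's a hand-rolled sketch (plain Julia, not any particular PPL's API) of a normal log-density that can be checked symbol by symbol against the paper; μ is typed \mu<TAB> and σ as \sigma<TAB>:

    # 2π and 2σ^2 use Julia's implicit numeric-literal multiplication,
    # which also mirrors the written math.
    normal_logpdf(x, μ, σ) = -log(σ) - 0.5 * log(2π) - (x - μ)^2 / (2σ^2)

    normal_logpdf(1.0, 0.0, 1.0)  # ≈ -1.4189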


They can very easily be typed from a regular keyboard; I do it every day: \alpha<TAB>. The result is also much more readable than spelling the letters out, like alpha_ij.


I agree. It's like they looked at the typical C++ compilation experience and thought, "I wish MATLAB had those issues too!"



