
The part about C++ is true: when you need to use a combination of SIMD/CUDA/MPI, C++ is the only game in town. The ecosystem is so vast, and we rely on so many scientific/numerical libraries that took tiny armies of grad students to create.

But for new projects in the future, I am looking towards Mojo and Chapel.


Mojo has its foot in the door through compatibility with Python.

Otherwise, there is no reason to pick either when you can do `dotnet add package System.Numerics.Tensors` and get pure C# tensor/BLAS-style routines that run at near-optimal hardware speed, letting you easily implement inference without ever reaching for C or C++. And for GPU-related scenarios there is ILGPU. Now, either is extremely niche compared to Python, but the overall choice will give you great support everywhere, unlike with Chapel or Mojo.


I feel the opposite; they really are focused on being a language that's great for AI and heterogeneous compute, since that's what Modular is focused on. But the features are attractive for other use cases too: it has access to MLIR, compile-time metaprogramming, easy and ergonomic SIMD, and soon GPU support.


Okay great.

Why does AI and heterogeneous compute need ownership and lifetime checks again?

These are really needed for low level pointer heavy code.


So they can optimize code. The main theme is addressing the two-language problem (a single language spanning high and low level). They want a uniform language for CPU/GPU/whatever-PU. They can't have garbage collection, so they need strict dataflow analysis with precise destruction points.


Do you see my point that they are trying to do everything?


They're not trying to do everything. They don't care about stuff like Hindley–Milner inference, logic programming, relational algebra integration, TSX-like HTML integration, functional purity, algebraic effects, and dozens of other concepts. They just care about extending Python so you don't need to use FFI to jump into C++/CUDA for performance, because the language already supports high-performance constructs.


Because writing a garbage collector to run on graphics cards is a thankless PhD-class problem; even if it's technically possible, no one will adopt it.


I am really enjoying the language, the level of improvement over the last few months has been amazing. At work all our heterogeneous compute needs are implemented using C++/CUDA. I've been exploring new languages for the type of work we do on and off over the last few years: Julia, Rust, D, Chapel, and Mojo. It seems the only new languages serious about native heterogeneous compute are: Julia, Chapel, and Mojo.

Of those languages I can say Mojo and Chapel were the most impressive. But Mojo is just so fun to write, I've ported a bunch of my "hobby numerical code" to Mojo. I am practically all in on Mojo since it'll give me access to MLIR.

I have always wondered though, why does Chapel get no love online???


Most likely because HPC isn't cool material for TikTok videos by language influencers.

As a CERN alumnus and language nerd, I find Chapel quite cool.


eech. i don’t want to use a language made by people who take advice from, let alone watch, tiktok influencers.


Bad news: you won't have many left, given how much content is being made across social media platforms, not only TikTok.


I feel the same way, I love using Julia, but the features that Mojo provides are exciting. It's great that we have both of them.


The language is far from stable, but I have had a LOT of fun writing Mojo code. I was surprised by that! The only promising new languages for low-level numerical coding that can dislodge C/C++/Fortran somewhat, in my opinion, have been Julia/Rust. I feel like I can update that last list to be Julia/Rust/Mojo now.

But, for my work, C++/Fortran reign supreme. I really wish Julia had easy AOT compilation and no GC, that would be perfect, but beggars can't be choosers. I am just glad that there are alternatives to C++/Fortran now.

Rust has been great, but I have noticed something: there isn't much of a community of numerical/scientific/ML library writers in Rust. That's not a big problem, BUT, the new libraries being written by the communities in Julia/C++ have made me question the free time I have spent writing Rust code for my domain. When it comes time to get serious about heterogeneous compute, you have to drop Rust and go back to C++/CUDA. When you try to replicate some of that C++/CUDA infrastructure for your own needs in Rust, you really feel alone! I don't like that feeling ... of constantly being "one of the few" interested in scientific/numerical code in Rust community discussions ...

Mojo seems to be betting heavily on a world where deep heterogeneous compute abilities are table stakes. The language is really a frontend for MLIR, which is very exciting to me as someone who works at the intersection of systems programming and numerical programming.

I don't feel like Mojo will cause any issues for Julia, I think that Mojo provides an alternative that complements Julia. After toiling away for years with C/C++/Fortran, I feel great about a future where I have the option of using Julia, Mojo, or Rust for my projects.


> I really wish Julia had easy AOT compilation and no GC, that would be perfect

I pretty strongly disagree with the no-GC part of this. A well-written GC has the same throughput as (or higher than) reference counting for most applications, and the Rust approach is very cool but a significant usability cliff for users who are domain first, CS second. A GC is a pretty good compromise for 99% of users since it is a minor performance cost for a fairly large usability gain.


Too bad Julia doesn't have this theoretical "well written GC". I do not like GCs, so I agree with OP's sentiment. Why solve such a hard problem when you don't have to?

I don't find ownership models that difficult. It's things one should be thinking of anyway. I think this provides a good example of where stricter checking/an ownership model like Rust's makes things easier than in languages that do not have one (in this case, C++): https://blog.dureuill.net/articles/too-dangerous-cpp/


On the other hand, trying to represent graph structures in Rust (e.g. phylogenetic trees, pedigrees, assembly graphs) is absolutely horrible. The ownership model breaks completely apart, and while it can be worked around, it's just a terrible developer experience where I just wish I had a GC in those cases.

Practically speaking, I rarely find GC pauses to be an issue, neither latency-wise nor speed-wise. Though of course that could be due to

1. I don't need low latency in research work,

2. I rarely work with massive complex data structures filling all my RAM where the GC has to scan the whole heap every time it runs, and

3. GC may have indirect performance effects that are not measured as part of GC runs, e.g. by fragmenting active memory more.


The arguably idiomatic way to implement such structures in Rust is to use arrays and indices; see crates like petgraph. It's probably faster as well, because there are fewer allocations and memory locality is better.


There's work on porting MMTk to Julia, which will provide some well written GCs: https://github.com/mmtk/mmtk-julia


It's unfortunate indeed if Julia does not have a well-written GC as you imply.

While I feel like I have my head wrapped around ownership well enough to write (dare I say idiomatic) Rust without too much difficulty, I do find myself often in a position where I wish I just had a GC.

I think this speaks to what your parent comment is saying: there are many situations where the performance improvement from fine-grained control of my code's memory management is not worth the extra time I have to spend thinking about it. As it stands, I will sometimes give up and slap a bunch of clones or Rcs on my code so it compiles, then fix it up later. But the performance usually is good enough for my use even with all of these "inefficiencies," which makes me sometimes wish I could instead just have a GC.


I think Julia's GC is quite good now, it can even multithread.


I'd probably describe it as "moderately good". Julia has a pretty major head start over languages like Java because it can avoid putting most things on the heap in the first place. The main pain point for the GC currently is that Julia is missing some escape analysis that would allow it to free mutable objects with short lifetimes (mainly useful for Arrays in loops). The multi-threading definitely helps in a lot of applications, though.


Given the effort Swift, Chapel, Haskell, OCaml, and D are going through to add ownership without Rust's approach, not everyone feels it is that easy for most folks.


> A well written GC has the same throughout (or higher) than reference counting for most applications

Reference counting has its own problems. The true comparison should be with code that (mostly) doesn’t do reference counting.

Then, the claim still holds, IF you give your process enough memory. https://cse.buffalo.edu/~mhertz/gcmalloc-oopsla-2005.pdf:

“with five times as much memory, an Appel-style generational collector with a non-copying mature space matches the performance of reachability-based explicit memory management. With only three times as much memory, the collector runs on average 17% slower than explicit memory management. However, with only twice as much memory, garbage collection degrades performance by nearly 70%. When physical memory is scarce, paging causes garbage collection to run an order of magnitude slower than explicit memory management.”

That paper is old and garbage collectors have improved, but I think there typically still is a factor of 2 to 3.

Would love to see a comparison between modern refcounting and modern GC, though. Static code analysis can avoid a lot of refcount updates and creation of garbage.


Well, there are some big DS projects written in Rust that are now very widely used in the Python world - e.g., Polars.


I just came from a CERN event, HEP seems to still be all about C++, Fortran, Python, Java, and some Go due to Kubernetes.

No Rust or Julia on their radar.

