
Fortunately, open-source projects can have more than one contributor. The 2024 commit log includes:

  Bump CI to macOS 13
  Add AVX512 support
  Update LuaJIT
  Document AMD, Intel, Nvidia GPU support
  Add support for LLVM18
  Support SPIR-V code generation


I'm well aware, but consider what it means that the guy who created it has walked away from it.


If the project is actively maintained after 10 years, it means he created a sustainable foundation.

If you're aware of specific open issues with the project, identified by the founder or others, please share details.


do you understand that "use one language to emit another language that is then immediately compiled" is literally like hundreds of languages now? numba/cupy/pytorch/jax/julia/etc/etc/etc. they all implement both a high level IR that can be transformed via the high level language (python) and a JIT/compiler/whatever.
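
For example, a minimal sketch of that pattern in JAX (the function, names, and values below are purely illustrative): an ordinary Python function is traced into a high-level IR (a jaxpr) that can be inspected and transformed from Python, then JIT-compiled and run:

  # illustrative only: Python function -> high-level IR (jaxpr) -> JIT-compiled kernel
  import jax
  import jax.numpy as jnp

  def saxpy(a, x, y):      # plain Python; the name and shapes are made up
      return a * x + y

  x, y = jnp.arange(8.0), jnp.ones(8)
  print(jax.make_jaxpr(saxpy)(2.0, x, y))   # inspect/transform the high-level IR
  print(jax.jit(saxpy)(2.0, x, y))          # compiled via XLA, then executed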

so like why would i ever use this terra+lua+"5 guys maintaining it in a basement somewhere" thing?


> why would i ever use this thing

The U.S. Los Alamos National Laboratory is funding Legion, which uses Terra: https://legion.stanford.edu/overview/

  Achieving high performance and power efficiency on future architectures will require programming systems capable of reasoning about the structure of program data to facilitate efficient placement and movement of data.

Are the Stanford researchers in a basement? The lab's previous work led to CUDA. Does that earn them any consideration? How about lanl.gov using the language?


> Are the Stanford researchers in a basement?

I don't know how to explain this to you because you and everyone else around here worship at the altar of academia (especially HYPSM academia) but the answer is absolutely 100% yes. There is literally nothing coming out of any edu lab, even stanford, that has any relevance in this same industry (ML/AI compilers). To wit: there is no company running absolutely anything on top of legion.

> The lab's previous work led to CUDA

This is like saying a sketch of a car led to the car. No. CUDA is the result of years of labor by thousands of working engineers, not academia, not academic aspirational hacking.

> How about lanl.gov using the language?

"using the language". using how? every single group at lanl is using the language? they've migrated all of their fortran to using the language? they're starting new graduate programs based on the language? you really have no clue how low the barrier to entry here is - any undergrad can start an association with lanl (or any other national lab) and just start writing code in whatever language they want and suddenly "lanl is using the language". you don't believe? me go look for any number of reports/papers on julia/go/python/javascript/etc/etc/etc coming out of lanl and ornl and jpl and etc. how do i know this? i'm personally on a paper written at fermilab based primarily around task scheduling in JS lolol.


> there is no company running absolutely anything on top of legion

Does Nvidia count?

  Supercomputing is on the verge of a breakthrough—computing at the exascale, where a billion billion calculations per second will be the norm. That’s 10 times faster than current supercomputing speeds... Los Alamos scientists, along with colleagues at five other institutions (the Nvidia corporation, the University of California–Davis, Stanford University, SLAC National Accelerator Laboratory, and Sandia National Laboratories) created a parallel programming system to do just that. The system, called Legion, sifts through an application to determine which tasks can run in parallel, or simultaneously, to save time and boost computing efficiency. [1] 
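
(Not Legion's actual API or programming model, just a toy Python sketch of the idea in that quote: when two tasks share no data, a runtime is free to run them at the same time.)

  # toy sketch only: not Legion's programming model or API
  import time
  from concurrent.futures import ThreadPoolExecutor

  def task_a():
      time.sleep(1)        # stand-in for real work; shares no data with task_b
      return "a done"

  def task_b():
      time.sleep(1)
      return "b done"

  # independent tasks can be dispatched together: total wait is ~1s, not ~2s
  with ThreadPoolExecutor() as pool:
      fa, fb = pool.submit(task_a), pool.submit(task_b)
      print(fa.result(), fb.result())
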
> This is like saying a sketch of a car led to the car.

The ACM Turing Award (computer science equivalent of Nobel Prize) committee believes otherwise.

  Beginning in the 1990s, he and his students extended the RenderMan shading language to work in real time on the newly available technology of graphical processing units (GPUs). The GPU programming languages that Hanrahan and his students developed led to the development of widely used standards, including the OpenGL shading language that revolutionized the production of video games. Subsequently, Hanrahan and his students developed Brook, a language for GPUs that eventually led to NVIDIA’s CUDA. The prevalence and variety of shading languages ultimately required the GPU hardware designers to develop more flexible architectures. These architectures, in turn, allowed the GPUs to be used in a variety of computing contexts, including running algorithms for high performance computing applications, and training machine learning algorithms on massive datasets for artificial intelligence applications. [2]

From SIGGRAPH 2004 slides, Stanford's research on Brook was sponsored by Nvidia, ATI, IBM, Sony, U.S. DARPA and DOE. [3]

[1] https://web.archive.org/web/20241212055421/https://www.lanl....

[2] https://amturing.acm.org/award_winners/hanrahan_4652251.cfm

[3] https://graphics.stanford.edu/papers/brookgpu/buck.Brook.pdf


> > there is no company running absolutely anything on top of legion
>
> Does Nvidia count?

I don't understand? To prove that NVIDIA ships legion you sent me a link to a LANL post? How does that make sense? Show me a product page on NVIDIA's domain to prove to me that NVIDIA uses this in a product.

> The ACM Turing Award (computer science equivalent of Nobel Prize) committee believes otherwise

You seem to not get what I'm saying: my firm position is that academia doesn't understand absolutely anything in this area. Zero. Zilch. Nada. And absolutely no one in the industry does either. So given that position, why is this relevant?

The only thing academia is good for is a talent pool of hard-working, smart people. We take those people and then completely retrain them to do actually useful work instead of research nonsense. The vast majority of PhDs coming from academia to industry (yes even from Stanford) literally are horrible software/hardware engineers. Many of them stay that way. The good ones (at least in so far as they care about having a successful career) learn quickly. That's how you get CUDA, which is a product worth a trillion dollars.

Look, I've already told you: you worship at an altar, and you've also clearly never worked at a Stanford or an NVIDIA or a LANL. You'll never be convinced because... well I have no idea why people need mythologies to worship.


> Show me a product page on NVIDIA's domain

Legion is research (where _future_ products originate), not yet a product: https://images.nvidia.com/events/sc15/pdfs/SC5117-future-hpc...

  LEGION: A VISION FOR FUTURE HPC PROGRAMMING SYSTEMS
  Michael Bauer, NVIDIA Research
  Patrick McCormick, Los Alamos National Laboratory

> academia doesn't understand absolutely anything in this area. Zero. Zilch. Nada. And absolutely no one in the industry does either.

Would you care to share the names of some software/hardware engineers worthy of emulation by academia and industry?


> Legion is research (where _future_ products originate)

my drawings of spaceships are also where future spaceships come from. it's plausible right? therefore my drawings of spaceships are valuable right?

> Would you care to share the names of some software/hardware engineers worthy of emulation by academia

sorry that was a mistype - i meant to say no one in the industry cares either. engineers in the industry do understand things because they're working on the things every day.


> my drawings of spaceships are valuable right?

Are your drawings vetted by DARPA (annual budget of $4B)?


did you miss the part where i've written/collaborated-on research papers? on grants that are $10MM+. it's meaningless because the goals of research are not "make something useful".


People change, interests wander. The guy who made brainfuck wrote parts of the sauerbraten engine. Is that a betrayal of brainfuckery?



