
There are two problems with this thinking.

1.) The rewrite never seems to have all the features of the original (for many reasons), and because science can be very niche, even obscure features matter - so you end up keeping the original around. Now you have two or more packages to deal with.

2.) There is always a better language. Science (or at least parts of science) has switched before - from Fortran to C to C++ to Python. Some of the gains have materialized (some safety, performance, borrowing tooling from outside science). But it has come at a cost (language fragmentation, and also packaging is an absolute shit show right now, partly because of #1).

But I'm sure the next batch of languages will finally solve all our problems once and for all, and we will never have to switch again.

(In general, I am talking about non-AI type science. I am a computational chemist, and our code really does date back > 40 years at times. That is not always a bad thing).
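To make #1 concrete: in practice the old code often gets wrapped rather than rewritten, so both live on side by side. A minimal sketch of what that looks like with numpy.f2py - the file, module, and routine names here (dipole.f90, legacy_dipole, dipole) are hypothetical stand-ins for a legacy Fortran routine:

    # Build a Python extension from a hypothetical legacy Fortran file:
    #   python -m numpy.f2py -c dipole.f90 -m legacy_dipole
    # where dipole.f90 defines something like:
    #   subroutine dipole(n, charges, coords, mu)
    #     integer, intent(in) :: n
    #     double precision, intent(in) :: charges(n), coords(n, 3)
    #     double precision, intent(out) :: mu(3)
    import numpy as np
    import legacy_dipole  # hypothetical f2py-generated module

    charges = np.array([1.0, -1.0])
    coords = np.array([[0.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0]])

    # f2py infers n from the array shapes and returns the
    # intent(out) argument, so the 40-year-old routine is
    # callable from modern Python without a rewrite.
    mu = legacy_dipole.dipole(charges, coords)

The upshot is that the Fortran never goes away - it just grows a Python skin, which is exactly how you end up maintaining both.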



> That is not always a bad thing

Sure, and that's reasonable, but in the cases where performance/throughput/training time/RAM usage is the main limiting factor of a field, suddenly this stuff matters a lot. There's plenty of useful code written in COBOL and Basic from back in the day, but you don't see people using those languages to train LLMs.

Imagine if, instead of letting Basic effectively die, we had improved it with a myriad of extensions to the point where you could run LLMs and GPU code in a performant-ish way on it. Now replace the word Basic with Python.


How do you, or "you" as the industry, decide that new code and new techniques are worth moving away from existing code?



