
Yes, I could do that. I could indeed invoke something that requires god knows how many tensor cores and how much VRAM, not to mention the power requirements of all that hardware, in order to power a simple CRUD app.

Or I could not do that, and instead have it done by a sub-100-line Python script running on a battery-powered Pi.
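For scale, here is a minimal sketch of what such a script could look like, using only the standard library. The table layout and function names (`create_item`, etc.) are hypothetical, and an in-memory SQLite database stands in for a file on the Pi's SD card:

```python
import sqlite3

def connect(path=":memory:"):
    # Single-file SQLite database; on a Pi this would be a path on disk.
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS items "
        "(id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)"
    )
    return db

def create_item(db, name, qty):
    cur = db.execute("INSERT INTO items (name, qty) VALUES (?, ?)", (name, qty))
    db.commit()
    return cur.lastrowid

def read_item(db, item_id):
    # Returns (id, name, qty) or None if the row doesn't exist.
    return db.execute(
        "SELECT id, name, qty FROM items WHERE id = ?", (item_id,)
    ).fetchone()

def update_item(db, item_id, qty):
    db.execute("UPDATE items SET qty = ? WHERE id = ?", (qty, item_id))
    db.commit()

def delete_item(db, item_id):
    db.execute("DELETE FROM items WHERE id = ?", (item_id,))
    db.commit()
```

Wrap those four functions in any HTTP framework (or the stdlib's `http.server`) and you're well under a hundred lines, running comfortably on single-board-computer hardware.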



You don't actually think this is serious, do you?


I promise that even if this is a joke, people will see it, take it seriously, implement it, and preach it seriously to other people. It's impossible to make jokes online without having a harmful effect on the world.


Richard’s Pied Piper box was certainly a parody of this very real thing that happens.


As evidenced in this very same thread. You can't make this up, can you?


Is this just another face of Poe's law?


Jokes are what led to Donald Trump running for president.

Now, this joke will lead to backend work that is abysmally optimized, but some MBA will just throw hardware at the problem and call it a day.

Congrats, you've been replaced by AI!


That's what I said about JavaScript almost 30 years ago.


I didn't get that this was a joke... if it is, then this is more of a troll than a post.


CTOs and CEOs will think it is serious.


Well, it’s also a joke. I think the point you’re making is the punchline.


I think they're sincere, though? I can't tell, and I'm a little concerned.


I mean, I could think of thousands of apps that amount to fewer than a dozen transactions per month on a few hundred megs of data. Paying for the programmer time to build them dwarfs the infrastructure costs by orders of magnitude.

LLMs are not perfect and can't enforce a guaranteed logical flow; however, I wouldn't be surprised if this changes within the next ~3 years. A lot of low-effort CRUD/analytics/data-transformation work could be automated.


But why, when I could just as easily tell the AI to generate the code for the CRUD app for me, resulting in minimal dev costs while also keeping infrastructure requirements minimal?


> I could indeed invoke something that requires god-knows how many tensor cores, vram, not to mention the power requirements of all that hardware, in order to power a simple CRUD App.

The app doesn't need to be powered by the LLM for each request; it only needs to generate the code from a description once and cache it until the description changes.
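That generate-once-and-cache idea can be sketched in a few lines. Here `generate_code` is a hypothetical stand-in for the expensive LLM call, and the cache is keyed by a content hash of the description, so the model is only re-invoked when the description actually changes:

```python
import hashlib

_cache = {}  # description hash -> namespace of the generated module

def generate_code(description):
    # Placeholder for the (expensive) LLM call; returns Python source
    # defining a request handler. A real system would prompt a model here.
    return "def handle(request):\n    return 'ok: ' + request"

def get_app(description, codegen=generate_code):
    # Key the cache on the description's content hash, so the generator
    # runs only when the description text changes.
    key = hashlib.sha256(description.encode()).hexdigest()
    if key not in _cache:
        namespace = {}
        exec(codegen(description), namespace)  # compile generated code once
        _cache[key] = namespace
    return _cache[key]["handle"]
```

Subsequent calls with the same description return the already-compiled handler without touching the model at all; the per-request cost is a dict lookup.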


The underlying complexity isn’t relevant at all when considering such a solution, if it otherwise makes business sense and is abstracted away.

Otherwise you could make the same argument about your 100-line Python script, which invokes god knows how many complex objects and dicts where a simple C program (under 300 lines) could do the job.

(I know the original repo is a joke… for now)



