Yes, I could do that. I could indeed invoke something that requires god knows how many tensor cores and how much VRAM, not to mention the power requirements of all that hardware, in order to power a simple CRUD app.
Or, I could not do that, and instead have it done by a sub-100-line Python script running on a battery-powered Pi.
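For scale, here's a hypothetical sketch of what such a script could look like, using only the Python standard library (the `items` schema and endpoints are made up for illustration):

```python
# Minimal CRUD-over-HTTP sketch using only the standard library.
# Hypothetical illustration of the "sub-100-line script" claim.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB = sqlite3.connect("items.db", check_same_thread=False)
DB.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, body TEXT)")

class Handler(BaseHTTPRequestHandler):
    def _send(self, code, payload):
        data = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def do_GET(self):  # Read: list all items
        rows = DB.execute("SELECT id, body FROM items").fetchall()
        self._send(200, [{"id": i, "body": b} for i, b in rows])

    def do_POST(self):  # Create: body of the request becomes the item
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode()
        cur = DB.execute("INSERT INTO items (body) VALUES (?)", (body,))
        DB.commit()
        self._send(201, {"id": cur.lastrowid})

    def do_DELETE(self):  # Delete: path like /1
        DB.execute("DELETE FROM items WHERE id = ?", (self.path.lstrip("/"),))
        DB.commit()
        self._send(200, {"deleted": True})

# To run: HTTPServer(("", 8000), Handler).serve_forever()
```

Update is left as an exercise, but the point stands: the whole thing fits in one file and idles at milliwatts.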
I promise that even if this is a joke, people will see it, take it seriously, implement it, and preach it to other people in earnest. It's impossible to make jokes online without having a harmful effect on the world.
I mean, I can think of thousands of apps which amount to fewer than a dozen transactions per month on a few hundred megs of data. Paying for the programmer time to build them dwarfs the infrastructure costs by orders of magnitude.
LLMs are not perfect, and can't enforce a guaranteed logical flow - however I wouldn't be surprised if this changes within the next ~3 years. A lot of low effort CRUD/analytics/data transformation work could be automated.
But why, when I could easily just tell the AI to generate the code for the CRUD app for me, resulting in minimal dev costs while also keeping the infrastructure requirements minimal?
> I could indeed invoke something that requires god knows how many tensor cores and how much VRAM, not to mention the power requirements of all that hardware, in order to power a simple CRUD app.
The app doesn't need to be powered by the LLM on each request: the LLM only needs to generate the code from a description once, and the result can be cached until the description changes.
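That caching step could be as simple as keying the generated code on a hash of the description. A sketch under stated assumptions: `generate_code` here is a made-up placeholder for the expensive LLM call, and the cache file name is hypothetical.

```python
# Regenerate code only when the app description changes.
import hashlib
import json
from pathlib import Path

CACHE = Path("generated_app.json")  # hypothetical cache location

def generate_code(description: str) -> str:
    # Placeholder for the (expensive) LLM call.
    return f"# app generated from: {description}\n"

def get_app_code(description: str) -> str:
    key = hashlib.sha256(description.encode()).hexdigest()
    if CACHE.exists():
        cached = json.loads(CACHE.read_text())
        if cached["key"] == key:
            return cached["code"]  # description unchanged: no LLM call
    code = generate_code(description)  # description new or changed
    CACHE.write_text(json.dumps({"key": key, "code": code}))
    return code
```

So the tensor cores spin up once per description change, not once per request.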
The underlying complexity isn't relevant at all when considering such a solution, if it otherwise makes business sense and is abstracted away.
Otherwise you could make the same argument about your 100-line Python script, which invokes god knows how many complex objects and dicts when a simple C program (under 300 lines) could do the job.