
It's hard to say "never" in technology. History isn't really on your side. However, LLMs have largely proven to be good at things computers are already good at: repetitive tasks, parallel processing, and data analysis. There's nothing magical about an LLM that defeats the traditional paradigm. Increasingly, I lean toward expecting an implosion of the AI hype cycle.


LLMs are a legitimate technology with legitimate applications. However, in a desperate bid for a new iPhone moment to assure Wall Street that its fantasy of infinite growth in a finite world is still possible, the industry has utterly lost the plot regarding what statistical analysis of words at scale is capable of doing. Useless? Far from it. The basis for a $300 billion company with no meaningful products after almost a decade of working on it? I have my doubts.

I can't fathom a future where OpenAI doesn't eat dirt, with Anthropic likely not far behind it. Nvidia will likely come out fine, since it still has gamers to disappoint, and the infrastructure build-out that did occur will crater the cost of GPUs at scale, letting smaller, smarter companies take advantage. So the technology will likely still kick around, but as just another technology, not the second coming of Cyber Christ it's been hyped to be.


You seriously underestimate the appeal of burning cycles on GPUs to get something cool, if barely useful, out. Cryptocurrencies are still very much alive, too.


> Cryptocurrencies are still very much alive, too.

Yeah, like I said, LLMs will be around. Frankly, I think they'll be far more present than crypto, which, as far as the mainstream is concerned, might as well be dead.


Funny, I don't remember any computer program in the past being able to explain a news article through the lens of one particular philosopher.

Or being able to explain the static physical forces in a picture that are keeping a structure from collapsing.

Or recommend a Python library which does X, Y, and Z with constraints A, B, and C.

But I guess you can file all the above under "data analysis".


It is the result of data analysis. The computer program isn't explaining anything, or recommending anything. It's simply presenting the results of querying data analyzed at scale and returning the "most likely" result (as determined by the system prompt and human input from the program's developers and users). "Most likely" is still a super-fuzzy grey area.

https://www.plough.com/en/topics/life/technology/computers-c...
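
For what it's worth, the "most likely" step is literally next-token selection over a probability distribution. A minimal sketch in Python, with made-up logits standing in for a real model's output:

    import numpy as np

    # Toy vocabulary and made-up logits for the next token after
    # "The capital of France is" -- the numbers are illustrative only.
    vocab = ["Paris", "London", "a", "the"]
    logits = np.array([8.1, 3.2, 1.5, 0.9])

    # Softmax turns the raw scores into a probability distribution.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Greedy decoding: return the single most likely token.
    print(vocab[int(np.argmax(probs))])  # -> "Paris"

    # Sampling with a temperature instead of argmax is what makes the
    # "most likely" answer a fuzzy grey area rather than a fixed lookup.
    rng = np.random.default_rng(0)
    print(vocab[rng.choice(len(vocab), p=probs)])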


It's all just electricity and binary bits, nothing new here...

/s?


What I don’t understand is, how can a liar be good at data analysis?


If you give an LLM the data in the prompt and then ask it to extract information from that data, it does pretty well. This is the premise of RAG. Where LLMs do poorly is when you ask them for information you haven't given them.
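
A minimal sketch of that premise, with a naive keyword retriever standing in for a real embedding index and a hypothetical call_llm standing in for whatever model API you use:

    # Sketch of the RAG premise: retrieve the relevant snippet first,
    # then hand it to the model inside the prompt.

    def retrieve(query: str, docs: list[str]) -> str:
        """Naive keyword-overlap retrieval; real systems use embeddings."""
        q = set(query.lower().split())
        return max(docs, key=lambda d: len(q & set(d.lower().split())))

    docs = [
        "Invoice 42 was paid on 2024-03-01 by Acme Corp.",
        "The support rotation for March is Dana, then Lee.",
    ]

    question = "Who paid invoice 42?"
    context = retrieve(question, docs)

    prompt = (
        "Answer using ONLY the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
    # answer = call_llm(prompt)    # extraction from given data: works well
    # answer = call_llm(question)  # no context: the failure mode above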


It works great if all you're looking for is an output, with not a care for what it is. So if you're trying to generate slop children's books to shit onto Amazon, it's awesome. If you want to give your boss a huge bloated report on your daily activities, works great. If you want to phone in an assignment that doesn't add value to your education, LLM will do that. If you want a header image for your LinkedIn post that you don't want to pay for, generate it. Who cares.

This isn't even an indictment, not really. I'm just reading between the lines here regarding when and how it's used. Nobody with intentionality uses these things. Nobody who CARES what they're making uses these things. And again, I want to emphasize, this is not an attack. There are tons of things I do in my work life that I utterly do not give a shit about, and LLMs have been a blessing for those. Not my code, fuck no. But all the ancillary crap, absolutely.



