If you give an LLM the data in the prompt and then ask it to extract information from that data, it does pretty well. This is the premise of RAG. Where LLMs do poorly is when you ask them for information you haven't given them.
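That premise can be sketched without calling any actual model: retrieve the relevant text, stuff it into the prompt, and make the model extract from *that*. The documents and the keyword-overlap scoring below are toy assumptions, not a real retriever.

```python
def score(query: str, doc: str) -> int:
    """Crude relevance: count shared lowercase words. A stand-in for a
    real embedding/vector search, purely for illustration."""
    return len(set(query.lower().split()) & set(doc.lower().split()))


def build_prompt(query: str, docs: list[str], top_k: int = 2) -> str:
    """Rank docs by the toy score, keep the top_k, and prepend them as
    context so the model answers from given data, not memory."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return (
        "Answer using ONLY the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
    )


# Hypothetical documents, just to show the mechanics.
docs = [
    "The invoice total for March was $4,210.",
    "Our office dog is named Biscuit.",
    "April's invoice total came to $3,980.",
]
prompt = build_prompt("What was the invoice total for March?", docs)
```

The resulting `prompt` string would then be sent to the model; the point is only that the answer is sitting in the context, so extraction is the easy case.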
It works great if all you're looking for is an output, without a care for what that output is. So if you're churning out slop children's books to shit onto Amazon, it's awesome. If you want to hand your boss a huge bloated report on your daily activities, works great. If you want to phone in an assignment that doesn't add value to your education, an LLM will do that. If you want a header image for your LinkedIn post that you don't want to pay for, generate it. Who cares.
This isn't even an indictment, not really. I'm just reading between the lines here regarding when/how it's used. Nobody with intentionality uses these things. Nobody who CARES what they're making uses these things. And again, I want to emphasize, this is not an attack. There are tons of things I do in my work life that I utterly do not give a shit about, and LLMs have been a blessing for it. Not my code, fuck no. But all the ancillary crap, absolutely.