But it’s not replicating results that a human would give you.
Since it’s not giving the same type of results, it’s not doing the same thing. If anything, LLMs have definitively ruled out probabilistic guessing as the model for human intelligence.
Even now, you’re trying to force LLMs onto human intelligence, insisting they’re the same thing despite them not delivering the results. And I’m sure you believe that if we just fire up another few million GPUs, we’d get there. But we’d just get wrong answers faster. LLMs don’t produce anything new; they just remix the old.
> Even now, you’re trying to force LLMs onto human intelligence
I'm not forcing anything; I'm specifically refuting the claims that we know LLMs are not how humans work, and that LLMs are not reasoning. We simply don't know either of these things, and we definitely have not ruled out statistical completion wholesale.
Also, I don't even know what you mean when you say LLMs are not giving the same types of results as humans. An articulate human hired to write a short essay on a given query will produce something that looks like ChatGPT output, modulo some quirks we've forced ChatGPT to produce via reinforcement learning.