
> Are large language models even Turing complete?

Only idealized deterministic computing systems can be Turing complete; actual systems cannot be, because Turing completeness requires unbounded storage. LLMs are actual systems, and they are not even limited-space approximations of idealized deterministic systems: they are, I suppose, deterministic if you know all the relevant parameters (including, potentially, some that are hardware-dependent), but they are generally a deterministic approximation of a nondeterministic system. You can, of course, prompt an LLM to predict the output of a deterministic system and to do direct computation, but, absent an interface to external tools that actually do the computation, the results are notoriously unreliable.



> Idealized deterministic computing systems are the only thing that can be Turing complete

That’s not true. My computer is for all practical purposes Turing complete - its tape is not just the RAM but, through side effects and its connection to the internet, effectively the whole universe. The universe itself is finite - nothing material can be mathematically infinite - so Turing completeness fails only “lazily”: unless you actually hit the limits, the tape is as good as infinite.

As for the current topic, prompting problems are after a while just memoization, up to some limit, with a strange encoding.


> My computer is for all practical purposes Turing complete

“For all practical purposes” is a long way of saying “not”: a large-but-finite tape is not infinite, and the key properties of Turing completeness (both universal computation and the consequent equivalence with all other Turing-complete systems) do not hold for “finite but large” tapes, whether large means 640 kilobytes or 640 quettabytes. In particular, differently structured “Turing complete but for finite size” systems of similar actual capacity in bytes are not guaranteed to be able to compute the same subset of all computable results. (Turing machines with the same tape size would be, but “Turing complete but for size” does not imply a consistent ratio between a machine’s material storage space and the equivalent Turing-machine tape size.)
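The finite-vs-infinite-tape point can be made concrete with a toy simulator (a minimal sketch; the `run_tm` helper, the state names, and the `append_two` machine below are all made up for illustration). The same trivial machine halts on an unbounded tape but runs out of room on any fixed-size tape, given a large enough input:

```python
def run_tm(program, tape, start, blank="_", tape_limit=None, max_steps=10**6):
    """Run a toy Turing machine given as {(state, symbol): (state, symbol, move)}.

    tape_limit=None models the idealized unbounded tape; a finite
    tape_limit models a real machine, which aborts as soon as the
    head would leave the available cells.
    """
    cells = dict(enumerate(tape))
    state, head = start, 0
    for _ in range(max_steps):
        if state == "halt":
            return "halted"
        if tape_limit is not None and not 0 <= head < tape_limit:
            return "out_of_tape"
        sym = cells.get(head, blank)
        state, cells[head], move = program[(state, sym)]
        head += 1 if move == "R" else -1
    return "timeout"

# A trivial machine: scan right over the input 1s, then append two more 1s.
# It needs about n + 3 tape cells for an input of n ones.
append_two = {
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("write", "1", "R"),
    ("write", "_"): ("halt", "1", "R"),
}

print(run_tm(append_two, "11", "scan", tape_limit=10))     # halted
print(run_tm(append_two, "1" * 20, "scan", tape_limit=10)) # out_of_tape
print(run_tm(append_two, "1" * 20, "scan"))                # halted (unbounded)
```

Any fixed `tape_limit` leaves some inputs uncomputable that the unbounded machine handles fine, which is exactly the sense in which “Turing complete but for size” is not Turing complete.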


Can you give me any test that can differentiate, in a finite amount of time t, between my practical machine with memory of size n and a true infinite-tape Turing machine?

If not, then for all purposes the two are exactly the same, which is my point. This is not the case with LLMs.


Sure: a trivial example of a non-“promptable” input would be one containing a problem whose solution requires more memory than any computer currently has. But I don’t think that’s what they were looking for.
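As a back-of-the-envelope sketch of such an input (the memory-budget figures below are made-up assumptions, not real machine specs): the question “what is 2**n?” fits in a handful of bytes for any n, but the answer needs roughly n/8 bytes, so a large enough n defeats any fixed machine while remaining perfectly computable in the Turing-machine sense.

```python
def answer_fits(n, memory_budget_bytes):
    """Would the integer 2**n fit in memory_budget_bytes?

    Rough bound: 2**n takes about n bits, i.e. n/8 bytes,
    ignoring any per-object overhead.
    """
    return n // 8 + 1 <= memory_budget_bytes

# A gibibyte comfortably holds 2**(10**6), which needs only ~125 KB.
print(answer_fits(10**6, 2**30))    # True
# No physical machine holds the ~10**99 bytes needed for 2**(10**100).
print(answer_fits(10**100, 2**80))  # False
```

Every finite budget picks out a cutoff beyond which the machine fails; the infinite-tape machine has no such cutoff.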



