Hacker News

> however the brain thinks must be describable by math

Roger Penrose believes that some portion of the work brains are doing is making use of quantum processes. The claim isn't too far-fetched - similar claims have been made about photosynthesis.

That doesn't mean it's not possible for a classical computer, running a neural network, to get the same outcome (any more than the observation that birds have feathers means feathers are necessary to flight).

But it does mean that it could be that, yes you can describe what the brain is doing with math ... but you can't copy it with computation.



It feels self-evident that computation can mimic the brain. As a result, it's difficult to argue this line much further. To say the brain is non-computable is to assert the existence of a soul, in my opinion.


A lot of things feel self-evident and then turn out to be completely wrong.

We don't understand the processes in the brain well enough to assert that they are doing computation. Or to assert that they aren't!

> to say the brain is non-computable is to assert the existence of a soul, in my opinion

I don't believe in souls, but the brain might still be non-computable. There are more than two possibilities.

If it is the case that brains are doing something computable that is compatible with our Turing machines, we still have no idea what that is or how to recreate it, simulate it, or approximate it. So it's not a very helpful axiom.


> We don't understand the processes in the brain well enough to assert that they are doing computation. Or to assert that they aren't!

We absolutely do know enough about neurons to know that neural networks are doing computation. Individual neurons integrate multiple inputs and produce an output based on those inputs, which is fundamentally a computational process. They also use a binary signaling system based on threshold potentials, analogous to digital computation.

With the right experimental setup, that computation can be quantified and predicted down to the microvolt. The only reason we can't do that with a full brain is the size of the electrodes.
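To make the "integrate inputs, fire on a threshold" claim concrete, here is a minimal sketch (not a biophysical model - the weights and threshold are made up for illustration) of a neuron as a computational unit:

```python
def neuron_step(inputs, weights, threshold=1.0):
    """Integrate weighted inputs; emit a binary spike if the sum crosses threshold."""
    potential = sum(w * x for w, x in zip(weights, inputs))
    return 1 if potential >= threshold else 0

# Example: two excitatory inputs and one inhibitory input.
print(neuron_step([1, 1, 1], [0.6, 0.6, -0.5]))  # 0.6 + 0.6 - 0.5 = 0.7 < 1.0 -> 0
print(neuron_step([1, 1, 0], [0.6, 0.6, -0.5]))  # 0.6 + 0.6 = 1.2 >= 1.0 -> 1
```

This is exactly the abstraction the perceptron was built on; the debate below is over how much of the biology this abstraction leaves out.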

> I don't believe in souls, but the brain might still be non-computable. There are more than two possibilities.

The real issue is neuroplasticity which is almost certainly critical to brain development. The physical hardware the computations are running on adapts and optimizes itself to the computations, for which I'm not sure we have an equivalent.


Dendrocentric compartmentalization, spike timing, bandpass filtering in the dendrites, spike retiming, etc. aren't covered by the above.

But it is probably important to define 'computable'.

Typically that means an algorithm that can take a digit position as input and output the digit at that location.

So if f enumerates the digits of pi, f(3) would return 4 (the third digit of 3.14159...).

Even the real numbers are uncomputable 'almost everywhere': choose almost any real number, and no algorithm exists that produces its digits.
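A computable real in this digit-function sense can be shown directly. Here is a sketch using sqrt(2) rather than pi (only because the algorithm is tiny): a function that, for any n, returns the nth decimal digit exactly, via integer arithmetic.

```python
from math import isqrt

def sqrt2_digit(n):
    """Return the nth digit of sqrt(2) after the decimal point."""
    # isqrt(2 * 10**(2n)) == floor(sqrt(2) * 10**n), computed exactly.
    return isqrt(2 * 10 ** (2 * n)) % 10

# sqrt(2) = 1.41421356...
print([sqrt2_digit(n) for n in range(1, 9)])  # [4, 1, 4, 2, 1, 3, 5, 6]
```

The "almost everywhere" point is that only countably many reals admit such an algorithm, while the reals are uncountable.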

Add in ion channels and neurotransmitters and continuous input, and you run into indeterminate features like riddled basins, where even with perfect information and precision you can't predict which exit basin the system ends up in.

Basically look at the counterexamples to Laplace's demon.
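A riddled basin is hard to demo in a few lines, but a simpler cousin of the same obstruction to Laplace's demon is easy to show: in the chaotic logistic map, two states differing by one part in 10^15 (roughly float64 resolution) become macroscopically different within a few dozen iterations, so finite-precision measurement does not yield long-term prediction.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-15   # two initial conditions, indistinguishable in practice
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the trajectories have diverged to a macroscopic gap
```

Riddled basins are strictly worse: there, arbitrarily small perturbations can change not just the trajectory but which attractor the system settles into.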

MLPs with at least one hidden layer can approximate any continuous function to within an error bound, given enough neurons, but they can still only produce a countable infinity of outputs, while biological neurons, taking continuous inputs, can potentially have an uncountable infinity.
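The countability point is even starker for an MLP running on real hardware: its outputs are float64 values, of which there are only finitely many (fewer than 2^64), so neighboring representable values have gaps that no network output can fall inside. A continuous membrane potential has no such gaps.

```python
import math

x = 0.5
next_x = math.nextafter(x, 1.0)  # the very next representable float64 above 0.5
print(next_x - x)                # the gap; outputs strictly between are impossible
```

So a digital network's output set isn't merely countable - it's finite; the uncountable/countable contrast in the comment above is the idealized version of this gap.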

Riddled basins, being sets with no open subsets, are another way to think about it.

Here is a paper on that:

https://arxiv.org/abs/1711.02160


We can write code that writes code. Hell, even current LLM tech can write code. It's at least conceivable that an artificial neural network could be self-modifying, if it hasn't been done already.
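As a toy sketch of the idea (everything here is hypothetical and just demonstrates the mechanism): a "network" whose update rule is stored as source code, and which generates and installs a new rule for itself at runtime - a crude software analogue of plasticity.

```python
class SelfModifying:
    def __init__(self):
        self.rule_src = "lambda w, x: w + 0.1 * x"   # initial update rule, as source
        self.rule = eval(self.rule_src)
        self.w = 0.0

    def step(self, x):
        self.w = self.rule(self.w, x)
        # "Plasticity": once the weight grows large, write and install
        # a new, damped update rule as fresh source code.
        if abs(self.w) > 1.0:
            self.rule_src = "lambda w, x: w + 0.01 * x"
            self.rule = eval(self.rule_src)

net = SelfModifying()
for _ in range(20):
    net.step(1.0)
print(net.w, net.rule_src)
```

Whether this kind of self-rewriting captures anything essential about biological neuroplasticity is exactly the open question in the parent comment.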


Penrose's argument is that

(a) brains do things that aren't computable and

(b) all of classical physics is computable therefore

(c) thinking relies on non-classical physics.

(d) In addition, he speculatively proposed which brain structures might do quantum stuff.

All of the early critiques of this I saw focussed on (d), which is irrelevant. The correctness of the position hinges on (a), for which Penrose provides a rigorous argument. I haven't kept up though, so maybe there are good critiques of (a) now.

If Penrose is right then neural networks implemented on regular computers will never think. We'll need some kind of quantum computer.


That's a good summary of it. Thank you.

> If Penrose is right then neural networks implemented on regular computers will never think.

I disagree that that is necessarily an implication, though. As I said before, all that it implies is that computational tech will think differently than humans, in the same way that airplanes fly using different mechanisms from birds.


Part of Penrose's point (a) is that our brains can solve problems that aren't computable. That's the crux of his brains-aren't-computers argument. So even if computers can in some sense think, their thinking will be strictly more limited than ours, because we can solve problems that they can't. (Assuming that Penrose is right.)


I wonder if LLMs have shaken the ground he stood on when he said that. Penrose never worked with a computer that could answer off-the-cuff riddles. Or anything even remotely close to it.


So the trouble with this argument is that there is no evidence whatsoever that the brain can solve problems that a Turing machine can't. There's none. No one has been able to formulate, in a reasonable way, a problem that people can solve but for which no computer algorithm can be devised. It is basically a bunch of handwaving nonsense, like the tripartite nature of God (Father, Son, and Holy Spirit...).

Searle's Chinese room argument is slightly better, but is still ultimately a pile of horseshit. From an external point of view we cannot distinguish between a room full of people who do not speak Chinese but can translate it by following rigorous instructions and tables, and a room full of qualified Chinese translators. For all external purposes the black boxes are equivalent, except that you can take a Chinese translator out of the room and still use them to translate Chinese without the rigorous instructions and reference material in the room.

There is no good philosophical argument against Strong AI. It is a bunch of quasi-religious, humans are special because we say so wishy-washy nonsense.


(a) doesn't hold up because the details of the claim necessitate that it is a property of brains that they can always perceive the truth of statements which "regular computers" cannot. However, brains frequently err.

Penrose tries to respond to this by saying that various things may affect the functioning of a brain and keep it from reliably perceiving such truths, but when brains are working properly, they can perceive the truth of things. Most people would recognize that there's a difference between an idealized version of what humans do and what humans actually do, but for Penrose, this is not an issue, because for him, this truth that humans perceive is an idealized Platonic level of reality which human mathematicians access via non-computational means:

> 6.4 Sometimes there may be errors, but the errors are correctable. What is important is the fact that there is an impersonal (ideal) standard against which the errors can be measured. Human mathematicians have capabilities for perceiving this standard and they can normally tell, given enough time and perseverance, whether their arguments are indeed correct. How is it, if they themselves are mere computational entities, that they seem to have access to these non-computational ideal concepts? Indeed, the ultimate criterion as to mathematical correctness is measured in relation to this ideal. And it is an ideal that seems to require use of their conscious minds in order for them to relate to it.

> 6.5 However, some AI proponents seem to argue against the very existence of such an ideal . . .

Source:

https://journalpsyche.org/files/0xaa2c.pdf

Penrose is not the first person to try to use Gödel’s incompleteness theorems for this purpose, and as with the people who attempted this before him, the general consensus is that this approach doesn't work:

https://plato.stanford.edu/entries/goedel-incompleteness/#Gd...


Is the following source a good starting block to learn Penrose's argument?

https://philosophy.stackexchange.com/questions/39993/how-doe...



