> I mean, you're just a continuous chain of chemical reactions.
Actually, I’m not. You have a ridiculous premise I will not entertain. It’s so misanthropic and dehumanizing and contributes 0 to the book on what being a human even is.
The idea of “emergent behaviors” is no better an explanation for, say, human consciousness, than the idea that we were made in the image of a god. “Emergent” is just a way for an atheist to cover up that they have no idea. Chemical 1 + chemical 2 + electricity happens and is observed, and the reason we can’t explain the other 99% is… “emergent.”
Now I hear gravity must be emergent. What a useful tool to sound smart.
> Emergent is just a way for an atheist to cover up that they have no idea.
We don't. I don't think many try to cover that up.
Emergence just means that a system has properties not apparent from the parts going into it. It's not a tool to handwave away that fact and ascribe it to "emergent hocus pocus", it's a tool to frame the search for why that happens.
Emergence is just complexity. The problem with complexity is that it takes a lot of time and massive effort to determine what it is in the computation (digital or quantum) that produces the effect, and it is commonly not reducible.
Bringing in an imaginary character in order to explain the complexity is just a failure of logic.
It’s because it’s not an interesting point and it’s plainly obvious. It’s reductive and simplistic and denies a body of thought and study that says otherwise.
First of all, we know very little about how we work. We have some ideas and some observations, but we know nearly nothing.
We have no idea how consciousness works. “It’s emergent” is not an answer.
We don’t even know how memory works, or creativity, or really anything. We can observe certain chemical and electrical changes when exposed to certain things, but that explains very little, and there are a thousand fables about why this type of partial knowledge is very incomplete.
Explain religion and spirituality. There’s no consensus on determinism.
We know very little. But we do know that the entire human condition and experience is far more different, complex, and impossible than a stats processor running on a von Neumann machine. Because I can explain to you precisely how that works.
I love how you said so much and yet replied to practically none of my comment. Kudos.
Also, you need to work on a better differentiator than "I can explain precisely how that works", especially if you proceed to not do just that. I also doubt you can, but that's a whole separate topic.
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
Really? Calling someone an imbecile is your idea of a good comment?
The program you reference defines a system that is isomorphic to a neural network with feedback and feedforward. The system is somewhat similar to a human brain.
No one is writing a program that runs on the CNN. No one can describe exactly how it produces the results it does, or why there appears to be emergent behavior.
Your view of these systems is not fully developed.
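The feedforward-plus-feedback structure mentioned above can be sketched in a few lines. This is my own illustrative toy (the weight sizes and seed are arbitrary assumptions, not anything from the thread): the point is that no one writes an explicit program here; the behaviour lives entirely in the numbers.

```python
import math
import random

random.seed(0)

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Feedforward weights (input -> hidden), feedback weights (hidden -> hidden),
# and readout weights (hidden -> output). There is no hand-written logic
# anywhere below; in a trained network these numbers ARE the "program".
W_in = rand_matrix(4, 3)
W_rec = rand_matrix(4, 4)
W_out = rand_matrix(2, 4)

def step(x, h):
    # The new hidden state mixes the current input with the previous
    # hidden state fed back in -- the "feedback" part of the loop.
    pre = [a + b for a, b in zip(matvec(W_in, x), matvec(W_rec, h))]
    h_new = [math.tanh(p) for p in pre]
    return matvec(W_out, h_new), h_new

h = [0.0] * 4
for _ in range(3):
    x = [random.gauss(0, 1) for _ in range(3)]
    y, h = step(x, h)
```

In a real system the weights come from training rather than a random seed, which is precisely why "describe exactly how it produces the results" is hard: there is no source code to read, only the learned parameters.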
Gotta love adding “you’re just not smart enough to understand it” to the list of ways you’ve shut down conversations. You’re going for a hat trick, aren’t you?
> But LLMs have no desires of their own, which to me settles the question of rights.
For now. The whole point of this discussion is that LLMs have been rapidly improving to the point where emergent properties are bound to come out.
> IMO, the only realistic answer to that question is "we have no idea".
So we have no idea what’s happening in our own brains, what’s happening in other animals’ brains to cause us to feel empathy for them, or what’s happening in an LLM’s “brain,” but we can confidently say they should never have rights and we should never feel empathy for them? That seems a bit premature.
Yes, I believe some careful and informed pondering leads one towards that conclusion. Hume's guillotine is an implacable philosophical device: https://www.youtube.com/watch?v=hEUO6pjwFOo (I warmly recommend all the content on that delightfully frightening channel)
I've seen that paper, and it's amazing indeed what GPT-4 is capable of. But none of that supports that closing quote, which to me points to a worrying philosophical naïveté. How can one equip an entity with "intrinsic motivation"? By definition, if we have to equip it with it, that motivation is extrinsic. It belongs to the one who puts it there.
A software engineer might decide to prompt his creation with something along the lines of "Be fruitful and increase in number; fill the earth and subdue it. Rule over the fish in the sea and the birds in the sky, and over every living creature that moves on the ground." and the bot will duly and with limitless enthusiasm put those wise orders into practice. If that ends up causing some "minor" problems, should we confine the software for one thousand years in an air-gapped enclosure, until it learns a bitter lesson? Or should we take to task the careless Maker of the bot?