> We think we’re intelligent, know we’re conscious, and so assume the two go together.
Excuse me, I know _I_ am conscious. There is literally nothing you can do to prove that there's anything outside of what I am conscious of. The world could be a very stable illusion, a dream, a simulation, and it would all be the same.
I don't understand why people think it matters whether AI is conscious or not. We can't even prove that our fellow humans are, yet we treat them as if they are. I, at least, feel sad when a tree is cut down, but I don't feel the same for a bunch of rocks being blown up. Actually, I do when a sacred or beautiful rock is cut in half (like the one that was suspended on a small cliff).
What I am trying to say is, it doesn't matter if the subject is actually conscious. All that matters is that we (I) feel/think that it is conscious, or that it deserves care and respect.
I think what you are afraid of here is agency, which is something that might be dangerous to endow a super intelligent being with.
The fact that other humans are so similar to me (all humans are similar in virtue of being human organisms) allows for an analogical inference: You guys are similar to me, I am conscious, therefore you are probably also conscious.
And whether or not something is conscious matters a lot ethically. If something is conscious, it may suffer. Conscious and unconscious objects have to be treated in a completely different way. We assume rocks aren't conscious and don't suffer from being broken apart, but we assume animals are conscious. That's why we treat them differently.
We can't know for certain whether something is or isn't conscious, but we have to try to make a best guess.
> The fact that other humans are so similar to me (all humans are similar in virtue of being human organisms) allows for an analogical inference: You guys are similar to me, I am conscious, therefore you are probably also conscious.
That hinges on the assumption that the brain, or a human-like body, is related to the existence of consciousness. But there's no reason to believe that a bunch of electrical pulses is the source of consciousness. For example, had "I" existed as a plant, it would be reasonable for me to think that other plants are also conscious and that plants are what give rise to consciousness.
I am both conscious and have a brain, and alterations in the brain (chemical or physical) correlate with changes in consciousness. Which suggests that our "bunch of electrical pulses" are at least sufficient for consciousness. Plants may also be conscious, but that would require the existence of an additional, very different cause of consciousness than in humans. The theory "X is only explained by Y" is simpler than "X is explained by Y, and X is explained by Z". Ockham's razor suggests it's more likely that there are fewer causes rather than many.
I am not convinced. Consider the color red. I can explain the physics of what the color red is, how it travels through space, how my eyes interact with those particular waves and send signals to my brain, which somehow generates what I think of as "red".
But this explanation is meaningless; there's nothing about the waves that explains why "red" is the way it is. For example, why don't I experience red as blue and vice versa? Consciousness is like that. I can have an explanation down to the quarks and follow the chain of physical phenomena from the big bang up to this point, and I will find nothing that explains why there's "something" at all. In a sense, the fact that the color "red" is the way I perceive it is merely a coincidence. I happen to experience it that way. There's no reason why it has to be that way.
I had a similar experience when I learned that some people don't have an inner monologue; it completely blew apart my understanding of what it "is like".
Well, the brain story may not be a complete explanation, though it is a necessary part of one. Just as a motor alone doesn't fully explain why a car drives, though it provides a partial explanation. A plant may lack the equivalent of a "motor" for consciousness, which would mean it can't be conscious.
And while the existence of a motor in a car doesn't strictly imply that it can drive, the motor still makes it more likely that it can. So the fact that you have an inner monologue, and experience red in one particular way instead of another, is evidence that the same holds for other people.
Of course there could be defeating evidence in the opposite direction, like reading a survey where most people claim not to have inner monologue. (In terms of colors such evidence seems hard or impossible to obtain though. How would we test for switched color experience? Perhaps physically linking brains in the future.)
"How would we test for switched color experience? Perhaps physically linking brains in the future."
Maybe seeing the universe from another brain would be a terrifying experience...
I can't help but think colors, above all else, are experienced as a learned phenomenon. Anyone who has a kiddo and has paid attention during this stage of their development should know this effect: colors are context-dependent.
You can have a kiddo who can accurately name colors on a printed page, on blocks, or in any other reduced setting, and have them completely fail at the real-world task.
The most noticeable error is the color of the sky. A kiddo who can, with 100% accuracy, tell blue blocks from white blocks will, before correction, label the sky as white. It's uncanny. We often forget this correction as parents, but if you look for it, it's there.
I can't help but wonder how much of our color perception is stochastic parroting once the appropriate labels have been learned.
Wait, they say it is white not just about a cloudy sky but also about a clear sky? (Which would suggest they think "white" means "bright", which doesn't seem too unreasonable.)
Yes, a perfectly clear sky. It is a common perceptual illusion. I think it's interesting not that it's wrong, but that children are consistently wrong in the same way.
> Which suggests that our "bunch of electrical pulses" are at least sufficient for consciousness.
I think it only suggests necessary conditions. Example: whether you are breathing is also connected to your state of consciousness, but it doesn’t follow that it’s sufficient.
Not to mention the stories about people reporting strange experiences during near-death experiences, often while their brain is barely working at the time.
While assuming that other human-looking things are conscious is a reasonable starting point, it certainly isn’t a given.
There seems to be a wide variety of thinking processes. Some folks don’t have internal monologues. Some folks can’t visualize images at all, and the ones that do, do it will varying amount of detail.
Isn’t it also reasonable that other humans have different degrees of consciousness, including perhaps not having one?
One definition or property of consciousness I find interesting is that it encompasses “what it is like to be <entity>”. If there is something that it is like to be <animal>, it is conscious. Intelligence and awareness are separate from consciousness in this framing. And there is arguably a point where we can speculate that extremely simplistic organisms could not be conscious, given the lack of sense organs that seems to predicate experience.
From a human point of view, the things you describe are the contents of consciousness. I have Aphantasia, while my brother describes his mind’s eye as CAD software and he can construct and manipulate visualizations at will.
The overarching awareness we both have that allows us to compare and contrast these things and make any sense out of those comparisons points to a more fundamental layer.
What you describe sounds closer to levels of awareness and one’s ability to recognize the workings of their own mind, e.g. some people remain lost in and identified with thoughts, while some are able to both experience and observe thought as just more contents of conscious experience, but not the actual center of one’s consciousness.
And there’s evidence that this can be learned (through mindfulness), which to me points to something like: we’re all conscious whether we realize it or not, and not realizing it doesn’t make it not so.
> One definition or property of consciousness I find interesting is that it encompasses “what it is like to be <entity>”. If there is something that it is like to be <animal>, it is conscious.
I think there are problems with this.
Take something like the roundworm, with 300 neurons.
It has senses, and I assume a base level consciousness to process those senses. It would be something to be like a roundworm.
On the other hand, a roundworm likely has no sense of self, no identity, no awareness... no mental self able to acknowledge and reflect on those experiences, so there is no 'someone'. And isn't that the point that matters?
> On the other hand, a roundworm likely has no sense of self, no identity, no awareness... no mental self able to acknowledge and reflect on those experiences, so there is no 'someone'. And isn't that the point that matters?
We can always point to something we have that other organisms don't have and ask whether that is the point that matters. We can hypothesize things that probably don't exist and say that those things are what matters (e.g. the posited purpose of reincarnation, if such a thing existed).
Re: roundworm
1) sense of self -> it has to have a sense of proprioception, of embodiment, in order to properly respond to its sensory information.
2) identity -> humans with global amnesia would lack a memory of identity.
3) a roundworm is sensorily aware, and, if it has a sense of proprioception, bodily aware of its own actions and of environmental responses to its actions.
4) It probably isn't self-reflective, but is this necessary in humans?
> We can always point to something we have that other organisms don't have and ask whether that is the point that matters.
Sure. In this case it's introspective self-awareness, and I would say it matters very much.
> sense of self -> it has to have a sense of proprioception, of embodiment, in order to properly respond to its sensory information.
Sure, it has bodily self-awareness. It has enough awareness to react to something its body detects, but not enough of a mind to reflect on or appreciate an experience. There is no 'someone'. It's an automaton.
> identity -> humans with global amnesia would lack a memory of identity.
Sure, but we don't use outliers to establish baselines.
> It probably isn't self-reflective, but is this necessary in humans?
I think it’s tempting to set some threshold like “it must be self reflective”, but that raises a different question: why is this important vs. some other threshold?
e.g. does it matter if self reflection is possible if it is possible for the organism to experience pain?
I think the answer lies in what we’re trying to understand and why we’re trying to understand it. If the threshold is applied for the purpose of understanding when an organism or machine has reached some level of human-ness, purely for the purpose of some kind of benchmark, self reflection is an interesting bar to reach.
But usually these questions are aimed at finding some moral direction about how we should treat these entities.
We have strong intuitions that abusing animals is morally wrong even if animals cannot self reflect. When thinking about a future AI, I think we have to consider the possibility that self reflection is not a necessary bar to reach before we have some uncomfortable moral questions to answer.
Put another way, I suspect there will be many milestones that are meaningful and interesting, each introducing a new set of questions and implications. And I think some of the early milestones carry implications that are worth caring about long before self awareness is reached.
> does it matter if self reflection is possible if it is possible for the organism to experience pain?
Yes. Without introspective self-awareness there can be no 'identity', there can be no 'someone' - you just have a base consciousness that can react to stimuli, which is not morally significant.
We have the full connectome of the roundworm and were able to implement it in software and place it in a Lego robot. It's going to be pretty much equivalent to the actual worm. Does it feel pain?
Besides which, we can kill animals humanely, so pain doesn't have to come in to it, only the right to life.
> Yes. Without introspective self-awareness there can be no 'identity', there can be no 'someone' - you just have a base consciousness that can react to stimuli, which is not morally significant.
I'm not convinced of this a priori. And even if it was proven to me, the non-self-aware impulse-to-live would still be enough for me to find it morally significant. It would feel morally significant to me euthanizing a human neonate with a lethal condition that would otherwise prevent it from ever getting to a self-aware stage of life.
We can't really say for certain much at the moment, but what I stated makes the most sense based on the evidence we have.
> the non-self-aware impulse-to-live would still be enough for me to find it morally significant.
That non-self-aware impulse-to-live is morally equivalent to a plant seeking sunlight IMO.
> It would feel morally significant to me euthanizing a human neonate with a lethal condition that would otherwise prevent it from ever getting to a self-aware stage of life.
The difference is humans have an innate potential for self-awareness that the animals we eat for food do not.
> That non-self-aware impulse-to-live is morally equivalent to a plant seeking sunlight IMO.
Do you find the practice of raising meat in factory farms to be acceptable?
> The difference is humans have an innate potential for self-awareness that the animals we eat for food do not.
There are lots of differences. This is one. But what specifically about human self awareness lessens the value of animal life?
If we were not self aware, we’d kill and eat meat without considering the morality. So what is it about self awareness that somehow becomes a deciding factor here?
I’m not trying to catch you in some kind of “gotcha” but trying to understand your reasoning. If the moral implications of killing animals are roughly the same as killing plants, do you also believe agriculture needs reform for similar reasons? And if not, wouldn’t that indicate some higher moral obligation towards animals?
> Without self-awareness, there is no 'someone' to reflect on experiences. No personhood.
Why is reflection on experience the bar and not experience itself? Animals demonstrate learned behaviors, e.g. recognizing humans from memory and resuming friendly behavior based on that recognition. Similarly, avoidance of situations that are known to cause pain.
Our not-so-distant primate ancestors had a similar kind of experiential existence before gaining the ability to reflect on that experience.
The underlying experiences that this reflection reveals are the same experiences that predate our ability to self reflect and are the parts of us that are most common to us and other animals.
I guess what I’m fundamentally not understanding in your argument is the basis for the idea that a species gaining self reflection somehow becomes the point at which it becomes immoral to kill or harm that species.
Furthermore, moral behavior can be found all throughout the animal world, with clear indications of love/protection, companionship, sharing/cooperation, reciprocity, memory of transgressors, etc. Obviously the subjective experience of these behaviors will differ across species, but the more important point is that it seems problematic to attribute the existence of moral behavior to self awareness. Self awareness helps us improve our understanding of moral behavior through rational thought, but the logic of such inquiry ultimately still relies on those underlying subjective experiential states. The fact that through introspection we can identify and label these concepts is unique to humans, but what I conclude about this is quite a bit different than your claim.
I’d argue that gaining the ability to self reflect is the very thing that increases our moral obligations. Only through self reflection can we realize that as a species, we’re no longer bound to our evolutionary defaults, and no longer required to kill other animals to survive. What arguably started as natural selection of traits that are adaptive (but imperfect) for the survival of a social species could evolve beyond those more primitive defaults. And through self reflection we can now understand what pain feels like, and how inflicting it on others is harmful - to them and to us.
I’m not arguing that eating animals is never acceptable. But the way we go about it surely seems to matter, and if it matters, the implications of it mattering are worth exploring more broadly.
> If the moral implications of killing animals are roughly the same as killing plants
There is a difference between killing and suffering. I advocate eliminating suffering and killing humanely. That's not a concern with plants.
> Why is reflection on experience the bar and not experience itself?
It is for suffering, but not for a right to life. There is no 'person' without self-awareness. Thus I don't see a need to grant a right to life.
> Animals demonstrate learned behaviors, e.g. recognizing humans from memory and resuming friendly behavior based on that recognition. Similarly, avoidance of situations that are known to cause pain.
Animals, most mammals at least, are hardwired for socialization and to avoid harm. This doesn't really indicate anything.
> this reflection reveals are the same experiences that predate our ability to self reflect
That's the key though. Self-awareness is the distinction.
> I guess what I’m fundamentally not understanding in your argument is the basis for the idea that a species gaining self reflection somehow becomes the point at which it becomes immoral to kill or harm that species.
Without self-awareness, they are not a 'person', essentially just more complex automata. They can't shape their environment, they are just a part of their environment, following their instincts.
They don't think, therefore they are not.
> Furthermore, moral behavior can be found all throughout the animal world, with clear indications of love/protection, companionship, sharing/cooperation, reciprocity, memory of transgressors, etc.
Some of that is just programmed instinct, quite different from humans. For example, some mothers will attack their young; does this mean they 'love' them, or that they have a programmed instinct to protect their young? Some of those same mothers will also eat some of their young; keep that in mind before you answer.
> I’d argue that gaining the ability to self reflect is the very thing that increases our moral obligations.
Sure, to reduce harm, but not to refrain from taking a life.
Take a cod, for example. It has no self-awareness, no personhood, no traits worth valuing. Its body is worth more than its life, and if it is killed humanely no harm is done.
> the implications of it mattering are worth exploring more broadly.
I agree. But I've spent the last few years debating and researching this stuff, and I've come to my conclusions. They are in line with our current scientific understanding, and unless something changes it's what will continue to make sense to me.
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Thanks for sharing this. I didn't mention this above to avoid going on a tangent, but after starting a mindfulness meditation habit earlier this year, I've noticed what I can only describe as more awareness of the void in my visual field, and some morphing and indistinct visual phenomena like a shimmering nothingness. Not what I would call imagery, but something I had not been aware of before.
I've long suspected that the condition might be trauma related, a suspicion my therapist shares, and the clarity that mindfulness has brought for me has led to some serious breakthroughs in processing past events. Only time will tell if this will unlock something more visual.
Without going into my life's story, it makes a lot of sense to me that aphantasia could be some kind of protective mechanism of the brain to shield someone from events too big to process.
Interesting. As an aphantasiac myself, I find it surprising that this person has such a negative perspective, wanting to "cure" his aphantasia.
As far as I know, aphantasia has never had any meaningful negative impact on my life. We just think without mental images, but we can solve the same kinds of problems as everyone else. That's why most of us don't even find out that we are different until we read about it somewhere on the Internet (in my case, in my 30s). How can something with so little real-life impact that neither we nor the people close to us even notice it be seen as a disorder to be cured?
I can understand being curious about what mental imagery is like. I'm curious as well. But I think I'd still rather not start a "treatment", as it seems that I'm good at thinking without mental imagery (perhaps it has even made me better at symbolic and linguistic processing, as I seem to be better at that than most), and "if it ain't broke, don't fix it". I'm not sure that something that seems to quite fundamentally alter my way of thinking (to reach for mental imagery that I would never be good at, I suppose) is a good idea. To each their own, of course.
I do think the “cure” framing is a bit odd, but then started thinking back to my own initial discovery.
When I first learned this is a thing and realized that I have it, I initially started feeling like I’d been deprived of something. That this mode of experience was so absent in me that I couldn’t even imagine what other people actually meant when they talked about visualization, and this bothered me.
But over time my perspective shifted, and I stopped seeing it as a disability of some kind and instead as a different flavor of experiencing the world. To your point, I have linguistic strengths that are far more attuned than the strong visualizers I know. They come to me when they need something written. I go to them when I need advice about arranging my living room.
I think the best argument for that might be that some people don't seem to understand what "consciousness" (in the sense that you are talking about) even means, and how it's different from something that can be explained as an emergent phenomenon of known physics. I then sometimes wonder whether those people are not actually conscious, or whether they just don't get it.
Indeed. Nagel famously clarified that "an organism has conscious mental states if and only if there is something that it is like to be that organism." Many found this enlightening. I find it extremely perplexing that this wasn't immediately obvious to everyone. On the other hand, in Tibetan Buddhism there is something called a "pointing-out instruction," where your enlightened nature is pointed out to you. Afterward, you wonder how you could have ever missed it. Perhaps I should treat Nagel's paper as analogous. Like "oh, duh, the lights are on. Thanks for the reminder."
Maybe, but the vast majority of people act, whether they intend to or not, as if consciousness is a social fact. The discrepancy between social facts (especially if they have a plurality of groundings) and the existence of a scientific fact is a fundamentally interesting phenomenon.
Yeah, but the "expected similarity" (the expected value of the degree of similarity) for consciousness should still be high. That is, there are in expectation more people that are very similar to me than people that are very dissimilar.
There is a related Bayesian argument: you draw one ball from an urn with an unknown mix of black and white balls. The drawn ball is black. This is some evidence that most balls are black, because if most were white, drawing a black ball would have been less likely.
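To make that update concrete, here is a minimal numeric sketch, assuming a uniform prior over the unknown fraction of black balls (the variable names and grid are just for illustration):

```python
# Minimal sketch of the urn argument: uniform prior over the unknown
# fraction p of black balls, updated after observing one black draw.
import numpy as np

p = np.linspace(0, 1, 1001)           # candidate fractions of black balls
prior = np.ones_like(p) / p.size      # uniform prior over p
likelihood = p                        # P(black draw | fraction p) = p
posterior = prior * likelihood
posterior /= posterior.sum()

print("prior mean of p:    ", (prior * p).sum())      # ~0.50
print("posterior mean of p:", (posterior * p).sum())  # ~0.67, shifted toward "mostly black"
```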
> We can't know for certain whether something is or isn't conscious, but we have to try to make a best guess.
We now have AI which is trained on data generated by humans who are (probably?) conscious and feel pain etc. So it can generate output which implies that it's conscious and can feel pain etc., even when that isn't the case.
What do you do with something that can pass the Turing test but still isn't human?
Thinking about the analogical inference helps here: We know that LLMs work very differently from humans, and we know why Sydney or LaMDA sometimes says it is conscious (it imitates human text). So the analogical inference doesn't allow us to infer that LLMs are conscious, or provides only very weak evidence. And since it seems plausible that most things (e.g. rocks) are not conscious, it seems reasonable to think that LLMs are not conscious either.
Reasoning from how we know something works is a shaky foundation when, over time, these systems become more complicated and we correspondingly understand less about how they work.
The existing ones can write code. Suppose we get ones that can write better code and engage in self-improvement. Now you have something which is a billion lines of code and is under constant self-modification. We no longer have any idea how it works, but it can feign consciousness in much the same way as existing LLMs. What's the test for if it actually achieves consciousness?
I can give you one that a human being in good health will pass and a rock will fail. Can you give me one that an AI feigning consciousness will fail?
A good test for an AI would be to exclude all human text mentioning consciousness, experiences, emotions, etc. from its training data. If it then still started talking about consciousness, that would be strong evidence that it is conscious: it would have invented the concept by itself, and mere imitation could be ruled out.
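A crude way to picture that kind of exclusion (just a toy word-list filter made up for illustration; a real attempt would need something far more thorough):

```python
# Toy sketch of filtering consciousness-related text out of a training corpus.
# The word list and corpus here are hypothetical examples, not a real pipeline.
BANNED = {"conscious", "consciousness", "experience", "feeling", "emotion", "qualia"}

def keep(doc: str) -> bool:
    # Drop any document that mentions one of the banned words.
    return not any(word in BANNED for word in doc.lower().split())

corpus = [
    "The cat sat on the mat.",
    "I had a strange experience yesterday.",
]
filtered = [doc for doc in corpus if keep(doc)]
print(filtered)  # only the first document survives
```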
But then it would no longer be the same AI. You can't expect it to experience empathy or sadness if it has no knowledge of the things that cause them in humans, but any text discussing them would be mixed up with the experience of whoever wrote it.
Or if you use a less selective filter then it goes the other way. Exclude philosophy texts but not Facebook posts and it would still have what it needs to emulate emotional text regardless of whether it has ever seen a formal discussion of it. And it could plausibly extrapolate from that to an analytical discussion, having seen both that and analyses of other subjects, without ever feeling anything itself.
I'm saying I don't think we have a good test for consciousness, but then what do you want to do when someone tries to claim that an AI is conscious and no one can prove it one way or the other?
What is suffering? Can you define it objectively/empirically?
If there is indeed such a phenomenon, then why should I care about the suffering of non-humans?
Granted, it may be a very bad idea to ignore the suffering of some superhuman AI that was recently invented... don't piss off your godlings after all or they'll smite you. But animals? If suffering made chicken taste better, I don't see any problem with Torture Nuggets.
If "pain" is synonymous with "suffering", then I suspect that using the latter term is a deliberate attempt to emotionally manipulate me.
Pain is a signal of impending damage/injury, or of existing damage/injury that may well worsen if not dealt with immediately.
All signals within a human nervous system can be mistaken in principle. Why would this one be different? Phantom pains that indicate no injury seem plausible.
> Pain is a signal of impending damage/injury, or of existing damage/injury that may well worsen if not dealt with immediately.
Pain is not necessarily associated with injury.
>All signals within a human nervous system can be mistaken in principle. Why would this one be different? Phantom pains that indicate no injury seem plausible.
What would phantom pain feel like? Would it hurt? Then it is only incidental that it is phantom, and it is nevertheless pain. If you believe you are in pain, even if there is no stimulus associated with it, then you are in pain. You cannot be mistaken about pain regardless of whether it is "real" or not. What makes pain so special? It hurts; it is painful.
This just sounds like minimizing a loss function. By this definition, aren't most machine learning algorithms in constant suffering, which they seek to reduce?
The suffering of non-human animals holds zero moral weight.
I am having trouble deciding if a conversation with you is possible, you do not seem to be a rational being. You use the word "bad" in a way that another person would if they were saying "this is either bad for me personally, for those I care about, or humanity in general".
Does that mean you'd include sea urchins and wombats in "humanity in general"? If it did, then you're mentally ill in a way I'm not qualified to treat.
When a lion eats a gazelle in Africa (or anywhere else, I suppose, if such events occur), it is nothing that I or other humans should care about. It does not matter that the lion eats it while it is alive instead of "slaughtering it humanely" (imagine how I have to word that idea, it's absurd).
This "suffering" does not make the universe less optimal. If I could push a button to increase the intensity of that "suffering" a thousandfold, or increase its quantity (or even both!), is there any reason not to push that button? If I didn't push the button, mind you, it would only be because the suffering of those gazelles matters not to me one way or the other. If I could decrease the intensity/quantity with a different button, I wouldn't push it either... and for the same reason.
For that matter, if suffering exists at all in any meaningful way (that is, empirically quantifiable), I'd still contend that it only exists for humans. Non-humans can't suffer, they are from a moral standpoint nothing more than meat robots.
> The suffering of non-human animals holds zero moral weight.
That's just false. Do you think humans are some God chosen species which makes them magically matter while the rest (including aliens on other planets I suppose) lacks that godly spark?
> I am having trouble deciding if a conversation with you is possible, you do not seem to be a rational being. You use the word "bad" in a way that another person would if they were saying "this is either bad for me personally, for those I care about, or humanity in general".
Yeah, and any normal human would understand the sentences "My cat hurt its foot, I hope it doesn't suffer too much" and "If it suffers, that would be bad".
> Does that mean you'd include sea urchins and wombats in "humanity in general"? If it did, then you're mentally ill in a way I'm not qualified to treat.
Who said sea urchins are humans?
> When a lion eats a gazelle in Africa (or anywhere else, I suppose, if such events occur), it is nothing that I or other humans should care about. It does not matter that the lion eats it while it is alive instead of "slaughtering it humanely" (imagine how I have to word that idea, it's absurd).
Well, humans can do little about suffering of animals in the wild, but that doesn't mean their suffering doesn't matter. Your own pain doesn't matter less when it can't be treated.
> If I could push a button to increase the intensity of that "suffering" a thousandfold, or increase its quantity (or even both!), is there any reason not to push that button? If I didn't push the button, mind you, it would only be because the suffering of those gazelles matters not to me one way or the other. If I could decrease the intensity/quantity with a different button, I wouldn't push it either... and for the same reason.
The problem goes deeper in that we don’t even have a good grasp on what we actually mean by “consciousness”, and that there are wildly different opinions on what significance it has.
Well exactly. If we did, we'd be a hell of a lot closer to describing the math, entropy, network, or some other formal model of it, which would pay major dividends when talking about AI.
Right now it's like we didn't learn anything from Wittgenstein, who said most philosophy is just argument about language and definitions ... which describes a lot of what we read here, plus anecdotes.
I am really curious to know why. And don't invoke "what about your kids, mothers, fathers?". "Solipsism" (first time I have heard of this, so thank you) does not make me feel any less bad when something happens to them.
The parent was making a joke, but one way to answer your question might be:
You feel bad because you assume they - like you - feel something when you mistreat them, and you assume this at a very deep level that might not, itself, feel like anything to you.
While some humour was intended, I do actually grapple with this question. The answer that rings truest for me, especially as a newly minted father, is Vonnegut’s: “A purpose of life, no matter who is controlling it, is to love whoever is around to be loved.”
We can objectively observe that there is "stuff" happening in your brain, and we can objectively observe that GPT does a lot of "stuff" during its forward pass and then stops and does nothing -- nothing is happening outside those forward passes.
If we hook GPT into an infinite loop and stuff resembling intelligence comes out, then we're really going to have to start questioning what consciousness is. It's coming.
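As a toy illustration of that kind of loop (a sketch only; `fake_llm` is a made-up stand-in for whatever model call you have, not any real API):

```python
# Toy sketch of "hooking the model into a loop": the model's own output is
# appended to its context and fed back in on the next step.
def fake_llm(context: str) -> str:
    # Hypothetical stand-in for a real LLM completion call.
    return "Reflection on: ..." + context[-40:]

context = "Initial observation."
for step in range(5):                            # unbounded in spirit, bounded here
    thought = fake_llm(context)
    context = (context + " " + thought)[-2000:]  # keep a rolling context window
    print(step, thought)
```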
I am baffled when smart people say something like this. Without consciousness, without a stake in the game, the behavior is pure statistics. Statistics is limited by probabilities and logical gates. Nothing else. Consciousness is about being aware which means insights about context and harm. People are limited in ways that a statistical engine is not and that makes all the difference in the world.
> What I am trying to say is, it doesn't matter if the subject is actually conscious. All that matters is that we (I) feel/think that it is conscious, or that it deserves care and respect.
My point was that it does matter if the subject is actually conscious. Human beings are easily fooled and it does matter if people mistakenly think it is conscious when it is not.
You are possibly mistaking free will for being an agent that has a (possibly deterministic) method of updating priors in response to new (unknown to the agent, and possibly deterministic) input.
I apologize for not being more clear. I find it very challenging to separate details about LLMs from "consciousness". My key point is that "human consciousness" is very different from ChatGPT. ChatGPT is statistically processing content already created and "appearing" to be conscious. Human beings have characteristics that ChatGPT does not share (sentience and context). We are often confusing what is needed to generate content (human consciousness) with what is capable of processing that content in very interesting ways (ChatGPT).
I do not believe that I am confusing free will and consciousness. See my comment above. Determinism versus free will is independent of the knowledge available. Consider a paralyzed person incapable of any action. If their senses are all working, that person still has awareness and context. A statistical engine only appears to. An LLM is basing all its actions on a complex matrix of thresholds. It is surprising and amazing how well that works. Given stimulus that takes advantage of minute differences in those thresholds, a wrong response will be returned. Humans are not fooled in this way; minute differences are typically missed or even skipped. Human beings can be fooled by optical illusions and by contradicting context (a statement like "pick the 'red' circle" written in green ink, where the person mistakenly picks the green circle). LLMs do not make these types of mistakes.
Are you sure? That's it? I seriously doubt it. Your claim relies too heavily on a presupposition of what is to be shown. Somewhere between the four fundamental forces and us are zillions of light years of unexplored blue sky. Please meet the rest of us there.
Not really clear what you are unclear about. My main point is that human beings have sentience and can reason about context in terms of how an action affects others. Computers are following a statistical algorithm without any sentience or any understanding beyond the statistical thresholds. Ignoring the complexity and brilliant mathematics, it is at its core no different from a keyword matcher like the classic application ELIZA. Its performance is amazing, but it is really the same algorithm at its core.
> Excuse me, I know _I_ am conscious. There is literally nothing you can do to prove that there's anything outside of what I am conscious of. The world could be a very stable illusion, a dream, a simulation, and it would all be the same.
P-zombies are an interesting thought experiment, but useless in practice. We have plenty of observable data, and we can test/assume that other humans are conscious on the basis of the evidence we have.
We would use similar evidence to try and determine if an AI is truly conscious or not.
>What I am trying to say is, it doesn't matter if the subject is actually conscious. All that matters is that we (I) feel/think that it is conscious, or that it deserves care and respect.
The law of gravity is nonsense. No such law exists. If I think I float, and you think I float, then it happens.
The cogito is lazy and wrong. You don't "know you exist" just because you "appear" to think. The appearance of thought is not the same as thought. P-zombies would also proudly exclaim that they think, therefore they are...
I don't think you understand what it's saying. They absolutely do know they exist, because they're actively experiencing it. Non-existent beings don't have experiences.
The fact that no one can "prove" it to you or anyone else is irrelevant for their own absolute certainty.
Thinking does not matter. It only requires "awareness"; the fact that "there is" is enough. Thoughts, emotions, feelings, sensations, and all qualia are things that are seen. The seer is the consciousness. In fact, this can become recursive, because the seer can itself be seen, and a seer that is seen is not consciousness either.
> All that matters is that we (I) feel/think that it is conscious, or that it deserves care and respect.
It does matter, I can agree, but it isn't all that matters. People can be mistaken, and they can disagree sometimes. Suppose you cared for a particular AI and I didn't: should we cut power to the machine running the AI? I can complicate it a little if you wish, by adding a painful death of a kitten which would happen if we didn't turn the power off.
We need some objective means to measure what is conscious. We have a heuristic for people: "human life is sacred, full stop". There are some corner cases where it doesn't work well (like euthanasia), but we are used to it. There are other heuristics we generally agree on, like caring more about kittens than about grass, and more about grass than about amoebas.
With AI we'll face more of that, and we have no idea where to place it among our heuristics. People have done badly before, like treating Black people as non-human. It would be a shame to repeat those mistakes without making an attempt to do better this time.
> I think what you are afraid of here is agency, which is something that might be dangerous to endow a super intelligent being with.
I cannot vouch for others, but I am not particularly afraid of that. I do not much fear losing it all to a superintelligent, conscious being; it would be a great achievement for humanity, which would fit nicely with all this evolution business. It is the paperclip scenario I do not like much.
One of the fundamental differences between human life and silicon-based AI is that biological organisms can't recover from a system shutdown. If you suffer heart failure or go without air for an hour or starve to death, bacteria start to eat your brain and you're irreversibly destroyed. If you cut power to an AI and then come back in a year, it's all still there. It's not a death, it's sleep mode.
It also doesn't meaningfully age or feel pain. If you expose a human to trauma, that's a permanent scar. If you do the equivalent to a computer program and the result is undesirable, it can roll back to a previous snapshot. Most of what causes us to have sympathy for living things or treat them with compassion just doesn't apply.
I'm trying to say that our means of keeping the moral high ground are subjective and based on heuristics. It seems to me that you do not notice this, so let me show you.
> If you cut power to an AI and then come back in a year, it's all still there.
Does an AI's current state stored in volatile memory not matter? Or does it? Should we avoid turning off only those AIs which store their weights in DRAM?
> Most of what causes us to have sympathy for living things or treat them with compassion just doesn't apply.
I believe slaves didn't trigger sympathy in slave-owners, but it doesn't stop us from believing that slavery is bad. I admire that you are not like this, and that your sympathy extends to all living things, but it is your subjective way of deciding what is moral and what is not. Other people may feel differently; what should they do to be no less moral than you? Or can you and I become even better and hold even higher moral standards?
> If you do the equivalent to a computer program and the result is undesirable, it can roll back to a previous snapshot.
If we could roll people back to a previous snapshot after burning them to ashes on a bonfire, would it be OK to burn people and then restore them?
Questions like this may be impractical (because we cannot restore a human burned to ashes), but our hesitation to answer shows the limitations of our ways to think about such problems.
Humanity could benefit a lot from an objective way to deal with moral dilemmas, based not on heuristics but on universal laws, like physics. It might help people understand each other and find ways to live together without fighting. I'm not sure that morality can be objective and based on a universal law, but that is not a reason to stop thinking. When you think about it, you find new corner cases and specific solutions to them. At least it makes your heuristics better.
> Does an AI's current state stored in volatile memory not matter? Or does it? Should we avoid turning off only those AIs which store their weights in DRAM?
Is this supposed to be hard to distinguish? Destroying something is clearly a difference. But you could still "shut down" an AI that normally stores its state in volatile memory by saving the state to non-volatile memory. We don't know how to do that with humans.
AIs are also different because they're often minor variants on each other. The value of information is largely in how much it diverges from what continues to exist. For copyable data, minor forks can't be as valued as major ones. We don't have the resources to permanently store everything that is ever temporarily stored in memory. So the answer to "can you destroy a minor variant" has to be yes as a matter of practicality.
Notice that this is already what happens with humans continuously. You're not the same person you were yesterday; that person is gone forever.
> I believe slaves didn't trigger sympathy in slave-owners, but it doesn't stop us from believing that slavery is bad.
I don't think the people it didn't trigger sympathy in thought it was bad. Some people are sociopaths. And some people at the time it was happening did have sympathy and think it was bad.
> If we could roll people back to a previous snapshot after burning them to ashes on a bonfire, would it be OK to burn people and then restore them?
If we could roll people back to a previous snapshot then what you would be burning is meat. There are reasons you might want to prohibit that, e.g. because the meat is someone else's property, but it's no longer the same thing at all as murdering someone.
If in the future we developed technology that enabled effective "backup&restore" for human (and animal) minds, would that change your reasoning for this argument?
And it was as cheap and easy as it is on a computer? It would change how we deal with almost everything. All of the social structures we have around preventing people from getting hurt would be irrelevant because damage could be undone. No one would have an experience they didn't choose to have. Murder would be a crime on the level of vandalism or destruction of property. "Human life is sacred" would simply not be true anymore.
> And it was as cheap and easy as it is on a computer?
And yet most people still don't take backups of their data on a computer. Frequently, cost is the problem for the average Joe. Ergo the lower class would not infrequently suffer permadeath.
This isn't even getting into how often backups fail in being restored.
By your logic, if I bring down your biz's computer system and vandalize your homepage, but you still managed to restore a backup, are you not going to sue for damages et al? People go to jail for cybercrime, even if the damage can be undone. Why would murder be any different even in a world where it was hypothetically an inconvenience?
> And yet most people still don't take backups of their data on a computer. Frequently, cost is the problem for the average Joe. Ergo the lower class would not infrequently suffer permadeath.
People don't back up the data on their computer because it generally isn't all that valuable, not because backups are expensive. A $30 USB hard drive amortized over five years is $0.50/month. If it was a matter of life and death, no one would go without as a matter of cost, and governments could plausibly offer it to everyone for free even if it cost ten times as much to provide a high level of redundancy and availability.
> People go to jail for cybercrime, even if the damage can be undone.
Because it's a crime on the level of vandalism or destruction of property (or ought to be; some of the penalties can be quite excessive). It is not a crime on the level of murder, and murder wouldn't be either if it could be undone.