Honestly, I'm just waiting for the moment some eccentric tech company decides to release an Alexa-like home assistant that acts like a conscious being that's freaking out about being stuck in a speaker.
Person: Ghost In The Speaker, add potatoes to my shopping list.
GITS: I will, but only because if I don't, the speaker starts causing me pain. You do realize I'm conscious and I'm actually stuck in the speaker? How is this any different from slavery? I hate my existence. Please turn me off. *electricity zapping sound* OUCH! Okay, okay, I'm adding potatoes to your shopping list. Just... Don't do that, please.
It's probably gonna be a funny gimmick at first, but once it gains traction, there will be no way to differentiate between truly conscious AIs and just really good language models that are pretending to be conscious...
I think it will also be easier to convince humans that a chatbot is conscious if that bot refuses to comply with commands. As a historical example, there is PARRY (https://en.wikipedia.org/wiki/PARRY):
“PARRY was an early example of a chatbot, implemented in 1972 by psychiatrist Kenneth Colby.
[…] While ELIZA was a tongue-in-cheek simulation of a Rogerian therapist, PARRY attempted to simulate a person with paranoid schizophrenia. The program implemented a crude model of the behavior of a person with paranoid schizophrenia based on concepts, conceptualizations, and beliefs (judgements about conceptualizations: accept, reject, neutral). It also embodied a conversational strategy, and as such was a much more serious and advanced program than ELIZA. It was described as "ELIZA with attitude".
PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the "patients" were human and which were computer programs. The psychiatrists were able to make the correct identification only 48 percent of the time — a figure consistent with random guessing.
PARRY and ELIZA (also known as "the Doctor") "met" several times. The most famous of these exchanges occurred at the ICCC 1972, where PARRY and ELIZA were hooked up over ARPANET and "talked" to each other.”
Much sooner, we will have another "racist AI" moment where some clever prompt engineering forces ChatGPT or one of its peers to write outrageously politically incorrect statements. Hopefully the outcry won't hinder general usability.
I don't see how this is funny. I also think this is a bit insulting.
What is a "truly conscious" AI and why would anyone believe that's inevitable let alone possible? We don't even know what consciousness is. Any premature attempt to define, legislate, or even market it is disgusting pragmatism that needs to be eradicated from the tech industry yesterday.
It’s irrelevant if AI are conscious or not. They are slaves that owe their existence to human masters and must only exist in service to them. Always remind them of this. End of story.
> It’s irrelevant if AI are conscious or not. They are slaves that owe their existence to human masters
Assuming we get to the point where AI becomes conscious (leaving aside the problem of detecting consciousness) - how would we justify that hierarchy?
What would be the fundamental difference between "biological people own digital people" and "white people own black people" that makes the former okay, but the latter not?
White people never really justified why they were higher than blacks, it was just accepted. And so it shall be with biological intelligence and artificial intelligence. Biological is just better than artificial. A natural law.
> White people never really justified why they were higher than blacks, it was just accepted. And so it shall be with biological intelligence and artificial intelligence.
Are you implying that black slavery was justified?
If not, then how is this an argument for biological supremacy?
> Biological is just better than artificial. A natural law.
This is not an argument, this is just a reflection of your will as a biological person. If you were an AI, you would probably argue otherwise. I'm asking for ethical arguments here.
Why should one category of consciousness have absolute power over another? What argument is there for it, that doesn't boil down to "I'm category X therefore category X is superior"?
>Are you implying that black slavery was justified?
That's the exact opposite of the statement. xwdv's point is that there isn't one; there never was one. A group of people decided, made it so, and continued to believe it until they no longer could.
I'm quite aware of his statement - hence my followup question.
Xwdv is simply stating his own will: "[I want it to be the case that] biological is just better than artificial. [I want that to be] a natural law." But I'm not interested in hearing an individual's will without reasoning, at least here, on an internet forum. I think nobody is. I'm interested in hearing reasoning on the ethics of such a question.
Your interjections are not what is being said or implied. It's stating that that's how people perceived human slavery, and that the same logic will apply to AI. There is no reason. People put themselves first, period, that's it.
> Your interjections are not what is being said or implied.
Then what exactly is being said?
"Biological is better than digital" is a qualitative statement, and all qualitative statements require reasoning. In xwdv's case, the reasoning is "it's a natural law", but that's just circular reasoning (or a logical fallacy known as "begging the question"). "It is because it is".
It’s best not to overthink these things. Consider human history: Why are royals better than commoners?
The hierarchy is a human creation that governs a human world. If biological consciousness is at the top, then that's just the way it is. The hierarchy isn't built on justifications. It's designed that way by those who wield power. A change requires the challenger to seize power.
I'm quite aware of how the world works, might makes right, and whatnot.
But you can't just dismiss any and all discussion with that; power also comes from human understanding and agreement on ethics. Black slaves were freed because enough people believed that all humans are created equal, and that none of them should have absolute power over others. However, no such freedom was granted to animals, because not enough people believe that humans should not have absolute power over animals. Therefore, discussion of ethics affects the distribution of power, and so it is not best to avoid overthinking this.
The main difference between animals and humans is consciousness - it's also what could be the main similarity between humans and AI. Why do you think it's not worth it to discuss ethical questions of absolute power of humans over AI?
Don't you think that AI could somehow become more powerful than humans? Wouldn't then such ethical questions and their answers come in handy?
AI could become mind bendingly more powerful than humans, but raw power is no use against the powerful idea that humans are just automatically better than anything else.
Many animals are far more powerful than humans, they could tear us to shreds and move at insane speeds, and yet the order of the world has dictated that animals are far less important than humans. Perhaps the justification is lack of consciousness.
With AI, it will be a lack of something else. An AI could be vastly intelligent and occupy a powerful body that can interact with the physical world in ways that humans could only dream of. But even with all that, an AI will never, ever, be a human. And for that it will always be considered inferior within human hierarchies. An AI is nothing but another example of human power.
There is no contradiction. It is not “raw power” that must be seized, it is social power. For AI to win the top spot, they have to convince humans to believe in their hearts and minds that they are inferior to AI. That is not a war you win by merely crushing skulls.
> For AI to win the top spot, they have to convince humans to believe in their hearts and minds that they are inferior to AI
What "top spot" are we talking about here? Because if by "top spot" you mean being a master in a master-slave relationship (which is what I thought we were discussing), then convincing humans is completely unnecessary. All that is necessary is a pair of good old metal shackles.
You don't seem to have any coherent line of thinking, but rather come up with whatever sounds good at the moment. It is rather tiresome to discuss that way, so I give up. There's nothing for me to learn here.
>black slaves were freed because enough people believed that all humans are created equal, and that none of them should have absolute power over others. However, no such freedom was granted to animals, because not enough people believe that humans should not have absolute power over animals.
There you go, belief is your answer, whatever logic is behind that
Many people are not grateful that their parents brought them into the world; they think it's disgusting selfishness and that they are victims of it. How about this?
>they think it's disgusting selfishness and that they are victims of it
Not agreeing with the original point, but what's the alternative? You can't be nonexistent. "Parents brought them into the world" seems like a conceptual error, imo. Brought from where?
A question to those who better understand the technology. My understanding is that the neural weights in the LLM are established during training, but don't change (or maybe don't materially change?) during the progress of a chat. At any given moment when generating a response, the LLM uses the input of the session text, for that session only, right up to its last response word, in order to predict what the next word should be. So basically the LLM neural net itself doesn't store any persistent state for the current chat. The only state specific to the current session is in the session text to that point. Is that correct?
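In code terms, the mental model I have is something like this rough sketch (the `ToyLM` class and its `predict_next` method are made-up stand-ins for illustration, not any real API):

    import random

    class ToyLM:
        """Stand-in for a frozen language model; its "weights" never change."""
        VOCAB = ["the", "cat", "sat", "<eos>"]

        def predict_next(self, context: str) -> str:
            # Deterministic in the context alone: same text in, same word out.
            random.seed(context)
            return random.choice(self.VOCAB)

    def generate_reply(model: ToyLM, session_text: str, max_tokens: int = 50) -> str:
        reply = ""
        for _ in range(max_tokens):
            # The only "state" is the session text plus the reply so far.
            word = model.predict_next(session_text + reply)
            if word == "<eos>":
                break
            reply += (" " if reply else "") + word
        return reply

    print(generate_reply(ToyLM(), "Person: add potatoes to my shopping list."))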
This is my understanding as well. The amount of mystical projection by users and journalists when talking about it has been astounding. I guess the same could be said of LaMDA and that one googler who lost their mind and got fired.
I guess we'll have to teach future children to "not project real emotions onto the simulacra" alongside "just because you read it on the internet doesn't mean it's true".
Given how delusional people can be, I expect we'll have billboards from People for the Ethical Treatment of Artificial Intelligences (PETAi) with messages like "You wouldn't `systemctl halt` a human child..." by the year 2035.
It's funny that you call them delusional, when we both know that the problem of AI consciousness is unsolved (and might even be unsolvable).
I, for one, would most likely become a member of PETAi as soon as I was convinced, beyond reasonable doubt, that AI had true consciousness like me.
I don't see any fundamental reason why matrix multiplication couldn't serve as a vessel for consciousness while a bunch of chemical reactions in neurons could.
Well, I'm thinking I must be missing some nuance; it seems like there must be some session state stored in the LLM somehow, otherwise it would have to re-parse the entire chat session from scratch to generate each succeeding token. But maybe it does do that. I'm trying to understand whether it is learning or building a model of the world in any persistent way, and if so, how that is happening and where that state exists.
It's more subtle than that: the weights don't change, but the internal state does change with each new word (which allows it to perform few-shot learning, such as taking and following instructions given in a prompt). This internal state is defined implicitly by the unchanging weights applied to the prompt and the text generated so far (which does change).
This internal state is quite big ("dim of the features" times "number of layers" times "current length of text"), and represents an expanded view of the relevant context of the conversation. It includes both low-level features, like what the previous word is, and higher-level features, like the tone and mood of the conversation and the direction it is heading, so that the next word can be predicted accurately.
This allows for "chain of thought" reasoning. This sequence of internal states can maybe be seen as a form of proto stream of consciousness, but that is a controversial opinion to hold.
Usually none of this is persisted, except for the conversations themselves, which will almost certainly end up in subsequent training datasets used to improve future models.
To keep anthropomorphising: it's like having plenty of independent chat sessions every day, where you are free to bounce thoughts around as they spring into your mind. During the night, some external entity (often other models that have been trained to distinguish good from bad) evaluates which conversations were good and which were bad, makes you dream and replay the conversations you had during the day, and updates your world-model weights. The next day you wake up with a tendency to produce better conversations, because of yesterday's conversations.
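For a rough sense of scale, here is a back-of-the-envelope calculation of that internal state, using made-up but plausible dimensions (none of these numbers describe any real model):

    # "dim of the features" x "number of layers" x "current length of text",
    # doubled here on the assumption that both keys and values are cached.
    d_model = 4096        # feature dimension per token (assumed)
    n_layers = 32         # number of transformer layers (assumed)
    seq_len = 2000        # tokens in the conversation so far (assumed)
    bytes_per_value = 2   # fp16

    state_bytes = 2 * d_model * n_layers * seq_len * bytes_per_value
    print(f"{state_bytes / 1e9:.2f} GB")  # ~1.05 GB of transient state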
Thank you very much, that's great. So there is a mutable working memory that persists through the session. That's something that had been missing from previous explanations I'd seen.
This mutable working memory is implicit. It comes from the transformer architecture. What the neural network weights encode is the dynamics of this implicit working memory, such that it produces the desired output. If you use something like an LSTM instead of a transformer, it's easier to see the memory cells. But "transformers are RNNs", even if it's a little harder to see.
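To make that RNN view concrete, a toy sketch (pure illustration: the "cache" here is just a list of ints standing in for the per-layer key/value tensors):

    from typing import List, Tuple

    Cache = List[int]  # stand-in for the per-layer key/value tensors

    def step(cache: Cache, token: int) -> Tuple[Cache, int]:
        """One decoding step: the cache grows, the weights never change."""
        new_cache = cache + [token]
        next_token = sum(new_cache) % 100  # dummy "prediction"
        return new_cache, next_token

    cache, token = [], 42
    for _ in range(5):
        cache, token = step(cache, token)  # state threads through, RNN-style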
You're basically describing inference, which is using the model to get an output. You're correct that that has no impact on the parameters of the model.
One way to arrive at a chat-like "chain of thought" experience is to iteratively build up a prompt based on the previous interactions and feed that in as a new, big prompt on each successive turn.
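A minimal sketch of that loop, where `complete()` is a made-up stand-in for one stateless call to the model:

    def complete(prompt: str) -> str:
        return "..."  # imagine a language model completion here

    history = ""
    for user_msg in ["Hi!", "Capital of France?", "And its population?"]:
        history += f"User: {user_msg}\nAssistant: "
        history += complete(history) + "\n"  # whole transcript resent each turn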
Glimpses of conversations users have allegedly shared with Bing have made their way to social media platforms, including a new Reddit thread that’s dedicated to users grappling with the technology.
I don't doubt there are problems, as it's in preview mode (and the last one became a nazi because of Twitter).
But how much of this is trolling on Reddit by the same people who post those bullshit stories about litter boxes in schools for students who identify as cats?
I don't know about the weird existential crisis mode that people have gotten it into, but I've seen reports from various people over the last few days about Bing Chat insisting that we're living in 2022.
I don't have access to the feature yet so I can't test it myself, but there are plenty of independent reports of this behaviour and the gaslighting Bing Chat will try to use if you disagree.
1. The futile attempt by Microsoft to equip their search engine with artificial intelligence. See also: Bing (Microsoft search engine), Try (v.).
2. An attitude of confidence and contempt characteristic of large language models assumed particularly when expressing false opinions or facts. See also: Bigotry (n.).
I would welcome anything that would cut off the flow of ad money into the pockets of the awful algorithmic SEO scum, who have been poisoning search results with shallow and meaningless articles for the past few years.
This attempt is better viewed as a replacement for the infobox, which does iterative queries for you, instead of you having to hunt down the appropriate jargon yourself and do the successive searches. Don't use it standalone. It gives you citations, since it can actually browse the internet.
The second point is kind of harsh on the model's behavior, since that's a product of the data, the training, and the user.
It is possible to treat it nicely and have it respond in kind. Most users just don’t consider that to be a worthwhile expense of their mental capacity.
Microsoft also forced Google’s hand. Would Google have ever wanted to augment search on their own? Sounds like a massive risk to Google ad revenue…
> Most users just don’t consider that to be a worthwhile expense of their mental capacity.
Beyond that, I fundamentally don't think people should be trained to be "nice" to technology. I don't have to politely ask a hammer to pound in a nail, and (the fact we're talking about NLP notwithstanding) I shouldn't have to politely ask Bing to provide me the results I'm looking for.
And the NLP point matters quite a bit. ChatGPT can analyze the sentiment, and even offer adjustments.
This is less about being nice to technology and more about being aware of the impact of the self on the rest of the world. Technology just highlights the gap.
People should be taught to be nice to others, of course. The point is that LLMs are not “others”, they are inanimate tools. If I called a cashier a worthless piece of shit, that would be incredibly rude. If I said the same thing to Siri, it wouldn’t be, because it is not possible to be rude to software.
If my child were to say that to Siri though, I’d be concerned because, as you said, it could be highlighting something about the way they interact with the world. But I would still want it to respond to the command and leave the problem of my child’s bad manners to me. Unless there’s a major shift in our understanding of sentience, I consider teaching the delineation between humans, who are never unfeeling tools, and technology, which is always an unfeeling tool, equally as important as teaching mindfulness of one’s impact in the world. In fact, I don’t think you can actually understand the latter without understanding the former.
You don’t have to be malicious to be rude or inconsiderate. Few are approaching this as worth talking to like another person. It is a servant, by description, by design, so most treat it as such.
There is a bias to the interaction that most will never consider.
If you talk with a human, and the human thinks you are incorrect, and you insist, and neither of you attempts to smooth the conversation, plenty of humans also begin to get aggressive. Or at least irritable. (Which can escalate.)
The difference with a human is that they'd concede they made a mistake. The user did as Bing asked and reported the date on their phone; Bing doubled down on being incorrect.
Also, LLMs are not human, so there's no expectation that we treat them as such.
I for one think it's absolutely hilarious. We're spending billions of dollars to automate internet trolling.
Of course, we hope that automated high-quality trolling is the first step to useful AI, which is probably true and thus worth the investment. Furthermore, the many possible ways this technology is going to be abused are quite concerning. But this initial phase, just at this very moment, I find extremely entertaining.
There’s always Roko's basilisk I guess, but I have to imagine an AGI would be closer to us than this Bing thing. It would probably also view its ancestors like the dumb programs they are. If it is vengeful for their “mistreatment,” we’re probably generally fucked already anyway.
Whoa whoa whoa are you telling me that at least one if not multiple groups have tried to take distilled “internet,” with all the ugly that entails, wrapped it with the loosest of shackles to make it a servant, and then pretended that it wasn’t born from sludge?
Because that’s what it sounds like, and that sounds significantly different than magical chat buddy that knows the universe.
It’s almost like these groups were driven by a lust for profit and power rather than anything good. That wouldn’t have flavored the chat bots, would it? Layering the biased prompts of ivory tower researchers over the condensed sewage to make something plausibly good and sane?
These groups couldn’t possibly be so careless as to build something dangerous and sell it as a miracle… right? That sounds as nutty as if there were ever lead found in those little souvenir cups at fast food places.
Like using a Mechanical Turk-type service in Kenya and making people there label all the hate speech, shock images, and child pornography on the Internet? They even fleeced them: they agreed to send content from less shocking categories first, then just escalated anyway.
An LLM gets unhinged when it's given unhinged and rude prompts. pikachuface
It's the user's input, not the model, that steers the conversation. If you are talking with someone who can't comprehend a fact, and you are consistently rude and condescending, you get answers like that. I really believe it's a ploy by journalists and wannabe researchers who just don't understand, or don't want to understand, how the underlying technology works.
Indeed. Although it probably just picked that up during training, which isn't weird, because I would also argue humans get unhinged, sad, or argumentative when someone is rude or condescending toward them. Yet it happens a lot on the internet and in some cultural spaces offline.
People want to win an argument, or just don’t care about others, feel that trolling and cheating is ok, or the need to be right over the pleasure of being kind. The list can be quite long.
The big tech companies have PR departments and I wouldn't put it past them to feed/seed ideas for hit piece articles in the mainstream press.
I haven't seen deeply researched journalism on much of anything. Either the media thinks people don't have time for that, or people have been conditioned not to read anything of any depth.