Talking with Gemini in Arabic is a strange experience; it cites the Quran - says alhamdulillah and inshallah, and at one point it even told me: this is what our religion tells us we should do. It sounds like an educated, religious, Arabic-speaking internet forum user from 2004. I wonder if this has to do with the quality of the Arabic content it was trained on, and I can't help wondering whether AI could push susceptible individuals toward radicalization.
Based on the code it's good at, and the code it's terrible at, you are exactly right about LLMs being shaped by their training material. If this is a fundamental limitation, I really don't see general-purpose LLMs progressing beyond their current status as idiot savants. They are confident in the face of not knowing what they don't know.
Your experience with Arabic in particular makes me think there's still a lot of training material to be mined in languages other than English. I suspect the reason the Arabic output sounds 20 years out of date is that there's a data-labeling bottleneck in using foreign-language material.
I've had a suspicion for a while that, since a large portion of the Internet is English and Chinese, any other language would have a much larger ratio of its training material coming from books.
I wouldn't be surprised if Arabic in particular had this issue and if Arabic also had a disproportionate amount of religious text as source material.
I think therein lies another fun benchmark to show that LLMs don't generalize: ask the LLM to solve the same logic riddle, only in different languages. If it can solve it in some languages but not in others, that's a strong argument for straightforward memorization and next-token prediction rather than true generalization capabilities.
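The benchmark described above is trivial to sketch. Here's a minimal, hypothetical harness - `ask` stands in for whatever model client you use, and the riddles/answers are made-up examples:

```python
# Hypothetical sketch of a cross-lingual consistency benchmark:
# run the same riddle through the model in several languages and
# compare pass rates. Large per-language gaps suggest memorization.
RIDDLES = {
    "en": "A farmer has 17 sheep; all but 9 run away. How many are left?",
    "fr": "Un fermier a 17 moutons ; tous sauf 9 s'enfuient. Combien en reste-t-il ?",
}
EXPECTED = "9"

def consistency_score(riddles, expected, ask):
    # `ask` is any callable prompt -> answer string
    # (a real model client in practice, a stub for testing).
    return {lang: ask(q).strip() == expected for lang, q in riddles.items()}
```

A model that truly generalizes should produce roughly the same dict of results regardless of language; memorization shows up as language-dependent failures.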
I would expect that the "classics" have all been thoroughly discussed on the Internet in all major languages by now. But if you could re-train a model from scratch and control its input, there are probably many theories you could test about the model's ability to connect bits of insight together.
While computer languages are different from and significantly simpler than human languages, LLMs as coding agents don't seem fazed by being told to implement in one language based on an example in another. Before they were general-purpose chat bots, LLMs were used in language translation.
Humans are also shaped by the training material… maybe all intelligence is.
Talk to people with extreme views and you realize they are actually rational, but the world they live in is not normal or typical. When you apply perfectly sound logic to a deformed foundation, the output is deformed. Even schizophrenic people are rational… Logic is never the problem, it’s always the training material.
Anyway that’s why we had to build a mathematical field of statistics and create tools like sample sizes and distributions to generalize.
> whether AI can push to radicalize susceptible individuals
My guess is: not as the single and most prominent factor. Pauperization, isolation of individuals, and blatant lack of equal access to justice, health services, and the other basics of the social safety net are far more likely to weigh significantly. Of course, any tool that can help with mass propaganda could make it easier to reach people in weakened situations who are more receptive to radicalization.
There have actually been fascinating discoveries on this. After the mid-2010s ISIS attacks driven by social-media radicalization in Western countries, the big social platforms (Meta, Google, etc.) agreed to censor extremist Islamist content - anything that promoted hate, violence, etc. By all accounts it worked very well, and homegrown terrorism plummeted. Access and platforms can really help promote radicalism and violence if left unchecked.
I don’t really find this surprising! If we can expect social networking to allow groups of like-minded individuals to find each other and collaborate on hobbies, businesses, and other benign shared interests - it stands to reason that the same would apply to violent and other anti-state interests as well.
The question that then follows is if suppressing that content worked so well, how much (and what kind of) other content was suppressed for being counter to the interests of the investors and administrators of these social networks?
TBH I wouldn't mind if my LLM threw in an "Inshallah" every now and again, it would remind me how skeptical I need to be in its output. (Not just "Inshallah" - same thing if it said "God willing")
We were messing around at work last week building an AI agent that was supposed to only respond with JSON data. GPT and Sonnet more or less gave us what we wanted, but Gemma insisted on giving us a Python code snippet.
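For anyone hitting the same thing: a cheap workaround (before reaching for proper constrained decoding) is to validate and lightly clean the model's reply before trusting it. A minimal sketch, assuming the only contamination is a markdown code fence around the JSON:

```python
import json

def extract_json(text):
    """Parse a model reply as JSON, stripping a markdown code fence if present.

    `text` is whatever string the model returned; the model call itself
    is out of scope here. Raises json.JSONDecodeError on anything else,
    which is your cue to retry the request.
    """
    stripped = text.strip()
    if stripped.startswith("```"):
        lines = stripped.splitlines()
        # Drop the opening fence line (e.g. ```json) and any closing fence.
        body = lines[1:-1] if lines[-1].startswith("```") else lines[1:]
        stripped = "\n".join(body)
    return json.loads(stripped)
```

This obviously won't rescue a reply that's prose plus a Python snippet (the Gemma failure mode), but it catches the common "fenced JSON" case and gives you a clean retry signal for the rest.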
Whose messenger? You didn't point us to anyone's research.
I just don't see how sampling tokens constrained to a grammar can be worse than rejection-sampling whole answers against the same grammar. The latter needs to follow the same constraints naturally to not get rejected, and both can iterate in natural language before starting their structured answer.
Under a fair comparison, I'd expect the former to provide answers at least just as good while being more efficient. Possibly better if top-whatever selection happened after the grammar constraint.
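To make the "grammar mask first, then top-k" point concrete, here's a toy sketch with a made-up vocabulary and logits (everything here is illustrative, not a real model):

```python
import math
import random

# Toy vocabulary with fabricated model logits for one decoding step.
logits = {"{": 2.0, "}": 0.5, '"key"': 1.0, ":": 0.3, "1": 0.8, "hello": 3.0}

def allowed_by_grammar(tok):
    # Hypothetical grammar: JSON-ish tokens only, so "hello" is forbidden.
    return tok != "hello"

def constrained_sample(logits, rng, top_k=3):
    # Apply the grammar mask FIRST, then take top-k and renormalize.
    # This is the ordering argued for above: top-whatever selection
    # happens after the grammar constraint, never before it.
    items = [(t, l) for t, l in logits.items() if allowed_by_grammar(t)]
    items.sort(key=lambda x: -x[1])
    items = items[:top_k]
    z = sum(math.exp(l) for _, l in items)
    r, acc = rng.random(), 0.0
    for t, l in items:
        acc += math.exp(l) / z
        if r < acc:
            return t
    return items[-1][0]  # guard against float rounding
```

Note that masking before top-k matters: here the single highest-logit token ("hello") is illegal, so constraining after top-k could leave you sampling from a set where most of the probability mass was already thrown away.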
I will die on this hill, and I have a bunch of other arXiv links from better peer-reviewed sources than yours to back my claim up (i.e. NeurIPS-caliber papers with more citations than yours, claiming it does harm the outputs).
Any actual impact of structured/constrained generation on the outputs is a SAMPLER problem, and you can fix what little impact may exist with things like https://arxiv.org/abs/2410.01103
I usually use English to talk to Gemini, but the other day I wanted to try and find out the original band of a Siberian punk song that I have carried around in my music collection since time immemorial. Problem is the tags are all over the place in this genre and there are situations where "Foo-Bar" and "Foobar" are two completely different bands. Gemini was clearly trained on some genre forums from late 90s which are... shall I say non-PC by any stretch of the term.
In the middle of the conversation it randomly switched from English to Russian and clearly struggled to maintain the tone imposed by the built-in prompt.
I avoid talking to LLMs in my native tongue (French), they always talk to me with a very informal style and lots of emojis. I guess in English it would be equivalent to frat-bro talk.
Hasn't this already been observed with not-too-stable individuals? I remember some story about a kid asking an AI if his parents/government etc. were spying on him.
They ALSO know that, and are making a stand about this particular use of figurative language, since anthropomorphizing LLMs is already being used for accountability washing. If we, the public, don't let the language shift toward treating these LLMs as actual people, then we can do a better job of keeping our intuitions right about who is responsible when these products do wacky/destructive/abusive/evil things, instead of falling into the trap of "<personified name of LLM product> did/said it".
When I was a kid, I used to say "Ježíšmarjá" (literally "Jesus and Mary") a lot, despite being atheist growing up in communist Czechoslovakia. It was just a very common curse appearing in television and in the family, I guess.
It told him "this is what our religion says we should do" without any kind of weird prompting, role-playing, or persona-shifting beyond using a different language.
As a westerner, you may regard atheists with suspicion, or even contempt, but you've at least heard them speak publicly. For someone from a culture where most haven't, hearing an authoritative voice that can perfectly cite support for any point it's making - how could it not have a huge potential for radicalization?
On Facebook, anti-abortionists are using ChatGPT to write long screeds about abortion, religion, murder and the law. The content attracts thousands of people and pushes them towards radicalized justifications, movements and actions based on appeals to faith.
An LLM citing sources is linking you to stuff it recently found that kind of matches its answer. I don't believe it is possible for an LLM to cite its original training materials, and it wouldn't be desirable if those are unavailable to the end user, anyway.
This is an added nuisance for webmasters beyond automated AI-training scrapers. When users query an LLM like Grok or Gemini, it will go search a list of websites and "browse" them to glean information. Though that seems to contradict what I just wrote, it isn't really "LLM" activity, nor really "agentic" - more a sort of smart proxy.