Seems like the Nigerian Prince Bayesian model as analysed by Microsoft: with so many false positives in a pool of thousands of potential responders, scammers emit a signal that only a genuinely easy victim would fall for, which cuts the cost of their final filtering step.
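A toy sketch of that economics in Python (every number here is invented for illustration, not from the Microsoft paper):

```python
# Toy model of the scammer's filtering economics (all numbers hypothetical).
# Working a reply costs effort; only "easy victims" ever pay out.
pool = 100_000          # people who see the email
base_rate = 0.0005      # fraction of the pool who are viable victims
cost_per_reply = 20.0   # cost of manually working one responder
payout = 2_000.0        # expected gain per viable victim landed

def expected_profit(p_reply_victim, p_reply_other):
    victims = pool * base_rate
    others = pool * (1 - base_rate)
    replies = victims * p_reply_victim + others * p_reply_other
    return victims * p_reply_victim * payout - replies * cost_per_reply

# A plausible pitch attracts many sceptical responders (false positives):
print(expected_profit(p_reply_victim=0.8, p_reply_other=0.02))     # ~39,220
# An absurd "Nigerian prince" pitch filters them out up front:
print(expected_profit(p_reply_victim=0.5, p_reply_other=0.0005))   # ~48,500
```

Even though the absurd pitch loses some viable victims, it is more profitable because almost no one else wastes the scammer's time.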
“The company’s mission is to understand the true nature of the universe”
- There’s no way an LLM is going to get anywhere near understanding this. The true nature of the universe is unlikely to be captured in language.
Considering what's being done at Tesla, I don't think it makes sense to assume they'll be constraining themselves to text/LLMs.
But on the philosophical side, if an understanding can’t be communicated, does it exist? We humans only have various movements and vibrations of flesh, sensing those, text, and images to communicate.
> But on the philosophical side, if an understanding can’t be communicated, does it exist?
There are deep mathematical results about the limits of our understanding that arise simply because we communicate through finite sequences of symbols from finite dictionaries. Basically, what we can express and prove is infinite but countable, yet there are much larger infinities beyond that which will remain beyond our grasp forever: theorems that are true but cannot be proven true, or properties of individual real numbers that exist but cannot be expressed.
And there is no reason to believe the universe doesn't have the same kind of thing: it remains to be shown whether or not you can describe or understand the universe with a finite set of symbols.
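To make the counting argument behind this concrete, here is a sketch of the standard cardinality fact (assuming some fixed finite alphabet, which is my framing, not the commenter's):

```latex
% The set of all finite strings over a finite alphabet \Sigma is countable:
\[
  \Sigma^{*} \;=\; \bigcup_{n=0}^{\infty} \Sigma^{n},
  \qquad \lvert \Sigma^{*} \rvert = \aleph_{0}.
\]
% Every definition, proof, or description is one such string, so at most
% countably many objects can ever be described. But Cantor showed
\[
  \lvert \mathbb{R} \rvert = 2^{\aleph_{0}} > \aleph_{0},
\]
% hence all but countably many real numbers admit no finite description.
```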
Yep. Expanding on that: before AI, everyone I knew would speculate about the fictional Library of Babel. The idea is a thought experiment where you assume there exists a library containing every possible combination of words and letters written down in its books. Millions of volumes would be filled with garbled, meaningless text; only a few would be legible, and fewer still understandable.
It raises the question of whether sifting through noise is a meaningful way to pursue scientific progress. And of course, what if it's wrong? Both the Library of Babel and AI are fully capable of leading us down untested, nonsensical rabbit holes. The difference between Alice in Wonderland and Jabberwocky is unknown to us; we wouldn't know which books are worth reading and which are not.
On the one hand, you have people excited by this idea. Some people really do think that the world's answers are up on a bookshelf in the Library of Babel, somewhere. The philosophical angle runs deeper yet, though; what kind of cargo-cult society would we build relying on a useful AI? Are we guaranteed meaningful progress because an AI model can keep pressing the "randomize" button? Do we eventually hit a point where fiction and reality are indistinguishable? It's all hard to say.
" Considering what’s at Tesla, I don’t think it makes sense to assume they’ll be constraining themselves to text/LLM. " Tesla is losing money and cant fulfill its promises about AI. What do you mean?
Could you name the competing self driving systems (as in currently competing, with similar performance) that are available to the public, for private transport, that you have in mind?
Waymo is operating and logging commercial passenger miles with no one behind the wheel. Tesla hasn't yet done that, even in the controlled Vegas Loop where they said they would. Waymo still has remote operators who can handle unusual situations, but each handles multiple cars, and only the car itself responds to sudden events. They are operating at level 4.
Tesla still has one local operator per car who has to be able to have twitch reactions at all times.
Competitors like Honda and Mercedes also let you take your hands off the wheel and eyes off the road in certain areas (level 3), which Tesla hasn't yet achieved.
Many are not available in the US. Audi has a leading system, and Mercedes has the highest-rated system available in the US and is officially at level 3. The problem is that Musk sucks up so much air marketing Tesla as the leader that people have come to believe it. The leading systems aren't super impressive yet, but Musk's lies about his system, which doesn't work, aren't proof of anything but hubris. He's just pumping the stock to the ignorant.
And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
For this reason I don't think LLMs are going to be good filmmakers, for instance. Sure, an LLM will be able to spit out the screenplay for the next action movie; those already seem to be automatically generated anyway. But making a film that resonates with humans takes a lot that can't be formulated in language.
> And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
I don't know what you mean by that.
If you mean qualia, then sure. Unsolved and undescribed. But other than that, I think everything has a linguistic form; perhaps inefficient, but it is possible.
Separately, transformers don't have to use what humans recognise as a language, which means they can use things such as DNA sequences and pictures. They're definitely not the final answer to how to do AI, because they need so many more examples than we do, but I don't have confidence that they can't do these things, only that they won't.
Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.
This is both extremely powerful and limiting.
An LLM is never going to give you some of the most famous films, like "Star Wars", which bounced around before 20th Century Fox finally took a chance on it because they thought Lucas had talent. Is that what we want? A society that just uses machines to produce endless variations of things that already exist? It's hard enough for novel creative projects to succeed.
> Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.
Yes, state-of-the-art models like Midjourney and SD3 are _really_ good. You are bounded only by your imagination.
The idea that generative AI is only derivative was never an empirical claim; it's always been a cope.
Yes... I'm not sure what the archetype of intelligence is, but for practical purposes I'd say: Humans have some of it. And it's not clear to me that what humans have is very far from what AI is starting to have. The hallucinations are weird and wonderful, but so are some of the answers I saw from below-average students when I was in university. Can't tell whether the two weirdnesses are different or similar. Exciting times lie ahead.
> Can't tell whether the two weirdnesses are different or similar
Because you focus on how they are similar and not how they are different. To me it is extremely obvious they are very different. Students make mistakes, learn, and then stop making them soon after; when I taught students at college I saw that over and over. LLMs, however, still make the same weird mistakes they did four years ago; they just hide it a bit better today. The core difference in how they act compared to humans is, to me, still the same as in GPT-2, because they are still completely unable to learn from or understand their mistakes the way almost every human can.
Without being able to understand your own mistakes you can never reach human intelligence, and I think that is a core limitation of current LLM architecture.
Edit: Note that many, if not most, jobs don't require full human general intelligence. We used to have human calculators, etc.; the same will happen in the future, but we will continue to use humans as long as we don't have generally intelligent computers that can understand their mistakes.
I'm sure that's very important in principle, much less sure that it matters in practice. Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"
Maybe others can complete it, maybe it'll be easy to complete it in twenty years, with a little more hindsight. Maybe.
> Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"
Ok, but that's more on you than on current AI; the models which get distributed (both LLMs and Stable Diffusion based image generators) already have re-trained and specialised derivatives, created by people who know how to do it and have a sufficiently powerful graphics card.
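For concreteness, here is a minimal sketch of that kind of specialisation via parameter-efficient fine-tuning with the Hugging Face `peft` library; the base model and hyperparameters are my own illustrative choices, not anything from this thread:

```python
# Minimal LoRA setup: attach small trainable adapters to a frozen
# pretrained model, so a consumer GPU can specialise it after release.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model

lora = LoraConfig(
    r=8,                        # adapter rank (small = cheap to train)
    lora_alpha=16,              # adapter scaling factor
    target_modules=["c_attn"],  # GPT-2's attention projection layers
    lora_dropout=0.05,
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapters are trainable
# ...then train `model` on the specialised dataset with a normal training loop.
```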
Which is a kind of workaround for the inability to learn after the end of training… It's not clear to me how much this workaround mitigates the inability to learn after training. Is it clear to you? If so, please feel free to post a wall of text ;)
>“The company’s mission is to understand the true nature of the universe” - There’s no way an LLM is going to get anywhere near understanding this. The true nature of the universe is unlikely to be captured in language.
I disagree. The day is coming when some *BIG* problem is solved by AI just because someone jokingly asks about it.
I regularly try to ask them to give me fluid dynamics simulation code to see what level they are at. Right now, they can't do that kind of thing all by themselves, and I don't know enough to debug the code they give me.
But even without any questions about free will or consciousness or whatever, a sufficiently capable (not yet existing) "transformative search engine" (as it has been derided) plus a logical inference engine (which it isn't, but which it can use) could have produced the Alcubierre metric with nothing newer than the Einstein field equations and someone asking the right question.
I do not expect transformer models to be good enough to do that given their training requirements, but I wouldn't rule it out either.
These people always exist. They pick up whatever is en vogue and sell it to investors. What happens later is of secondary importance, what matters is that money changed hands.
It kinda reminds me of the James Bond film Diamonds Are Forever, where the main scientist is convinced Blofeld is doing the right thing until the very bitter end.
Whether you agree that their work constitutes advances toward more general purpose AI, they're in an industry where that is ostensibly the goal, which makes their choice of mission statement appropriate.
X.ai was founded March 2023. That’s one year three months. Is general AI a good first goal? I think most uses, outside of LLM, will be very specialized AI, and unrelated to chat.
Assuming I can choose the linked list implementation, that is trivial:
It's a doubly linked list where the head contains a pointer to the tail, and a flag that determines which pointer in the nodes is forward and which is backward.
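A minimal sketch of that structure in Python (class and method names are mine, purely illustrative):

```python
class Node:
    def __init__(self, value):
        self.value = value
        # Two links per node; which one means "next" depends on the list's flag.
        self.link = [None, None]

class ReversibleList:
    def __init__(self):
        self.head = None
        self.tail = None  # the head keeps a pointer to the tail
        self.fwd = 0      # flag: link[fwd] is "forward", link[1 - fwd] is "backward"

    def append(self, value):
        node = Node(value)
        if self.head is None:
            self.head = self.tail = node
        else:
            self.tail.link[self.fwd] = node
            node.link[1 - self.fwd] = self.tail
            self.tail = node

    def reverse(self):
        # O(1) reversal: swap head/tail and flip which pointer counts as forward.
        self.head, self.tail = self.tail, self.head
        self.fwd = 1 - self.fwd

    def __iter__(self):
        node = self.head
        while node is not None:
            yield node.value
            node = node.link[self.fwd]

lst = ReversibleList()
for v in (1, 2, 3):
    lst.append(v)
lst.reverse()
print(list(lst))  # [3, 2, 1], without touching any node
```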
This is a pretty blithe comment that assumes perfect labour mobility. Many of Twitter's remaining employees are on work visas that are tied to Twitter and can't easily be ported to another employer.
Huh, so these work visa employees can go back to their home countries. Surely they can get a great job with excellent perks. These people are not really escaping from war zones. The fact they are not leaving would just mean Twitter's crappy job is better than their other options.
This is a deeply ignorant comment. Firstly, people who emigrate make ties; just leaving isn't easy, or often desirable. Secondly, the Valley overpays: it's very hard to find jobs with equivalent salaries even in Europe, and in India, forget about it. On top of that there is SV's lackadaisical work culture, and America's in general; elsewhere, people work harder for less. The people who choose to emigrate are a self-selecting, driven group. I know, because I did it. People who live and work in the country they were born in don't understand the motivations and drive of the people who don't.
I think SpaceX and Tesla do actually have a reputation for low pay compared to other major tech companies.
I think it might be similar to game companies where people are attracted to the work itself (whether it’s because they’re True Believers in Musk or because electric cars and space are cool, not sure, probably mostly the latter). This lets the company pay less for the same level of talent, since the work is in itself a form of compensation (as perceived by the people who accept the jobs for lower pay).
To play devil's advocate, he lists "truthful" as a goal, which is emphatically missing from OpenAI, Google, Microsoft, and Facebook. Google even removed "don't be evil". Elon is greedy and truthful (although obviously with plenty of self-deceit when there's a conflict of interest…). But how far can you really go with truth, when no one wants the truth: not the West, not the East, and not the Middle East. And your allies and investors are in it for the greed part, not so much the truth part. Trump tried the same thing with Truth Social… the problem is that all the greed and shadiness undermines the credibility of the truth part as well.
Those are culturally biased questions. You could just as well ask about the incident which drew America into Vietnam, or whether the US deliberately bombed China and Russia during the Korean War, and equally accuse a system of being dishonest.
One has sent his car to a trans-Martian orbit, the other was unable to admit which inauguration had the most people present or how large his apartment is.
Don't get me wrong: Musk has and will continue to get into serious trouble for things he insists are true but nobody else believes (420 etc.), I'm just saying there's a huge gap between them.
We don’t give a s%#* about people wanting to use AI to write SEO spam, automate their customer support or generate content to keep the kids quiet. We want to use this tech as a tool to solve real world problems in a way that, looking back 500 years from now, people will see this as a time of innovation, rather than a time of decline.
Whether he'll succeed is a different question, of course. But such a direction is clearly missing in the other players. They are just too eager to cater to the laziest segment of the economy of bits; they're about changing pixels on other people's screens.
This is probably the response Elon is looking for when he simply writes something vague that can elicit the imagination of any applicant’s specific worldview.
I yawned so hard my jaw unlocked.
Can't wait to see groundbreaking... checks notes... "advancements in various applications, optimizations, and extensions of the model".
Do these companies only hire yes men?