
I'm asking it about how to make turbine blades for a high bypass turbofan engine and it's giving very good answers, including math and some very esoteric material science knowledge. Way past the point where the knowledge can be easily checked for hallucinations without digging into literature including journal papers and using the math to build some simulations.

I don't even have to prompt it much, I just keep saying "keep going" and it gets deeper and deeper. Opus has completely run off the rails in comparison. I can't wait till this model hits general availability.



You mean it's giving very good-sounding answers.


That's what I've observed. I gave it a task for a PoC on something I've been thinking about for a while, and its answer, while syntactically correct, is entirely useless (in the literal sense) because it ignores parts of the task.


You know, at some point we won't be able to benchmark them, due to the sheer complexity of the tests required. E.g. if you are testing a model on maths, the problem will have to be extremely difficult to even count as a hurdle for the LLM; it would then take you a day to work out the solution yourself.

See where this is going? When humans are no longer on the same spectrum as LLMs, that's probably the definition of AGI.


There is a huge class of problems that's extremely difficult to solve but very easy to check.
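A classic example of this asymmetry (a minimal sketch with toy-sized numbers; real instances would use hundreds of digits): factoring a semiprime is believed to be hard, while checking a proposed factorization is a single multiplication.

```python
# Finding p and q from n alone is believed to be hard for large n
# (this is what RSA relies on); checking a claimed factorization
# is one multiplication and two range checks.
def check_factorization(n, p, q):
    return 1 < p < n and 1 < q < n and p * q == n

p, q = 1_000_003, 1_000_033   # toy-sized factors for illustration
n = p * q

print(check_factorization(n, p, q))        # True
print(check_factorization(n, p + 2, q))    # False
```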


Humans supervising models solving difficult problems is the beginning of an AGI society.


Prove it...


*Assuming you don't mean mathematically prove.*

I can't test the bot right now, because it seems to have been hugged to death. But there are quite a lot of simple tests LLMs fail: basically anything where the answer is both precise/discrete and unlikely to be directly in the training set. There are lots of examples in this [1] post, which oddly enough ended up flagged. In fact, this guy [2] is offering $10k to anybody who can create a prompt that gets an LLM to solve a simple replacement problem he's found they fail at.

They also tend to be incapable of playing even basic-level chess, despite there undoubtedly being millions of pages of material on the topic in their training data. If you do play, take the game out of theory ASAP (1. a3!? 2. a4!!) so the bot can't just recite 30 moves of the Ruy Lopez or whatever.

[1] - https://news.ycombinator.com/item?id=39959589

[2] - https://twitter.com/VictorTaelin/status/1776677635491344744


Multiple people found prompts that get an LLM to solve the problem, and the $10k has been awarded: https://twitter.com/VictorTaelin/status/1777049193489572064


The entire problem with LLMs is that you don't want to prompt them into solving specific problems. The reason why instruction finetuning is so popular is that it makes it easier to just write whatever you want. Text completion on the other hand requires you to conform to the style of the previously written text.

In a sense, LLMs need an affordance model so that they can estimate the difficulty of a task and automatically plan a longer sequence of iterations according to its perceived difficulty.


Have you ever heard the term NP-complete?


Yeah, I mean, that's the joke.

The comment I replied to, "a huge class of problems that's extremely difficult to solve but very easy to check", sounded to me like an assertion that P != NP, which everyone takes for granted but which actually hasn't been proved. If, contrary to all expectations, P = NP, then that huge class of problems wouldn't exist, right? Since they'd be in P, they'd actually be easy to solve as well.


We could end up with a non-constructive proof of P=NP. That is, a proof that the classes are equal but no algorithm to convert a problem in one into the other (or construct a solution of one into a solution of the other).


Me: 478700000000+99000000+580000+7000?

GPT4: 478799650000

Me: Well?

GPT4: Apologies for the confusion. The sum of 478700000000, 99000000, 580000 and 7000 is 478799058000.

I will be patient.

The answer is 478799587000 by the way. You just put the digits side by side.
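The check is easy to do mechanically: since the four addends occupy non-overlapping digit positions, the exact sum really is just the digits laid side by side (a quick sanity check, nothing model-specific):

```python
# Each term occupies its own block of digit positions, so the sum is
# the digits placed side by side: 4787 | 99 | 58 | 7 | 000.
terms = [478_700_000_000, 99_000_000, 580_000, 7_000]
print(sum(terms))  # 478799587000
```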


I recently tried a Fermi estimation problem on a bunch of LLMs and they all failed spectacularly. It was crossing too many orders of magnitude, all the zeroes muddled them up.

E.g.: the right way to work with numbers like a “trillion trillion” is to concentrate on the powers of ten, not to write the number out in full.
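That approach can be sketched in a couple of lines (illustrative only, not a fix for the models):

```python
from math import log10

# Fermi arithmetic in exponent space: multiplying quantities means
# adding powers of ten, so the zeroes never get muddled.
trillion = 12                            # 10**12
trillion_trillion = trillion + trillion  # "a trillion trillion"
print(trillion_trillion)                 # 24, i.e. 10**24

# sanity check against doing it the long way
assert round(log10(10**12 * 10**12)) == trillion_trillion
```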


Predicting the next character alone cannot achieve this kind of compression: the probability distribution obtained from training is tied to the corpus, and multi-scale compression and alignment cannot be fully learned through backpropagation in this kind of model.



You know, people often complain about goal shifting in AI. We hit some target that was supposed to be AI (or even AGI), kind of go "meh", and then move to a new goal. But the problem isn't goal shifting; the problem is that the goals were set at a level that had nothing whatsoever to do with where we "really" want to go, precisely in order to make them achievable. So it's no surprise that when we hit these neutered goals we aren't then where we hoped to actually be!

So here, with your example. Basic software programs can multiply million digit numbers near instantly with absolutely no problem. This would take a human years of dedicated effort to solve. Solving work, of any sort, that's difficult for a human has absolutely nothing to do with AGI. If we think about what we "really" mean by AGI, I think it's the exact opposite even. AGI will instead involve computers doing what's relatively easy for humans.
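To make the contrast concrete (a throwaway sketch; any language with arbitrary-precision integers would do), multiplying two million-digit numbers is essentially instant for a machine:

```python
import random
import time

# Two random million-digit integers: a multiplication that would take
# a human years of dedicated effort, and a computer a moment.
a = random.randrange(10**999_999, 10**1_000_000)
b = random.randrange(10**999_999, 10**1_000_000)

t0 = time.perf_counter()
c = a * b
elapsed = time.perf_counter() - t0

# The product of two million-digit numbers has ~two million digits.
assert 10**1_999_998 <= c < 10**2_000_000
print(f"multiplied in {elapsed:.3f}s")
```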

Go back not that long ago in our past and we were glorified monkeys. Now we're glorified monkeys with nukes and who've landed on the Moon! The point of this is that if you go back in time we basically knew nothing. State of the art technology was 'whack it with stick!', communication was limited to various grunts, and our collective knowledge was very limited, and many assumptions of fact were simply completely wrong.

Now imagine training an LLM on the state of human knowledge from this time, perhaps alongside a primitive sensory feed of the world. AGI would be able to take this and not only get to where we are today, but then go well beyond it. And this should all be able to happen at an exceptionally rapid rate, given historic human knowledge transfer and storage rates over time has always been some number really close to zero. AGI not only would not suffer such problems but would have perfect memory, orders of magnitude greater 'conscious' raw computational ability (as even a basic phone today has), and so on.

---

Is this goal achievable? No, not anytime in the foreseeable future, if ever. But people don't want this. They want to believe AGI is not only possible, but might even happen in their lifetime. But I think if we objectively think about what we "really" want to see, it's clear that it isn't coming anytime soon. Instead we're doomed to just goal shift our way endlessly towards creating what may one day be a really good natural language search engine. And hey, that's a heck of an accomplishment that will have immense utility, but it's nowhere near the goal that we "really" want.


There are different shades of AGI, but we don’t know if they will happen all at once or not.

For example, if an AI can replace the average white collar worker and therefore cause massive economic disruption, that would be a shade of AGI.

Another shade of AGI would be an AI that can effectively do research level mathematics and theoretical physics and is therefore capable of very high-level logical reasoning.

We don’t know if shades A and B will happen at the same time, or if there will be a delay between developing one and the other.

AGI doesn’t imply simulation of a human mind or possessing all of human capabilities. It simply refers to an entity that possesses General Intelligence on par with a human. If it can prove the Riemann hypothesis but it can’t play the cello, it’s still an AGI.

One notable shade of AGI is the singularity: an AI that can create new AIs better than humans can. By the time we reach shades A and B, a singularity AGI is probably quite close, if it hasn’t arrived already. Note that a singularity AGI doesn’t require simulation of the human mind either. It’s entirely possible that a cello-playing AI comes chronologically after a self-improving AI.


The term "AGI" has been loosely used for so many years that it doesn't mean anything very specific. The meaning of words derives from their usage.

To me Shane Legg's (DeepMind) definition of AGI meaning human level across full spectrum of abilities makes sense.

Being human or super-human level at a small number of specialized things like math is the definition of narrow AI - the opposite of general/broad AI.

As long as the only form of AI we have is pre-trained transformers, then any notion of rapid self-improvement is not possible (the model can't just commandeer $1B of compute for a 3-month self-improvement run!). Self-improvement would only seem possible if we have an AI that is algorithmically limited and does not depend on slow/expensive pre-training.


What if it sleeps for 8 hours every 16 hours and during that sleep period, it updates its weights with whatever knowledge it learned that day? Then it doesn't need $1B of compute every 3 months, it would use the $1B of compute for 8 hours every day. Now extrapolate the compute required for this into the future and the costs will come down. I don't know where I was going with that...


These current LLMs are purely pre-trained - there is no way to do incremental learning (other than a small amount of fine-tuning) without disrupting what they were pre-trained on. In any case, even if someone solves incremental learning, it is just a way of growing the dataset, which is happening anyway, and in the much more controlled/curated way needed to see much benefit.

There is very much a recipe (10% of this, 20% of that, curriculum learning, mix of modalities, etc.) for the kind of curated dataset creation and training schedule needed to advance model capabilities. There have even been some recent signs of "inverse scaling", where a smaller model performs better in some areas than a larger one due to getting this mix wrong. Throwing more random data at them isn't what is needed.

I assume we will eventually move beyond pre-trained transformers to better architectures where maybe architectural advances and learning algorithms do have more potential for AI-designed improvement, but it seems the best role for AI currently is synthetic data generation, and developer tools.


At one time it was thought that software that could beat a human at chess would be, in your lingo, "a shade of AGI." And for the same reason you're listing your milestones - because it sounded extremely difficult and complex. Of course now we realize that was quite silly. You can develop software that can crush even the strongest humans through relatively simple algorithmic processes.

And I think this is the trap we need to avoid falling into. Complexity and intelligence are not inherently linked in any way. Primitive humans did not solve complex problems, yet obviously were highly intelligent. And so, to me, the great milestones are not some complex problem or another, but instead achieving success in things that have no clear path towards them. For instance, many (if not most) primitive tribes today don't even have the concept of numbers. Instead they rely on, if anything, broad concepts like a few, a lot, and more than a lot.

Think about what an unprecedented and giant leap it is to go from that to actually quantifying things and imagining relationships and operations. Somebody who first tried this would initially just look like a fool. Yes, here is one rock, and here is another. Yes, you have "two" now. So what? That's a leap with no clear guidance or path towards it. All of the problems that mathematics solves don't even exist until you discover it! So you're left with something that is not just a recombination or stair-step from where you currently are, but something entirely outside what you know. That we are not only capable of such achievements, but achieve them repeatedly, is, to me, perhaps the purest benchmark for general intelligence.

So if we were actually interested in pursuing AGI, it would seem that such achievements would also be dramatically easier (and cheaper) to test for. Because you need not train on petabytes of data, because the quantifiable knowledge of these peoples is nowhere even remotely close to that. And the goal is to create systems that get from that extremely limited domain of input, to what comes next, without expressly being directed to do so.


I agree that general, open ended problem solving is a necessary condition for General intelligence. However I differ in that I believe that such open ended problem solving can be demonstrated via current chat interfaces involving asking questions with text and images.

It’s hard for people to define AGI because Earth only has one generally intelligent family: Homo. So there is a tendency to identify Human intelligence or capabilities with General intelligence.

Imagine if dolphins were much more intelligent and could write research-level mathematics papers on par with humans, communicating with clicks. Even though dolphins can’t play the cello or do origami, lacking the requisite digits, UCLA still has a dolphin tank to house some of their mathematics professors, who work hand-in-flipper with their human counterparts. That’s General intelligence.

Artificial General Intelligence is the same but with a computer instead of a dolphin.



