
Just because it's newly created doesn't mean that the structure of the language and the concepts it represents are actually new.

It's clear that whatever tests he writes cover well-established and well-understood concepts.

This is where I believe people are missing the point. GPT-4 is not a general intelligence. It is a highly overfit model, but it's overfit to literally every piece of human knowledge.

Language is humanity's way of modelling real-world concepts. So GPT is able to leverage the relationships our language creates with real-world concepts. It has just learned all language up until today.

It's an incredible knowledge-retrieval machine. It can even mimic very well how our language is used to conduct reasoning.

It can't do this efficiently, nor can it actually stumble upon a new insight, because it's not being exposed to the real world in real time.

So, this professor's 'new' test is not really new. It's just a test whose content has fundamentally already been modelled.



Watching the goalposts shift in real time is very entertaining. First it's not generally intelligent because it can't tackle new things; then, when it obviously does, it's not generally intelligent because it's overfit.

You've managed to essentially say nothing of substance. So it passes because the structure and concepts are similar. Okay. Are students preparing for tests working with alien concepts and structures, then? Because I'm failing to see the big difference here.

A model isn't overfit because you've declared it so, and unless GPT-4 is several trillion parameters, general overfitting is severely unlikely. But I doubt you care about any of that. Can you devise a test that properly assesses what you're asserting?


I have no idea what is shifting in real time. I formed this opinion of GPT-4 by running it through several benchmarks and making adjustments to them, so my view is empirical, and it was formed a week after it came out.

Your post says nothing of substance because it offers no substantial rebuttal and just attacks a position with a hand-waved argument, without any clear understanding of how parameters in fact impact a model's outputs.

You also completely missed my point.


Oh, several benchmarks? Wow. Please do tell what these benchmarks were and how you evaluated them. It should surely be easy enough to replicate.


You seem to have a serious attitude problem in your responses, so this is my last one.

It's proprietary company evaluation data, and it's for a specific domain related to software development, a domain in which OpenAI is actively attempting to improve performance.

Anyway, enjoy your evening. If you want to have a reasonable discussion without being unpleasant, I'd be happy to discuss further.


How does that empirically prove general overfitting?

People study from books or from teachers or other sources of knowledge, internalize it, and relate it to other concepts as well, and no one considers that to be a form of overfitting.

You basically said what amounts to "it overfits to concepts", which is honestly quite ridiculous. Not only is that a standard humans would fail; it's also not what "overfit" is generally taken to mean.


I agree with the parent post. I can get ChatGPT to solve a basic word problem, but if I add a small wrinkle to it that a human would understand, it fails hard. "Overfitted" seems apt.

Yeah, it's amazing, but it's not AGI.


Stop confusing ChatGPT with GPT-4. It's the most common rookie mistake. GPT-4 is way stronger at 'solving problems' than ChatGPT. I was baiting ChatGPT with basic logic or conversion problems; I stopped doing that with GPT-4, since it would take too much effort to beat it.


Possibly a rookie mistake?

https://chat.openai.com/chat

What is this? Is this ChatGPT, or GPT-4? I'm talking about my experiences last week with this URL.


Are you paying $20/month and selecting GPT-4 from the drop-down menu?


It's trivially easy even with GPT-4.

> Please respond with the number of e's in this sentence.

> There are 8 "e" characters in the sentence "Please respond with the number of e's in this sentence."


Dealing with words on the level of their constituent letters is a known weakness of OpenAI’s current GPT models, due to the kind of input and output encoding they use. The encoding also makes working with numbers represented as strings of digits less straightforward than it might otherwise be.

In the same way that GPT-4 is better at these things than GPT-3.5, future GPT models will likely be even better, even if only by the sheer brute force of their larger neural networks, more compute, and additional training data.

(To see an example of the encoding, you can enter some text at https://platform.openai.com/tokenizer. The input is presented to GPT as a series of integers, one for each colored block.)
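If you'd rather poke at the encoding locally, something like this rough sketch works with OpenAI's open-source tiktoken package (assuming you have it installed):

    import tiktoken

    # cl100k_base, the encoding GPT-4 uses
    enc = tiktoken.encoding_for_model("gpt-4")

    text = "Please respond with the number of e's in this sentence."
    tokens = enc.encode(text)
    print(tokens)                             # a list of integers, roughly one per word or subword
    print([enc.decode([t]) for t in tokens])  # the text fragment behind each integer

    # A common word like "sentence" is usually a single integer,
    # so the model never directly "sees" its individual letters.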


Almost like it has a kind of dyslexia when it comes to "looking inside" tokens.

If you instead ask it to write a Python program to do the same job, it will do it perfectly.
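For example, the kind of program it tends to produce is just this (a sketch; the exact code it writes will vary):

    # count lowercase 'e' characters, the task GPT-4 fumbled above
    sentence = "Please respond with the number of e's in this sentence."
    print(sentence.count("e"))  # prints 9, not the 8 the model answered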


First, GPT-4.

Second, you're going to have to give specific examples of what a small wrinkle is. I've seen "can't solve a variation of a common word problem", but that's a failure mode of people too. And if you reword the question so it doesn't bias common priors, or even tell it which assumption it's getting wrong, it often gets it right.


OK, write your basic word problem, including its small wrinkle, so that the parent commenter can be entertained when GPT-5 solves it.


> Watching the goalposts shift in real time is very entertaining. First it's not generally intelligent because it can't tackle new things; then, when it obviously does, it's not generally intelligent because it's overfit.

This wasn't new, in the same way that making any test about Romeo and Juliet isn't new. You're still going to the same sources for the answer. It's the exact same goalpost.


Ah, the good old "it's not me, it's the test" argument. These systems are not just next-token predictors: they learn complex algorithms and can perform general computation. It just so happens that by asking them to next-token predict the internet, they learn a bunch of smart ways to compress everything, potentially in a way similar to how we might use a general concept to avoid memorizing a lookup table. Please have a look at https://arxiv.org/pdf/2211.15661 and https://mobile.twitter.com/DimitrisPapail/status/16208344092.... We don't understand everything that's going on yet, but it would be foolish to discount anything at this stage, or to state much of anything with any degree of confidence (and that stands for both sides of the opinion spectrum). Also, these systems aren't exposed to the real world today, but that will be untrue very soon: https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal...


I never said:

- "it's not me, it's the test"

- "These systems are not just next token predictors"

None of the papers or blog posts you've shared offer any points that actually rebut what I'm saying.

And yes, we will eventually have them work in real time. Can't wait.


Don't students prepare for tests by studying past instances of them?

"Teaching the test" (aka overfitting of human students at the expense of "real" learning) is a common complaint about our current education system.

Do you think it doesn't "deserve" an A here?


Did I say that?

The OP's post was saying it's somehow able to solve something new. That shows a severe misunderstanding of how language modelling works.


I think the hallucinations show that it's not simply overfit to all of human knowledge. Hallucinating requires a certain amount of generalization and information overlap.


I'm working in a related area and I'm rather curious about this point. In what way is GPT-4 overfit? Does "overfit" in this context mean the conventional sense (validation loss went up with additional training) or something special?
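(By "the conventional" sense I mean roughly this toy check, with made-up loss numbers:)

    # toy illustration of the conventional overfitting signal
    # (loss values are invented for the example)
    train_loss = [2.1, 1.6, 1.2, 0.9, 0.7, 0.5]  # keeps falling
    val_loss   = [2.2, 1.8, 1.5, 1.4, 1.5, 1.7]  # bottoms out, then rises

    best = min(range(len(val_loss)), key=val_loss.__getitem__)
    if best < len(val_loss) - 1:
        print(f"validation loss bottomed out at step {best}; "
              f"training past that is the classic overfitting regime")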


More specifically, validation loss is irrelevant when you can't even sample out of distribution anymore.


This is an unusual comment, to say the least. It suggests that unless GPT-4 can somehow independently derive facts entirely on its own, it's nothing more than an overfit model, almost as if to say that it's basically just a sophisticated search engine on top of a glorified Wikipedia.

Of course that's not actually true; people don't independently invent knowledge either. People study from books or from teachers or other sources of knowledge, internalize it, and relate it to other concepts as well, and no one considers that to be a form of overfitting.


What would a "new" test look like then?


I would certainly be peeved if I showed up to a midterm that asked questions outside of existing human knowledge.


"I didn't make it into the university I wanted because I didn't invent enough new mathematics during the entry exams."


Given that OpenAI were THEMSELVES surprised by how even GPT-3 ended up, it’s always funny to see HN know-it-alls pipe up with all the answers.

These sorts of poorly formed faux-philosophical arguments against LLMs have become the new domain of people that confuse blindly acting skeptical with actual intelligence.

Ironic.

This latest generation of AI quite rightfully raises questions and challenges assumptions about what it means to be intelligent. It quite rightfully challenges our assumptions about what can be accomplished with language. And, thank God, it quite rightfully challenges assumptions many have made about what sets humanity apart from everything else.


> poorly formed faux-philosophical arguments against LLMs

There's a misunderstanding here. The post you're replying to is not an argument against LLMs. It's an argument about what LLMs can and cannot do, what their fundamental capabilities are, and so forth.

It's very clear that if you need a system to provide answers based on a substantial body of human writing, LLMs are totally awesome. But that doesn't mean, in and of itself, that they can X or that they can Y.


> Given that OpenAI were THEMSELVES surprised by how even GPT-3 ended up,

Yeah, and they have zero incentive to overhype their takes. OpenAI has already slanted already-impressive data in the past to make it more "hype-building" for the general public, when a more scientific reading is "this is really cool; here's where it still fails". I'm very confident the same thing is happening here.



