Care to justify those beliefs, or are we just supposed to trust your intuition? Why exponential and not merely quadratic (or some other polynomial)? How do you even quantify "it"? I'm teasing, somewhat, because I don't actually expect you're able to answer. Yours aren't reasoned arguments, merely religious fervor dressed up in techy garb. Prove me wrong!
Not necessarily 'exponential' in capabilities (yet); more like superlinear there. The exponential growth is in parameters/training data/compute/cost, and the one sometimes gets confused for the other.
If you read the article, he explains that there are multiple scaling paths now, whereas before it was just parameter scaling. I think it's reasonable to expect faster progress as a result of that observation.
I like that the HN crowd wants to believe AI is hype (as do I), but it's starting to look like wishful thinking. What is worth considering is that once we do get AGI, the entirety of society will be upended. Not just programming jobs or other niches, but everything all at once. As such, it's pointless to resist the reality that AGI is a near-term possibility.
It would be wise, from a fulfillment perspective, to make shorter-term plans and get the most out of each day, rather than sacrifice your daily tranquility for 30-40 year plans. We could be entering a very dark era for humanity, from which there is no escape. There is also a small chance that we get the tech utopia our billionaire overlords constantly harp on about, but I wouldn't bet on it.
This outcome is exactly what I fear most. Paul Graham described Altman as the type of individual who, parachuted onto an island of cannibals, would end up chief of the tribe. I call this type the inverse of the effective altruist: the efficient psychopath. This is the type of person who would have first access to an AGI. I don't think I'm being alarmist when I say that this type of individual having sole access to AGI would likely produce hell on earth for the rest of us. All wrapped up in very altruistic language of "safety" and "flourishing", of course.
Unfortunately, we seem to be on this exact trajectory. If open source AGI does not keep up with the billionaires, we risk sliding into an inescapable hellscape.
No, I don't think it's overly harsh. This hype is out of control, and it's important to push back on breathless "exponential" nonsense. That's a term with a well-defined, easily demonstrated mathematical meaning. If you're going to claim growth in some quantity x is exponential, show me that measurements of that quantity fit an exponential function (as opposed to some other function), or provide a falsifiable theory predicting that fit.
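To make that concrete, here's roughly the test I have in mind (a minimal sketch in Python; the yearly "scores" are invented purely for illustration): fit both an exponential and a power law to the same measurements and compare the residuals.

    # Sketch: does a time series look exponential, or merely polynomial?
    # The data below is fabricated for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    def exponential(t, a, b):
        return a * np.exp(b * t)

    def power_law(t, a, k):
        return a * np.power(t, k)

    t = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # e.g. years
    y = np.array([1.1, 2.3, 4.2, 8.9, 17.5, 36.0])  # hypothetical capability metric

    exp_params, _ = curve_fit(exponential, t, y, p0=(1.0, 0.5))
    pow_params, _ = curve_fit(power_law, t, y, p0=(1.0, 2.0))

    def rss(model, params):
        """Residual sum of squares: smaller means a better fit."""
        return float(np.sum((y - model(t, *params)) ** 2))

    print("exponential RSS:", rss(exponential, exp_params))
    print("power-law RSS:  ", rss(power_law, pow_params))

Six points with no error bars proves nothing, of course; a real claim would need out-of-sample prediction, or at least AIC/BIC-style model comparison. But that's the bar: a falsifiable fit, not vibes.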
I believe they are using 'exponential' as a colloquialism rather than in its strict mathematical sense.
That aside, we would need to see some evidence of AI development being bootstrapped by the previous SOTA model as a key part of building the next one.
For now, it's still human researchers pushing SOTA models forward.
When people use the term 'exponential', I feel that what they really mean is 'making something so _good_ that it can be used to make the N+1 iteration _more good_ than the last'.
Well, any shift from "not able to do X" to "possibly able to do X sometimes" is at least exponential: 0.0001% is infinitely greater than 0%, never mind exponentially.
> It's a bit crazy to think AI capabilities will improve exponentially. I am a very reasonable person, so I just think they'll improve some amount proportional to their current level.
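(In case the joke doesn't land: improvement at a rate proportional to the current level is the textbook definition of exponential growth, since dx/dt = k·x solves to x(t) = x(0)·e^(kt).)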