
Care to justify those beliefs, or are we just supposed to trust your intuition? Why exponential and not merely quadratic (or some other polynomial)? How do you even quantify "it"? I'm teasing, somewhat, because I don't actually expect you're able to answer. Yours isn't a reasoned argument, merely religious fervor dressed up in techy garb. Prove me wrong!


Not necessarily 'exponential' (superlinear, at least) in capabilities (yet), but rather in parameters/training data/compute/costs, which sometimes get confused with one another.

[0]: https://ourworldindata.org/grapher/exponential-growth-of-par...

[1]: https://ourworldindata.org/grapher/exponential-growth-of-dat...

[2]: https://epoch.ai/blog/trends-in-training-dataset-sizes

[3]: https://ourworldindata.org/grapher/exponential-growth-of-com...

[4]: https://blog.tebs-lab.com/p/not-exponential-growth


If you read the article, he explains that there are multiple scaling paths now, whereas before it was just parameter scaling. I think it's reasonable to estimate faster progress as a result of that observation.

I like that the HN crowd wants to believe AI is hype (as do I), but it's starting to look like wishful thinking. What is useful to consider is that once we do get AGI, the entirety of society will be upended. Not just programming jobs or other niches, but everything all at once. As such, it's pointless to resist the reality that AGI is a near-term possibility.

It would be wise, from a fulfillment perspective, to make shorter-term plans and get the most out of each day, rather than make 30- to 40-year plans by sacrificing your daily tranquility. We could be entering a very dark era for humanity, from which there is no escape. There is also a small chance that we could get the tech utopia our billionaire overlords constantly harp on about, but I wouldn't bet on it.


>There is also a small chance that we could get the tech utopia our billionaire overlords constantly harp on about, but I wouldn't bet on it.

Mr. Musk's excitement knew no bounds. Like, if they are the ones in control of a near-AGI computer system, we are so screwed.


This outcome is exactly what I fear most. Paul Graham described Altman as the type of individual who would become the chief of a cannibal tribe after he was parachuted onto their island. I call this type the inverse of the effective altruist: the efficient psychopath. This is the type of person that would have first access to an AGI. I don't think I'm being an alarmist when I say that this type of individual having sole access to AGI would likely produce hell on earth for the rest of us. All wrapped up in very altruistic language of "safety" and "flourishing" of course.

Unfortunately, we seem to be on this exact trajectory. If open source AGI does not keep up with the billionaires, we risk sliding into an inescapable hellscape.


Yeah. Altman, Musk. Which Sam was the exploding-slave-head-bracelet guy? Was that Sam Fridman?

Dunno about Zuckerberg. By standing still, he has somehow slid into the saner end of the tech-lord spectrum. Nightmare fuel...

"FOSS"-ish LLMs is like. We need those.


That seems a bit harsh, don't you think? Besides, you're the one making the assertion; you kinda need to do the proving ;)


No, I don't think it's overly harsh. This hype is out of control, and it's important to push back on breathless "exponential" nonsense. That's a term with a well-defined, easily demonstrated mathematical meaning. If you're going to claim growth in some quantity x is exponential, show me that measurements of that quantity fit an exponential function (as opposed to some other function), or provide me a falsifiable theory predicting said fit.
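
For what it's worth, the "show me the fit" part is easy to do. A minimal sketch (Python with numpy/scipy; the yearly measurements here are made up, standing in for whatever quantity is being claimed): fit both candidate functions and compare residuals.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical yearly measurements of some quantity x
    # (made-up numbers, purely for illustration).
    t = np.arange(8, dtype=float)
    y = np.array([1.0, 2.1, 3.9, 8.2, 15.8, 33.0, 61.0, 130.0])

    def exponential(t, a, b):
        return a * np.exp(b * t)

    def quadratic(t, a, b, c):
        return a * t**2 + b * t + c

    for name, f in [("exponential", exponential), ("quadratic", quadratic)]:
        params, _ = curve_fit(f, t, y, maxfev=10000)
        ssr = np.sum((y - f(t, *params)) ** 2)
        print(f"{name}: sum of squared residuals = {ssr:.1f}")

A cleaner version of the same test: plot log(y) against t; exponential data falls on a straight line, polynomial data doesn't. A fair comparison should also penalize the quadratic's extra free parameter (e.g. via AIC).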


I believe they are using 'exponential' as a colloquialism rather than in its strict mathematical sense.

That aside, we would need to see some evidence of AI development being bootstrapped by the previous SOTA model as a key part of building the next one.

For now, it's still human researchers pushing the SOTA models forwards.

When people use the term 'exponential', I feel that what they really mean is 'making something so _good_ that it can be used to make the N+1 iteration _more good_ than the last'.
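
A toy model of the distinction (numbers and dynamics entirely made up, just to illustrate): human-driven progress adds a roughly fixed increment per release cycle, while bootstrapped progress adds an increment proportional to the current capability level, and only the latter compounds.

    # Toy model; made-up dynamics, for illustration only.
    human = 1.0  # capability under fixed human-driven gains
    boot = 1.0   # capability when model N helps build model N+1
    for cycle in range(10):
        human += 0.5         # researchers add a fixed increment
        boot += 0.5 * boot   # gain proportional to current level
    print(human)  # 6.0   -- linear growth
    print(boot)   # ~57.7 -- exponential growth (1.5 ** 10)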


Well, any shift from "not able to do X" to "possibly able to do X sometimes" is at least exponential. 0.0001% is at least exponentially greater than 0%.


I believe we call that a "step change". It's only really two data points at most, so you can't fit a continuous function to it with any confidence.


> It's a bit crazy to think AI capabilities will improve exponentially. I am a very reasonable person, so I just think they'll improve some amount proportional to their current level.

https://www.lesswrong.com/posts/qLe4PPginLZxZg5dP/almost-all...
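
(For anyone missing the joke: "improve some amount proportional to their current level" is exactly the textbook definition of exponential growth.)

    \frac{dC}{dt} = kC \quad\Longrightarrow\quad C(t) = C_0 \, e^{kt}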


>No, I don't think it's overly harsh.

Where's the falsifiable framework that demonstrates your conclusion? Or are we just supposed to trust your intuition?


Why is it “important to push back”? XKCD 386?



