
OpenAI (or rather, Sam Altman) has believed they'd crack AGI within the year, every year so far.


Just like autopilot/full self driving on Teslas


These current LLM companies will be unable to "crack AGI".

Some independent researcher will, and it will require ridiculously less compute than the embarrassing things these LLM companies produce.

Unfortunately, these current LLM companies will still be in the best position to productize the discovery: liquid capital, relationships with more capital, and access to hardware.

At that time I hope these resources will be nationalized and handed to the public.


Could've sworn they told us AGI was here 2 years ago already.


Asking people to define "general intelligence" in a way that excludes capabilities we have had for years always reveals that, held to their own standards, they would not count themselves as capable of general intelligence.

People expect general intelligence to mean an oracle of truth, yet they themselves make mistakes. They expect general intelligence to mean perfect memory, yet they themselves forget. They expect general intelligence to be infinitely moldable and adaptable, yet they themselves are unable to break habits or reason about psychedelic experiences.

If we do not currently have something that can be called AGI, we do not have something that can be called GI.



