Hacker News
Ask HN: AGI and Day-to-Day Consequences
3 points by helij on Nov 18, 2023 | 2 comments
Amid all the frenzied chatter around OpenAI over the last day, we can pick up some rumours that AGI is already here. I have a hard time framing and visualizing what that would mean for people day-to-day.

Any real-world examples of what that would mean for an average person?



AGI should just mean general purpose AI as opposed to narrow AI. We know exactly what that is like. It's GPT-4, or whatever more powerful version follows it. There will be multiple levels and types.

People have sloppily associated AGI with ASI or artificial life. All of these things are different.

When AGI reaches the level of a person, in intelligence and other human qualities, what will that be like? We have billions of examples of that. Look in the mirror to start.

The thing we need to be concerned about pretty soon is the truly lifelike AI that runs at hyperspeed and is hyper-connected. "Thinking" X times faster than any human, instantly exchanging info with a swarm of peers.

We are still a bit far off from that, but we really need to avoid building digital intelligent life on AI hardware much faster than what we have now. And we definitely don't want to create something truly lifelike, with hyperspeed intelligence, and then enslave it. That would be incredibly stupid.


> The thing we need to be concerned about pretty soon is the truly lifelike AI that runs at hyperspeed and is hyper-connected. "Thinking" X times faster than any human, instantly exchanging info with a swarm of peers.

I've gone from doomerish on AGI safety to slightly more optimistic for two reasons.

The first is that there's only so much you can do with that hyperspeed. Most intellectual progress requires interaction with the physical world, which imposes time constraints on a human scale.

The second reason is that I believe AGI will asymptote to the level of intelligence present in its corpus of training data. AGI isn't going to become much better at writing or programming than the best humans, because it doesn't have examples to learn from. Unlike with board games, this kind of data can't be easily generated through self-play. So AGI will hit a wall at a certain level of competence within the realm of human intelligence. After hitting that wall, progress will become more incremental. Hopefully.



