Hacker News

yeah, true. The standard conversation about the AI singularity pretty much hand-waves the resource costs away ("the AI will be able to design a more efficient AI that uses fewer resources!"). But we are definitely not seeing that happen.


Compare also https://slatestarcodex.com/2018/11/26/is-science-slowing-dow...

The blog post is about how we require ever more scientists (and other resources) just to sustain a steady stream of technological progress.

It would be funny if things balanced out just so: superhuman AI turns out to be both possible and required merely to keep steady linear progress going.

No explosion, no stagnation, just a continuation of previous trends, but with superhuman effort required to maintain them.


I think that would actually be the best outcome: we get AIs that are useful for helping science progress, but not so powerful that they take over.

Though there is a part of me that wants to live in The Culture so I'm hoping for more than this ;)


I think that's more to do with how we perceive competence as static. For all the benefits the education system touts, where it matters, outcomes still come down to talent.

But for the same reasons that we can't train an average Joe into a Feynman, what makes you think we have the formal models to do it in AI?


> But for the same reasons that we can't train an average Joe into a Feynman, what makes you think we have the formal models to do it in AI?

To quote a comment from elsewhere https://news.ycombinator.com/item?id=42491536

---

Yes, we can imagine that there's an upper limit to how smart a single system can be. Even suppose that this limit is pretty close to what humans can achieve.

But: you can still run more of these systems in parallel, and you can still try to increase processing speeds.

Signals in the human brain travel, at best, roughly at the speed of sound. Electronic signals in computers play in the same league as the speed of light.

Human IO is optimised for surviving in the wild. We are really bad at taking in symbolic information (compared to a computer), and our memory is also really bad at retaining it. A computer system that's only as smart as a human, but has instant access to all the information on the Internet, to a calculator, and to writing and running code, can already effectively act much smarter than a human.
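The signal-speed gap above is worth making concrete with a back-of-the-envelope calculation. The figures below are rough assumed values (fastest myelinated axons conduct at around 120 m/s, electrical signals propagate at roughly two-thirds the speed of light, and 0.15 m is taken as the span of a brain or a large circuit board), not measurements:

```python
# Rough latency comparison: biological vs. electronic signal propagation.
# All numbers are order-of-magnitude assumptions for illustration.
brain_signal_speed = 1.2e2   # m/s, fastest myelinated axons (assumed)
chip_signal_speed = 2.0e8    # m/s, ~2/3 the speed of light in a conductor (assumed)
distance = 0.15              # m, rough span of a human brain / a large board (assumed)

brain_latency = distance / brain_signal_speed   # seconds to cross the brain
chip_latency = distance / chip_signal_speed     # seconds to cross the board

print(f"brain: {brain_latency * 1e3:.2f} ms")   # about 1.25 ms
print(f"chip:  {chip_latency * 1e9:.2f} ns")    # about 0.75 ns
print(f"ratio: {brain_latency / chip_latency:.1e}")
```

Even with these crude numbers, the raw propagation latency differs by a factor of over a million, which is the commenter's point about the "same league as the speed of light".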



