
That's akin to asking "what sort of human catastrophe do you think would happen?" A whole host of things are possible in principle on a long enough timeline. If you constrain the timeline, you'll get more plausible answers, but with potentially wide error bars, because we don't really understand what "intelligence" is; i.e., current research could be pretty close to it, or it could be pretty far away.

The fact that we don't know how close we are is itself dangerous. It's like doing gene editing on pathogens without a proper understanding of germ theory and biosafety. That's where we are with AI.



