> But other AI dangers sound more like the work of philosophers and science fiction authors.

Your ability to read this sentence right now when we have never met and may not even be on the same continent was once in the domain of science fiction. Don't underestimate technological progress, and specifically, don't underestimate the surprising directions it could go.

Some fantastical AI predictions will come true, most probably will not, and some utterly terrifying ones no one foresaw almost certainly will. The unknown unknowns should worry you, and AI is full of them.



> The unknown unknowns should worry you, and AI is full of them.

Sure, but where should that rank in my worries relative to 'designer babies' and 'rise of authoritarian states as economic powerhouses' and 'corporations that can commit crimes with impunity' and 'rising medical bills' and 'widening gap between rich and poor' and 'far right extremism' and 'water shortages' and 'economic crisis wipes out my savings' and 'cyber warfare targeting vital infrastructure' and 'rising obesity' and 'voter suppression' and the many other things a person could worry about?


In my view, the only wrong opinions on where to rank this are "at the very top" and "at the very bottom or not at all". We will only know the correct answer in hindsight, so the sensible position is to just start funding some legitimate AI safety research.


A whole lot of the arguments about why we shouldn't be concerned boil down to "I cannot conceive of a risk until that risk has materialized." That position is impossible to argue against, really.



