
> “People should stop training radiologists now,” Geoffrey Hinton said, adding that it was “just completely obvious” that within five years A.I. would outperform humans in that field.

I'm starting to notice a strong trend where non-domain experts will confidently assert that X field will be replaced by AI.



Hinton is a domain expert in AI. Case in point: Scott Aaronson cites AI revolutionizing protein-folding research, putting many protein-folding academics out of jobs.

I'd argue that AI experts and CS theorists are in a better position to speak about technological change than the domain experts who are trained/selected mainly to practice the status quo.


> putting many protein-folding academics out of jobs.

Has this actually happened? I'd wager it's more like in OP's article: AI as a helper that speeds up research, enabling bigger or different research questions — reliable prediction not just of protein shape, but of protein-protein or protein-mRNA interactions, for example.


Isn't this a common trend in CS? There seems to be constant overhype that drives booms and crashes, even when the promised thing can't actually be built yet. We have billionaires and hundred-millionaires who made their money on VR, blockchain, and Web3. The wealth arrives even when no product does, and that seems like a big issue that's being exploited.



