
> One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash

What is the employee expecting? That Bard should not be released unless it is the epitome of perfection? With employees like these, you don't really need enemies.



Previously, Google displayed the relevant 3rd party links (or paid links), and Google didn't take responsibility for the content.

This clear boundary was blurred once Google started auto-summarizing results with some sort of knowledge panel, which was frequently wrong. Google's 1st party data carries more weight & liability than a link to a 3rd party site. https://www.hollywoodreporter.com/business/digital/10-months... https://www.vox.com/recode/22550555/google-search-knowledge-... https://www.executiveprivacy.com/resources/when-google-gets-...

These AI searches already provide fatal advice: suggesting 40+ mile hiking days in the desert, lying about the distances between water sources & campsites, this example about landing a plane, inaccurate dosages of substances, etc. Essentially, any time there's a number that can be wrong with a serious outcome, these services will bullshit the number, with disastrous consequences. And there's no 3rd party to hide behind now. There's no 'just a link' - a trillion-dollar corporation is directly responsible for these results.

Sort of like manually-driven cars vs self-driving cars: removing the millions of small 3rd parties could change who is responsible for the outcome. Rushing these out could be setting the stage for the next tobacco / opioid / talc lawsuits.


They passed the algorithm test though.



