> One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash
What is the employee expecting? That Bard should not be released unless it is the epitome of perfection? With employees like these, you don't really need enemies.
Already these AI searches provide potentially fatal advice: suggesting 40+ mile hiking days in the desert, misstating the distances between water sources and campsites, this example about landing a plane, inaccurate dosages of substances, etc. Essentially, any time there's a number that can be wrong with serious consequences, these services will bullshit the number, with disastrous results. And there's no 3rd party to hide behind now. There's no 'just a link' - a trillion-dollar corporation is directly responsible for these results.
Sort of like manually driven cars vs. self-driving cars: removing the millions of small 3rd parties could change who is responsible for the outcome. Rushing these out could be setting the stage for the next tobacco / opioid / talc lawsuits.