Exactly right. This reminds me of the Therac-25, the radiation therapy machine whose software bugs delivered massive overdoses, injuring and killing patients.
> If the smart lawnmower decides (emphasis added) that not being turned off
Which is exactly what it shouldn’t be able to do. The core issue is what powers you give to things you don’t understand. Nothing that cannot be understood should be part of safety-critical functionality. I don’t care how much better it is at distinguishing between weather-radar noise and incoming ICBMs, I don’t want it to have nuclear launch capabilities.
When I was an undergrad, we were told the military had evaluated ML for fighter-jet control and concluded that while it outperformed a human pilot on average, it did worse in novel situations because of the lack of training data. And it turns out most safety-critical situations are unpredictable and novel by nature. Wise words from more than a decade ago that still hold true today. People always seem to forget about training-data bias.
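To be clear, this is my own toy sketch, not the military study: the failure mode is that a model can look excellent on inputs like its training data and still be wildly wrong on novel inputs. Here a cubic polynomial is fit to sin(x) using only samples from [0, 1]; it is near-perfect in-distribution and off by orders of magnitude once you leave that range:

```python
# Toy illustration of training-data bias: a model fit only on a narrow
# range of inputs extrapolates badly on novel inputs it never saw.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)                # all data drawn from [0, 1]
y_train = np.sin(x_train) + rng.normal(0, 0.01, 200)

coeffs = np.polyfit(x_train, y_train, deg=3)        # cubic fit: great in-range

x_in, x_novel = 0.5, 6.0                            # familiar vs. novel input
err_in = abs(np.polyval(coeffs, x_in) - np.sin(x_in))
err_novel = abs(np.polyval(coeffs, x_novel) - np.sin(x_novel))

print(f"in-distribution error: {err_in:.4f}")
print(f"novel-input error:     {err_novel:.4f}")
```

The model isn’t “broken” by any in-distribution test you could run; the danger only shows up on the novel case, which is exactly where safety-critical systems get exercised.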