Hacker News

RAND conducted a study that criticized the statistical analysis most autonomous vehicle companies are doing to try to prove their safety:

"Given that current traffic fatalities and injuries are rare events compared with vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate performance prior to releasing them for consumer use. Our findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability."

http://www.rand.org/pubs/research_reports/RR1478.html
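A back-of-the-envelope sketch of the kind of calculation RAND describes (the report's exact statistical model may differ; this is the standard zero-failure Poisson bound, using the roughly 1.09 fatalities per 100 million miles US figure):

```python
import math

# How many failure-free miles are needed to claim, with confidence C,
# that the true failure rate is below some benchmark rate R?
# With zero failures observed over n miles, a Poisson model gives
# P(0 failures) = exp(-R * n), so we need exp(-R * n) <= 1 - C,
# i.e. n >= -ln(1 - C) / R.

def miles_to_demonstrate(benchmark_rate_per_mile, confidence=0.95):
    """Miles of failure-free driving needed to show the failure rate
    is below benchmark_rate_per_mile at the given confidence level."""
    return -math.log(1 - confidence) / benchmark_rate_per_mile

# Approximate US human-driven fatality rate: 1.09 per 100 million miles.
fatality_rate = 1.09 / 100_000_000
print(f"{miles_to_demonstrate(fatality_rate) / 1e6:.0f} million miles")
# -> 275 million miles
```

That is the "hundreds of millions of miles" figure for fatalities alone; demonstrating a rate meaningfully *better* than human drivers pushes the requirement far higher still.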



Google does a lot of driving in simulation, using data captured by real vehicles. They log all the raw sensor data on the real vehicles, so they can do a full playback. They do far more miles in simulation than they do in the real world. That's how they debug, and how they regression-test.

For near-misses, disconnects, poor decisions, and accidents, they replay the event in the simulator using the live data, and work on the software until that won't happen again. Analyzing problems which didn't rise to the level of an accident provides far more situations to analyze. They're not limited to just accidents.
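Google's actual tooling is proprietary, but the log-replay regression-test idea can be sketched roughly like this (all names here are invented for illustration):

```python
# Hypothetical sketch of log-replay regression testing: re-run the
# current planner against recorded sensor logs from past incidents and
# flag any event where the software still makes the bad decision.

def replay_regression_test(logged_events, planner):
    """Return the IDs of logged events where the planner reproduces
    the decision that was previously flagged as bad."""
    failures = []
    for event in logged_events:
        decision = planner(event["sensor_frames"])
        if decision == event["bad_decision"]:  # the behavior we fixed
            failures.append(event["id"])
    return failures

# Toy example: the planner used to choose "proceed" at event 42;
# the fixed planner now yields, so the regression suite passes.
events = [{"id": 42, "sensor_frames": [], "bad_decision": "proceed"}]
fixed_planner = lambda frames: "yield"
print(replay_regression_test(events, fixed_planner))  # -> []
```

The key property is that every near-miss becomes a permanent test case: the fleet only has to encounter a situation once for it to be checked against every future software build.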

See Chris Urmson's talk at SXSW, which has lots of playbacks of situations Google cars have encountered.[1]

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik


RAND's point is true but irrelevant. As Tesla likes to point out, when you roll out to an entire fleet (rather than a few dozen test cars), you rack up hundreds of millions of miles almost overnight. 'Hundreds of millions' may sound like a lot, but Americans drive trillions of miles per year.


One of the points the paper addresses is how much testing would need to be done before one could proclaim, with statistical rigour, that these systems are safer than humans. Tesla says that their systems are _statistically safer_ than human drivers, but there is simply not enough data to support that conclusion. I respectfully suggest that you read the paper.


I did read the RAND paper, when it came out months ago, and I double-checked the traffic-accident-per-mile citation and the power calculation as well. Their point is irrelevant because their fleet-size estimate is ludicrously small, and their statistics are a little dodgy as well: it should be a one-tailed test, since the important question is only whether the Tesla is worse than a human driver. And if one wanted to debate the statistics, this is somewhere that Bayesian decision theory minimizing expected lives lost would be much more appropriate; that approach would roll out self-driving cars well before a two-sided binomial test yielded p<0.05.
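The one-sided test described above can be sketched as follows (my own setup, not RAND's exact model: fatality counts modeled as Poisson, with the null hypothesis that the AV matches the human rate and the alternative that it is worse):

```python
import math

# Approximate US human-driven fatality rate (per mile).
HUMAN_RATE = 1.09 / 100_000_000

def one_sided_p(observed_fatalities, miles):
    """One-sided p-value: P(X >= observed) under H0, where
    X ~ Poisson(HUMAN_RATE * miles). Small values would be evidence
    the AV is *worse* than a human driver."""
    lam = HUMAN_RATE * miles
    # P(X >= k) = 1 - sum_{i < k} e^-lam * lam^i / i!
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i)
              for i in range(observed_fatalities))
    return 1 - cdf

# One fatality in 130 million miles (roughly Tesla's early Autopilot
# figure): the expected count under the human rate is ~1.4, so a
# single fatality is nowhere near evidence the system is worse.
print(round(one_sided_p(1, 130_000_000), 3))
```

Note that failing to reject "the AV is worse" is not the same as demonstrating it is safer, which is exactly the gap the RAND paper is about.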


From the paper: "Therefore, at least for fatalities and injuries, test-driving alone cannot provide sufficient evidence for demonstrating autonomous vehicle safety."

Note that the number of crashes per 100 million miles is a lot bigger than the number of injuries. One would hope a statement about safety would look at all of the data.



