In the world of robocars, from the start people have claimed that the cars will do better than humans when it comes to safety. Because over 90% of accidents are caused by human mistakes, the argument goes, robots that don't make those mistakes will get into fewer accidents. This has always been true almost by definition: broadly, nobody wants to release a car to the world until it has reached at least that level of safety, and probably more.
Waymo threw down a gauntlet last fall when they released a document showing they had done that, and better, at least in the easy environs of suburban Phoenix. Today they released a study suggesting their cars are better at avoiding being the “other” car in an accident, the one who is not at fault. They believe Waymo cars will do better at getting out of the way when somebody else is about to cause an accident.
To test this, they extracted a database of every fatal accident in recent history inside the territory near Chandler, Arizona where they currently operate. They wanted territory they are able to simulate realistically, and where results can be compared to their existing record there.
They found 72 crashes since 2008 and they read all the details in the police reports to try to recreate the accident in their simulator. Then they had the Waymo driving system take the role of the car which had originally caused the accident. Each time, it drove better and caused no accident. That’s not too surprising — in fact it would be surprising if it weren’t able to do this. The Waymo driver should not be at all likely to make the same mistake some stupid person made on the roads in the same situation.
Preventing accidents that are somebody else’s fault
More interestingly, they also replayed each two-car accident with the Waymo system driving the other car. It did well. In 82% of the crashes it prevented the accident entirely. In 10% it reduced the severity of the accident. In the remaining 8% things stayed the same, but all of those were cases where the second car was stopped and hit by the bad car, which can be a pretty tough situation to make better.
They are particularly proud that in two-vehicle collisions which resulted in a fatality for a pedestrian or cyclist, they were always able to prevent the collision.
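The replay-and-classify loop described above can be sketched roughly as follows. This is purely my own illustration, assuming a simulator interface and a numeric severity scale; none of these names reflect Waymo's actual tooling:

```python
# Hypothetical sketch of the counterfactual replay described in the study:
# reconstruct a crash from the police report, swap in the robot driver,
# and classify the outcome as prevented, mitigated, or unchanged.
# All class and function names here are illustrative, not Waymo's API.

from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    PREVENTED = "prevented"    # no collision occurred in the replay
    MITIGATED = "mitigated"    # collision occurred but at lower severity
    UNCHANGED = "unchanged"    # same severity as the real-world crash


@dataclass
class CrashScenario:
    scenario_id: str
    original_severity: float  # e.g., an injury-scale score from the report


def classify_replay(scenario: CrashScenario, severity_with_robot: float) -> Outcome:
    """Compare the replayed outcome against the police-report outcome."""
    if severity_with_robot == 0:
        return Outcome.PREVENTED
    if severity_with_robot < scenario.original_severity:
        return Outcome.MITIGATED
    return Outcome.UNCHANGED


def tally(results: list[Outcome]) -> dict[Outcome, int]:
    """Aggregate outcomes across a set of reconstructed crashes."""
    counts = {o: 0 for o in Outcome}
    for r in results:
        counts[r] += 1
    return counts
```

Run over 72 reconstructed crashes, a tally like this would yield the 82% / 10% / 8% breakdown the paper reports.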
The consequences of this are striking. A fleet of good robocars will not simply reduce accidents by making fewer mistakes than human drivers, they will also reduce the number of accidents caused by other drivers' mistakes. The robocars won't just avoid running red lights, they will avoid being hit by other cars that are running red lights.
There are a number of reasons these simulations might not be perfect, and Waymo goes into several of them in their paper. For one, you only have the police report, and there may be many factors the police did not write down or that witnesses didn't see. And simulation is still simulation. But we get some good evidence that robots may have superhuman reaction times in the moments before a crash, and can make us safer even when it's the other guy who's dangerous.
In addition, as we know, Waymo cars have been involved in many real and simulated accidents as the not-at-fault party, so they certainly don't have as good a record as the simulations suggest. The simulations focused on fatal accidents and converting them to non-fatal or preventing them, which is easier than fixing all crashes. One video shows an example of a Waymo that was hit badly by a car that jumped the median and was going the wrong way, which it could not prevent.
I’m waiting for Waymo to go even further. In their fall report, they outlined 29 incidents where their safety driver intervened and prevented an accident. In simulation, they found the Waymo driving software, had it not been disengaged, would have continued on to an accident. The safety driver did better; they avoided an accident the Waymo system could not. Waymo has declined to comment, but my expectation is that when something like this happens (and each one is a big deal on any self-driving team) an effort will be made to get the car to the point where it can prevent that particular accident, which can be simulated with much better accuracy because it was recorded by the car. Since the human avoided it, avoiding it is possible, even though these are specially trained professional drivers.
As a result, one hopes that if a similar situation were to arise again, this time the Waymo software would succeed where it failed before. Broadly, if a robocar is involved in an accident which could have been avoided, it will never make the same mistake twice. That’s not at all like humans. If one human makes a mistake, that doesn’t stop the next human from making the same mistake. Robots work differently.
In time, it’s probably worth encoding the millions of fatal and non-fatal accidents on which there is good reconstruction data into the simulators. Every car should learn whether it can avoid the mistakes of that accident, and never make them for the first time out in the real world. That’s the path to true automotive safety with robocars. When people report that over 90% of crashes are due to human error, they don’t talk about the drivers in the other cars. Robocars can help prevent crashes even when they are not at fault, and thus can reduce crashes beyond just not causing them.
Waymo could also consider helping make all robocars safer by contributing their simulation scenarios, and others like them, to the World Economic Forum’s “Safety Pool” project, which allows participants to contribute safety testing scenarios and data and get back a greater number in return from other participants, making everybody safer. (Disclosure: I helped initiate this project through Deepen AI, a company in which I am an investor which provides the platform for this pool.) Since any serious accident hurts the entire industry, it can make sense to share tools like this even though they may also be seen as competitive assets. We want all cars to be tested and known to prevent these accident situations, though Waymo can certainly say it is the leader.