Since the first dream of self-driving, there has always been a sharp focus (including from myself) on safety. The ability to reduce accidents is the grandest purpose of these projects, and every team declares that safety is at the core of everything they do. There’s a business reason behind that focus, not just a moral one — you can’t deploy and start making money until you’ve made a vehicle that is “safe enough.” Defining what “safe enough” means has been one of the more perplexing problems of the industry.
Waymo made waves in 2020 by releasing a safety report for their robotaxi service outside Phoenix, Arizona: 6.2 million miles of driving (60,000 of them, at the time, with no safety driver at the wheel) and no accident in which their vehicle was at fault. Since then the total has grown further, surpassing 10 human lifetimes of driving, a clearly superhuman safety record.
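As a rough sanity check on that "lifetimes" framing (using ballpark US figures of my own choosing, not Waymo's numbers), a typical driver covers on the order of 13,500 miles a year over perhaps 60 driving years:

```python
# Rough sanity check on the "lifetimes of driving" framing. The per-driver
# figures are common US ballpark estimates, not Waymo's numbers.
miles_per_year = 13_500          # typical annual mileage for a US driver
driving_years = 60               # roughly age 16 through 76
lifetime_miles = miles_per_year * driving_years
print(f"One driving lifetime  ~ {lifetime_miles:,} miles")        # ~810,000
print(f"Ten driving lifetimes ~ {10 * lifetime_miles:,} miles")   # ~8.1 million
```

So ten lifetimes is roughly 8 million miles, consistent with a fleet that had already logged 6.2 million by 2020 and kept driving.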
Yet Waymo’s efforts in Arizona remain a pilot project, and while there are rumours of a push into San Francisco soon, it makes sense to ask why wider deployment hasn’t happened already if the safety targets have been reached. Is it just that Phoenix is not an interesting market (it probably isn’t a great one) and safety in SF is not as “there” yet? Or is the system not even ready for Phoenix for some reason other than safety?
Even while stating that “safety is the top priority,” everybody knows that it’s not the only one. In fact, it’s not even the top one: safety is useless without functionality and affordability. But there has always been the idea that safety is the hardest one. What if that’s not true, and good road citizenship is actually the harder problem?
Good road citizenship
One must be safe, but one must also be well behaved on the road. You should not block the road, even if it’s safe to do so. You should not stop and hesitate frequently when you don’t understand the situation; it may be safe, and it may be legal, but there is a limit on how often you can do it, and it’s not a high limit. In addition, some legal actions are not very safe, such as stopping in a live lane and forcing traffic to divert around you, or behaving quite differently from how others expect you to. It can even be dangerous to travel at the speed limit when everybody else is going 10mph faster.
As Waymo deployed more vehicles, many noted that while it was not at fault in accidents, it was getting hit from behind by other drivers at a rate that seemed high. It was speculated that Waymo’s cars were being timid where human drivers are aggressive, and that aggressive drivers were hitting them when they failed to go at moments when anybody else would have gone.
According to Chris Urmson, who ran Waymo at the time, they examined this question and concluded it was not happening. The rate of rear-end collisions looked high simply because Waymo was recording and reporting all of them, while human drivers often don’t report minor dings, fender benders and particularly fender non-benders. This was many years ago, though, and there’s been no further update on that statistic.
Indeed, Waymo did a study of fatal human-caused accidents in their service area and found their cars would have avoided most of the accidents that the humans didn’t, even when theirs was the not-at-fault car and somebody else caused the original accident. While that does not speak to the low-speed dings, it’s a good sign.
There’s been much discussion about how to measure safety. In the earliest days, California asked all those testing cars to report each year how many safety driver “disengagements” they had. This turned out to be a misleading and non-useful number, denounced by almost all, but it came about due to the “They should report something. This is something! Let’s have them report that!” syndrome.
Later, people have measured simulated and real contacts, or just measured at-fault accidents. I have proposed that insurance companies, who spend all day quantifying driver safety with a dollar amount, compute what it would cost to insure a driver with the same safety record, and that teams work to make that number competitive with the roughly 6 cents/mile of a typical insurance policy.
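To make that metric concrete, here is a minimal sketch of the calculation, with hypothetical incident logs and claim estimates (the event types, dollar figures and function names are all mine for illustration, not an actuarial model):

```python
# Sketch of the insurance-style safety metric described above: price each
# recorded incident the way an insurer would price a claim, then express the
# total as an expected payout per mile driven.
from dataclasses import dataclass

@dataclass
class Incident:
    description: str
    estimated_claim_usd: float  # what an insurer would expect to pay out

def cost_per_mile(incidents: list[Incident], miles_driven: float) -> float:
    """Expected insurance payout per mile for this driving record."""
    total_claims = sum(i.estimated_claim_usd for i in incidents)
    return total_claims / miles_driven

# Hypothetical fleet record: two minor fender-benders over a million miles.
log = [Incident("low-speed rear-end", 4_000.0),
       Incident("parking-lot scrape", 1_500.0)]
rate = cost_per_mile(log, 1_000_000)
print(f"${rate:.4f}/mile vs. the ~$0.06/mile of a typical policy")
```

The appeal of this framing is that it folds frequency and severity into a single dollar number that insurers already know how to benchmark.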
It is often said that, however we measure it, cars should surpass an average or a good human driver before deploying. That’s an incorrectly high bar: we only require that human drivers be as good as a teenager who barely passed the written and driving tests, and that’s much worse than average.
While we puzzle over how to measure safety, there has been even less effort to quantify road citizenship. It has many components, including what might be considered road etiquette (avoiding things that would annoy a reasonable road user) and mild non-safety (doing things which, while legal and technically safe, significantly increase the risk that somebody else’s mistake will cause a collision).
We have a sense that the robots are doing poorly, but some of this is because we watch them more carefully, and they are well identified. We get annoyed at other human drivers on the road quite often, and that behaviour causes accidents, but we treat these as individual events about individual drivers, not as part of a pattern. There have been some naturalistic driving studies (where you put cameras on volunteers and, after they get used to the cameras, record statistics on their driving), and of course all the self-driving teams have immense data logs about the activities of other road users.
There’s been a lot of press about “dreaded unprotected left turns” and cars being too hesitant to make these turns, sitting paused and blocking traffic. These are indeed challenging, especially when pedestrians may be using the crosswalk that the vehicle must drive through. In theory, in the long run, cars should be much better than humans at the basic physics and be able to safely make such turns in a way that would scare humans, missing other vehicles by a small margin because they know they can. That’s not yet true (and they will always leave margins for vehicle failure and sudden speed changes), and so there is press. But we should start measuring how often it’s really happening, and ideally its impact, and compare that to existing drivers.
While we quest for a metric for safety, we also must quest for a simple metric for road citizenship, and then we can judge progress in each.
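What could such a citizenship metric look like? Here is one purely illustrative sketch, with event types and weights I have invented for the example (nothing here is an industry standard), counting disruption-weighted events per 1,000 miles:

```python
# Illustrative road-citizenship metric: count events that disrupt other road
# users, weight them by how disruptive they are, normalize per 1,000 miles.
# Event types and weights are hypothetical, chosen only for this example.
DISRUPTION_WEIGHTS = {
    "blocked_lane_over_10s": 5.0,   # stopped in a live lane, traffic diverting
    "hesitation_stop": 1.0,         # paused when a human driver would have gone
    "underspeed_in_flow": 2.0,      # travelling well below surrounding traffic
}

def citizenship_score(event_counts: dict[str, int], miles: float) -> float:
    """Weighted disruption events per 1,000 miles; lower is better."""
    weighted = sum(DISRUPTION_WEIGHTS[e] * n for e, n in event_counts.items())
    return 1000 * weighted / miles

# A hypothetical month of fleet logs over 50,000 miles:
print(citizenship_score(
    {"blocked_lane_over_10s": 4, "hesitation_stop": 30, "underspeed_in_flow": 12},
    miles=50_000,
))  # -> 1.48 weighted events per 1,000 miles
```

The hard part is not the arithmetic but agreeing on the event types and weights, and then gathering the same statistics for human drivers (for example from naturalistic driving studies) so the number has a baseline.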
Can it be harder?
While teams clearly have more to do on road citizenship, it is not clear if it’s actually harder than full legal safety, or just slower to happen because safety has been the top priority. The problem may be caused by the high safety priority, because in many cases there is a trade-off. You can be safer by being a more timid driver, but that can cause inferior road citizenship. In many situations, the safe solution is to stop and get to a “safe state,” including in the middle of the road. That’s not perfectly safe, but in a bad situation, it can be safer than moving with uncertainty.
This means that in many cases, in order to dial up safety, we will dial down road citizenship, and the hard problem is to find either the right compromise or a way to avoid the trade-off altogether.
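A simplified sketch shows why this trade-off is structural. Assume (my assumption, not any team’s actual planner) that a planner scores candidate maneuvers with a weighted sum of collision risk and disruption to others; cranking up the safety weight mechanically makes the timid choice win:

```python
# Toy weighted-cost maneuver chooser, illustrating the safety/citizenship
# trade-off. All maneuvers, costs and weights are hypothetical.
def best_maneuver(options, w_safety: float, w_citizenship: float):
    """Pick the maneuver minimizing weighted risk + disruption cost."""
    return min(options, key=lambda m: w_safety * m["risk"]
                                      + w_citizenship * m["disruption"])

options = [
    {"name": "proceed through gap", "risk": 0.30, "disruption": 0.05},
    {"name": "stop and wait",       "risk": 0.10, "disruption": 0.60},
]

# With safety weighted heavily, the timid maneuver wins...
print(best_maneuver(options, w_safety=5.0, w_citizenship=1.0)["name"])  # stop and wait
# ...while a balanced weighting picks the assertive one.
print(best_maneuver(options, w_safety=1.0, w_citizenship=1.0)["name"])  # proceed through gap
```

Avoiding the trade-off, rather than just tuning the weights, means making the risk estimates themselves sharper, so the assertive maneuver genuinely is safe rather than merely acceptable.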
There is another trade-off to consider, and indeed it’s the reason the focus has been so much on safety. We truly do want these vehicles out on the roads, driving more safely than humans do, preventing accidents and saving lives, and we want that as soon as possible. Every day of delay in deployment is a day of delay in reaching saturation, and thus adds a full day’s worth of casualties in the future. Because of that, we actually want to tolerate both poor road citizenship and less-than-perfect road safety today, when the fleets are tiny, to speed up the great benefits to both when the fleets are large. At least we would, if we were utilitarians simply trying to maximize total benefit to society.