
California Robocar Disengagement Reports Are Not So Engaging


California regulations require parties testing robocars in the state to file an annual report on their operations, including how many “disengagements” they had, meaning events in which the safety driver took over control due to safety concerns, caution or software problems. The 2019 reports were just released and are available online as a series of spreadsheets.

Last year, I wrote that the collection of these disengagement reports should stop, and several of the big players, including Waymo, Cruise, Aurora and Zoox, have been critical of the reports. Since the reports began, each player has developed its own definition of what a disengagement is, so the numbers can’t readily be compared. It is not clear they can even be compared year to year within the same company. The most you can say is that teams with frequent disengagements are still in the early-prototype phase, while those with low disengagement numbers may be more advanced, or may just be trying to look that way.

The chart above shows a more objective number: the total miles driven in California. This tells you who is majorly into on-road testing in the state (mainly Waymo, Cruise, Pony.AI, Baidu, Nuro, Zoox and Lyft). Some companies, such as Waymo, also do a lot of testing out of state, even though Waymo is the clear leader in California. Others, notably foreign companies, do their testing in other countries.

Another measure is the number of cars in use for testing. This chart shows all teams with more than 3 cars on the road. Once again, unless a team does most of its testing outside California, this gives you an idea of its budget and the size of its effort.

The large budgets of Cruise and Waymo become clear in this chart.

It should be noted that companies are now debating how much on-road testing to do. It is expensive, of course, which naturally limits what lower-budget teams can do, but for some teams it has also reached a point of diminishing returns. On-road testing helps you verify your system, but its other goal is to encounter new and unusual situations you have never seen before, and to make sure you can handle them or fix any problems they cause. The more miles you drive, the harder it is to discover important new situations; indeed, making them rare is the whole point. Waymo has now driven the equivalent of 40 typical human lifetimes of driving, so at least in the areas it covers, it has seen far more than any human will see, and learned how to deal with it.

Teams are doing a lot more in simulation, and in fact that’s a better place to expose your vehicle to new and dangerous situations. The main problem with simulation for safety verification is that there is only one acceptable score on a simulation run, namely perfect. The first time you do the run, you may find flaws; you then fix them until they are gone and you have a perfect run. After that, the run can only tell you whether changes you have made to the system have broken a fix from the past. (Disclosure: A company I advise is, together with the World Economic Forum, building a system to let companies share simulation scenarios in a “contribute one, get several in return” pool, which could greatly increase the number of novel and worthwhile scenarios a team can put its car through every week. This could help solve this problem.)
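To make that concrete, here is a minimal sketch, in Python, of a simulation scenario treated as a pass/fail regression test. All of the names (SimResult, passes, regression_suite) and the canned results are hypothetical stand-ins, not any team’s actual tooling. The point is structural: the only acceptable result is perfect, so once every scenario passes, re-running the suite can only catch fixes that later changes have broken.

```python
# Hypothetical sketch: a simulation scenario as a pass/fail regression test.
from dataclasses import dataclass
from typing import Callable


@dataclass
class SimResult:
    contacts: int          # collisions with other actors in the sim
    rule_violations: int   # e.g., ran a red light, crossed the center line
    completed: bool        # vehicle reached the scenario's goal


def passes(result: SimResult) -> bool:
    # Perfect run or nothing: any contact, violation, or failure to
    # finish the route counts as a fail.
    return result.contacts == 0 and result.rule_violations == 0 and result.completed


def regression_suite(run: Callable[[str], SimResult], scenario_ids: list[str]) -> list[str]:
    """Return the scenarios the current build fails.

    `run` stands in for the real simulator. Once every scenario passes,
    this suite finds nothing new; it only catches broken fixes.
    """
    return [sid for sid in scenario_ids if not passes(run(sid))]


# Toy demo with canned results in place of a real simulator.
canned = {
    "cut_in_at_40mph": SimResult(contacts=0, rule_violations=0, completed=True),
    "jaywalker_at_dusk": SimResult(contacts=1, rule_violations=0, completed=False),
}
print(regression_suite(lambda sid: canned[sid], list(canned)))  # ['jaywalker_at_dusk']
```

A shared scenario pool attacks exactly the weak spot here: it grows the list of scenarios faster than any one team could on its own.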

I haven’t made a chart of the disengagements per mile because of the issues around that, but if you want to see them, my colleague Mario Herger has some at his blog. The most unusual claim comes from Baidu. Even though it ran only 4 cars, it claims 108K miles with only 6 disengagements. That’s not just the best number of any team, it is vastly better than Baidu’s previous scores. I have reached out to Baidu for comment.
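For perspective, the arithmetic behind that claim is simple (the miles and disengagement counts are Baidu’s reported figures; the snippet itself is just illustrative):

```python
# Miles per disengagement, from Baidu's reported 2019 figures.
reported_miles = 108_000
reported_disengagements = 6
print(f"{reported_miles / reported_disengagements:,.0f} miles per disengagement")
# -> 18,000 miles per disengagement
```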

Another odd player is Tesla. In spite of promising “full self driving” any day now, and being based in California, it reported very few miles driven. One presumes Tesla does this by claiming that because its “full” self-driving product will be sold for use only under constant driver supervision, it is still a driver-assist product and not a robocar covered by this law. The problem is, Uber tried that line several years ago, on the grounds that all of its testing was done with safety drivers and so, from the outside, looked just like driver assist. Back then, the DMV said “hell no” to Uber and threatened not just to deny it a testing license, but to revoke the license plates on Uber’s cars. Tesla is skating a fine line here.

As I outlined in the prior article, there are many better ways to define a disengagement. Whatever California replaces it with, if it decides to do so, must be well crafted so it won’t be gamed. It may be better to collect less information as long as it stays objective. The DMV’s goal is to assure public safety on the roads, not to bend the course of the industry. It does nothing with the numbers; having a high or low number does not alter whether you may drive. It is nice to mandate some transparency, but it has to be useful.

Chances are a more useful approach has to come from industry, through an independent testing lab (akin to Consumer Reports, UL or other private certifiers). It needs to have teeth, not in discouraging teams or development, but in assuring that whatever it measures is measured without bias. There isn’t a lot of motive on the part of the teams to do this, though. Nobody yet has a full-blown commercial service that needs independent verification to boost consumer confidence.

I have proposed that insurance companies might play a role here, by calculating what it would cost to insure a car if it operated with no safety driver, presuming it had the same record of incidents it shows today with a human behind the wheel. (That is, setting aside the complex question of how damages in robot-caused accidents might differ from those in regular human-caused ones.) Today, for humans, that number is around 6 cents/mile. The advanced teams re-run every intervention in the simulator to find out what would have happened and whether there would have been any “contact” and damage. Insurers could calculate what all that would cost; they’re pretty good at that.
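As a rough sketch of how that metric might be computed, assuming entirely hypothetical fleet numbers (only the 6 cents/mile human baseline comes from the paragraph above), an insurer would price the counterfactual contacts found in simulation and spread that cost over the miles driven:

```python
# Hypothetical sketch of the insurance-style metric: price the
# counterfactual "contacts" found by re-running interventions in
# simulation, then express the total as cents per mile.
fleet_miles = 250_000                                # illustrative, not real data
counterfactual_damages = [4_500.0, 12_000.0, 800.0]  # $ cost of each simulated contact;
                                                     # most interventions would cost $0

implied_cents_per_mile = 100 * sum(counterfactual_damages) / fleet_miles
HUMAN_BASELINE_CENTS_PER_MILE = 6.0  # rough human figure cited above

print(f"implied: {implied_cents_per_mile:.1f} cents/mile "
      f"(human baseline ~{HUMAN_BASELINE_CENTS_PER_MILE} cents/mile)")
# -> implied: 6.9 cents/mile (human baseline ~6 cents/mile)
```

A team whose implied number came in at or below the human baseline would have an objective, hard-to-game case that it is ready to operate without a safety driver.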


