
Views Of Safety And Risk On Robocars Are Evolving


Aurora attempts to compensate for risks from some parts of its system with robustness in others

Aurora Presentation

The Automated Vehicle Symposium 2019 opened with a session on safety, led by Chris Urmson of Aurora (and formerly Waymo/Google), who spoke on less obvious aspects of Aurora’s safety plan. His talk and other presentations hinted at a change in thinking on measuring and assuring safety, away from a focus on miles and incidents and towards a more nuanced estimation of risk. It’s a move beyond the common “functional safety” approach in automotive (with its focus on preventing and mitigating failures) to what is known as SOTIF (Safety Of The Intended Functionality), a strategy to provide safety even when no system fails.

At Waymo, Chris led the effort that drove Waymo test vehicles well over ten million miles on real roads, far more than everybody else put together. The early stages of every team’s effort have involved this core task of getting a vehicle that can drive the roads with a diminishing number of incidents, but this is not enough.

Urmson identified several expanded areas of safety thinking. While computer security has always been on the minds of all serious developers, as deployment gets closer it must become a central pillar of the safety plan.

Their plan involves a risk-based analysis, aiming for “better than human” but not perfection. With this approach, you set an acceptable level of risk and calculate the uncertainties in the various components of your system, such as perception, motion planning, mapping and others. Each subsystem is given some extra capability to make up for uncertainties elsewhere in the stack.
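To make that bookkeeping concrete, here is a minimal sketch of what such a risk budget might look like. The component names, failure rates, mitigation factors and the acceptable-risk target are all illustrative assumptions, not Aurora’s actual figures or method.

```python
# A minimal, hypothetical risk-budget sketch in the spirit of the approach
# described above. All numbers and component names are made up for illustration.

# Estimated probability that each subsystem contributes to a harmful event,
# per mile driven (illustrative figures only).
component_risk_per_mile = {
    "perception": 2e-8,
    "motion_planning": 1e-8,
    "mapping": 0.5e-8,
    "localization": 0.5e-8,
}

# Extra capability elsewhere in the stack (e.g. a conservative planner that
# keeps larger margins) is modeled as a mitigation factor: the fraction of a
# component's failures that the rest of the system is expected to catch.
mitigation_factor = {
    "perception": 0.9,       # planner margins absorb most perception misses
    "motion_planning": 0.5,
    "mapping": 0.8,          # live perception compensates for stale maps
    "localization": 0.7,
}

# Target: residual risk must beat a (hypothetical) human benchmark.
acceptable_risk_per_mile = 1e-8

residual = sum(
    risk * (1.0 - mitigation_factor[name])
    for name, risk in component_risk_per_mile.items()
)

print(f"Residual risk per mile: {residual:.2e}")
print("Within budget" if residual <= acceptable_risk_per_mile else "Over budget")
```

The point of the structure is simply that a weaker component can be tolerated if something else in the stack reliably catches its failures, and the total that slips through stays under the target.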

There is nothing greatly surprising in this plan, but it’s a sign that teams’ safety plans are maturing as they get closer to deployment. The “old school” approach of “keep driving as much as you can to encounter new situations and problems, then fix them” worked in the early days, but it is being refined.

There are also many efforts going on in simulation testing, including a wide variety of companies offering different simulators. Simulation makes it possible to test constantly and exclusively in stressful situations, even if those situations are not entirely real.

There is more effort to generate simulation scenarios from real traffic situations. The EU “Pegasus” project presented right after Chris Urmson, showing the elaborate system it has built to create these scenarios. In one mode, they have cars drive a busy road while a drone flies overhead. The drone can easily track all the cars from above, and this can be combined with data from cars on the ground to generate a simulation scenario matching the real traffic. Generally this captures “ordinary” traffic, as you have to record for a long time to catch unusual or even dangerous situations.

Pegasus wants to put the resulting scenarios in new open formats like ASAM OpenSCENARIO. Right now these formats are nascent, and it’s not simple to convert them to teams’ proprietary formats. Importing a scenario is no small task if you want to simulate things like localization on a map and a custom configuration of sensors. Indeed, it probably requires that the road in question first be mapped by the team’s own mapping car, which is a significant effort.
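Parsing the open format itself is the easy part. Below is a rough sketch of what reading the entity list of an OpenSCENARIO file could look like; the XML element names follow my reading of the OpenSCENARIO 1.0 schema, while the internal SimActor class and the example filename are purely hypothetical. Everything after this step, remapping entities onto a team’s own vehicle models, maps and sensor configurations, is where the real work lies.

```python
# Rough sketch: read an ASAM OpenSCENARIO (.xosc) file and list its scenario
# objects so they could be mapped onto a simulator's internal vehicle models.
# Element names reflect my reading of the OpenSCENARIO 1.0 schema; the
# "SimActor" representation and the filename are hypothetical.
import xml.etree.ElementTree as ET
from dataclasses import dataclass


@dataclass
class SimActor:              # hypothetical internal representation
    name: str
    object_type: str


def load_actors(path: str) -> list[SimActor]:
    root = ET.parse(path).getroot()
    actors = []
    # ScenarioObject entries live under <Entities> in OpenSCENARIO 1.x.
    for obj in root.iter("ScenarioObject"):
        child = next(iter(obj), None)   # e.g. Vehicle, Pedestrian, CatalogReference
        actors.append(SimActor(
            name=obj.get("name", "unnamed"),
            object_type=child.tag if child is not None else "unknown",
        ))
    return actors


if __name__ == "__main__":
    for actor in load_actors("cut_in_scenario.xosc"):
        print(actor)
```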

Pegasus is also promoting the risk-based approach, measuring risk rather than safety. I suspect this will become the predominant approach. That’s good news for insurance companies, whose specialty, of course, is understanding and managing risk.

EU Pegasus project outlines different levels of acceptable risk in different contexts

Pegasus

Noah Zych, chief of staff for Uber ATG, came out to talk about Uber’s new safety plan, which they have released and are encouraging others to follow. On the one hand, people will be reluctant to use anything from Uber, whose actions not only resulted in the death of a woman but set the industry back considerably and eroded public confidence. On the other hand, to claw their way back, they are motivated to actually work hard on a safety plan and possibly even be over-cautious. Zych mentioned how Boeing’s recent failures had made the public question whether it could trust not just technologies but companies; he apologized later for not being clear that it is Uber, most of all, that created that lack of trust when it came to self-driving cars.

Trent Victor, who leads crash avoidance at Volvo Cars, also decried the mere use of miles as a safety metric, which has become a popular thing to decry (though, of course, mainly by those who don’t have a lot of miles logged). Volvo did go further, though, and reminded us that while most teams have set a goal of matching human-level safety (measured in miles and incidents), the real goal is to match human drivers at their best: skilled, sober and alert.

Risk vs incidents

The evolution in thinking from measuring incidents to measuring risk is a two-edged sword. Humans as individuals put a strong focus on incidents; how could we not, when each injury, and especially each fatality, is a tragedy? Yet our moral codes and the law focus much more on risk. From an elevated perspective, we care about intent. Normally no incident is intended; in fact, the intent is to avoid them. On the other hand, we regularly expose ourselves, and others, to risk, and we do so intentionally. Society tolerates exposing others to small amounts of risk if the benefit is worth it and the risk is small; we call that an acceptable risk. What we want to punish and regulate is unacceptable risk. The problem is that we often only learn about risks after the fact, after incidents, and we often only know how to calculate risk by taking statistics on incidents.
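As a back-of-the-envelope illustration of that last point, the sketch below estimates how many miles a fleet would have to drive with zero fatalities before the statistics alone could show it is safer than a human baseline of roughly one fatality per 100 million miles (an approximate, commonly cited US figure, used here purely as an assumption).

```python
# Back-of-the-envelope illustration of why measuring risk from incident
# statistics is so hard: with rare events, you need enormous mileage before
# the incident count says anything. The human baseline is an approximation
# used only for illustration.
import math

HUMAN_FATALITY_RATE = 1 / 100_000_000   # fatalities per mile, approximate


def miles_to_show_better(confidence: float = 0.95) -> float:
    """Miles a fleet must drive with zero fatalities before we can claim,
    at the given confidence, that its rate is below the human baseline
    (exact Poisson zero-event bound, roughly the 'rule of three')."""
    return -math.log(1 - confidence) / HUMAN_FATALITY_RATE


print(f"{miles_to_show_better():,.0f} miles with zero fatalities needed")
# -> roughly 300 million miles, far beyond any fleet's logged mileage in 2019
```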

I’ll be covering this question in depth in a later article.


