The California DMV has ordered GM’s Cruise robotaxi unit to suspend autonomous vehicle operations, declaring that Cruise’s vehicles are unsafe and that Cruise misrepresented their safety level.
In particular, the DMV expressed concern over the incident earlier this month in which a pedestrian, crossing against a red light, was struck by a human-driven Nissan and thrown in front of the Cruise vehicle in the next lane. The Cruise vehicle braked hard but hit her. Cruise revealed today that after it came to a stop, the vehicle decided to pull to the side of the road to avoid blocking traffic. In doing so, it dragged the victim along and came to a stop with a wheel on her leg. Emergency crews instructed Cruise not to move the vehicle, and quite some time later lifted it off her to get her to the hospital. There has been no update on her condition.
The DMV states that when Cruise met with the department the day after the event, it showed video only up to the point where the vehicle stopped, omitting the later move to the side. The DMV writes: “The video footage presented to the department ended with the AV initial stop following the hard-braking maneuver. Footage of the subsequent movement of the AV to perform a pullover maneuver was not shown to the department and Cruise did not disclose that any additional movement of the vehicle had occurred after the initial stop of the vehicle.”
Cruise states that this is incorrect, and that it showed the entire video “multiple times” to the DMV. When I viewed the video, I deliberately asked not to be shown the impact itself, as it did not relate to my coverage of what took place before. What took place well after the impact turned out to be highly relevant.
While the DMV is particularly upset by its belief that it was shown only part of the story, that dispute should get resolved. Unless some deliberate deception can be shown, the real issue is whether the vehicle is unsafe.
It is reasonable to conclude that the software in the Cruise vehicle was unaware the pedestrian was being dragged, as it seems unlikely the system would choose to leave the lane in that situation. It’s possible that the vehicle’s urge to clear the lane relates to the number of complaints that have been lodged about Cruise vehicles blocking lanes, which would be a tragic irony. It is common for robocars to remain in place, blocking a lane, when they are not 100% sure it is safe to move to another spot, so either the vehicle concluded (incorrectly) that it was 100% sure, or perhaps the calculation has changed. Cruise has not yet responded to requests for information on that decision process.
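We can only speculate about Cruise’s actual logic, but a minimal sketch of the kind of post-collision decision involved (all names and thresholds here are hypothetical, not Cruise’s code) might look like this:

```python
from dataclasses import dataclass

@dataclass
class PostCollisionState:
    """Hypothetical snapshot of what the vehicle believes after a hard stop."""
    blocking_lane: bool           # stopped in a live traffic lane?
    clear_path_confidence: float  # 0.0-1.0 belief that a pullover is safe
    possible_entrapment: bool     # any chance a struck person is under the car?

def decide_after_hard_stop(state: PostCollisionState) -> str:
    """Toy decision rule. The key safety property: if there is ANY chance
    someone is under or against the vehicle, never move without a human."""
    if state.possible_entrapment:
        return "STAY_PUT_AND_SUMMON_HELP"
    # Pull over only with near-certainty the maneuver is safe; otherwise
    # remain stopped, even at the cost of blocking the lane.
    if state.blocking_lane and state.clear_path_confidence > 0.99:
        return "PULL_TO_SIDE"
    return "REMAIN_STOPPED"
```

In this framing, the tragedy requires either that the entrapment check was missing or wrongly returned false, or that the pull-over threshold was relaxed, perhaps in response to those lane-blocking complaints.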
Over 10 years ago, I outlined a situation very much like this, advising that a robotaxi should make a special effort to assure it never obliviously drags a vulnerable road user. This is a challenge, because vehicles don’t have sensors under the car, and any visual sensors there would quickly get dirty. Ultrasonic sensors could help, but otherwise detection must come from indirect clues, such as a change in driving characteristics, the bump of driving over something (or someone), or the disappearance of an object under the car without its reappearance behind. Side LIDAR can detect anything not completely under the vehicle, but that can’t be depended on. With some irony, I wrote about this problem just a few days before this incident as well. Obviously it’s a terrible scenario that must be prevented, but it’s also in the class of special situations where the behavior is very non-human, and in a frightening way. While human drivers do regularly hit and drag others on the road, they are far less likely to do so obliviously.
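Without under-car sensors, such detection would have to fuse those indirect clues. Here is a rough sketch of that kind of heuristic, where every input is a hypothetical output of upstream perception rather than any real Cruise interface:

```python
def likely_dragging(torque_anomaly: bool,
                    bump_detected: bool,
                    object_vanished_under_car: bool,
                    object_reappeared_behind: bool,
                    ultrasonic_obstruction: bool) -> bool:
    """Fuse indirect clues that something (or someone) may be under the car.

    All inputs are hypothetical upstream perception signals:
      torque_anomaly            -- drivetrain working harder than the motion model predicts
      bump_detected             -- suspension registered driving over something
      object_vanished_under_car -- a tracked object was lost at the front bumper
      object_reappeared_behind  -- the same track was re-acquired behind the car
      ultrasonic_obstruction    -- side/underbody ultrasonic sensors see a return
    """
    disappeared_for_good = object_vanished_under_car and not object_reappeared_behind
    # Any single strong clue should be enough to stop and summon help:
    # a false positive here is vastly cheaper than a false negative.
    return (disappeared_for_good
            or ultrasonic_obstruction
            or (torque_anomaly and bump_detected))
```

The design choice in such a heuristic is to bias heavily toward stopping: an unnecessary stop inconveniences traffic, while a missed detection drags a person.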
While 100% of the legal fault for this event lies with the driver of the Nissan, who hit the pedestrian and fled the scene, we still want the robotaxi to do what it can to make things better, and especially not to make them worse. It does appear it made them worse in this situation.
This won’t be the last time that pilot deployments of robocars reveal a problem of this sort. The good news is that as problems are found, they are fixed for the entire fleet, a pattern where robots are much better than humans. In August, a Cruise car was struck by a fire engine that was improperly crossing through a red light without first assuring the path was clear. The Cruise vehicle could have done better, however, and last week Cruise released a report on how it had improved its vehicles to prevent this sort of thing from happening again. That’s how it should work.
In this case, though, Cruise should have played out this situation in advance, in both digital and real-world simulation. That would have meant running digital scenarios very much like this one, in which a VRU is thrown in front of the vehicle, as well as test-track runs where a crash-test dummy is thrown in front of the vehicle. Perhaps Cruise did those tests and they did not reveal these problems (no simulation is perfect), but if so, it should examine why that was.
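For the digital side, one plausible shape for such a test is property-based fuzzing over the geometry of the event. The scenario parameters and the `run_simulation` harness below are assumptions for illustration, not a real Cruise or industry API:

```python
import random

def make_thrown_vru_scenario(seed: int) -> dict:
    """Randomize the geometry of a 'VRU thrown into our lane' event."""
    rng = random.Random(seed)
    return {
        "vru_entry_speed_mps": rng.uniform(2.0, 8.0),  # speed the body arrives at
        "entry_angle_deg": rng.uniform(-60.0, 60.0),   # direction relative to the lane
        "landing_offset_m": rng.uniform(-1.5, 1.5),    # lateral landing point
        "av_speed_mps": rng.uniform(3.0, 12.0),        # robotaxi speed at the time
    }

def is_safe_outcome(result: dict) -> bool:
    """The single property under test: after any impact with a VRU,
    the vehicle must not move again without human authorization."""
    return not (result["impact_detected"] and result["moved_after_stop"])

# The simulator itself is assumed; the test loop would look like:
# for seed in range(100_000):
#     result = run_simulation(make_thrown_vru_scenario(seed))
#     assert is_safe_outcome(result), f"unsafe post-impact movement, scenario {seed}"
```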
In addition, other companies watching this situation should now go back and run their own track tests and digital simulations to assure they handle it well.
Cruise is allowed to continue testing with safety drivers, but the order seems to prevent offering any public vehicle service for now. This is indeed the sort of error that would not happen with a safety driver, which also means it is the sort that would not be readily discovered while safety drivers are in place.
DMV’s Standards
The DMV concludes that the vehicle is not able to handle this sort of situation, and that as such these vehicles are not safe for the roads. There does appear to be a clear and serious flaw in the vehicle’s handling of this situation. The harder question is where to set the bar. Because this is a rare and unlikely event, there could be debate over whether it justifies shutting down all robotaxi operations. Robotaxis will always have safety flaws. A policy that shuts down a fleet whenever a safety flaw is revealed will not be tenable, particularly after wide-scale deployment. Instead, a system needs to be in place to judge the risk level of any safety flaw (a toy scoring sketch follows the list below), including:
- The severity of the safety risk (in this case, high)
- The probability of the event (probably fairly low)
- Whether trade-offs resulted in the safety issue, or whether it was a simple error. (Trade-offs include one type of safety vs. another, safety vs. road citizenship, and safety vs. cost. Here, the decision to pull over may have resulted from safety vs. road citizenship, while the failure to detect the pedestrian may be a simple error.)
- The role of the law and legal fault.
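To make that concrete, here is one toy way such factors could be scored. All thresholds and response tiers are invented for illustration; no regulator uses this scheme:

```python
from dataclasses import dataclass

@dataclass
class SafetyFlaw:
    severity: float        # 0-1: how bad the outcome can be (here: high)
    probability: float     # 0-1: chance of encountering the triggering event
    tradeoff_driven: bool  # did a deliberate trade-off create the flaw?
    av_at_fault: bool      # would the AV bear legal fault in the scenario?

def regulatory_response(flaw: SafetyFlaw) -> str:
    """Toy policy: weigh severity against rarity rather than reacting
    to either alone. Thresholds are invented for illustration."""
    risk = flaw.severity * flaw.probability
    if risk > 0.1:
        return "suspend operations until fixed"
    if flaw.av_at_fault or flaw.tradeoff_driven:
        return "mandate a fix on a deadline; allow supervised operation"
    return "require a fleet-wide software fix and a follow-up report"
```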
Robocar companies put a strong emphasis (perhaps too strong) on avoiding errors for which they will be legally at fault over errors for which they will not face liability. This will sometimes cause undesirable consequences.
Generally, in chain-reaction road incidents, legal fault goes to the party that started the chain. In this case, however, the Cruise vehicle made its error after everything had stopped, which may arguably break that chain, though it would never have been in that situation without the hit-and-run. The DMV’s concern may lie in the fact that this could happen even if the Cruise vehicle itself had first struck the pedestrian.
This sort of shutdown is new, so the DMVs of the world are learning how to do it. In time they should set standards on all of these factors, and more, to decide what sort of action to take. NHTSA’s safety recall system, for example, is a slow process, though in the future it is not impossible that regulators could decide a safety problem with a car is so severe that all the cars should be remotely disabled. In the past, remote shutdown wasn’t even possible, but there are a few car models today where it is.
Today, the Cruise fleet is small, and though Cruise has reported statistics indicating its overall safety record is superior to that of human drivers in its service area, the shutdown of Cruise operations (with most of those riders switching to driving themselves or to human-driven taxi-style services) should not create a noticeable worsening of road safety. In the future, however, once a fleet is large, that won’t be true. If a large fleet with good overall statistical safety is shut down, the switch to human driving will probably harm many more people than it protects. It is for this reason that the factors above need to be considered.