GM’s Cruise self-driving unit has filed the mandatory report with the DMV regarding its crash with a San Francisco Muni bus on March 22. All self-driving companies are required to file these reports within a short time of any crash. The document reveals only a small amount of new information, namely that the bus was coming to a stop when it was hit. The yes/no checkboxes on whether a vehicle code violation citation was issued are left blank, but that is very common on these reports.
The description of the event is succinct:
“A Cruise autonomous vehicle (“Cruise AV”), operating in driverless autonomous mode, was traveling on westbound Haight Street between Ashbury Street and Masonic Avenue. At the same time, a San Francisco Municipal Railway (“MUNI”) Bus stopped ahead of the Cruise AV. Shortly thereafter, the Cruise AV made contact with the rear bumper of the MUNI Bus, damaging the front fascia and front fender of the Cruise AV. The parties exchanged information. There were no reported injuries and the Cruise AV was towed from the scene.”
The fact that a vehicle was towed means an entry will also appear in NHTSA’s crash database, though the damage barely seemed enough to require a tow, unless the battery pack suffered damage.
Nothing in the report even implies any fault on the part of the bus; this appears to be a classic rear-end collision, which is the fault of the rear vehicle and should result in a ticket for it.
Cruise continues to say little about the event. An at-fault car crash is a serious thing for any self-driving company, and such crashes have been extremely rare. Indeed, as a rule (with the obvious exception of Uber ATG’s horrible fatality, blamed on safety driver negligence), the safety record of the top self-driving companies around the world has been superior to that of human drivers when it comes to at-fault crashes. This is in part because most vehicles are supervised by humans behind the wheel, ready to intervene, but it’s even true for the new vehicles with no human safety driver. Cruise and Waymo both recently announced over 1 million miles of such operations; companies in China like Baidu and AutoX have also operated this way, and new companies will be coming on board there shortly.
That self-driving vehicles will make mistakes is not at all surprising, particularly at the dawn of the technology. Developers have tended to wait until they felt the rate of such mistakes would be similar to human drivers before deploying. In this case, what’s odd is the basic simplicity — at least as far as we know — of the crash.
Why it shouldn’t happen
The trouble is that there is little that’s more obvious to the sensors of a self-driving car than a bus slowing to a stop in front of it. The Cruise Bolt is bristling with sensors, all of which easily detected that bus, yet for some reason the vehicle did not act correctly. The Bolt has 5 LIDARs, which would see the bus extremely clearly; even the tilted LIDARs meant to see around the sides of the vehicle would spot it. It has multiple long-range radars and 10 short-range radars, many of which would have solidly sensed this bus, particularly while it was moving. It has 14 cameras, and while computer vision is still imperfect, again this is an obvious target, and the cameras would also see the bus via vision methods that are not based on machine learning, such as motion parallax and possibly stereo vision, if Cruise uses that.
A self-driving car has to combine what it learns from all these sensors in a process called sensor fusion. There are many approaches to this. It’s possible the failure was here, or at other levels, like prediction or planning, or even in actuation, such as a “stuck” signal to the accelerator or a failure in the signal to the brake. As GM makes the Bolt, it is likely the car talks to the accelerator and brake with digital signals. The Bolt has 3 ways to stop: reverse torque from the electric motors, electrically pumped brake fluid on the pads, and an electronic parking brake. They would not have all failed.
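To make the idea concrete, here is a minimal late-fusion sketch in Python. It is purely illustrative, not Cruise’s pipeline; the Detection fields, the confidence threshold, and the numbers are assumptions. The point is that when several independent modalities agree on something large directly ahead, the fused output should be very hard to miss.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "lidar", "radar", or "camera"
    distance_m: float  # range to the object along our path
    speed_mps: float   # object's speed over the ground
    confidence: float  # 0.0 to 1.0, as reported by that sensor's pipeline

def fuse_obstacle(detections: list[Detection]) -> dict | None:
    """Very simple late fusion: if any credible sensor reports an obstacle
    in our lane, keep it, using the nearest range estimate."""
    credible = [d for d in detections if d.confidence >= 0.5]
    if not credible:
        return None
    nearest = min(credible, key=lambda d: d.distance_m)
    return {
        "distance_m": nearest.distance_m,
        "speed_mps": nearest.speed_mps,
        "sources": sorted({d.sensor for d in credible}),
    }

# A bus slowing to a stop directly ahead shows up in every modality.
bus_detections = [
    Detection("lidar", 12.0, 1.0, 0.99),
    Detection("radar", 12.5, 1.1, 0.95),
    Detection("camera", 13.0, 0.9, 0.80),
]
print(fuse_obstacle(bus_detections))
# -> {'distance_m': 12.0, 'speed_mps': 1.0, 'sources': ['camera', 'lidar', 'radar']}
```

Real systems use far more sophisticated probabilistic tracking, but the design principle is the same: a target this strong in multiple modalities should not get lost at the fusion stage.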
Because hardware and software systems fail, a self-driving car is designed with backup systems which watch for failure in components and correct it. What should have happened is that such a system noticed that the LIDAR was 100% sure there was an obstacle ahead, and that the car was nonetheless driving toward it, and took some step to override this. That didn’t happen.
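Here is a sketch of the kind of independent cross-check such a monitor might perform. The interface, thresholds, and numbers are all hypothetical, and real monitors watch far more than this (sensor health, actuator feedback, process heartbeats); it is only meant to show the shape of the idea.

```python
def safety_monitor(obstacle_distance_m: float | None,
                   ego_speed_mps: float,
                   commanded_accel_mps2: float,
                   max_brake_mps2: float = 6.0,
                   margin_m: float = 2.0) -> str:
    """Independent check that runs alongside the planner: if perception
    reports an obstacle ahead and the current command cannot stop the car
    short of it, override the command.  Hypothetical interface."""
    if obstacle_distance_m is None:
        return "proceed"
    # Distance needed to stop from the current speed under hard braking.
    stopping_distance_m = ego_speed_mps ** 2 / (2 * max_brake_mps2)
    inside_envelope = obstacle_distance_m <= stopping_distance_m + margin_m
    if inside_envelope and commanded_accel_mps2 >= 0:
        return "emergency_brake"   # the plan is not even trying to slow down
    if obstacle_distance_m <= stopping_distance_m:
        return "emergency_brake"   # too close to stop without maximum braking
    return "proceed"

# LIDAR is certain a bus is 10 m ahead, yet the plan is still accelerating.
print(safety_monitor(obstacle_distance_m=10.0, ego_speed_mps=10.0,
                     commanded_accel_mps2=0.5))   # -> emergency_brake
```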
One potential explanation might be the human override system. Cruise cars, like most, can be overseen by human remote operators. These operators can help the car solve problems. They do not typically “remote drive” them by steering and braking from a remote game-style console, though some systems do this. They can, however, give them advice and commands. When this happens, the car may obey its human operator even though its own senses are sure it should not proceed. That’s what human override is all about. But humans can make mistakes. In a famous incident recorded on video, a Waymo car in Chandler behaved badly and led its rescue drivers on a merry chase because a remote human operator had told it something wrong (presumably about driving in a lane blocked off by cones). A Cruise vehicle was pulled over by police last year because its lights were off, turned off by a remote operator.
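If operator advice can outrank the car’s own perception, the arbitration logic matters a great deal. Here is a toy sketch of one conservative policy; it is entirely hypothetical, since we don’t know how Cruise arbitrates between the two.

```python
def arbitrate(operator_command: str, obstacle_ahead: bool) -> str:
    """Follow remote advice only when the car's own perception does not
    contradict it on basic collision safety.  Hypothetical interface."""
    if operator_command == "proceed" and obstacle_ahead:
        # A human operator can be wrong; advice should never override an
        # imminent-collision check.  Stop and ask for help instead.
        return "hold_and_request_assistance"
    return operator_command

print(arbitrate("proceed", obstacle_ahead=True))  # -> hold_and_request_assistance
```

Whether any given deployment is actually structured this way is unknown; the point is simply that override paths need safety checks of their own.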
This is just speculation, but again, it’s very out of place for a car to run into something as obvious as a bus. There should be many systems in place to make sure that never happens, not just to protect buses but to stop it happening with anything, including vulnerable road users. They are not as obvious as a bus, but they are still quite obvious when right in front of the vehicle.
That the bus was moving makes it worse, because radar is a much more effective sensor on moving targets. Radar bounces off stopped things all the time, but the world is full of stopped things, so it’s hard to tell the useful signal from the noise; all moving things in front of you, though, are signal. A bus reflects lots of radar, and while radar beams can bounce off things (it’s like being in a hall of mirrors, where you must figure out which returns are reflections), when something is close in front of you and moving, you can be very confident.
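A rough illustration of why motion helps, simplified to the straight-ahead case; the data format and threshold are assumptions, not any real radar API.

```python
def moving_targets(radar_returns: list[dict], ego_speed_mps: float,
                   threshold_mps: float = 0.5) -> list[dict]:
    """Separate genuinely moving objects from stationary clutter.

    Simplified to objects straight ahead: a stationary object closes at
    roughly the ego vehicle's own speed, so its estimated ground speed is
    near zero.  Anything clearly above the threshold is a moving target."""
    moving = []
    for r in radar_returns:
        # range_rate_mps is negative when the object is getting closer.
        ground_speed = r["range_rate_mps"] + ego_speed_mps
        if abs(ground_speed) > threshold_mps:
            moving.append(r)
    return moving

returns = [
    {"id": "bus",     "range_m": 12.0, "range_rate_mps": -9.0},   # bus still rolling at ~1 m/s
    {"id": "mailbox", "range_m": 20.0, "range_rate_mps": -10.0},  # stationary clutter
]
print(moving_targets(returns, ego_speed_mps=10.0))  # -> only the bus remains
```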
This does show that while LIDAR and radar would have clearly detected the bus, and vision probably would too, having great sensors is only the start. You still need to do the right thing with the sensor data to understand what’s around you, then make the right predictions about where everything is going, and then issue the right driving commands to move on the road with those things. While some have wondered if a crash like this is a strike against LIDAR, it’s actually very unlikely that any failing of LIDAR or radar was at the root of the crash, or frankly even of vision. The problem probably lies at a different level.
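In other words, the driving task is a chain, and any link can break even when the sensing link is solid. A skeletal sketch of that chain, with placeholder stages standing in for what are really large subsystems (nothing here is Cruise-specific):

```python
def drive_tick(sensor_frames: dict) -> dict:
    """One cycle of a generic, greatly simplified driving pipeline.
    Each stage can fail independently even with perfect sensors."""
    detections = perceive(sensor_frames)   # what is around us?
    world = fuse_and_track(detections)     # one consistent world model
    futures = predict(world)               # where is everything going?
    trajectory = plan(futures)             # what should we do about it?
    command = control(trajectory)          # steering, throttle, brake
    return safety_gate(command, world)     # independent final check

# Trivial placeholder stages so the skeleton runs end to end.
def perceive(frames):      return frames.get("objects", [])
def fuse_and_track(dets):  return {"obstacle_ahead": bool(dets)}
def predict(world):        return world
def plan(futures):         return {"accel_mps2": -2.0 if futures["obstacle_ahead"] else 0.5}
def control(traj):         return {"brake": traj["accel_mps2"] < 0, "accel_mps2": traj["accel_mps2"]}
def safety_gate(cmd, world):
    if world["obstacle_ahead"] and not cmd["brake"]:
        return {"brake": True, "accel_mps2": -6.0}   # override a bad command
    return cmd

print(drive_tick({"objects": [{"type": "bus", "range_m": 12.0}]}))
# -> {'brake': True, 'accel_mps2': -2.0}
```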
It’s up to Cruise to be more transparent about this crash: why it happened, and how they have fixed whatever allowed it to happen. It seems likely that they fixed it very quickly, as they are out on the road. Or perhaps they know why it’s simply very unlikely to ever happen again, but that too would be worth sharing with the public.
While I worked at Waymo, in its early stages, a crash was dreaded, and policies were in place to, among other things, put a hold on the fleet in the event of a crash. This was during a time when all vehicles had safety drivers, who could simply turn off the system until the source of the problem was identified and either fixed or understood not to pose further risk. After an all-clear, driving would start again. Today, with empty cars driving the streets, a fleet hold might involve switching back to having safety drivers for a short time. It is likely that Cruise discerned the reason for this crash very quickly from the logs in the vehicle. If it was a software bug, they would have worked to immediately test the fix, to make sure it doesn’t break something else, and then deploy it. Cruise announced deployment of a large new software release earlier, which perhaps already contained a fix for this bug.
If it was a bug. It’s still possible this was something that could only happen there, or to that car, in which case there might not be anything to urgently fix. Rather than speculate, it would be good to know what took place.
Crashes will occur, and it’s actually best when, like this one, they cause no significant harm. If somebody knew how to make a car that created no risk at all, they would be very smart and would already own the industry. What is important is how these events are handled, and how the learning from them is applied.