
New NTSB Reports On Uber Fatality Reveal Major Errors By Uber


I have written extensive coverage of the March 2018 fatality in which an Uber test robocar struck and killed Elaine Herzberg in Tempe, Arizona. On Nov. 19, the NTSB will hold its hearing on its investigation, but preliminary documents already reveal that Uber’s errors were even worse than previously reported.

Uber’s vehicle was a prototype, and like all prototypes other than Waymo’s near-release vans, it was operated on public roads only under the supervision of a “safety driver.” All vehicles in the prototype phase regularly encounter problems which require the safety driver to take control to prevent an accident, especially in their early days. That is not surprising or even particularly dangerous when you have a good safety driver protocol and good safety drivers. Uber had neither – they selected bad safety drivers, and put only one in the vehicle where all similar teams use two. Worse, their safety driver, in complete abrogation of her assigned duties, was allegedly ignoring the road and watching a video on her phone at the time of the incident. The core blame remains there.

The new documents, however, disclose a system that was in a poorer state than expected, one which should not have been deployed with just one safety driver and, I will declare for the first time, in many ways should not even have been deployed with two – even though, at all other teams, two-safety-driver operation has an excellent safety record.

The core issues

We now have a better view of the mistakes here. I will rank them in the order of blame that should be assigned to them; the newly revealed issues are discussed in detail in the sections below:

  1. Safety driver ignoring road and allegedly watching video
  2. Only having one safety driver instead of two
  3. Poor decisions in the hiring and training of safety drivers
  4. General poor safety culture at Uber ATG
  5. Possible special tuning of “false positive” thresholds due to VIP demo
  6. System not allowing for pedestrians outside of crosswalks
  7. System forgetting past information when it changes how it classifies something on the road, spoiling path predictions
  8. Delay of one second before sounding alarm in urgent situations
  9. Decision to not allow emergency braking with high deceleration
  10. Bad predictions for future path of perceived “cyclist”
  11. Pedestrian crossing the road despite signs indicating not to cross there

Not ranked (but discussed below): Disabling of the built-in Volvo emergency braking system.

No pedestrians outside crosswalks

The most shocking disclosure is that Uber’s system for classifying things on the road acted as though pedestrians only appear in crosswalks. Herzberg was “jaywalking,” crossing not just away from a crosswalk, but at a spot with signs forbidding crossing there. (The signs are there because the spot otherwise looks very much like a place to cross, a fact over which the city is now being sued.)

The key system in a robocar for safety is the perception system, which receives sensor data and attempts to understand it. Its central task is to split the observed world into different surfaces and objects (known as obstacles if they might be in your way), identify what they are, figure out how they are moving, and predict where they are likely to go. That prediction is crucial, and classification helps a great deal with it. If an object’s predicted path might intersect your own path, that’s of top concern. Because of this mistake, Uber’s car took far too long to decide there was a danger of collision, and it also failed in handling that realization.
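To make that pipeline concrete, here is a minimal sketch in Python of the classify-then-predict step described above. The class names, the constant-velocity extrapolation and the lane-overlap check are my own illustrative assumptions, not Uber’s actual code.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedObject:
    label: str                            # e.g. "vehicle", "pedestrian", "cyclist", "other"
    positions: List[Tuple[float, float]]  # recent (x, y) observations, oldest first

def predict_path(obj: TrackedObject, horizon_s: float, dt: float = 0.1):
    """Extrapolate a simple constant-velocity path from the last two observations."""
    if len(obj.positions) < 2:
        return [obj.positions[-1]]        # no history: assume the object is stationary
    (x0, y0), (x1, y1) = obj.positions[-2], obj.positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    steps = int(horizon_s / dt)
    return [(x1 + vx * dt * i, y1 + vy * dt * i) for i in range(1, steps + 1)]

def crosses_ego_lane(path, lane_min_x: float, lane_max_x: float) -> bool:
    """Flag a conflict if any predicted point falls inside the ego vehicle's lane."""
    return any(lane_min_x <= x <= lane_max_x for (x, _) in path)

# A pedestrian two lanes to the left, 45 m ahead, walking toward the ego lane at 1.4 m/s:
ped = TrackedObject("pedestrian", [(-7.14, 45.0), (-7.0, 45.0)])
print(crosses_ego_lane(predict_path(ped, horizon_s=6.0), lane_min_x=-1.75, lane_max_x=1.75))  # True
```

A planner consuming this output would treat any object whose predicted path crosses the ego lane as a candidate for braking or swerving, which is why both the classification and the retained history matter so much.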

With the system unable to classify Herzberg as a pedestrian, it struggled. Because she was in the left turn lane, the first classification, from radar alone a full 5.6 seconds out, was that she was probably a stopped vehicle. You don’t concern yourself much with vehicles stopped in the left turn lane when you are driving in the right lane.

Quickly thereafter, the LIDAR came in and classified her as an “other,” which is also expected at this sort of range, and a first LIDAR observation tells you nothing about how something is moving – that takes several observations. (Radar doesn’t tell you how things are moving across your path, only how they move toward or away from you.)
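Here is a toy numerical example (values invented, not taken from the report) of why radar alone could read a crossing pedestrian as stopped: a Doppler radar at the origin measures only the radial component of velocity, while cross-path motion has to be inferred from at least two successive position observations, such as LIDAR frames.

```python
import math

def radial_velocity(pos, vel):
    """What a Doppler radar at the origin measures: the velocity component toward or away from it."""
    px, py = pos
    vx, vy = vel
    r = math.hypot(px, py)
    return (px * vx + py * vy) / r        # positive means moving away from the sensor

def cross_path_velocity(p_prev, p_curr, dt):
    """What requires successive observations (e.g. LIDAR frames): motion across the sensor's view."""
    return ((p_curr[0] - p_prev[0]) / dt, (p_curr[1] - p_prev[1]) / dt)

# A pedestrian 50 m straight ahead, walking left-to-right at 1.4 m/s, has zero radial velocity,
# so radar alone can make her indistinguishable from a stopped object.
print(radial_velocity(pos=(0.0, 50.0), vel=(1.4, 0.0)))          # 0.0
print(cross_path_velocity((0.0, 50.0), (0.14, 50.0), dt=0.1))    # (1.4, 0.0)
```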

Forgetting the past

Over the next few seconds, the system alternately classified her as a vehicle or an “other.” A second flaw in Uber’s system was that when an object got reclassified, it forgot everything it had learned up to then from past trajectory points. Each time it changed its mind about what she was, it discarded the most important fact: that she was walking across the road and going to enter the Uber’s lane. Perhaps, if it had stuck with one classification, it might have built a better prediction of what the unknown object was doing and foreseen the risk of collision.

Reclassifying obstacles is fairly common in robocar systems. In vision-based systems, it is not at all uncommon to see obstacles “flicker” as the system identifies them in one frame and not in the next. This is normally fine, because we know that obstacles don’t vanish and reappear, and conclusions can be combined across several frames by remembering the past. Classifications may also vary and then converge as the data get better. But you must remember the past to deal with this issue.
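Below is a minimal sketch of the kind of tracking behavior the report implies was missing: the track keeps its position history even when its label changes, so the motion estimate survives reclassification. The structure and names are hypothetical, not Uber’s code.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Track:
    label: str
    history: List[Tuple[float, float]] = field(default_factory=list)

    def update(self, position, new_label):
        # Described flaw: wipe the history whenever the label changes, e.g.
        #   if new_label != self.label: self.history.clear()
        # Safer behavior: keep the trajectory no matter what the object is currently called.
        self.label = new_label
        self.history.append(position)

    def velocity(self, dt):
        """Estimate motion from the two most recent observations; no history means no estimate."""
        if len(self.history) < 2:
            return (0.0, 0.0)
        (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
        return ((x1 - x0) / dt, (y1 - y0) / dt)

# Even as the label flips between classes, the lateral motion toward the ego lane is preserved.
t = Track(label="other")
for pos, label in [((-7.0, 45.0), "vehicle"), ((-6.6, 44.9), "other"), ((-6.2, 44.8), "bicyclist")]:
    t.update(pos, label)
print(t.label, t.velocity(dt=0.3))   # bicyclist, still ~1.3 m/s toward the ego lane
```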

At 2.6 seconds out, the classifier thought she was a bicyclist. She was indeed walking a bicycle. Again, with no history, her path was unknown. Oddly, after another LIDAR frame she was classed as moving along the lane to the left of the Uber car, perhaps because the system doesn’t expect bicycles to be moving sideways in the middle of a traffic lane – which would be another error. Either way, it isn’t until 1.5 seconds out that the system (switching to “unknown” again) realizes she is coming into the Uber’s lane. Correctly, it plans to swerve around her.

The fatal moment comes at 1.2 seconds out. She is reclassified as a bicyclist, now in the path of the vehicle. The swerving plan can no longer help. It’s time for emergency braking. It’s really time.

Action suppression

Unfortunately, another serious design flaw comes into play. Uber’s system, like many prototype systems, often sees sensor “ghosts” – obstacles that are not really there – and wants to brake for them. A car which is constantly braking for ghosts isn’t usable. You can tune your system to help deal with this: if you tune it to be paranoid and brake for everything, it’s not usable, and if you tune it to be lax, it might not brake for something real, which is even worse. It’s hard to find the right balance, and you actually can’t find it in a prototype. This is one of the reasons you have safety drivers in your car – their job is to use human judgment and hit the brakes themselves so the dangerous failure, not braking for something real, doesn’t happen. Of course, the human has to have eyes on the road and be ready, not be watching a video.
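The trade-off can be boiled down to a single confidence threshold, sketched below with invented numbers: tune it low and the car brakes for ghosts, tune it high and it misses real obstacles. None of these values reflect Uber’s actual tuning.

```python
def should_brake(detection_confidence: float, threshold: float) -> bool:
    """Brake only when the perception system is sufficiently sure the obstacle is real."""
    return detection_confidence >= threshold

ghost = 0.35           # a typical spurious return (invented value)
real_pedestrian = 0.6  # a hard, partly occluded real detection (invented value)

# Paranoid tuning: the car stops for everything, including ghosts, and is unusable.
print(should_brake(ghost, threshold=0.3), should_brake(real_pedestrian, threshold=0.3))   # True True
# Lax tuning: smooth demo rides, but it misses the real pedestrian, which is far worse.
print(should_brake(ghost, threshold=0.7), should_brake(real_pedestrian, threshold=0.7))   # False False
```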

In order to leave this decision to the safety driver, Uber’s car, when it suddenly concludes it must brake hard, waits one second to give the human driver time to do the job. After that second, it does two things: it sounds an alarm, and it initiates its own braking.

It’s not clear why Uber would wait a full second to sound the alarm. It’s also a clear flaw that the system treated this as a new and sudden problem when in fact it had been tracking her for over 4 seconds. It is clearer why it would wait to activate the brakes over a longer time window. This is how the emergency braking systems found in many cars work – first let the driver brake, then give an audible warning, then apply the brakes automatically if the driver doesn’t act. Such a delay is less reasonable in a self-driving car once it has passed the point where a collision is inevitable.

It didn’t initiate its own braking, even after the one second, not just because such braking could now do very little, but because Uber’s fear of braking for ghosts had led them to forbid the system from making very hard braking maneuvers. Any situation requiring such hard braking was supposed to be up to the safety driver.
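Putting the two reported behaviors together, here is a rough sketch of the decision logic: a one-second suppression window, then an alarm, with autonomous deceleration capped so that the hardest braking is left to the human. The one-second figure comes from the description above; the cap value and the code itself are illustrative assumptions, not Uber’s implementation.

```python
ALARM_AND_BRAKE_DELAY_S = 1.0   # wait this long for the safety driver to act first
MAX_AUTONOMOUS_DECEL = 4.0      # m/s^2 cap on the system's own braking (assumed value)

def plan_response(time_since_emergency_s: float, required_decel: float):
    """Return (sound_alarm, commanded_decel) for an imminent-collision event."""
    if time_since_emergency_s < ALARM_AND_BRAKE_DELAY_S:
        return (False, 0.0)                   # suppression window: stay silent, do nothing
    if required_decel > MAX_AUTONOMOUS_DECEL:
        return (True, MAX_AUTONOMOUS_DECEL)   # alarm sounds, but braking is capped
    return (True, required_decel)

print(plan_response(0.5, required_decel=7.0))   # (False, 0.0): still waiting for the human
print(plan_response(1.1, required_decel=7.0))   # (True, 4.0): alarm, capped braking
```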

There has been speculation from earlier reports and leaks that Uber added additional delays because the system was indeed braking for ghosts too much, and they had some very important demo rides for the new CEO coming up. If you give a demo ride and the car brakes for ghosts, that’s very bad. Doing this for a VIP demo ride is not necessarily dangerous, because you know that during the VIP ride you will have extra-diligent safety drivers who will brake manually if it’s needed. You are being dishonest with your VIP, of course, and not showing them the true quality of your system, but you aren’t putting the public at risk if you have those diligent safety drivers. Uber did not do this just for the CEO, though: the delay was present during routine safety driving in the weeks before, which was a serious error.

One safety driver

As noted earlier, Uber had decided, again for unknown reasons, to switch from having two safety drivers per vehicle to having just one. This was clearly a premature switch – while Waymo did this some time ago, their project is much more advanced. The reasoning is baffling. In well-funded robocar development projects, it is usually the number of vehicles that limits the amount of testing you can do, not the number of people available to do safety driving or the cost of hiring twice as many safety drivers. This should not have been a place for cost saving. With two safety drivers you not only get two sets of eyes on the road (though the second driver watches it only from time to time), you also get the social dynamic of having two co-workers in the car. With two in the car, it’s unlikely the second safety driver (known as the software operator) would have tolerated the main driver watching a video.

And, as noted, the one driver did allegedly ignore her duties and watch a video. It should be clear that without that, none of this would have happened. Safety drivers aren’t perfect, of course, but at other teams the record seems to show they provide a level of safety similar to that of ordinary human drivers. Of course, many human drivers fiddle with phones on the road today, and they often end up in accidents.

Volvo Emergency Braking

There has been controversy around the fact that Uber disabled the Volvo’s built-in “automatic emergency braking” and warning functions. This is a fairly common practice in robocar development. You are building a vastly more sophisticated emergency braking and warning system into your robocar, with much better sensors and logic, so it makes sense not to also use the much more primitive system that comes with the car. You don’t want the car to have two masters, with two different systems braking it. And if the stock system has false positives, it stops you from testing what yours does.

It is possible that this common thinking should be reconsidered, however. The existing systems are indeed more primitive, but that means they are also simpler. It may be that a better course is to find a way to feed their outputs through the robocar system, so that it has the ability to deliberately override them when it needs to. The audible alarm should never be disabled, except when you want to do a fake demo for a VIP and don’t care too much about how honest you are.
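One way to “feed their outputs through the robocar system,” sketched below with hypothetical interfaces: treat the stock AEB request as one more input that the self-driving stack can either honor or deliberately override, while the audible warning always passes through.

```python
def arbitrate_braking(oem_aeb_decel: float, robocar_decel: float,
                      robocar_overrides_oem: bool) -> float:
    """Combine the stock AEB request with the self-driving stack's own braking command.

    The robocar may deliberately override the OEM request (say, for a known false positive);
    otherwise the stronger of the two braking requests wins.
    """
    if robocar_overrides_oem:
        return robocar_decel
    return max(oem_aeb_decel, robocar_decel)

def warning_chime(oem_warning: bool, robocar_warning: bool) -> bool:
    """The audible alert is never suppressed: either system can trigger it."""
    return oem_warning or robocar_warning
```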

In fact, the report says that Uber has re-enabled the Volvo system and has had no problems with it, though they are also doing only very minimal operations.

Uber’s fate

Uber was immediately kicked out of Arizona by the governor, and they stopped all testing at all of their test sites. Since then, they have resumed test-track operations, as well as mapping and shadow-driving operations with humans always in control of the vehicles. They want to return to full testing, of course.

Uber quickly settled with the family of the victim – perhaps they will now feel it was too quick. Because the victim was jaywalking, Uber’s liability under the vehicle code was low – she had the duty to yield the right of way to the car. Another section of the vehicle code requires drivers to do their best not to hit pedestrians even when the car has the right of way. Uber was probably in violation of that, but police elected not to pursue it.

The safety driver is still at legal risk for her negligence.

Uber has, apparently, fixed all the things that went wrong in this case, including the flaws in the safety culture at the company. This exact accident will not happen again, and it’s pretty unlikely that these sorts of flaws will happen again. That might be cause to bless their return to the roads.

Alternately, society, through laws and public pressure, might want to tell Uber to take its cars and go home, shutting down their self-drive project and giving them no chance for improvement and redemption. This would send a strong signal to all teams that they can’t have this level of things go wrong, though it would have to be clear that it takes a great deal going wrong to earn that penalty: there will certainly be other accidents in the future, and all of them will have complex causes and things that went wrong, though hopefully not as wrong as this.

I suspect the industry and other teams might welcome this “death penalty,” because Uber has definitely tarnished the entire industry and eroded public trust in and support for what is a very important and eventually life-saving technology. Uber’s departure would actually help restore some of that trust. At the same time, other teams might fear that less negligent activity could also invoke a team death penalty, and thus become too conservative. As much as we don’t want accidents, we don’t want teams to be too conservative on the path to eventual massive life saving. Each year that deployment is delayed will mean the death of hundreds of thousands of people at the hands of drivers who did not get the opportunity to give up the wheel and make their trips more safely in a robocar.

Uber’s stockholders, of course, will want to avoid the end of the project. Uber is correct that self-driving will be the core issue governing the future of the company. On the other hand, Uber today does not own or operate cars; it just sells rides (it claims it doesn’t even do that, merely connecting riders and drivers). It can continue that business in a world where it connects riders to other people’s robocars – that is, if the business of middleman continues to be the keystone role in hailed transportation, which is a big if.




