
Chinese Robotaxi WeRide Safety Driver Apparently Falls Asleep At The Wheel; How Can That Still Happen?


The video above, captured last month on Highway 85 in San Jose, shows a WeRide test robocar with the safety driver apparently asleep at the wheel for at least 45 seconds. The eventual resolution is not shown, but presumably the safety driver later woke up, as no problem was reported. WeRide has stated this driver was suspended, and then terminated after an investigation concluded he did not follow the company’s safety procedures and policies.

This opens up many questions about what procedures are, or should be, in place to prevent safety driver errors, or to handle them when they happen. In particular:

  1. How is WeRide monitoring their safety drivers to ensure they are watching the road?
  2. What fault does the safety driver have for falling asleep? Or is the fault higher up?
  3. Are any other companies not monitoring safety driver attention/sleep?
  4. Is enforcement needed, by making driver monitoring a requirement of testing permits?

The most infamous incident in robocar history involved a safety driver from Uber ATG who started watching a video on her phone instead of working. Her vehicle’s software failed and it fatally struck a pedestrian crossing the road. In addition to the tragic fatality, Uber’s project was taken off the roads for over a year, and was eventually “sold” to Aurora for a negative price (though Uber received stock in Aurora, which should be very valuable when Aurora goes public next month via a SPAC).

Uber was, as it should have been, grilled on how well it trained its safety drivers and how well it monitored them. The safety driver faces criminal charges. Uber faced a long NTSB investigation which primarily blamed the safety driver but also put significant blame on Uber. It was revealed that Uber was not monitoring its safety drivers in real time, though it had the rarely used ability to review how they did after the fact. In addition, Uber had switched from two staff in the car to one. A second staffer would never have allowed the main safety driver to watch a video, and would also have been an extra set of eyes on the road.

Part of the NTSB recommendation was to monitor safety drivers, and at the time of the incident an open source software package was released for free that could let any team do that with a driver-facing camera. Many ADAS “pilot” systems, now finally including Tesla Autopilot, monitor customer drivers to make sure they are paying attention to the road while using those systems.

How do safety drivers work?

Companies testing prototype robocars always start by having them drive with a “safety driver” behind the wheel, watching the road and ready to take over if the system makes an error, or if the driver fears it might make a dangerous one. Many teams (and all of them initially) would have two staff in the car, one at the wheel and the other monitoring the software while also watching the road and the person at the wheel. When the system is new, the safety drivers intervene regularly. As it gets better, interventions become less frequent. At a certain point, some teams decide to reduce to a single safety driver in the vehicle, as Uber did. It is a probable step on the path to having zero, which is everybody’s eventual goal.

Generally, the safety driver approach has worked very well. In many tens of millions of miles of driving with safety drivers, incidents are extremely rare. The Uber incident involved total dereliction of duty. Waymo has had one low-speed at-fault accident, with a bus, where it was concluded that the safety driver would have done the same thing the car did, and thus did not intervene. When safety drivers are properly trained and doing their job, the approach has allowed prototype cars to be tested without significant risk to the public; in fact, with less risk than ordinary drivers present just by driving.

This WeRide incident

As you would expect for a good robocar system, the car kept driving properly in its lane even though the safety driver had fallen asleep. That is a lot better than what happens in an ordinary car. It is feared that falling asleep may cause more fatalities than alcohol, but it is impossible to know for sure because, unlike with alcohol, no blood test can show that a driver was asleep at the time of a crash.

WeRide provided only limited answers to inquiries about what sort of driver monitoring they have. WeRide stated that the “operator in a control center and the in-vehicle driver cross check each other in regular mode throughout the road test to closely monitor the performance of the driver. However this method is not the best way to monitor safety drivers’ status. We continue optimizing our system and mechanism to reduce human intervention during our test to ensure the safety.”

They declined to say what happened here or why the safety driver was inattentive for at least 45 seconds. They also declined to describe what other driver monitoring they have, strongly implying they do not have any camera-based driver monitoring, and declined to state what happened after the 45-second video ends.

In one brief moment of the clip, the safety driver appears to have a phone in his hands, but the tilt of his head for 45 seconds suggests sleep rather than extended phone use. If it was phone use, this becomes more like the Uber incident: a driver deliberately ignoring the road, rather than doing so unintentionally.

WeRide appears to place the blame on the safety driver for not following its training and rules. However, it is highly likely that no amount of training and procedure can stop drivers from sometimes falling asleep. As we know, it is an all-too-common event for regular drivers, and it happens despite commonly causing an accident, often one that costs the driver their life. The incentives not to fall asleep could not be higher. A company can make efforts to ensure drivers are well rested and tell them to stop work if they are drowsy, but this will never be perfect.

The question has also come up of whether safety driving presents more risk of falling asleep than ordinary driving. It is not as engaging; there is nothing to do but watch, in a well-performing car. Unlike driving a regular car, the penalty for falling asleep will usually not be an accident unless the sleep is long. Research at Waymo and other teams has often found untrained test subjects falling asleep while in a robocar, even when there were clear instructions to try to avoid it. This was one of the reasons Waymo abandoned efforts to make a so-called “Level 3” car, which allows the operator to take their eyes off the road as long as they can be called to retake control with a 10-second warning. Drivers who are asleep may not be able to resume control in 10 seconds.

Training and rules will reduce the chance of falling asleep. They may also reduce deliberate inattention, as happened in the Uber fatality. (We know that regular car drivers routinely do things like write text messages even in cars that have no safety systems to protect them.)

The two main approaches to fully preventing this are to have a second person in the car who will spot if somebody falls asleep or is inattentive, or to have a computer system monitor the safety driver, usually with a camera that tracks the gaze of their eyes or the position of their head. Tesla uses a system where supervising drivers must keep their hands on the wheel and apply regular torque to it. There are arguments about which is better; camera systems allow “hands-free” operation. Having regular check-ins with a remote co-worker seems like it should help, but it was evidently not sufficient in this case.
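
For illustration, here is a minimal sketch of how the camera-based approach typically works, assuming eye landmarks are already supplied by a face-landmark model (such as dlib or MediaPipe). The eye-aspect-ratio threshold and frame counts below are illustrative assumptions, not values from WeRide, Tesla, or any other vendor.

```python
from collections import deque

import numpy as np


def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) from six landmarks around one eye.

    `eye` is a (6, 2) array ordered corner, upper-1, upper-2, corner,
    lower-2, lower-1, as in the standard EAR formulation. The value
    falls toward zero as the eye closes.
    """
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return float(vertical / (2.0 * horizontal))


class DrowsinessMonitor:
    """Flags possible sleep when both eyes stay closed for too many frames.

    Landmarks are assumed to come from a face-landmark model (e.g. dlib or
    MediaPipe); the threshold and window length are illustrative only.
    """

    def __init__(self, ear_threshold: float = 0.2, closed_frame_limit: int = 45):
        self.ear_threshold = ear_threshold              # below this, treat eyes as closed
        self.closed = deque(maxlen=closed_frame_limit)  # ~1.5 s of frames at 30 fps

    def update(self, left_eye: np.ndarray, right_eye: np.ndarray) -> bool:
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        self.closed.append(ear < self.ear_threshold)
        # Alert only when every frame in the window shows closed eyes.
        return len(self.closed) == self.closed.maxlen and all(self.closed)
```

A real monitor would also track head pose and gaze direction, and would feed its alerts into escalation logic like the sketch further below.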

If a driver is detected nodding off, they can usually be alerted with loud noises or, in more extreme cases, a short, sharp brake jab if it is safe to do so. Failing that, the vehicle can attempt to quickly pull to the side of the road, this time with real braking.
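
As a rough illustration of that escalation, a policy like the following sketch might be used; the thresholds, action names, and timing here are hypothetical, not any company's actual procedure.

```python
from enum import Enum, auto


class Action(Enum):
    NONE = auto()
    AUDIO_ALERT = auto()  # loud chime or spoken warning
    BRAKE_JAB = auto()    # short, sharp deceleration to rouse the driver
    PULL_OVER = auto()    # steer to the shoulder and stop with real braking


def escalate(seconds_inattentive: float, safe_to_jab_brakes: bool) -> Action:
    """Map how long the driver has been inattentive to an escalating response.

    The thresholds are hypothetical, chosen only to show the idea.
    """
    if seconds_inattentive < 2:
        return Action.NONE
    if seconds_inattentive < 6:
        return Action.AUDIO_ALERT
    if seconds_inattentive < 10 and safe_to_jab_brakes:
        return Action.BRAKE_JAB
    return Action.PULL_OVER
```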

Some teams I talked with after the Uber incident stated that they either already had driver monitoring or planned to implement it. WeRide has not stated that it has done this. If it indeed has not, this could be one of those rare situations that calls for regulation. If companies will not take a clearly worthwhile step even after an incident like the Uber fatality, it may be necessary to make them comply.

It is also time for all teams testing on public roads to declare what sort of monitoring they have in place for their safety drivers. This protects the public, but it also protects the industry, because any incident of this sort reflects poorly on the whole industry, especially one that comes after the sad lesson of Uber.

WeRide maintains that human error was at fault here, and states: “As we continue advancing our self-driving technology, we recognize we are not immune to this factor in our testing. This is why we believe self-driving technology is an important advancement that can provide safer mobility solutions.”

WeRide is in a special position in that they have one of the few California permits that allows testing with no safety driver at all, and they are the only company to also have a similar permit in China. In a way, this is somewhat orthogonal. Testing under that permit requires confidence that the safety driver is no longer needed. A system that good can tolerate the safety driver sleeping if it can tolerate them being absent. We don’t necessarily have to be scared about how their unmanned vehicles will perform if they failed to properly monitor safety drivers. Even so, it would be good to see more transparency from WeRide and other companies on what sort of monitoring they are doing. Safety driver inattention caused the most obvious and terrible failure in the history of robocars, and it’s not good to ignore it.



