One of the most interesting panel sessions about the future of mobility at the TechCrunch Disrupt conference in San Francisco last week brought together Jesse Levinson, cofounder of Zoox, along with Manik Gupta, Chief Product Officer at Uber, and Sebastian Thrun, cofounder of Udacity and keenly focused nowadays on his newest role as CEO of Kitty Hawk (aiming to bring flying cars to reality).
A standout comment by Jesse hit the proverbial nail on the head, a remark that he and I discussed at length after the panel and a point that I've been making repeatedly, namely: wireless should not be a safety case for self-driving cars.
Many would let this insightful remark go in one ear and out the other, not necessarily realizing the gravity of its proclamation.
As background, there are some in the driverless car industry who are touting their use of teleoperations to aid so-called self-driving car advancements. This means that when a self-driving car gets stuck in some manner, such as becoming confused upon reaching a roadway construction site, it will "phone home" by activating a communications link to a human operator.
The human operator might be anywhere on planet Earth, perhaps just miles from the bewildered driverless car or the operator might be sitting in a dark room on the other side of the globe, ready to take over the driving task from the on-board AI system that was driving the vehicle.
Though at first glance it might seem like having a human “driver” as a remote back-up makes sense, you need to carefully consider the ramifications of such a strategy.
As per the sentiment of Jesse's remark, betting on a wireless connection for the safety of a self-driving car, its human occupants, and the lives of those near the vehicle does not strike many as an appropriate approach.
Indeed, it is downright dangerous when used as a crutch for any safety-related, real-time, on-demand aspect of operating a true self-driving car.
Let’s unpack the matter.
Self-Driving Car Levels
It is important to clarify what I mean when referring to true self-driving cars.
True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.
These driverless cars are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don't yet know whether achieving it will even be possible, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
Since the semi-autonomous cars require a human driver, such cars aren’t particularly pertinent to the teleoperation question.
Most would concede that the human driver will be physically present in the vehicle for any Level 2 and Level 3 car. Thus, having a connection to a remote driver would be rather unusual and seemingly unnecessary (unless you believe that the in-car driver might have suffered a heart attack or otherwise falter in the driving task).
In most jurisdictions, the driver sitting in the driver's seat is considered the responsible party for the driving actions of the car. Despite those dumbbells that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, do not be misled into believing that you can take your attention away from the driving task while driving a semi-autonomous car.
Remote Operations Aspects
For remote driving of a car, the focus is mainly on Level 4 and Level 5 driverless vehicles.
Some liken the concept of a remote operator for Earth-bound driverless cars to the notion of landing a rover on Mars and being able to remotely operate the rover.
If we can remotely control a vehicle that's all the way out on Mars, some 140 million miles distant, and presumably adroitly drive it around the unforgiving Martian landscape, surely we can do the same for a driverless car that's just a few hundred or maybe several thousand miles away here on Earth.
The faulty logic in the Mars rover analogy is that if you were to consult any well-versed space scientists and engineers, you'd realize that they aren't betting on a tightly woven, fully assured remote connection occurring in real time.
Problematic aspects can readily arise, including:
· The connection can have delays or latency
· The connection can be intermittent
· The connection can fail completely
· The connection can have noise and be unintelligible
· The connection can be fraudulently overtaken
· Etc.
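These failure modes can't be wished away; any teleoperation design has to detect them in software. Here is a minimal watchdog sketch (the class name and thresholds are invented for illustration, not any vendor's actual implementation) that classifies the health of a remote-driving link from its heartbeat timing:

```python
import time

# Hypothetical thresholds, purely illustrative
MAX_LATENCY_S = 0.25       # one-way latency beyond which commands are stale
HEARTBEAT_TIMEOUT_S = 1.0  # silence beyond which the link is treated as lost

class LinkWatchdog:
    """Classifies a teleoperation link as OK, DEGRADED, or LOST."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.last_latency = 0.0

    def on_heartbeat(self, sent_at: float) -> None:
        # Assumes sender timestamps use the same monotonic clock (illustration only)
        now = time.monotonic()
        self.last_heartbeat = now
        self.last_latency = now - sent_at

    def status(self) -> str:
        silence = time.monotonic() - self.last_heartbeat
        if silence > HEARTBEAT_TIMEOUT_S:
            return "LOST"       # intermittent or failed connection
        if self.last_latency > MAX_LATENCY_S:
            return "DEGRADED"   # delayed or laggy connection
        return "OK"
```

The moment the status degrades or the link goes silent, the vehicle must already have an on-board fallback, which is exactly the crux of the safety-case debate.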
I realize that the vast distance to Mars tends to exacerbate these connection woes, but don’t be fooled into thinking that those same issues cannot arise when the distance is just hundreds or thousands of miles away.
When safety comes into play, any reliance on a remote operator means that you are accepting an increased risk that the remote operation will not occur on a timely basis.
Suppose a self-driving car comes upon a construction zone and gets overwhelmed by a slew of cones and a construction crew that is meandering around.
Handing over the driving controls to a remote human operator seems like a clever and bona fide means to handle the situation.
Imagine though that the remote human operator instructs the car to proceed ahead, doing so under the watchful eye of the faraway driver.
Suddenly, the connection drops.
What happens next?
Maybe the driverless car should continue ahead since that was the last instruction it received. Of course, if the car is about to fall into a gaping hole in the road, we probably wouldn't want the vehicle to blindly proceed on its own.
You might be tempted to say that it is obvious that if the connection is broken then the driverless car should immediately come to a halt. Unfortunately, there are plenty of circumstances whereby halting the car can lead to other consequential dangers, such as being stranded now in the middle of a road or sitting in a place that other vehicles might ram into the stuck car.
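Notice that neither "always continue" nor "always halt" is a safe universal rule; the right fallback depends on where the car happens to be when the link dies. As a toy illustration (the function name and inputs are invented for this sketch; a real system would reason over vastly richer state):

```python
def fallback_on_link_loss(in_travel_lane: bool,
                          shoulder_reachable: bool,
                          hazard_ahead: bool) -> str:
    """Pick a minimal-risk maneuver when the remote link drops.

    Illustrative only: a real vehicle would weigh far more state
    (traffic, maps, sensor confidence) than these three booleans.
    """
    if hazard_ahead:
        return "controlled_stop"       # never blindly continue toward a hazard
    if shoulder_reachable:
        return "pull_to_shoulder"      # clear the travel lane if possible
    if in_travel_lane:
        return "creep_to_safe_harbor"  # stopping in-lane invites rear-end collisions
    return "controlled_stop"
```

The point of the sketch is simply that the on-board AI must carry this judgment itself; the remote link cannot be part of the answer, since its absence is the very condition being handled.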
Overall, a remote operator could inadvertently cause a driverless car to get itself into a worsened posture than if the AI was tending to its own affairs.
And, purists would assert that a true self-driving car is one that has no human driving at all, for which the use of a remote operator violates that rule.
There is a subtle and potent urge to have a human driver potentially be at-the-ready to jump into the midst of a driving task that an AI system is supposed to be doing. You might think of this as a parent that is aiding their teenage driver learning how to drive. The parent is willing to reach over and take the wheel since they are being sincere in their desire to aid their child.
One concern is that we aren’t going to arrive at true self-driving cars if we bake into the equation a human driver. Sure, the human driver isn’t sitting inside the car, but nonetheless, you are saying that at some point the AI can’t cut it and therefore a human driver is needed.
Will your teenager ever really be able to drive a car on their own if they rely upon the parent sitting nearby to take over the controls?
In the case of a driverless car, can a car that requires a remote human operator properly be referred to as driverless, given that it apparently needs a human driver from time to time?
Many would assert that it is not a true self-driving car and merely another kind of semi-autonomous car.
Remote Operator Aspects
As mentioned, there are lots of potential connection disruptions and issues that can impede a real-time effort to drive a vehicle remotely.
You also need to consider the nature of the remote human operator as another risk factor.
In theory, the human remote operator will be instantly available and will be remarkably aware and astute whenever a driverless car needs help.
You cannot say for sure that any of those facets will always occur.
Perhaps there is a delay in notifying the remote human operator, thus losing precious seconds when the touch of a human driver is apparently required.
By the way, for those of you thinking that 5G will somehow magically guarantee instantaneous and continuous communications with self-driving cars, you’re wrong, sorry.
Returning to examining the role of the human remote operator, suppose the operator is distracted by some other task, perhaps already engaged in aiding another driverless car that has requested assistance.
And, even if the human operator is attentive and alert, they still are reliant on whatever the driverless car is showing them about the driving environment. The cameras might be obscured, or other sensors might be reporting data insufficiently for the remote operator to get a comprehensive understanding of the driving scene.
There might also be a remote human operator who has received inadequate training and perhaps has been working a 12-hour shift. They are worn out and won't be remotely operating your car as carefully as you might wish.
If you were a passenger inside a car that had a remote human operator, how safe would you feel about the matter?
At least when you are in an Uber or Lyft, you can see the driver with your own eyes.
You can assess the aptitude and awareness of the driver.
You are obviously still dependent upon the driver, but at least you can have a greater assurance of the driver’s capabilities and attention to the driving task (some argue that a driverless car being piloted by a remote operator could have a camera pointed at the remote operator, thereby providing a semblance of having the driver sitting in the car, though this has its own downsides and complications).
From a safety perspective, the remote human operator presents at least three key safety gaps:
1) Risks of connection or networking disruptions and latencies during the driving task
2) Risks of the human operator not being attentive to the driving task
3) Risks of the driving environment not being well-conveyed to the remote human operator
Other Remote Connections
One of the most frequent assumptions that many have about self-driving cars is that these autonomous cars will be immersed in all kinds of connected communications.
There are V2V (vehicle-to-vehicle) electronic communications that will allow a self-driving car to communicate with other self-driving cars. This might involve alerting that there is debris on the roadway and thus one self-driving car that comes upon the matter will notify upcoming driverless cars to be watchful.
There could be V2I (vehicle-to-infrastructure) electronic communications. Roadway infrastructure such as bridges might send out a signal to forewarn driverless cars that the bridge is impassable.
There is OTA (Over-The-Air) electronic communications, meaning that a driverless car can send its data up to the cloud and also receive software patches and updates from the cloud.
Since those capabilities also depend upon making remote connections, you would be right in believing that a true self-driving car should not be reliant upon any of those added features during the performance of the driving task in real-time.
A true self-driving car should be able to conduct the driving task without the need for V2V, without the need for V2I, and without the need for OTA.
Don’t misunderstand that statement.
I’m not saying that self-driving cars should eschew the use of V2V, V2I, or OTA.
Instead, the point is that those should not be safety-critical elements. The safety of the self-driving car must not depend upon the expectation that a reliable and robust connection to V2V, V2I, or OTA is available.
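In software terms, that design principle means V2V, V2I, and OTA inputs should only ever tighten the vehicle's caution, never be required for a safe plan. A sketch of the advisory-only pattern (the message format and function name are hypothetical, purely to illustrate the idea):

```python
def plan_speed(base_safe_speed: float, v2x_advisories: list[dict]) -> float:
    """Fold V2X advisories into planning as hints only.

    base_safe_speed comes solely from on-board perception; advisories
    can only make the plan more cautious, never less cautious, so a
    dropped connection loses a convenience, not a safety guarantee.
    (Hypothetical message fields, for illustration.)
    """
    speed = base_safe_speed
    for msg in v2x_advisories:
        if msg.get("type") in ("debris_ahead", "bridge_closed"):
            # min() ensures an advisory can never raise the speed
            speed = min(speed, msg.get("advised_speed", 0.0))
    return speed
```

Because the baseline comes entirely from on-board sensing, the vehicle drives identically whether or not any of these remote channels happen to be working.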
The goal is a fully standalone and safe self-driving car.
A human driver can drive a car without any kind of remote connection. You can get into a car and drive it without having a cell phone. You can drive it on a fully standalone basis.
True self-driving cars are supposed to be able to drive a car in any manner that a human driver could drive a car (some exceptions apply).
Pundits of self-driving cars might argue that a driverless car needs at least one kind of remote connection, namely the use of GPS to be able to navigate while driving.
Well, you might recall that human drivers once drove cars without GPS.
Presumably, a true self-driving car should be able to drive even without GPS (using SLAM), or when GPS is having remote connection issues. Many would say that the use of on-board stored maps in combination with the car's IMU should suffice when needed (in a pinch).
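One way to picture that graceful degradation is a localization selector that never depends on a live GPS fix. This is an illustrative sketch with invented names and thresholds; production systems fuse these sources continuously (e.g., with a Kalman filter) rather than switching discretely:

```python
def choose_localization(gps_fix_ok: bool,
                        slam_confidence: float,
                        seconds_since_fix: float) -> str:
    """Pick a localization source, degrading gracefully without GPS.

    Hypothetical thresholds for illustration only.
    """
    if gps_fix_ok:
        return "gps_plus_stored_maps"
    if slam_confidence > 0.8:
        # Camera/lidar matching against on-board stored maps
        return "slam_plus_stored_maps"
    if seconds_since_fix < 30.0:
        # IMU dead reckoning is acceptable only briefly; drift accumulates
        return "imu_dead_reckoning"
    return "minimal_risk_stop"
```

The key property is the last line: when every source degrades, the vehicle falls back to its own minimal-risk behavior rather than phoning home.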
Conclusion
There are myriad variants on the remote operator matter.
For example, some agree that remote operators should not be a safety case but can be allowed for other kinds of use cases of self-driving cars.
Suppose you have self-driving cars in a parking lot that has no people in it. There aren't any humans in the cars, and there aren't any humans in the restricted parking lot. Since there aren't any humans that might get hurt, perhaps it's okay to allow remote operators to control the self-driving cars.
The worst-case scenario is that a car bumps into another car.
One concern about allowing any kind of remote operation is that it might be invoked in situations whereby there are humans that could get hurt. Imagine that a human inadvertently wanders into the restricted parking lot, or a human was asleep in one of the parked cars and was not noticed. And so on.
A more everyday example would be a remote dispatcher that connects to a self-driving car and indicates a destination. The remote dispatcher though is not acting as a remote operator or driver. Instead, the remote dispatcher only provides overall directives and it is then entirely up to the on-board AI to drive the car.
Another viewpoint is that maybe we consider using human remote operators temporarily, allowing us to start using self-driving cars right away, and once self-driving cars are fully readied then we revoke the remote operations capability.
Here’s another angle on allowing remote human operators that might pique your interest.
There are conspiracy theorists that are worried that AI systems will take over all our cars, making us reliant on AI and becoming essentially enslaved to AI. In that use case, the relief valve is that there would be human operators that could take control of the AI-slave cars and save us from utter domination and destruction.
Overall, the remote operation of self-driving cars is a controversial topic and there are vigorous debates and disagreements involved.
Some companies are betting that remote operators for driving will become a booming business.
They are readying offices that will house dozens or even hundreds of remote drivers. The locations are dispersed around the world so that remote human drivers will be available 24 x 7, at any time of the day or night.
An autonomous car is presumably autonomous, or can we have autonomous cars that are also able to use remote human operators?
Yes, some emphasize, human remote operators can be established, and we can use them if we realize the safety risks involved and are willing to accept those risks.
For those of you that believe the AI will be safer, you would be alarmed that we are introducing humans into the loop.
If you believe that the AI won’t be safe enough, perhaps keeping humans in the loop will help, though it could also hurt.
Can we use remote operators as an armchair driver, co-sharing the driving task with the AI, allowing the AI to deal with the moment-to-moment driving and then having the remote human deal with less time-dependent facets?
There are lots of combinations and permutations to be resolved, and the safety of us all is at stake.