There is an elephant in the room, but it seems as though nobody is noticing it.
In the ongoing push toward the fielding of self-driving cars, there is a crucial safety-related factor that has not gotten much attention and that will, regrettably and predictably, become a sorrowful point and a colossal problem, one that could readily undermine and impede ongoing progress for self-driving cars.
The matter came up, inadvertently and indirectly, in the recent series of tweets between Elon Musk and Waymo concerning Waymo’s announcement that it is removing the back-up human drivers from its Phoenix-area self-driving showcase.
The notable significance of this announcement is that Waymo’s self-driving cars will proceed apace with their driving efforts, carrying human passengers, but do so without any human driver on board (customarily, a back-up or safety driver sits in the driver’s seat, monitors the AI-based driving, and presumably remains attentive and ready to intervene in the driving task if the self-driving system falters or otherwise appears unable to handle a driving situation).
Many would say that this is a gutsy move by Waymo.
Some emphasize that this abundantly demonstrates the faith that Waymo must have in the autonomous capability of its self-driving system and represents an important step forward for the company and the entire self-driving industry.
Imagine, though, the potential backlash if this giant leap ends up with a self-driving car crash or incident that involves injuries or possibly even fatalities. Undoubtedly, the news coverage would be massive, likely rousing the public and giving added fodder to regulators that might seek to slow down such roadway tryouts. The spillover effect would not simply be a black eye for Waymo; it would seemingly put the entire realm of self-driving into the doghouse.
You might at first glance assume that the safety facet I am alluding to must be the omission of a back-up or safety driver from the vehicle.
Nope, that’s not the elephant per se (one might suggest that removing the back-up driver is the obvious 600-pound gorilla that sits front-and-center in the brouhaha and can hardly be missed).
The unseen or unnoticed elephant within all of this Musk-versus-Waymo Twitter storm can be stated in two simple words: Steering Wheel.
In today’s column, I’ll explain why there is an unrealized danger associated with the everyday, commonplace steering wheel and how it is potentially going to trip up the self-driving efforts of nearly all the entities underway on these state-of-the-art endeavors.
It is the story within the story, as it were.
Tweets Aplenty And Self-Driving Maneuvering
Let’s start with the spat between Elon Musk and Waymo.
The tweet dustup that took place involved Elon Musk commenting about Waymo’s announcement by stating this (his October 8, 2020 tweet): “Waymo is impressive, but a highly specialized solution. The Tesla approach is a general solution. The latest build is capable of zero intervention drives. Will release limited beta in a few weeks.”
For those of you that don’t perchance have a decoder ring to ferret out the insider lingo embedded in that tweet, Musk was essentially claiming in the first part of his tweet that since Waymo is focused on particular locales, in this instance the Phoenix area, the approach might ergo be construed by some as “highly specialized.” In short, Musk has generally been a critic of relying upon detailed digital pre-mapping of an area for use by a self-driving car, and he is among those that vehemently contend that such maps are an unnecessary crutch (for my coverage on these heated and contested topics, see the link here).
As a brief explanation, some argue that AI-based driving systems should be able to drive without the aid of specially created digital maps, which one could assert is akin to how humans drive. Namely, a human driver does not need detailed maps per se and can readily drive just fine in an area that the human has never driven before. The argument goes that if self-driving cars are reliant upon pre-constructed detailed digital maps, the AI won’t be able to drive in places that have not already been subjected to that kind of specialized and deep mapping.
Thus, those self-driving cars are not “generalized” for driving and are dependent upon a pre-mapping effort, potentially limiting where that particular AI-based driving system can do its driving (a counter-argument is that such detailed digital maps will inevitably become prevalent, plus there is an argument that driving with such maps is likely to be safer than driving without them).
Self-driving cars are at times deployed by defining an Operational Design Domain (ODD), meaning that the maker has established that the vehicle can operate within a specific domain, such as a given geographic boundary, along with perhaps weather-related stipulations such as not operating in, say, snow, and possibly indications of time-of-day operation for only daylight hours, etc. This is considered part-and-parcel of Level 4 self-driving cars.
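To make the ODD notion a bit more concrete, here is a minimal illustrative sketch of how an ODD might be expressed as a configuration; the field names and values are assumptions of mine for illustration only, not any actual automaker’s specification:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an Operational Design Domain (ODD) declaration.
# Real deployments use far richer specifications than this.
@dataclass
class OperationalDesignDomain:
    geofence: str                  # named geographic boundary of operation
    max_speed_mph: int             # speed ceiling within the domain
    daylight_only: bool            # whether operation is limited to daylight
    excluded_weather: list = field(
        default_factory=lambda: ["snow", "heavy_rain"]
    )

# An ODD loosely patterned on a Phoenix-style geofenced deployment.
phoenix_odd = OperationalDesignDomain(
    geofence="Phoenix-metro service area",
    max_speed_mph=45,
    daylight_only=False,
)

def within_odd(odd: OperationalDesignDomain, weather: str, is_daylight: bool) -> bool:
    """Return True only if current conditions fall inside the declared domain."""
    if weather in odd.excluded_weather:
        return False
    if odd.daylight_only and not is_daylight:
        return False
    return True
```

The essential point is that a Level 4 vehicle is only warranted to drive itself when a check of this sort comes back true; outside the declared domain, the AI is not supposed to be driving at all.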
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3.
The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
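For quick reference, the level taxonomy can be condensed into a simple enumeration; this is merely a sketch of the standard levels as just described, with the one-line comments being my compressed paraphrases:

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    """Condensed summary of the driving automation levels discussed above."""
    LEVEL_0 = 0  # no automation; the human does all the driving
    LEVEL_1 = 1  # a single assistance feature, e.g., adaptive cruise control
    LEVEL_2 = 2  # ADAS co-shares the task; the human must stay fully attentive
    LEVEL_3 = 3  # conditional automation; the human must be ready to take over
    LEVEL_4 = 4  # true self-driving, but only within a defined ODD
    LEVEL_5 = 5  # true self-driving anywhere a human could reasonably drive
```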
There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there. Meanwhile, the Level 4 efforts such as Waymo’s roadway tryouts are gradually trying to gain traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out; see my indication at this link here).
For semi-autonomous cars, which do require a human driver at the wheel, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. Tesla, with its Autopilot and self-named FSD (Full Self-Driving), is currently at Level 2 and has received harsh remarks from those in the self-driving realm due to the dangerous situation brewing. The concern is that drivers of Teslas using those features tend to become complacent and assume that the ADAS can do more than it really can.
Returning to Musk’s tweet, some would say that he has once again proffered his usual brashness and boldness by claiming that the Tesla approach will be a generalized solution and will be “capable of zero intervention” during the act of driving the vehicle (meaning that there is no need for a human driver, neither a back-up driver nor a driver of any ilk).
In another tweet on that same day, Musk indicated: “Our new system is capable of driving in locations we never seen even once” (the wording written in a typical tweet-oriented shorthand manner).
All told, he has continued to make promises about this upcoming “limited beta” that have both whetted the appetite to see what he has and garnered a great deal of criticism from industry pundits for being as yet unproven and essentially construed, for the moment, as a vaporware kind of claim. For my extensive coverage on all of this, see the link here.
Shortly after Musk’s tweets, Waymo responded with this tweet: “Yep, we specialize in zero intervention driving. Check out our steering wheel labels.”
A picture was included with the tweet and showed a steering wheel that had on its hub a label stating “Please keep your hands off the wheel” and another stating “The Waymo Driver is in control at all times.”
Waymo refers to its AI-based driving system as the Waymo Driver, and is known for indicating that it is not in the business of making self-driving cars per se but rather in the business of making AI-based driving systems, suggesting that it ultimately will be able to port the AI capability to whatever suitable self-driving-equipped cars might eventually emerge.
It seems relatively apparent that the Waymo tweet was a retort to Musk for potentially trying to lay claim to Tesla being the only game in town able to do “zero intervention” driving. Waymo was seemingly trying to set the record straight and point out that it is actively going down the zero-intervention path right now.
In so doing, Waymo added a bit of icing on the cake that is what I am suggesting is the elephant in the room, perhaps without realizing it and otherwise just offering an eye-catching rejoinder.
Eye-catching but in another way for those that are in the know.
Time to shift gears and look directly at the elephant.
The Steering Wheel Conundrum
I think we can all agree that a steering wheel is quite an important thing.
In the early days of cars, makers experimented with tillers for steering, which gradually gave way to the steering wheel. Steering wheels were purposely made large to ease the burden of turning the wheels of the car; this was before power steering emerged. Inexorably, the steering wheel became a relatively standardized piece of equipment, and in so doing made it simpler for people to drive cars. You can imagine the issues we would have today if there were dozens of different ways to steer a car and you had to figure out each idiosyncratic approach whenever you rented a car or bought a new one.
Federal regulations provide a very detailed specification about the steering wheels of cars that are used in the United States. The Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA) are integral to devising and updating the Federal Motor Vehicle Safety Standards, including those rules and regulations governing the ordinary steering wheel.
In short, a car must have a steering wheel, and it must meet the stated regulations.
I doubt that any of you would be surprised at such a revelation; it seems extraordinarily perfunctory. Yes, that is perhaps the case, though you need to add a new twist to the whole topic: the advent of the self-driving car.
Should a self-driving car have a steering wheel?
One ardent viewpoint is that a true self-driving car should not have a steering wheel, nor any driving controls such as the accelerator and brake pedals. The notion is that if you include the driving controls, you are basically telling passengers that they potentially can drive the car. But the foundational assumption about Level 4 and Level 5 self-driving cars is that the AI is supposed to be doing the driving. Indeed, if you believe that the 40,000 annual car crash fatalities and the 2.5 million related injuries will be reduced or possibly reach zero via the advent of self-driving cars (I’ve continued to exhort that zero fatalities are a zero chance), you would likewise fervently argue that this well justifies not allowing any human driving within self-driving cars.
Meanwhile, here’s the rub.
For those crafting self-driving tech right now, do you put your AI-enabled driving system into a car that doesn’t have a steering wheel, or into a car that does?
The future is supposed to include self-driving cars that have no steering wheel in the vehicle. Existing federal regulations, though, generally prevent the use of a car on public roadways that lacks a steering wheel. To allow for exceptions in the case of efforts to develop self-driving cars, an exemption can be applied for. I’ve covered, for example, GM Cruise’s Origin, which has no steering wheel, and Nuro’s R2, which likewise doesn’t have one (see my analyses at this link here).
There have been repeated calls by those in the self-driving car realm to have the regulations changed to more readily accept that a car does not have to have a steering wheel. Thus, rather than having to apply for an exemption (usually a somewhat lengthy, complicated, and some assert overly arduous process), the new norm would be that cars can optionally have a steering wheel, rather than a steering wheel being mandated.
It is perhaps obvious to state that nearly all cars today have a steering wheel, and if you want to try out your AI-enabled driving system it is a lot easier to do so on a conventional car that has the usual driving controls, rather than on the futuristic self-driving cars that will only slowly and gradually emerge.
Plus, another advantage of a conventional car with a steering wheel is that you can readily make use of human back-up drivers. By augmenting a conventional car with the self-driving sensors and other gear, you can still easily ensure that a human driver can command the vehicle. A true self-driving car made without the steering wheel and other driving controls would seemingly preclude the use of a human back-up driver (though some elaborate designs allow for a steering wheel and pedals that “disappear” from view when not needed, thereby allowing for human driving at times).
Welcome to the quagmire.
The reason this is a quagmire is that putting AI-based self-driving into a conventional car with an everyday steering wheel means that there is a possibility of the two-drivers-at-once scenario.
Let’s unpack that.
Suppose that a conventional car is being used for self-driving purposes and has been augmented with self-driving tech and sensors such as video cameras, radar, LIDAR, and so on. Assume that the driver’s seat is blocked off from use by passengers. Passengers can sit in the front passenger seat or in the back seats of the vehicle.
There is nothing more amazing than to witness the steering wheel moving to-and-fro while you are seated as a passenger in such a self-driving car. It seems as though an ethereal hand is on the steering wheel and a ghostlike apparition must be seated in the driver’s seat, moving the wheel mysteriously and mystically. Some view this with rapt attention; others watch with great trepidation and hope that the steering is being done appropriately.
The problem arises if the passenger or passengers were to suddenly decide to make use of the steering wheel.
Your first thought might be that nobody would ever do such a thing, especially if there are clearly displayed signs inside the vehicle warning not to touch the steering wheel. Certainly, everyone will abide by this decree and realize the seriousness of failing to accede to the stern indication.
Do people obey the signs at national parks that firmly warn “Do NOT feed the animals,” or do they violate this frequently and at times end up getting run down by an angered wild animal?
We all know how human behavior can be. People disobey signs. People do stupid things. People get themselves into trouble, sometimes while being entirely clueless about the trouble that they face.
Imagine a self-driving car that has an adult and a toddler in the vehicle. The adult is looking intently at their smartphone and not especially paying attention to the child. The toddler quietly slips into the blocked-off driver’s seat and starts to play with the steering wheel, thinking that it is simply a toy and that it would be joyous to steer a real car.
Here’s another example.
Teenagers pile into a self-driving car for a trip to a school party. A teen seated in the front seat wants to show off to their buddies. The teenager reaches over to the steering wheel and gives it a huge tug, aiming to make the self-driving car do a sudden swerve, figuring this will be uproariously funny and really get the goat of the others in the car.
Yet another example.
A person who went to a bar after work and had one too many drinks is astute enough to request a self-driving car to give them a lift home (better than getting behind the wheel of a car while in a drunken stupor). Once inside the self-driving car, the intoxicated passenger watches in fascination as the steering wheel rotates left and right. Without any sober thinking, the passenger reaches from the back seat and grabs the steering wheel, curious to see what will happen.
One more example, and then I think it will be sufficient to showcase how people might opt to use a steering wheel despite being inside a self-driving car.
Someone inside a self-driving car is fearful of how the AI is driving. Based on this fear, they believe that the self-driving car is about to hit another car, so the person reaches out to the steering wheel in hopes of turning the wheels and averting what they believe is an imminent crash (without knowing what the AI is doing, and that perhaps the AI fully realizes that the car ahead needs to be avoided).
The overarching point is that if the steering wheel is still present inside a self-driving car, and assuming that it is functioning and not otherwise disconnected, there is a chance that a human will attempt to use the steering wheel, despite any kind of admonishments to not do so.
Conclusion
You might argue that the AI will detect the untoward steering usage and can opt to ignore it, refusing to relay the steering indications into the actual steering function of the vehicle.
This is not so easily done.
In fact, for so-called self-driving cars that are intended to be dually drivable by both the AI and a human, the question arises as to which is right and which is wrong in terms of commanding the steering. We usually assume that the moment a human acts upon the steering wheel, they are overtly taking over control.
This is the crux of the two-drivers-at-a-time dilemma. You might remember that years ago some specially equipped cars for driver-training purposes had two sets of driving controls: the usual set in front of the driver’s seat, and another added set at the front passenger position. The instructor would sit at the second steering wheel and could take over the driving if the newbie learning to drive froze up or made a bad steering choice. Despite this appearing to be a clever arrangement, you nonetheless always confronted the danger that either party could make a move that was wrong, or that the one taking over would inadvertently make a wrong move, etc.
As they say, two cooks in the kitchen can lead to some badly cooked meals.
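To see why simply ignoring human input is not a clean answer, consider a bare-bones sketch of the arbitration decision that a drive-by-wire system faces; the function name and the torque threshold here are hypothetical assumptions on my part, meant to illustrate the dilemma rather than depict any production design:

```python
# Hypothetical sketch of the two-drivers arbitration dilemma in a
# drive-by-wire stack; names and thresholds are illustrative only.

HUMAN_TORQUE_THRESHOLD_NM = 3.0  # how hard a grab counts as "taking over"

def arbitrate_steering(ai_angle_deg: float,
                       human_torque_nm: float,
                       human_angle_deg: float) -> float:
    """Return the steering angle to actually command to the wheels."""
    if abs(human_torque_nm) < HUMAN_TORQUE_THRESHOLD_NM:
        # Light touch on the wheel: treat it as noise, keep the AI in control.
        # Wrong choice if a frightened passenger is gently but rightly correcting.
        return ai_angle_deg
    # Firm grab: the usual assumption is that the human is overtly taking over.
    # Wrong choice if it is a toddler, a show-off teenager, or a drunk passenger.
    return human_angle_deg
```

Whichever branch the designer favors, at least one of the scenarios described earlier turns it into the wrong answer, and that is precisely the conundrum.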
Note that this twofer confronts nearly everyone in this realm, other than those already using specialized cars that omit (or completely hide) the steering wheel and the pedals, thus essentially putting only the AI into the true driver’s seat and excluding any human driving intervention.
This facet about steering is the hidden multi-ton elephant that is a hand-wringing qualm within the self-driving industry, entailing the vexing conundrum about the seemingly mundane and unheralded steering wheel that we see and use every day of our driving lives. You can bet your bottom dollar that all of this will eventually entail some rather ugly legal wrangling once there are adverse incidents and the lawsuits start getting slung at the automakers, the self-driving tech firms, and the self-driving car fleet operators. It seems nearly inevitable.
Let’s hope upon hope that we can steer clear of big troubles on all of this and avoid getting trampled by a stampeding elephant (metaphorically speaking).