Transportation

On Frankenstein Results When Self-Driving Cars Go Rogue


It’s Halloween, time to don those scary outfits, including famous monsters such as Dracula and Frankenstein.

Speaking of Frankenstein, we all know the classic story about the monster that was pieced together from parts and then turned on its master, opting to go after humans.

Some worry that AI developers are going to do something similar, making use of AI capabilities that will ultimately create a “monster” that will turn against all mankind. For my explanation about the chances of AI fostering a Frankenstein, take a look at my posted piece here.

Meanwhile, there is a specific area of AI in which some think we might get the first inkling of an AI takeover, or at least an AI-gone-berserk scenario: the advent of driverless cars.

First, consider how a car can turn on its owner; even conventional cars have the capacity to do so.

A recent news item recounted a bizarre but true story involving a woman who was run over by her own vehicle.

While driving her car, she threatened to shoot some trespassers on her property. One of the intruders opened her car door, apparently to grab the gun in her hand; she fell out and was run over because the car was still in gear and lurched forward. She survived with leg injuries and was briefly hospitalized.

Would you say that the vehicle deliberately rolled over her legs?

I don’t think that many would ascribe a foul intention to the vehicle.

You could say that it was her fault as a driver, having exited the vehicle without putting it into park, though she was apparently caught off-guard and didn’t have time to disengage the gear.

No matter how the untoward encounter occurred, there is a certain irony whenever someone’s own car runs them over while no one is driving it at the time of the incident. These situations happen from time to time and catch our attention, perhaps because we often tend to think that our cars have a mind of their own.

People anthropomorphize their cars, ascribing human-like qualities to their prized vehicles. We give pet names to our cars. We gingerly bathe them by taking them to pricey car washes. We adore our cars when they get us quickly to our desired destinations, and we can just as readily loathe them when they break down on hectic freeways in the middle of scowling traffic.

Here’s an intriguing question: Could a self-driving car go rogue?

True self-driving cars are going to be imbued with an AI system that is acting as the driver of the vehicle.

The story about the woman who got run over entailed a car that had no driver at the time and was simply mechanically in gear, mindlessly proceeding forward. If cars are going to have AI systems at the wheel, maybe they could run over someone for reasons either intentional or unintentional.

Let’s unpack the matter.

Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless cars are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will even be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
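For those that like to see a taxonomy expressed in code, here’s a minimal Python sketch of the levels just described. The enum values track the SAE J3016 levels, though the names and the helper function are merely my own illustrative shorthand, not any official API:

```python
from enum import IntEnum

class SaeLevel(IntEnum):
    """SAE J3016 driving-automation levels, 0 (none) through 5 (full)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2      # human co-shares the driving task (ADAS)
    CONDITIONAL_AUTOMATION = 3  # human must stand ready to take over
    HIGH_AUTOMATION = 4         # AI drives entirely, within a limited domain
    FULL_AUTOMATION = 5         # AI drives entirely, anywhere a human could

def is_true_self_driving(level: SaeLevel) -> bool:
    """Levels 4 and 5 need no human driver; Levels 2 and 3 are semi-autonomous."""
    return level >= SaeLevel.HIGH_AUTOMATION
```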

Semi-autonomous cars require a human driver, and yet such cars could indeed run over the very human driver that is responsible for the driving of the car.

How could that happen?

One obvious example involves semi-autonomous cars that are outfitted with a remote driver summons capability, which Tesla has recently opted to roll out (see my piece here about the dangers of the Tesla summons feature).

The driver can stand outside of the car, such as at the curb near a restaurant door, and remotely turn on their car that’s parked in a nearby parking lot. Using a special app on their smartphone, the human can then “drive” the car to their standing position, directing the car via buttons and controls displayed on the phone.

In theory, the human using the smartphone is still considered the driver of the car, even though the car might be guided by ADAS automation that instructs the vehicle to back out of the parking spot and drive through the parking area to try to reach the human summoning it.

If the car manages to get confused, it could indeed run into the human that’s summoning it.
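One common mitigation is a dead-man switch: the summon feature only allows the car to move while the user actively holds down a button in the app, and the car halts the moment the signal stops. Here’s a minimal Python sketch of that control loop; `vehicle` and `app_link` are hypothetical interfaces of my own devising, not any automaker’s actual API:

```python
import time

HEARTBEAT_TIMEOUT_S = 0.5  # halt if no fresh "button held" signal arrives in this window

def summon_control_loop(vehicle, app_link):
    """Dead-man-switch loop: the car creeps forward only while the app keeps
    confirming that the user is actively holding the summon button."""
    last_heartbeat = time.monotonic()
    while not vehicle.reached_summoner():
        if app_link.button_held():                # fresh signal from the phone
            last_heartbeat = time.monotonic()
        if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT_S:
            vehicle.stop()                        # link dropped or button released
            break
        if vehicle.obstacle_detected():
            vehicle.stop()                        # never trade safety for progress
            break
        vehicle.creep_forward()                   # small, low-speed increment
        time.sleep(0.05)
```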

There are plenty of online videos showcasing Teslas that nearly hit pedestrians or other cars during the summoning activity, and some videos of someone or something actually getting hit (luckily, for now, at low speeds and tending not to incur any substantive injuries or property damage).

In the case of true self-driving cars at the Level 4 and Level 5, the possibility of having the car hit someone or something is quite real, especially since there is no human driver involved in guiding or preventing the car from doing so.

When I make such a statement, those in the self-driving car industry get instantly riled up, pointing out that human drivers hit people and things all the time. Yes, that’s absolutely true. Sadly, there are about 40,000 deaths caused by car crashes each year in the United States, out of an estimated 6.2 million police-reported crashes, along with millions of injuries.

All I’m saying is that the belief that self-driving cars will do away with all car crashes and incidents is rather far-fetched and unrealistic.

My tag line has been that zero fatalities is a zero chance of happening.

Presumably, true self-driving cars will radically reduce the number of car deaths and injuries, and aiming to reach zero is a laudable goal, but let’s not mislead the public into believing that the number will drop to zero. It won’t.

Of course, every life saved or injury avoided that might have been incurred due to a human driver is a blessing and we can rejoice if self-driving cars are able to achieve those savings.

Determining Rogue Behavior

What does it mean to say that someone has exhibited rogue behavior when driving a car?

Suppose I told you that a driver the other day opted to maneuver their car up onto the sidewalk.

This certainly seems like an obvious example of rogue driving.

We aren’t supposed to drive cars on sidewalks.

Pedestrians on sidewalks could get hit. Fixtures such as fire hydrants and mailboxes could get smashed into by a car on the sidewalk. All in all, driving onto sidewalks seems to occur only in the movies, when a film is trying to dramatize a character going outside the bounds of normal driving behavior.

Imagine, though, that the car drove onto the sidewalk because the road was closed as a result of a car accident, and furthermore that a cop was directing cars to slowly go onto the sidewalk and around the accident scene. Pedestrians were kept off that portion of the sidewalk, and the cars slowly and carefully used it as a temporary path.

Is that rogue driving?

Probably not.

Some pundits define rogue driving as any type of illegal driving act, but this doesn’t seem to be an especially robust definition. You could argue that driving on the sidewalk was an illegal act, though it was made legal, or at least allowable, by the instruction of an authority figure (the duly authorized police officer).

There is also the matter of whether a rogue act while driving causes any actual harm or not.

Some suggest that if a rogue driving maneuver doesn’t lead to anyone getting hurt, and no property is damaged, it isn’t a rogue situation. This definition doesn’t seem especially satisfying either, since it would excuse reckless maneuvers anywhere and at any time, so long as the act happened not to produce any adverse outcomes.

One other factor involves the intent of the driver.

It becomes a rather murky matter if you say that rogue driving must be intertwined with some form of foul intent. Consider a driver who opts to barrel down an alleyway at 50 miles per hour, endangering people and property: were they intending to cause that endangerment, or were they innocently driving down the alley, perhaps unaware of the dangers they were creating?

For ease of discussion herein, let’s go with the notion that rogue driving consists of any driving action that violates societal norms and expectations for the proper driving of a car. Intent is excised from the definition, and a rogue effort can range from highly egregious to minimally egregious in its adverse outcomes (the magnitude refers only to the severity of the rogue action, not to whether the act counts as rogue at all).

For true self-driving cars, the intent aspect comes up quite a bit due to the voiced concern that someday we might have sentient AI, which would potentially have “intent” akin to humans. When we reach that vaunted point, some worry, the AI could rise up and decide to start running over humans, perhaps doing so out of spite or maybe to rid the world of humans.

I can assure you that we are eons away from having sentient AI and therefore I am not going to entertain the aspects about AI with that kind of human-like intent (for some interesting AI conspiracy theories, see my posting here).

Self-Driving Car Rogue Acts

Consider the myriad of ways that a true self-driving car could commit a rogue driving act.

Keep in mind that the rogue driving act as defined herein is without intent per se and deals with violating societal norms of driving.

·        A self-driving car could veer out of its lane and threaten another nearby car, one that’s being driven by a human driver.

·        A self-driving car could take a tight turn at a corner and nearly hit a pedestrian standing there.

·        A self-driving car could ram into a car ahead that has come to a sudden stop due to debris in the roadway.

·        Etc.

It might be shocking to think that those kinds of driving actions by a true self-driving car could ever occur, particularly since the media oftentimes portrays self-driving cars as destined to be perfect drivers. As if by magical incantation, self-driving cars will seemingly never do anything that could imperil other cars or pedestrians.

It’s troublesome that the media makes such outlandish assertions. Please be aware that those portrayals are utterly out-of-whack and regrettably are establishing false expectations that no viable self-driving car and no automaker could achieve.

How could such rogue driving actions be performed by the AI of a truly driverless car?

Here’s a sampling of the ways that these actions could happen:

·        Bugs Or Errors. The software that’s on-board the self-driving car could contain a bug or error, one that escaped being caught during testing. Sadly, the bug could arise at the worst of times and lead to the AI making a bad choice in the midst of a life-or-death real-time driving act.

·        Bad Programming. AI software can contain programming logic that is neither a bug nor an error per se; it was thought to provide the right kind of effort when executed, yet nonetheless, when an edge case or unusual circumstance arises, the software takes an action that we would all agree is not a desirable choice.

·        Machine Learning Runs Afoul. Much of the AI for self-driving cars consists of Machine Learning or Deep Learning software that has been trained on patterns found in lots of driving-related data. Often, the automaker or tech firm has no direct means of ascertaining why the AI has opted to take various actions (an acknowledged issue involving the lack of XAI, meaning explainable AI). Unfortunately, Machine Learning might have found a pattern that only makes sense in a specific context, yet the AI doesn’t embody the contextual elements needed.

·        Hardware Glitches. The AI on-board the driverless car is executed on computer processors akin to the computing you find in your smartphone or laptop, though often juiced up and much faster than everyday personal computers. There is always a chance that the computer processors or the electronic memory could suffer a glitch, which, in the middle of a driving act, could have untoward consequences.

·        Sensor Issues. The sensors on a driverless car act as the eyes and ears of the AI system. If the sensors are obscured by dirt and mud, they might not convey a proper indication of the surrounding scene. The sensors can wear out, misreport data, or otherwise suffer from any number of system-related difficulties. The AI might become blind to the roadway or, worse still, get false readings about what is or isn’t ahead of the vehicle.

·        System Overwhelmed. A self-driving car could potentially get overwhelmed during the driving act. Imagine if the driverless car is zooming along at 80 miles per hour, doing so amidst lots of other cars, and the driving scene is a hairy mess. A car ahead gets hit by a rock that fell off a truck and bounces onto the roadway. All the cars get into a frenzy. Was the AI system stress-tested for these kinds of all-out mania driving situations?

·        Implanted Virus. One of the greatest worries expressed about self-driving cars is the chance that a computer virus could get planted into driverless cars. A handy aspect of driverless cars is that they will use OTA (Over-The-Air) electronic communications to get the latest updates pushed to them, but this also provides a conduit for a virus to be sneakily shoved en masse into self-driving cars (see the signature-verification sketch after this list for one standard countermeasure).

·        Hack Attack. In addition to the implanting of a computer virus, another cyber-security concern is that a human hacker might be able to crack into a driverless car. Self-driving cars are going to be using V2V (vehicle-to-vehicle) electronic communications, which is helpful so that one driverless car could warn another one that a cow is standing in the middle of the highway up ahead. On the other hand, a hacker in a car next to you might be able to send commands to your self-driving car via V2V that could commandeer your vehicle and allow the hacker to do dastardly things.

·        Human Remote Operators. Some automakers are going to allow a remote human operator to intervene in the driving task of a driverless car. I’ve warned that this is opening a can of worms and that teleoperation of a true self-driving car should not be relied upon as the safety case. Nonetheless, for those automakers that do opt to allow a remote human operator to take over the driving, this could be troubling: the remote operator could get disconnected at a crucial moment, or other miscues might undesirably occur.
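To make the virus-implanting countermeasure concrete: a standard defense is for the vehicle to accept an OTA update only if it carries a valid digital signature made with the automaker’s private key, so that anything tampered with along the conduit fails the check. Here’s a minimal Python sketch using the cryptography package’s Ed25519 primitives; the function name and surrounding OTA plumbing are my own illustration, and a real pipeline would layer on rollback protection, secure boot, and more:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_ota_update(automaker_pubkey: bytes, firmware: bytes, signature: bytes) -> bool:
    """Accept an OTA firmware image only if it was signed by the automaker's
    private key; a virus injected somewhere in transit fails this check."""
    public_key = Ed25519PublicKey.from_public_bytes(automaker_pubkey)
    try:
        public_key.verify(signature, firmware)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False  # reject the update and keep the current firmware
```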

Prevention And Mitigation

Society is unlikely to accept true self-driving cars if rogue acts occur often enough to raise the ire of us all, and regulators would likely be spurred into action to suppress or stop driverless car roll-outs.

A key obligation for self-driving car makers is to strenuously prevent and mitigate the odds of rogue activities by their driverless cars.

Per the deadly Uber incident last year in Tempe, Arizona, all it takes is one bad apple, so to speak, and the rest of the barrel can get spoiled.

Many would agree that the Uber incident generated a backlash that has partially slowed, or at least made more cautious, the efforts of many of the automakers and tech firms in the driverless car industry. Some say it didn’t do enough and that we still face heightened risks.

Ways to try and deal with the emergence of rogue acts in driverless cars include:

·        Tightened cyber-security for all external connections

·        Cyber-protecting the cloud-based OTA

·        Revisiting systems security across the automotive ecosystem

·        Implementing fail-safe features for on-board hardware

·        Self-detecting sensors that realize when they are awry

·        Real-time double-checking of AI driving commands (see the sketch after this list)

·        Extensive simulation to test beyond what roadway trials cover

·        Wide-ranging closed track or proving ground testing

·        Boosting spending and attention on driverless car QA

·        Putting in place a high-level Corporate Safety Officer

·        Instilling a safety-first mindset in AI developers

·        Etc.
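To illustrate the real-time double-checking item, here’s a minimal Python sketch of an independent guard module that clamps every AI driving command to a sane envelope before it reaches the actuators, no matter why the AI issued the command. The class, limits, and function names are hypothetical illustrations, not any automaker’s actual design:

```python
from dataclasses import dataclass

@dataclass
class DriveCommand:
    speed_mph: float      # requested vehicle speed
    steering_deg: float   # requested steering angle, negative = left

# Hypothetical static limits; a real guard would derive them dynamically
# from speed, road type, weather, and the mapped scene.
MAX_SPEED_MPH = 85.0
MAX_STEERING_DEG = 35.0

def double_check(cmd: DriveCommand, posted_limit_mph: float) -> DriveCommand:
    """Independently validate and clamp an AI driving command so that even a
    buggy, hacked, or confused planner cannot ask the actuators for the absurd."""
    safe_speed = min(cmd.speed_mph, posted_limit_mph, MAX_SPEED_MPH)
    safe_steer = max(-MAX_STEERING_DEG, min(cmd.steering_deg, MAX_STEERING_DEG))
    return DriveCommand(speed_mph=safe_speed, steering_deg=safe_steer)
```

The design point is that such a guard is far simpler than the driving planner and can be verified separately, so a single fault is unlikely to slip past both layers.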

Conclusion

In the intense pressure to get a self-driving car onto our roadways, the automakers and tech firms could be spurred into cutting corners.

Everyone wants to get the acclaim and attention from being the first to “win” at the driverless car moonshot-like race.

Furthermore, undertaking the numerous preventative or mitigating approaches for undercutting rogue actions is expensive and time-consuming. When push comes to shove, and since driverless cars are still a draining R&D effort, there is a temptation to cut back now and assume that later you’ll add in the “bells and whistles” to ensure greater safety.

Nobody can afford that kind of thinking.

The societal reaction will clamp down on progress toward driverless cars, and firms that opt to prematurely release their self-driving cars will ultimately get sued, likely putting them out of business once those lawsuits succeed.

As Homer famously warned in The Odyssey, one rogue is usher to another still. For true self-driving cars to succeed, we must pull out all the stops and seek to prevent or mitigate rogue driving actions.


