Transportation

Is A Malicious Mass Control Takeover Of Self-Driving Cars Whilst On Our Roadways Really Possible?


There are a lot of movies and TV shows that depict a mass control takeover of self-driving cars.

This seems to be on our minds.

For quite good reasons.

If a malicious evildoer was somehow able to take command of Autonomous Vehicles (AVs) such as self-driving cars, the outcome could be disastrous. This almost goes without saying. The usual portrayal in films is that the villain opts to have cars crash into each other. Well, that’s just for starters. The self-driving cars ram into anything that isn’t nailed down, and by gosh they also steer into and collide with objects that are ostensibly nailed down too.

Humans riding inside self-driving cars are harmed. Humans in nearby conventional cars are harmed. Pedestrians are harmed. The gist is that self-driving cars under the command of a baddie will become a frightening and seemingly unstoppable destructive force.

How would you avoid becoming prey?

The key would seem to be staying off the roadways.

By keeping yourself indoors, it is highly unlikely that a devilishly directed self-driving car could reach you. Sure, one supposes that the autonomous vehicle could be instructed to try and ram your house.

We already know that this happens today from time to time when a human driver loses control of the wheel and swerves into a domicile (thus, it occurs at the direct hands of the human driver). Perhaps if several self-driving cars were directed to this malevolent task, it might become endangering. The odds are though that being indoors would be sufficiently protective.

We would need to abundantly curtail our travel activities.

Anyone on a bicycle becomes a potential target. Anyone driving a conventional car is also going to be a target. Armored trucks would be a target too, plus if they were self-driving capable, they could be added to the arsenal of the wrongdoer.

Pedestrians could potentially sprint from spot to spot, going through narrow passages that a self-driving car could not traverse. That won’t be much help for getting from any given point A to point B over any long distance. We would essentially be trapped in our immediate surroundings and unable to do any extensive traveling.

When you see these kinds of portrayals, one question that some ask is whether the vehicles would run out of gas. Wouldn’t self-driving cars eventually run out of gas? If so, they would become useless paperweights. Without the fuel needed to be underway, any such self-driving car running on empty would no longer be much of a threat to anyone.

This might seem like a means of overcoming the feared invasion of the overlord-snatched AI-based self-driving cars, but regrettably, there is a catch. You see, there are plans afoot to make refueling of self-driving cars an unmanned chore. A self-driving car pulls up to a gas pump or charging station, and an automated robotic arm connects the hose or charging cable to the vehicle.

You would certainly want this if we had a lot of everyday self-driving cars on our roadways.

Suppose you owned a self-driving car (for my extensive coverage about self-driving cars, see the link here). Rather than having to take it over to the gas station or charging station to keep refueling the darned thing, you would just send it there on its own. In fact, assuming that you are likely using the self-driving car as a ridesharing or ride-hailing vehicle, making you some extra dough, you would merely indicate to the AI driving system that whenever there is idle time and nothing else to do, it should route over to the nearest fueling station and get stocked up.

I realize this is a disappointment for those hoping that running out of fuel would be the Achilles heel of self-driving cars surreptitiously overtaken by awful people.

You might say that another angle on how to stop this incursion would be that those self-driving cars are bound to eventually have something go awry since they are still cars when it comes down to it. A tire might blow out. The engine might conk out. Some kind of mechanical failure or breakage has got to ultimately arise.

There is some good news and bad news on that front.

First, the bad news is that it is generally presumed that self-driving cars are going to be kept in rather pristine shape. Those that own and run self-driving cars are going to want to keep the maintenance in tiptop condition. The logic is somewhat obvious. If you are using your self-driving car to make money by providing rides, time spent in a repair facility is lost income. To avoid extended stays in a repair shop, you’d better keep the vehicle in the best possible order.

This implies that upon the takeover of self-driving cars, the vehicles will most likely be in great shape and able to go quite a while before things start to fall apart. That’s the bad news about pinning your hopes on the breakage vulnerability.

The good news, such as it is, is that wear-and-tear would eventually occur, and assuming that nobody will be willing to then fix up an overtaken self-driving car, those vehicles will end up stranded and out of commission. They will come to a grinding halt of their own accord, gradually and inevitably. Of course, we would be living in fear and uncertainty until that day arrived.

Here’s yet another believed way to deal with those wayward self-driving cars.

You sneak up to a self-driving car. In your hands, you have a USB memory stick or equivalent. Upon getting into the vehicle, you plug in the device. It contains some form of programming that can undercut the existing programming. This will allow you to get the AI driving system to no longer be under the command or spell of the mastermind.

Aha, humanity and goodness will triumph once again.

Hold on for a second, there are problems with that proposed solution.

Sneaking up on a self-driving car could be a challenging affair.

Most self-driving cars are chock-full of handy sensors such as video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and the like. We can most likely assume that the takeover of the self-driving car has left those sensory devices intact and operating. Without the use of those sensors, the AI driving system can’t do much in terms of ascertaining where to drive the car (as a side note, the AI could fall back on maps, though this could be undermined in various ways by those trying to prevent the now-blinded self-driving cars from getting around readily).

The evildoer would presumably have the AI driving system be on constant alert for anyone approaching the vehicle, and then have the self-driving car skirt away (or try to drive at the person). The idea of trying to secretly get into a self-driving car in this takeover scenario is rather far-fetched. I’m not saying that it cannot be done, simply emphasizing that it would be difficult to successfully achieve.
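To make that sensor-dependence point concrete, here is a purely illustrative sketch of how a compromised driving system might use fused sensor detections to notice someone approaching on foot. Every class name, threshold, and function below is a hypothetical assumption made up for this example, not any vendor’s actual software.

```python
# Hypothetical illustration only: a toy check over fused sensor
# detections that flags a nearby, approaching pedestrian. Class names,
# thresholds, and structure are assumptions, not a real driving stack.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    kind: str                 # e.g., "pedestrian", "vehicle", "cyclist"
    distance_m: float         # estimated range from the vehicle
    closing_speed_mps: float  # positive when the object is getting closer

APPROACH_DISTANCE_M = 10.0    # assumed alert radius
APPROACH_SPEED_MPS = 0.5      # assumed minimum closing speed

def someone_approaching(detections: List[Detection]) -> bool:
    """Return True if any pedestrian is both close and closing in."""
    return any(
        d.kind == "pedestrian"
        and d.distance_m < APPROACH_DISTANCE_M
        and d.closing_speed_mps > APPROACH_SPEED_MPS
        for d in detections
    )

# If a compromised planner polled such a check, it could command the
# vehicle to reposition whenever the check returns True -- which is why
# sneaking up undetected would be so hard.
if someone_approaching([Detection("pedestrian", 6.0, 1.2)]):
    print("Alert: person approaching -- reposition the vehicle")
```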

Even if you did get into the vehicle, the other problem is that your effort to reprogram the AI driving system might not be feasible.

Think of it this way.

If self-driving cars could easily be reprogrammed by merely getting into the vehicle and installing a memory stick or other programming tool, this would be a huge and dangerous exposure for all self-driving cars. It would imply that anybody riding in a self-driving car can pretty much willy-nilly change the AI driving system.

It is hoped and generally anticipated that there will be rigorous cybersecurity protections included in self-driving cars. This is intended to lower the odds of someone doing the very thing that we are now suggesting would be wanted in this scenario. Most of the time, we won’t want someone to be able to take such an action. In the rare and so far fictional case of a mass takeover, we might regret that we built such impenetrable protections, but meanwhile, it makes indubitable sense to have them.

For those of you that perchance know something about cybersecurity, you know that it is an ongoing cat and mouse game. As such, even if self-driving cars are well-protected from such an attack, there is still the chance of finding a way around the fortress of barriers and blockages. One guesses that if a mass takeover occurred, and the most notable cybersecurity experts and good-oriented hackers were virtually brought together, perhaps a means could be figured out to make this USB memory stick containing an AI driving system reprogramming a reality.

Remember though that getting into the car would be a challenge unto itself.

Plus, imagine that you were trying to go to every self-driving car and do this corrective action. We don’t know how many self-driving cars we might eventually have on our roadways. By comparison, there are about 250 million conventional cars in the United States alone (for my analysis about the various stats entailing conventional cars, see the link here). Some predict that by and large, we will gradually wean ourselves away from conventional cars. If that happens, we might end up with the same number of self-driving cars, namely on the order of 250 million or so.

There are arguments about the number of self-driving cars that we will need.

Since most conventional cars are only used about 5% or less of their available time (sitting parked about 95% or more of the time), we might not need the same number of self-driving cars, because the AI driving system allows them to operate 24×7. Some believe we can get by with a lot fewer self-driving cars to make up for the eventual junking of all those conventional cars.

Other experts suggest that our demand to use self-driving cars will skyrocket as we realize how convenient they are. This suggests we all might go for journeys in cars more so than we do today. Ergo, it is conceivable that we might need just as many self-driving cars as the number of today’s conventional cars, or perhaps even more self-driving cars will be desired.
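As a rough illustration of that back-and-forth, here is a minimal back-of-envelope sketch. The only figures taken from the discussion above are the roughly 250 million U.S. conventional cars and the roughly 5% utilization rate; the self-driving utilization rate and the demand-growth multiplier are purely assumed values for illustration.

```python
# Back-of-envelope fleet sizing, for illustration only.
# From the article: ~250 million conventional cars, ~5% utilization.
# The AV utilization and demand multiplier below are assumed values.
conventional_cars = 250_000_000
conventional_utilization = 0.05          # in use ~5% of the day

# Total car-hours of travel demanded per 24-hour day at today's levels.
demanded_car_hours = conventional_cars * conventional_utilization * 24

# Suppose a self-driving car could be on the road ~75% of the day
# (leaving time for charging, cleaning, and maintenance -- an assumption).
av_utilization = 0.75
avs_for_todays_demand = demanded_car_hours / (av_utilization * 24)
print(f"AVs to cover today's travel demand: {avs_for_todays_demand:,.0f}")
# -> roughly 16.7 million, far fewer than 250 million

# If convenience causes total travel to grow, the fleet scales with it.
demand_multiplier = 3                    # assumed growth factor
print(f"AVs if demand triples: {demand_multiplier * avs_for_todays_demand:,.0f}")
# -> roughly 50 million, and higher multipliers push the count higher still
```

The point of the sketch is simply that the eventual fleet size hinges heavily on both utilization and induced demand, which is why the estimates vary so widely.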

The key point is that trying to go to each and every self-driving car to insert the antidote, as it were, would be laborious and seemingly impractical.

That doesn’t end the case for trying to do the AI driving system reprogramming. There is another potential avenue.

Self-driving cars will be making use of OTA (Over-The-Air) electronic communication capabilities. This is akin to what happens when you get your laptop or smartphone updated. It is expected that self-driving cars will connect to the cloud of the fleet owner or operator, doing so to get updates to the onboard software. In addition, the self-driving car can upload data to the cloud. This might include the status of the vehicle, along with the sensory data collected during a driving journey.
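To picture that two-way flow, here is a minimal sketch of an OTA exchange between a vehicle and its fleet cloud. The endpoint URL, paths, and field names are assumptions invented for illustration; no actual automaker’s API is being depicted.

```python
# Minimal, illustrative OTA exchange: upload telemetry, then poll for a
# pending software update. The endpoint and fields are hypothetical.
from typing import Optional
import requests

FLEET_CLOUD = "https://fleet.example.com/api"   # hypothetical endpoint
VEHICLE_ID = "av-0042"                          # hypothetical vehicle id

def upload_status(status: dict) -> None:
    """Send vehicle health and trip telemetry up to the fleet cloud."""
    resp = requests.post(f"{FLEET_CLOUD}/telemetry/{VEHICLE_ID}",
                         json=status, timeout=10)
    resp.raise_for_status()

def check_for_update() -> Optional[dict]:
    """Ask the fleet cloud whether a newer software bundle is available."""
    resp = requests.get(f"{FLEET_CLOUD}/updates/latest",
                        params={"vehicle": VEHICLE_ID}, timeout=10)
    resp.raise_for_status()
    manifest = resp.json()
    return manifest if manifest.get("version") else None

# Typical idle-time cycle: report status, then fetch any pending update.
upload_status({"battery_pct": 64, "odometer_km": 18250, "faults": []})
manifest = check_for_update()
if manifest:
    print(f"Update {manifest['version']} available; verify before installing")
```

In practice, any downloaded bundle would also be authenticated and verified before installation, a point we will come back to shortly.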

I think you know where I am heading.

Rather than having to physically get into each self-driving car, perhaps we could exploit the OTA to try and change the AI driving system. All we would need to do is get the OTA to willingly download some patches that we had put together. This might either turn the AI driving system back into what it was supposed to be doing or maybe we have it be blanked out entirely. The approach of blanking out the AI driving system would seem the more assured approach to stop the evildoer takeover, though this also means that the vehicle becomes a doorstop.

Turns out that there are lots of twists and turns now that we’ve introduced the OTA aspects into this scenario.

Before jumping further into the details, I’d like to clarify what is meant when referring to AI-based true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And Malicious Mass Takeover

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into more of the myriad of aspects that come into play on this topic.

We were earlier discussing the use of OTA as a means to try and infuse a devised cure or antidote that might curtail the malicious mass takeover of self-driving cars. The beauty of using OTA is that we could presumably do so massively across the gamut of self-driving cars that have been villainously overtaken. Rather than trying to physically go to each self-driving car by hand, we can on a widespread basis electronically communicate the presumed remedy that we’ve hurriedly crafted.

Sounds good, and a relief that the matter is seemingly settled.

Not so fast.

There are plenty of gotchas and holes in that corrective scheme.

One aspect that we’ve not yet discussed is how in the heck did the takeover happen to begin with? In other words, we took at face value that somehow the maniac mastermind has been able to take over self-driving cars. Let’s explore that overthrow and then return to considering how to undo or overtake the overtaking.

As already suggested, one approach would be to physically get into a self-driving car and use some immediate means to try and reprogram the AI driving system, doing so for evildoer purposes rather than to correct whatever an evildoer has already done. There are a variety of ways that this in-car invasion could be approached. One would hope that the cybersecurity of the AI driving system has been mindfully constructed and will not be especially readily breached. That’s not to say it cannot be defeated, only that it will be hard to do.

Of course, the laborious chore of going from self-driving car to self-driving car would seem not very expedient if the fiend is aiming to perform a mass takeover. The alternative would be to use the OTA. Yes, the same OTA that is purposely included by the automaker or self-driving tech firm for aiding in making desired updates could be sneakily used for wicked intent.

The OTA provides an opportunity for badness. If a crook can somehow plant something into the cloud of the self-driving car fleet, there is a chance of getting that distributed out and into the self-driving cars via the OTA. This might occur by hiding the untoward element inside an otherwise accepted and everyday update going down to the autonomous vehicles. Or it could consist of a special update package that convinces the OTA mechanism that everything is okay and the element should be sent out.

As might be evident, the OTA is a dual-edged sword.

Then again, maybe it is a tri-edged sword.

Here’s why.

The OTA is normally used to send approved updates that are downloaded and installed into the AI driving system. There is the chance that malicious code leverages the OTA and gets downloaded and installed. And there is the possibility of using the OTA to correct or delete the malicious code. Perhaps that’s at least three sides of the sword, though we could continue this line of thought, and the cat and mouse game, indefinitely.

Since the OTA is the drawbridge that connects the fortress to the rest of the world, there needs to be a lot of protection around allowing the drawbridge to be used. Automakers and self-driving tech firms do not want the drawbridge to be used by devilish intruders.
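One common way to guard that drawbridge is to require every incoming update package to carry a cryptographic signature from the fleet operator, which the vehicle checks before installing anything. The sketch below uses the third-party `cryptography` Python library and assumes an RSA signing key; it is an illustrative outline under those assumptions, not any particular automaker’s scheme.

```python
# Illustrative signature check on an incoming OTA package, assuming the
# fleet operator signs packages with an RSA private key and the vehicle
# holds the matching public key. Not a specific vendor's design.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_update(package: bytes, signature: bytes, pubkey_pem: bytes) -> bool:
    """Return True only if the package was signed by the fleet operator."""
    public_key = serialization.load_pem_public_key(pubkey_pem)
    try:
        public_key.verify(
            signature,
            package,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        # A tampered or unsigned package -- the kind an intruder might try
        # to smuggle through the OTA channel -- lands here and is rejected.
        return False
```

Of course, this only helps if the signing keys themselves stay out of the intruder’s hands, which loops right back to the cat and mouse game.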

Meanwhile, the devilish intruder would likely not only realize that their best shot would be to exploit the OTA drawbridge, but they also might decide to try and cut off the drawbridge after successfully making an invasion. Either burn the drawbridge to the ground and stop any further OTA or try to alter the drawbridge so that only they can continue to make use of it.

We need to also bring up another crucial factor.

Not all self-driving cars will be the same.

A particular automaker might choose to develop or select a self-driving stack of code that befits their specific brands and makes of self-driving cars. When you see a self-driving car that is made by company X, the odds are that the software and systems running onboard are quite different from the self-driving cars made by company Y. The same can be generally said about the OTA being used.

The reason this is notable in the case of a mass takeover of self-driving cars is that it is going to be somewhat harder to intrude across all variants of self-driving cars. An evildoer might focus on a particular set of self-driving cars made by company X. Whether they can do the same against the self-driving cars of company Y is not necessarily assured.

What this means is that the takeover might not truly be a mass takeover per se, but rather one that hits only a subset of self-driving cars. Suppose that we had a city with hundreds of self-driving cars made by company X, and other hundreds of self-driving cars made by company Y. It could be that only the company X self-driving cars are part of the takeover, while an attempt to overrun the company Y vehicles was rebuffed or not even tried at all.

This might not be of much solace if there are millions upon millions of those company X self-driving cars out on our public roadways. You could say that only those company X self-driving cars are being conquered, but that still is a potentially large number of autonomous vehicles and ostensibly dangerous to have in the takeover column.

Conclusion

Much more can be examined in this doom-and-gloom scenario.

When or if such a mass takeover were to occur, one key question is whether the self-driving cars would carry whatever their foul mission is to be wholly onboard, or whether they would rely upon the OTA or some other form of electronic communication to receive instructions during their adverse takeover journeys.

In essence, one approach involves infusing the AI driving system with whatever bad acts it is supposed to undertake, and then trying to break off any further electronic communications. Another approach involves seeking to communicate on a mass basis to direct the self-driving cars while they are in this deranged takeover mode. This allowance for communicating, though, becomes a point of potential weakness in the takeover plot since it can possibly be used to disrupt or ultimately nullify the malevolent scheme.

Makes your head spin at all of the trickery involved.

Not wanting to be further glum, but there is a similar chance of the same kind of mass takeover happening to cars that are not self-driving cars. Those vehicles that are at Level 2 and Level 3 are bound to eventually have OTA. Everything we’ve just discussed applies to Level 2 and Level 3, with an added variation. Since Level 2 and Level 3 presume that a human driver is available at the wheel, this means that the programmed takeover could end up becoming a struggle between the human driver and the automation.

You might be thinking that the solution to nearly all of this would be to have a big red button or an on-off switch inside all cars that could be used to turn off the automation. Some pundits are pushing for this arguable solution. For my explanation about how it won’t especially be much of a solution, see my analysis at this link here.

Okay, so where does this leave us?

For those that are devising self-driving cars (and pretty much any automotive vehicles containing advanced automation), the importance of establishing rigorous cybersecurity protections is paramount.

That is a tall order. There is a myriad of ways to undercut the computer system security of an automobile. Lots and lots of possible attack vectors and vulnerable paths exist.

Discussing the notion of a mass takeover seems like crying wolf to some. Those skeptics ought to also take into account the role of nation-states as it pertains to self-driving cars (see my analysis at this link here). The purpose herein was not to suggest that a mass takeover will of necessity happen. The intent is to awaken the awareness of what all automakers and self-driving tech firms need to be doing, namely devoting a tremendous amount of energy, expertise, and devoted attention to securing their automation from cyberattacks.

In the rush to try and deploy self-driving cars onto the roadways, seeking to have those vehicles safely proceed from point A to point B, it can be easy to lose sight of the cybersecurity considerations. Those leading the existent self-driving car efforts are at times preoccupied with the point A to point B tryouts (naturally so), and are not giving equal focus to the cybersecurity elements. Sadly, some even have that “we’ll worry about it later” perspective, figuring they have plenty of time to one day get the cybersecurity in place for their AI driving systems.

Let’s not allow the strident pursuit of self-driving cars to become a vulnerability by undervaluing or being distracted from the concerns and dangers looming from those evildoers that would seek to carry out malicious cyberhacking on a mass basis.

The veritable hand wringing on this topic is well worth the stress and abject worry, mark my words.


