
The Absurdity And Misleading Baloney About AI Self-Driving Cars That Will Be Uncrashable


Please cover your ears since I am about to tender some rather harsh language.

Are you ready?

For those vendors and pundits claiming that AI-based self-driving cars are going to be uncrashable, I say hogwash.

The absurdist claim isn’t just your normal kind of hogwash, it is some of that amazingly potent unadulterated hogwash.

Claptrap. Nonsense. Rubbish. Preposterous.

Sorry about the use of such unsavory terms.

Maybe it would be simpler for me to make this bold and unequivocal assertion: Self-driving cars are not going to be uncrashable, ergo, they absolutely can in fact get into a car crash or similar form of automotive collision.

If you’ve not been following the latest news about self-driving cars, you might not have seen or heard the recent claims that self-driving cars are going to be uncrashable. To be clear, this type of talk actually (sadly) goes back numerous years. The baloney about having uncrashable self-driving cars has a lengthy history and seems to be revitalized from time to time.

For my candid and myth-busting ongoing analyses and coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

Today, in this discussion, I’d like to help put a “destroy the zombie” steel pole directly through the heart of such tripe.

One supposes that the proper and balanced place to start the discussion would be to provide the alleged basis for why self-driving cars will be uncrashable. In essence, let’s see if we can surface the rationale of those who make such a claim. It is usually best in these kinds of arguments or debates to ensure that each side has a chance to share its position on controversial and contested matters.

After seeing what the uncrashable camp has been proffering, I’d like to undertake a step-by-step dismantling of their argument and showcase the tomfoolery involved. You will then be abundantly aware of why the position of the uncrashables is entirely untenable.

I realize that you might be tempted to think that this is nothing more than an insider’s brouhaha and doesn’t seem to be of concern to anyone other than those who are in the midst of crafting AI self-driving cars.

You know how these things sometimes go. There are internal heated debates amongst experts in a given field. Eventually, things get worked out, presumably, and the ruckus falls by the wayside.

I’d like to suggest that this is altogether a pressing matter, one with a far-reaching current impact that extends well beyond the usual inner sanctum of talking-heads heated discourse.

If vendors and others go around touting to everyone that self-driving cars are going to be uncrashable, it sets some wholly unsettling and unrealistic expectations. With all of the potentially positive aspects that self-driving cars are going to produce (I’ll list those aspects momentarily herein), having the uncrashable allegation on the list is the bad apple in the bunch.

The thing is, since the uncrashable mantra won’t be achievable, it mars the other realistically feasible positive aspects in the barrel of societal gains from self-driving cars.

The public will be wondering what the heck ever happened with the uncrashable nature of self-driving cars. Where did it disappear to? And, if that element wasn’t achievable, perhaps others on the list are also tainted (the bad apple spoils the rest of the barrel).

There are enough false impressions and exaggerations about self-driving cars, and we assuredly don’t need more piled onto the already towering stack.

I’ll add something else to the mix about why referring to self-driving cars as presumably uncrashable creates a series of problems.

First, regulators might be so enamored of this possibility that they would be willing to take undue chances with how self-driving cars are to be regulated. Anyone who genuinely believed that self-driving cars will never crash would certainly be doing a service to society by trying to get self-driving cars onto our public roadways at the soonest possible moment.

It just stands to reason.

Ponder some crucial statistics.

In the United States alone, there are a reported 6.3 million car crashes per year. See my handy collection and analysis of those stats at this link here.

Now, those are just the reported car crashes, thus there are likely more. Another way to construe the count is to consider that there are about 700 or so car crashes every hour of every day, somewhere in the United States. International stats that account for global car crashes across all countries are a daunting set of figures and will make you feel queasy and dampen your heart.
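As a quick sanity check on that hourly figure, here’s a back-of-the-envelope calculation (a minimal sketch in Python; the 6.3 million figure is the reported annual count cited above, and the rounding is mine):

```python
# Back-of-the-envelope check of the "about 700 crashes per hour" figure.
REPORTED_CRASHES_PER_YEAR = 6_300_000   # reported annual U.S. car crashes (cited above)
HOURS_PER_YEAR = 365 * 24               # ignoring leap years for simplicity

crashes_per_hour = REPORTED_CRASHES_PER_YEAR / HOURS_PER_YEAR
print(f"Roughly {crashes_per_hour:.0f} reported crashes per hour")  # prints roughly 719
```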

For the U.S., those reported car crashes generate about 2.5 million injuries per year. I’m talking about serious injuries, life-altering injuries. And the number of deaths or annual fatalities in the U.S. due to car crashes is about 40,000 souls. Take a moment to let that sorrowful number sink in.

Anyway, the hope certainly is that self-driving cars will incur a much lower number of car crashes, and therefore the number of car crash-related fatalities will drop tremendously, as will the number of associated injuries. You can also expect that this would mean that the societal costs associated with car crashes would also be reduced immensely, such as the costs devoted to car crash cleanups, court cases, car repairs, and the like.

You can sensibly assume that the number of car crashes ought to go down because of the differences between how humans drive and how AI driving systems are being devised. Human drivers are rife with human foibles, such as drinking while driving, being distracted while driving (such as watching cat videos or texting), and so on. The AI driving systems should not incur those car crashes that otherwise would have happened if a human being was at the wheel and exhibited those types of human foibles.

If we can create AI driving systems that are at least on par with human drivers in terms of undertaking the driving task, we could logically opt to deduct the sour instances involving car crashes that were due principally to human foibles while at the wheel. The AI driving systems ought to not be making those kinds of driving mistakes or incurring those kinds of driving risks, all else being equal.

That’s all part of the anticipated benefit of having self-driving cars.

Furthermore, the hope is that self-driving cars will bring forth a semblance of mobility-for-all.

Those in our society that today are mobility constrained and do not have ready access to a car will potentially readily have such access in the future. This is partially due to the ease of making use of self-driving cars because there is no human driver needed. In addition, the expectation is that the per-mile price associated with riding in a self-driving car will be a lot less than the comparable driving journey in a human-driven car.

I think you can already discern that there are lots of big reasons to be trumpeting the eventual attainment of self-driving cars.

We don’t need to include falsehoods.

Such as self-driving cars being uncrashable.

For those of you interested in some of the various viable reasons that are given as outright opposition to self-driving cars, many of which are ostensibly genuine and have sound logic to them, you can see my analysis at this link here.

Before I dive into the uncrashable viewpoint to highlight what their underlying assertions are, it might be handy to provide a bit of background about the emergence of self-driving cars.

Allow me a moment to elaborate.

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

Here’s an intriguing question that is worth pondering: What is the basis for claiming that self-driving cars will allegedly be uncrashable, and how is this a nonsensical assertion?

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite the human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And Uncrashable Nonsense

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

One quick aside on the uncrashable notion all told. You’ll quickly see that this is a useful tangent, strictly for clarification purposes.

Some people interpret the word “uncrashable” to mean that the autonomous vehicle is made up of or composed of some kind of material that is perfectly impervious to car crashes or automotive collisions of any sort.

Think of an armored truck. Armored trucks are made of specially strengthened materials and can take quite a beating. If you were to inadvertently ram into an armored truck with a conventional car, the car would likely be the loser and the armored truck would be the winner, at least regarding the amount of respective damage suffered.

I think we might all agree that this is decidedly not any semblance of being truly uncrashable.

It is instead a version of crashing that just so happens to lessen the adverse outcomes of a crash-related encounter. Crashes are still going to occur in this scenario. They might be less bruising, but they are going to happen, and we will still have damages, injuries, and fatalities.

In the context of self-driving cars, perhaps one might envision a special wrapping of protective armor that would encompass the autonomous vehicle. Or maybe the autonomous vehicle would be made from the strongest materials known to humanity.

Nobody but nobody who is seriously in the uncrashable camp would genuinely try to use the armoring angle as the basis for why self-driving cars are going to be uncrashable. They know this is a laughable proposition because it still entails the act of crashing.

So, we can dispense with the armoring consideration and instead focus on why the uncrashable camp usually touts the uncrashable banner.

A supplemental point to the impervious material concept is an alternative that employs a forcefield akin to something you might see in those wondrous and exceedingly imaginative sci-fi movies. The self-driving car could be made out of balsa wood or even flimsy cardboard since it is miraculously protected by an all-encompassing forcefield. Anything that strikes the forcefield is summarily pushed away. If the self-driving car perchance hits something, no damage at all occurs to the self-driving car.

I’d like to right now place an order for that forcefield device and I am vociferously eager to have it arrive, hopefully via next-day overnight shipping service. Wait a second, darn, it turns out there is no sense in hanging out at my front door tomorrow for such a delivery. There isn’t that kind of a device yet available, despite what the sci-fi movies showcase.

I will acknowledge this, it might be possible to someday have devices that can produce this kind of forcefield. It might further be possible to mount such a device onto a car or vehicle, conventional or autonomous. I guess in that case you could argue that the result is an uncrashable vehicle. Sure, go ahead and place this forcefield-fueled uncrashable concept into the futures file.

Not wishing to be finicky, but doesn’t this still imply that a crash occurs?

Imagine that two objects collide with each other. The forcefields keep them from getting smushed. They did, though, indeed crash into each other if you count their forcefields as part of them. Fortunately, no damage resulted from the collision. It was still a crash in the strictest dictionary sense of two or more objects violently colliding.

Ponder that quirky conundrum for a brief instant.

Moving on, here’s the key everyday logic for the customary uncrashable exhortations.

Self-driving cars will be making use of an extensive sensor suite, including video cameras, radar, LIDAR, ultrasonic units, thermal imaging, etc. These sensors collect data during a driving journey and the onboard AI driving system computationally analyzes the data in real-time. This oftentimes makes use of contemporary Machine Learning (ML) and Deep Learning (DL), which is simply the use of computational pattern matching techniques. There isn’t anything magical about the ML/DL.
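To make that claimed pipeline concrete, here is a deliberately simplified, hypothetical sketch of the kind of perception-to-decision loop being described. The names, the confidence threshold, and the assumed half-a-g braking deceleration are all illustrative assumptions of mine, not any vendor’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g., "pedestrian", "vehicle", "debris"
    confidence: float   # pattern-matching confidence, 0.0 to 1.0
    distance_ft: float  # estimated distance from the vehicle

def plan_maneuver(detections: list[Detection], speed_ft_per_s: float) -> str:
    """Toy decision step: brake if a credible obstacle sits inside the stopping envelope."""
    decel_ft_per_s2 = 0.5 * 32.2  # assumed ~0.5 g of braking deceleration (illustrative)
    stopping_envelope_ft = speed_ft_per_s ** 2 / (2 * decel_ft_per_s2)
    for det in detections:
        # Real detections are probabilistic, not certainties -- hence the threshold.
        if det.confidence > 0.8 and det.distance_ft < stopping_envelope_ft:
            return "emergency_brake"
    return "continue"
```

Even in this toy form, you can see what the claim rests upon: the detections must be present, correctly labeled, and delivered in time.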

In any case, a cornerstone assertion is that the AI driving system will utilize these sensors so ably that any and all potential car crash scenarios will be indubitably and fully anticipated before the point of potentially getting into a car crash.

Plus, because the AI driving system will always be forewarned about an imminent crash, the AI driving system will take evasive action, and the self-driving car will never ever get entangled in a crash.

Thus, self-driving cars will be uncrashable.

This meaning of the word “uncrashable” is bona fide if indeed the aforementioned could really take place. There would never be any crashes involving self-driving cars. The AI driving systems would ensure this. Since there wouldn’t be any crashes, you can gamely say that those autonomous vehicles are uncrashable.

Sounds wonderful!

We would all welcome a means of transit that never involves any crashes or collisions. Imagine that we had zero crashes. This means zero fatalities as a result of crashes (because there aren’t any crashes). This means zero injuries as a result of crashes (because there aren’t any crashes). I would dare to suggest that this is Nobel Prize-worthy.

How do you feel about the uncrashable self-driving car?

Gives you a heartfelt feeling.

More like heartburn.

Zero crashes have a zero chance of happening.

Here’s why.

The logic proffers that the sensors of a self-driving car will always and unfailingly detect any and all objects that might be within the feasible realm of invoking a car crash. This presumes perfect sensors, along with idealized or absolutely perfect computational analyses of the sensory data that identify and mathematically detect all such objects. Always. Every time.

The logic proffers too that the AI driving system will utilize these perfectly detecting sensors and perfectly in turn always avoid any potential car crashes by maneuvering the self-driving car, perfectly so.

A whole lot of perfects are in there.

That’s not the real world that we happen to live in.

The sensors are not perfect. They cannot detect everything totally within a driving environment such that all objects of any potential crash-related demeanor will be detected. In addition, sensors are electronic and mechanical devices that have various inherent limitations. They also can break, falter, or have other anomalies.

Even if the first part of this could exist in some fantasy land, the second part is the other crucial and added weakness to the whole shenanigans. Namely, the AI driving system has to be so perfectly devised that it can always and without fail find a means to avert a crash.

Plus, if you don’t mind my saying this perhaps obvious point, another hefty constraint that belies the uncrashable pipe dream is the laws of physics. Sorry, those have to be observed too.

Consider the following example.

A self-driving car is going down a neighborhood street at 25 miles per hour. That works out to about 37 feet of travel for every second of time.

There are cars parked on the street, legally so. It is an ordinary residential area. Nothing unusual or out of the ordinary.

A small child is hiding between two of the parked cars. The toddler cannot be seen due to the higher stature of the parked cars.

Just as the self-driving car rolls alongside the parked cars where the child is hiding, the toddler regrettably decides to dart into the street. No prior notification is given. In case you are thinking this would never happen, I believe you might want to visit some residential areas that have children residing and playing in these communities. This can happen. It does happen.

The self-driving car is now within about 15 feet of the child, as the toddler has emerged directly into the oncoming path of the self-driving car.

Let’s go ahead and assume that at this juncture the sensors detect the object in the roadway, and using the ML/DL pattern matching identify the object as a child. This is passed along to the AI driving system. The AI driving system is now faced with a situation involving a child within 15 feet of an autonomous vehicle that is traveling at 37 feet per second.

You might say that the AI driving system will summarily and without hesitation hit the brakes. That’s the hope.

The braking distance at this stated speed, on a good set of tires with a good set of brakes on a dry road that is sealed and level, would be about 40 feet or so. You would usually need to include the human driver reaction time, making the total stopping distance a lot longer, but we are going to be generous and assume that the AI driving system can instantaneously apply the brakes.
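For readers who want the arithmetic, here is a minimal sketch of the kinematics, granting the instantaneous braking assumed above and using roughly half a g of deceleration (my own assumption, chosen to line up with the 40-foot braking figure):

```python
import math

# Kinematics of the toddler scenario; all values are illustrative assumptions.
speed_mph = 25.0
speed_fps = speed_mph * 5280 / 3600     # about 36.7 ft/s, the "about 37 feet" per second noted above
gap_ft = 15.0                           # distance to the child at the moment of detection
decel_fps2 = 0.5 * 32.2                 # ~0.5 g of braking, consistent with the ~40 ft figure above

braking_distance_ft = speed_fps ** 2 / (2 * decel_fps2)
print(f"Braking distance: {braking_distance_ft:.0f} ft")   # about 42 ft, far more than the 15 ft gap

# Speed still remaining once the car has covered the 15-foot gap under full braking.
impact_speed_fps = math.sqrt(speed_fps ** 2 - 2 * decel_fps2 * gap_ft)
print(f"Speed at impact: {impact_speed_fps * 3600 / 5280:.0f} mph")  # about 20 mph
```

Even with zero reaction time and flawless detection at the 15-foot mark, the vehicle still reaches the child at roughly 20 miles per hour, and the numbers only get worse once any realistic sensing or processing latency is added back in.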

A multi-ton self-driving car that instantaneously hits the brakes and that has fully detected the toddler in the roadway is going to crash into that wayward child.

This is a crash.

I don’t believe you can argue about that. I suppose you might try claiming that the self-driving car could attempt to steer away from the child. I assure you, that doesn’t work out either. The amount of time available to avoid the crash is insufficient to indeed avoid the crash.

Note that the sensors did their best, but they were unable to detect the hiding child. Note that the AI driving system did its best, doing so by immediately applying the brakes. In the end, the child was embroiled in a car crash.

Unless you change the laws of physics, this would undeniably be a car crash.

During this example, we have given the benefit of the doubt that the sensors were working flawlessly and the AI driving system was working flawlessly. If we were to add into this example the probabilities that the sensors might be faltering, or that the AI driving system might be faltering, we would have an even more pronounced basis for the crash. In that case, the crash might be worse in terms of severity.

There are many, many, many examples of how self-driving cars will not be able to assuredly detect objects that are potentially going to invoke or involve a car crash. Likewise, we cannot expect that AI driving systems will run perfectly and that they will be programmed to handle all possible permutations and combinations of all possible car crash scenarios.

Nor can we pretend that there will always be adequate time available to have the AI driving system divert from a car crash.

In addition, as I’ve repeatedly stated in my columns, we need to deal with the Trolley Problem.

This is a notion that there are going to be circumstances of two undesirable choices and having to make a tough decision as to which ought to be selected. Some industry pundits refuse to acknowledge that the Trolley Problem is applicable to AI self-driving cars, for which I have tried to state the case as elaborated at this link here.

Conclusion

There won’t be uncrashable self-driving cars.

At least not in any reasonably foreseeable future of what cars conventionally consist of. We might eventually have flying self-driving cars, in which case you could reconsider the toddler in the street instance and try to determine whether the autonomous vehicle would have sufficient time to fly over the child.

Fine.

But for the real world as we know it today, do not put your money down on any uncrashable cars because it is hogwash.

Meanwhile, you should gird yourself for the oft-used kneejerk retort that gets trotted out whenever it is pointed out that there aren’t any uncrashable self-driving cars. Here’s what the riposte is.

Hold on, brace yourself.

Those spouting the uncrashable self-driving car, when backed into a corner about their claim, will say that they meant to emphasize that self-driving cars will be virtually uncrashable.

They will raise their eyebrows and tell you that they never thought anyone would wink-wink think that a self-driving car was fully uncrashable. People would certainly know that this is just a handwaving form of speaking. By uncrashable, this is just pointing out that the likelihood of crashes and the frequency of crashes will diminish.

That’s a sneaky way to wiggle out from the uncrashable moniker. It is cheating. It is a cheat because it still leaves the impression of self-driving cars as being uncrashable. When you use that word, trying to qualify it is not a proper way to go. Weaseling out of the demonstrative impact and power of the word “uncrashable” cannot be done by tossing a word salad around the phrasing.

Remove the uncrashable self-driving car from your vocabulary.

We have enough to do about the real issues of self-driving cars and do not need to chase our tails by dealing with the mythical beast known as an uncrashable self-driving car.

As they say, all is fair in love and war, but for self-driving cars, the “uncrashable” is scandalously out-of-bounds.


