
Infusing A Dose Of Human Driver Skepticism Into The AI Driving Systems Of Self-Driving Cars


Are you skeptical about most things?

Maybe you are rarely skeptical.

In that case, it might be that you take life as it seems to exist and seldom question what comes your way. Some might say that this puts you at undue risk, namely that you aren’t skeptical enough about the world around you. Arguably, this might be construed as being altogether gullible or susceptible to being scammed or falling into untoward traps.

On the other hand, there is a chance that you are the type of person that is skeptical about nearly everything. Doubt continually enters your mind as to whether the things you are seeing, hearing, or otherwise come in contact with are true. The world is composed of falsehoods and it is your natural bent to assume that the whole shebang around you is false. Skepticism runs rampant in your noggin, which some would say is excessive and might undercut the joys of being in the moment and going with the flow of existence.

A healthy dose of skepticism seems to be the usual prescription of how to approach things. In a kind of Goldilocks manner, the notion is to have a sufficient amount of skepticism. Not too much, and not too little.

Of course, a strident argument can be made that skepticism ought to vary depending upon the circumstance or situation at hand. Being middle of the road on skepticism might not be the best course of action in all settings. There are bound to be occasions where the level of skepticism should be ratcheted up or potentially ratcheted down.

Let’s use the act of driving a car to help illustrate the applicability of skepticism or the role of being skeptical (for more about the nature of various driving behaviors, see my coverage at this link here).

You are driving along and happen to notice a car up ahead that is straying toward the edge of the lane. This car seems to go right up to the lane markers, as though possibly getting ready to change lanes, and then gradually steers back into the midst of the lane. Over and over, this tendency to ever so slightly veer, just a tad, keeps occurring.

One thought is that this is just a random event. The driver perchance is merely moving back and forth in the lane. Nothing to write home about.

If you were a bit more skeptical, perhaps this gingerly act of toying with the lane boundaries is a sign of something deeper. It could be that the driver has had a few drinks at the local bar and decided to take a drive. They are not completely sauced, but nonetheless, the liquor is impacting their ability to drive. Based on this sense of doubt about the driver and their mental state, you opt to stay well behind this moderately weaving car and play it safe by keeping your distance.

Perhaps a hefty dollop of skepticism might lead you to believe that for sure this other driver is out of their gourd. There is no doubt in your mind whatsoever that this is absolutely an intoxicated driver that should not be on the roadways. You quickly dial 911 and report that there is a drunken driver, and the police must stop the car before someone gets badly hurt.

One situational aspect of employing skeptical thinking is that the stakes can alter the degree of skepticism that seems warranted.

When driving a car, the stakes are quite enormous. We take driving as a routine everyday task and don’t usually consider it to be particularly risky. The thing is, driving a car and being around other moving cars is decidedly a life-or-death matter. Sadly, there are about 40,000 car crash-related fatalities each year in the United States alone, and approximately 2.5 million crash-related injuries (for my collection of stats, see the link here).

Some would assert that being skeptical while at the wheel of a car is assuredly prudent. The old sage line seems to apply, namely that it is better to be safe than sorry.

When discussing skeptical thinking, I usually bring up the notion of directional vectors of skepticism. For example, you can be directing your skepticism at the acts of others. This was evident in the scenario of the car that was somewhat veering within its lane. Your skeptical way of thinking was directed toward the car ahead of you and the driver of that vehicle.

Another directional angle would be to consider skepticism aimed at yourself.

Consider this setting.

Suppose you are driving down a busy street. A lot is going on in terms of pedestrians bustling along, there is a slew of other cars jockeying in traffic around you, some bicyclists are nearby, and so on. The driving scene is hectic.

Out of the corner of your eye, you notice what seems to look like a dog at the side of the street, hiding in some bushes. You caught only a very brief glimpse, though. It seemed to be a dog, but you aren’t really sure. Maybe it was an object that merely resembled a dog. Perhaps a child’s toy was shoved into the bushes and its shape made you think of a dog.

You are skeptical that what you saw was truly there.

In that manner, you are expressing skepticism towards yourself.

You are doubting that what your eyes glimpsed was real versus perhaps an optical illusion. Maybe nothing at all was there in the shrubbery and you assumed that there might be, simply based on the overgrowth having a shape and coloring that resembled what a dog looks like. Or perhaps there is some inanimate object in the bushes that approximates the shape of a dog.

The same kind of self-oriented skepticism could occur with your eyes and also with your ears.

Have you ever been driving and thought you heard a siren? You strain to listen carefully and determine if there is a siren in the distance. The sound could indeed be a siren and thus a handy forewarning that an ambulance or firetruck might be coming down the road soon. On the other hand, maybe it was just a whistling sound that was similar to a siren. Then again, there might not have been any noise at all and you entirely imagined that a siren had blared (it was all in your head, one might say).

In each of those instances, you were doubting your sensory capacities. You can also have similar doubts about your mind.

Up ahead is a busy intersection. The light is green. Your mind was momentarily wandering, and you weren’t paying close attention to the traffic light. At this juncture, you are unsure whether the light has been green for a while and therefore might soon be changing to yellow and then red. It could be that the light just recently turned green and there is plenty of time to make it through the intersection.

You mull this over in your mind.

The light might be getting ready to switch to yellow. At your current speed and the remaining distance to the intersection, it is going to be dicey to come to a complete stop in time for the red light that will occur after the yellow. Darn it, if you knew that the light had been green for a lengthy time, the odds are that a yellow is imminent and thus it would be best to slow down in anticipation that you’ll need to stop at the red light once it appears.
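The dilemma above can be framed numerically as a stopping-distance calculation. Here is a rough sketch; the speed, reaction time, and deceleration rate below are illustrative assumptions, not standard values:

```python
# Rough stopping-distance arithmetic for the yellow-light dilemma.
# All numeric values below are illustrative assumptions.

def stopping_distance_m(speed_mps, reaction_s=1.5, decel_mps2=4.5):
    """Distance covered while reacting, plus braking distance v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

speed = 20.0              # roughly 45 mph, expressed in meters per second
needed = stopping_distance_m(speed)
distance_to_light = 60.0  # meters remaining to the intersection (assumed)

# If the light turns yellow right now, can the car stop in time?
can_stop = needed <= distance_to_light
```

With these assumed numbers, the required stopping distance exceeds the remaining distance, which is exactly the "dicey" situation described: by the time you are sure the light is stale, it may already be too late to brake comfortably.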

Did you mentally zone out for a fraction of a second, or did you allow several seconds to lapse while somewhat daydreaming and not concentrating fully on the roadway? There are no easy means to know.

In any case, you are skeptical about your own frame of mind. The directional vector of your skepticism is now being directed at yourself and your own mental faculties.

Before we proceed into the reasons why the topic of skepticism is being noted in this discussion, I can relieve your anxiety by saying that the bushes did not contain a dog (it was a small tricycle that at a glance could be construed as a dog), the driver in the car that was weaving amidst the lane was, in fact, DUI, and the green light stayed green long enough that it was possible to safely proceed through the intersection. Just wanted to tie up those loose ends, else I realize that some of you might be on the edge of your seat for the rest of the discussion.

Shifting gears, the future of cars will consist of self-driving cars.

These are cars that have an AI-based driving system at the wheel. There isn’t a human driver involved.

Let’s contemplate how well the AI driving systems will be able to drive a car.

The hope is that self-driving cars will drive at least as safely as human drivers. Indeed, the assumption is that a self-driving car will be safer overall because the AI driving system won’t be drinking and driving, nor will the AI driving system be distracted by watching cat videos. The human foibles of driving are potentially going to be wrung out, and the AI will achieve a safer driving record accordingly (that’s the dream).

Here is today’s intriguing question: Will AI-based true self-driving cars be utilizing skepticism or a variant thereof while driving, such that the AI driving system will invoke a semblance of doubt as an aid in performing the driving task?

Let’s unpack the matter and see.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

Self-Driving Cars And The Role Of Skepticism

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that today’s AI is not sentient.

In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. I mention this aspect because many headlines boldly proclaim or imply that AI has turned the corner and become equal to human intelligence. As if that wasn’t bad enough, the outsized headlines seek to amp further the matter by contending that AI is reaching superhuman capabilities (for why the use of “superhuman” as a moniker is especially misleading and inappropriate, see my discussion at this link here).

Why this emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of skepticism as infused into an AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

In a moment, I’ll explain how skepticism (or its variant), which we would normally consider an intrinsic human trait, can be programmatically rendered into an AI system. There isn’t any magic or mystery involved. This is something that can be tangibly measured and quantified.

As a side note, readers of a philosophical bent might argue that skepticism must only be a human quality and cannot ever exist in any other manner or form. Some might even claim that no other creature on earth can embody skeptical ways of thinking, other than humans alone. I’m not going to get bogged down in that debate herein.

The assumption here is that being skeptical is a quantifiable factor of incorporating doubt into a matter at hand and that this can be overtly taken into account when devising a computer-based system. Not everyone will agree with this assumption, so I thought it important to let you know that your mileage may vary on whether anything other than humans can exploit or employ skepticism.

Okay, with that foundational setting, let’s jump into the fray.

A self-driving car will have an abundance of sensors that are the proverbial eyes and ears for the AI driving system. The typical sensor suite consists of video cameras, radar, LIDAR, ultrasonic devices, and the like. These sensors are continually collecting data about the driving scene and the data is then mathematically analyzed to try and ferret out what the surrounding driving environment consists of.

Suppose that a video camera on the front of the car is aimed at the road ahead. The video images are being streamed into the on-board computer processors. There are image processing algorithms that computationally examine the imagery. This is done to try and figure out where the roadway is, where other cars are, where the sidewalk is, and so on. There are numerous challenges in doing so, and you should not assume that the image processing capability is going to be perfect and unerringly ascertain all aspects of the driving environment.

Have you been driving your car and suddenly had a glare of light from bright sunshine that momentarily obscured your vision?

We all have had that happen. The same lighting effect can adversely impact a video camera. Perhaps you’ve used the video camera on your smartphone and had occasions whereby the images were hard to make out because a shiny ray of light blotted out what was able to be seen.

Tying these aspects together, imagine that a video camera being used on a self-driving car provides images that are oversaturated with light due to a sunny spot that the vehicle just encountered. The image processing tries to nonetheless use various filters and mathematical models to recover whatever might be within that driving scene as available via the somewhat confusing images.

Upon mathematically analyzing the scene, the image processing software identifies what seems to be a possible dog, hiding in a set of bushes next to the street. The lighting though is impacting the computational task of identifying the objects in the driving scene as embodied in the video imagery collected.

Should the rest of the AI driving system assume that indeed there is a dog there, and as such perhaps start to slow down the car because the dog might suddenly rush into the street?

Perhaps you recognize that this is similar to the earlier scenario. There might be a dog in those nearby bushes, there might not be.

Mull over the ramifications. The rest of the AI driving system might be programmed to assume that whatever the image processing algorithms indicate is the unvarnished truth. If the video camera image analysis is reporting that a dog might be there, it must be there (seemingly the safest assumption).

In a manner of speaking, one might hope that the AI driving system would be somewhat “skeptical” about the image processing report.
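One way to render that “skepticism” programmatically is to carry the detector’s confidence through to the driving decision, rather than collapsing it into a yes-or-no fact. Here is a minimal sketch; the confidence thresholds and action names are made-up illustrations, not anything from a real system:

```python
# Treat the "possible dog" report as a probability, not a certainty.
# Thresholds and action labels are illustrative assumptions.

def choose_action(dog_confidence: float) -> str:
    """Pick a driving response proportional to detection confidence."""
    if dog_confidence >= 0.9:
        return "slow_down"    # high confidence: act decisively
    if dog_confidence >= 0.4:
        return "cover_brake"  # uncertain: prepare to stop, keep moving
    return "proceed"          # low confidence: likely a false alarm

# A glare-degraded image might yield only a middling confidence.
response = choose_action(0.55)
```

The point of the sketch is that doubt becomes a graded response rather than an all-or-nothing one, which is exactly the Goldilocks balance discussed earlier.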

If the AI driving system of the self-driving car is going to slow down the vehicle or bring the car to a halt because there might or might not be a dog present, realize that this is going to likely impact the rest of the traffic. A human-driven car behind the self-driving car might not realize why the self-driving car is slowing down, and the human driver could become irked accordingly or caught off-guard, possibly rear-ending the self-driving car.

I realize that this particular example seems relatively benign; in this scenario it arguably makes sense to have the AI driving system be extremely cautious. Let’s though ramp up this notion. Suppose the AI driving system is constantly taking the most extremely cautious approach to the act of driving. All the time. Incessantly. Would a self-driving car in that mode be viable on our public roadways? Overall, such a design would likely only be able to proceed at a crawling speed, and it would have to do repetitive starts and stops, never proceeding at a full and uninterrupted pace.

We would find such self-driving cars impractical for everyday use in a world full of the uncertainties associated with driving a car.

How is a semblance of “skepticism” rendered into an AI driving system?

Revisit the earlier point that skepticism can have a directional vector.

The AI driving system ought to essentially employ doubt about itself, incorporating probabilities and uncertainties, along with having cross-checks and computational resiliency purposely built into the system.

Are the sensors working correctly? Is a sensor misreporting something? Can the results of one sensor be compared to another sensor to try and resolve what might be in the driving scene (this is referred to as MSDF, or multi-sensor data fusion)? Etc.
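The sensor cross-checking just mentioned, MSDF, can be sketched under a strong simplifying assumption that the sensors err independently, in which case their detection probabilities can be combined with a naive Bayesian odds calculation. Real fusion pipelines are far more elaborate; the probabilities below are illustrative:

```python
# Naive Bayesian fusion of two independent sensor detections.
# Assumes sensor errors are independent and a 50/50 prior, which a
# real MSDF pipeline would not take for granted; values are illustrative.

def fuse(prob_a: float, prob_b: float) -> float:
    """Combine two independent probability estimates of the same event."""
    odds = (prob_a / (1 - prob_a)) * (prob_b / (1 - prob_b))
    return odds / (1 + odds)

# Say the camera is 70% sure there is a dog and the LIDAR is 60% sure.
combined = fuse(0.7, 0.6)  # two weak agreements yield a stronger belief
```

Notice that two moderately confident sensors that agree produce a combined belief stronger than either alone, while a sensor reporting 0.5 (pure uncertainty) leaves the other sensor’s estimate unchanged; that behavior is the mathematical heart of resolving what might be in the driving scene.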

Another realm of self-doubt would entail the on-board computer processors. Are the computer processors working correctly? Is the internal computer memory corrupted? And so on.

And there is the matter of raising doubt or questioning of the AI system by the AI system.

Has the AI driving system encountered an internal error or bug? Is the AI driving system not responding on a timely basis to the roadway conditions? I’ve covered these kinds of issues in my column and they are all important as integral to a sound and reliable AI-based driving system.
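Self-monitoring for timeliness, one of the questions just raised, can be sketched as a simple watchdog: if a processing loop has not checked in within its deadline, the system flags the lapse. The 0.1-second deadline here is an illustrative assumption, not a real-world requirement:

```python
import time

# A minimal watchdog sketch: flag a processing loop that has gone
# silent past its deadline. The deadline value is an assumption.

class Watchdog:
    def __init__(self, deadline_s: float = 0.1):
        self.deadline_s = deadline_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called by the monitored loop each time it completes a cycle."""
        self.last_heartbeat = time.monotonic()

    def is_stale(self) -> bool:
        """True if the loop has missed its reporting deadline."""
        return time.monotonic() - self.last_heartbeat > self.deadline_s
```

A stale watchdog does not tell you what went wrong, only that the system’s self-doubt is warranted, which is typically the trigger for a fallback such as a minimal-risk maneuver.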

Refocusing the directional vector, consider the skepticism aimed at others.

A self-driving car comes upon a car up ahead that is not quite staying in its lane. The car is straying toward the edges of the lane.

An AI driving system that is not devised with a kind of skeptical programmatic method will not especially notice that another car is weaving like this. Only once the other car dramatically swings out of its lane will such an AI driving system ostensibly take notice of what the other car is doing.
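Directing skepticism at the other car could be sketched as tracking its lateral position within the lane over time and flagging a driver whose drift is abnormally spread out. The observation window and threshold below are illustrative guesses:

```python
from statistics import pstdev

# Flag a possibly impaired driver by the spread of their lateral
# offsets from lane center. Threshold (meters) is an assumed value.

def is_weaving(lateral_offsets_m, threshold_m=0.35):
    """True if the tracked car drifts within its lane more than usual."""
    if len(lateral_offsets_m) < 5:
        return False  # too few observations to judge fairly
    return pstdev(lateral_offsets_m) > threshold_m

steady  = [0.05, -0.02, 0.04, 0.0, -0.03, 0.02]   # normal micro-corrections
drifter = [0.6, -0.5, 0.7, -0.6, 0.5, -0.7]       # edge-to-edge swaying
```

The subtlety is that every driver drifts a little, so the detector must distinguish ordinary micro-corrections from the repeated edge-to-edge veering described earlier, without waiting for the car to actually leave its lane.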

Conclusion

The problem facing the developers of self-driving cars is somewhat similar to the earlier point about how much skepticism is healthy to adopt.

We obviously want AI driving systems that are double-checking what is taking place. Perhaps we want them to be triple checking and quadruple checking. This seems warranted given that the act of driving a car involves life-or-death matters.

How far, though, should this kind of multiple checking be extended? You could devise the AI driving system to spin its wheels (metaphorically) by checking and rechecking to the nth degree.

This might lead to the AI driving system not proceeding at all.

Though I am loath to draw a seemingly direct comparison to a human driver per se, perhaps you’ve helped a teenage newbie driver learn to drive a car (I am not suggesting that a teenage driver is somehow the equivalent of an AI driving system!). I recall that one teen got into the car, sat at the wheel, and after contemplating all the possible things that could go awry, became racked with doubt and seemed to conclude that it was too dangerous to take the car for a drive.

By whatever wording you wish to use, whether skepticism is a fitting choice or maybe not, an AI driving system has to have a healthy dose of it, ensuring that the self-driving car is calculating the chances that whatever is seen or detected might be false or misleading, and that other drivers might be going askew, and that the AI system itself might oddly veer and go awry.

Unless you are stridently skeptical, there seems little doubt that this kind of doubt is a necessity when driving a car, whether by humans or by machines.


