Have you ever eaten a papple?
Probably not, and the odds are that you might not have even heard of one before either.
It is a type of pear that resembles an apple in appearance and texture, though once you take a bite out of this juicy fruit, you would realize right away that it tastes quite different from an apple. From the looks of things, a papple is usually round like an apple, reddish-orange like an apple, crunchy like an apple, and yet it is decidedly not an apple. It is a member of the pear family.
Suppose you were reaching to grab an apple and by mistake grabbed a papple.
This could easily happen if you were not paying close attention, perhaps looking intently at your smartphone at some important cat video and meanwhile absentmindedly reaching over to a nearby plate that seemed to be filled with apples. If you shifted your otherwise diverted attention to the fruit that was now in your hand, the chances are that upon closer inspection you might suspect that this delicious-looking item was not an apple.
Of course, those cat videos are so mesmerizing that you likely would simply bring the fruit up to your mouth and start chomping down on the assuredly tasty item.
Imagine the look on your face when your tastebuds registered that something was amiss.
The problem would not be especially harmful or disastrous; it would merely be a mismatch of expectations. Your brain was expecting an apple, and your mouth got something else. The taste would be familiar, akin to the taste of other pears you’ve had before. Perhaps your mind would contemplate that this was an apple that just so happened to somehow have a taste reminiscent of a pear. You might take a startled glance at the fruit, trying to calculate what was going on.
Imagine that, by a bad stroke of luck, it turns out you are allergic to pears. You’ve spent your life avoiding pears. Now, unbeknownst to you, by your own hand, you have inflicted yourself with a pear that you’ll have an allergic reaction to.
Admittedly, that would seem like a pretty remote chance, namely that you happen to have an allergy to pears and mistakenly thought a pear that looked like an apple was readily edible as an apple. Maybe it is about the same chance as winning the lottery or finding yourself taking a flight to the moon.
Stepping back for a moment, the crux to this whole apple-versus-pear conundrum arose due to a misclassification.
You misclassified a pear as an apple. No need to be especially hard on yourself about this misclassification since this special kind of pear does have a notable resemblance to an apple, especially by visual examination alone.
We misclassify a lot of things, all the time, daily, and at any moment.
You are waiting in a restaurant for a friend to come and have lunch with you. Your eyes are scanning the people entering the busy eatery. Assume that it is a cold day and raining or snowing, which means that most of those coming into the restaurant are wearing heavy clothes and generally covered up. It would be quite easy to spot someone that appeared to be your friend, based perhaps on their height and overall shape, yet once they removed their coat and hat, allowing you to now clearly see the person’s face, you would realize it is not the person you were waiting for.
No harm, no foul.
Consider another example of a misclassification, though one with greater consequences.
You are driving your car on a winding road. It is hard to see very far ahead. As you come around a sharp curve, there is something in the middle of the roadway. What is it? Your mind races to quickly assess the nature of the object. Time is a key factor. You need to decide whether to try and swerve around the object, which is going to be dangerous to perform, or directly plow into the object, another potentially dangerous act.
In a split second of available attention, your mind decides it is a tumbleweed.
Usually, it is feasible to ram into a tumbleweed without any notably adverse results. Sure, your car paint might get scratched, but at least you stayed in your lane and did not incur the dangers of swerving, especially on this winding road that was (let’s say) skirting sheer cliffs. So, you drive straight ahead, and the tumbleweed lightly smacks your car. You are still thankfully safe and sound, able to continue the driving journey unabated.
But imagine that in that brief moment of classification, you inadvertently misclassified the object.
Turns out it was a meshy ball of steel cables that had come from a construction site and fallen off the back of a truck on this same winding road. The mesh was rolling and bobbling, just like a tumbleweed, and happened to be painted white and resembled a tumbleweed in both looks and actions on the roadway. Yikes, your decision to proceed ahead based on the belief that this was a tumbleweed is now quite problematic. You strike the object and it smashes your left headlight and gets entangled with your tires. A tire blows out. The car is now difficult to control.
That’s an example of how misclassification can ruin your day (let’s assume, for sake of discussion, you, fortunately, survive the incident and live to tell the tale of the misclassified tumbleweed, so go ahead and let out a sigh of relief, and continue reading herein).
Why bring up this discussion about classifications and misclassifications?
Besides humans making classifications, there is an expectation that AI systems will be making classifications. Consider the use case of AI-based true self-driving cars that routinely need to classify the roadway objects that are encountered during a driving journey.
The sensors of a self-driving car are collecting voluminous data about the world surrounding the vehicle. This includes data from the on-board video cameras, radar, LIDAR, ultrasonic units, and the like. As the data gets collected by the sensors, the AI system has to computationally inspect it to discern what is out there in the world. Various computational pattern matching techniques are often utilized, including the employment of Machine Learning and Deep Learning (ML/DL).
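To make that tangible, here is a minimal, purely illustrative sketch of the classification idea, using a toy nearest-centroid matcher in Python. The feature values and class centroids are invented assumptions for illustration only; an actual AI driving system would use trained deep neural networks over the raw sensor streams rather than anything this simple.

```python
# Toy illustration of classification via pattern matching (not production code).
# A real AI driving stack would run trained deep networks over camera, radar,
# and LIDAR data; here a nearest-centroid matcher over invented features stands in.
import math

# Hypothetical feature centroids "learned" offline (all values are invented).
# Features loosely represent: upright shape, thermal/motion signature, rolling motion.
CENTROIDS = {
    "pedestrian": [0.9, 0.2, 0.7],
    "tumbleweed": [0.3, 0.1, 0.9],
    "snowman":    [0.8, 0.0, 0.0],
}

def classify(features):
    """Return (label, confidence) for a feature vector via nearest centroid."""
    dists = {label: math.dist(features, c) for label, c in CENTROIDS.items()}
    best = min(dists, key=dists.get)
    # Convert distances into a crude confidence score in [0, 1].
    total = sum(1.0 / (d + 1e-9) for d in dists.values())
    confidence = (1.0 / (dists[best] + 1e-9)) / total
    return best, confidence

# A snowman-like observation: upright, cold, and stationary.
print(classify([0.85, 0.05, 0.1]))  # -> ('snowman', ~0.76)
```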
Some people seem to think that AI is amazingly infallible and an idealistic form of mechanized perfection.
Please toss that absurd notion out of your mind.
You already would seemingly agree that humans can misclassify things, and as such, you need to realize and expect that AI systems can and will misclassify things too (I’m not suggesting that humans and the AI of today are equivalent, and I do not wish to somehow anthropomorphize current AI; I am merely pointing out that AI can misclassify, in the same semblance of a notion of misclassification as that which befalls humans).
Per the driving example of the human driver that was on the winding road, a misclassification while at the wheel of a car can be life-threatening. We can presumably allow some slack about misclassifying a pear as an apple, based on the logic that this is rarely a life-or-death kind of classification, but providing latitude toward misclassifying objects that are in a driving environment can have quite serious and sobering consequences.
A recent social media post by Oliver Cameron, CEO of Voyage, brought up an interesting question about the classification and misclassification aspects of self-driving cars. In particular, I’m referring to a posted indication of a snowman that was purportedly misclassified as a pedestrian by an AI driving system.
I’ll give you a moment to ponder the ramifications of that type of misclassification.
Almost as though you were playing a chess game, consider what kind of moves and countermoves that specific misclassification portends.
Is it more akin to the misclassifying of a pear as an apple, or closer to the misclassification of the tumbleweed?
Before we get into the details, first let’s clarify what I mean when referring to AI-based true self-driving cars.
Let’s unpack that and see.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And Misclassifications
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
Here’s our scenario: A self-driving car is going for a jaunt, doing so in an area that had a recent snowfall (for more about how self-driving cars cope with driving in the snow, see my article at this link here).
Assume that the self-driving car is either heading to pick-up a ridesharing passenger or maybe is simply roaming and awaiting a request for a lift (for aspects about the roaming of self-driving cars, see the link here).
Imagine that a snowman has been assembled on a somewhat snow-covered grassy area that is adjacent to the roadway.
This happens all the time and we can certainly expect that once the snow season arrives, there will be lots of bustling children (and adults) that opt to craft a snowman. Perhaps this amounts to one of the most delightful aspects of living in an area that gets snow. You might carp about having to shovel snow from your driveway or complain bitterly about how treacherous the streets become when coated with snow and ice, but by gosh, you can make snowmen!
As the self-driving car comes up on the road that has the snowman, the sensors of the vehicle are all doing their thing, such as visual imagery pouring in from the cameras, radar data being obtained, LIDAR data being collected, etc.
This data is assessed computationally to classify the objects that are in the driving environment. A properly devised AI driving system makes use of Multi-Sensor Data Fusion (MSDF), meaning that the interpretations being derived via each of the types of sensory data are aligned and compared, aiding in trying to discern and classify objects. Think of this as though you might use your eyes and your ears, in combination, when trying to decide what an object is, thus a multi-sensory form of classification (for my coverage of this crucial capability, see the link here).
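As a rough illustration of the fusion notion (and decidedly not the actual MSDF of any automaker), here is a hedged Python sketch in which each sensor pipeline reports its own per-class probabilities and a weighted combination yields the fused classification. The sensor weights and probability figures are invented for illustration.

```python
# Illustrative multi-sensor fusion: combine per-class probabilities from
# independent sensor pipelines into one fused estimate (weights are invented).
SENSOR_WEIGHTS = {"camera": 0.5, "radar": 0.2, "lidar": 0.3}

def fuse(per_sensor_probs):
    """per_sensor_probs: {sensor: {class: prob}} -> fused {class: prob}."""
    fused = {}
    for sensor, probs in per_sensor_probs.items():
        weight = SENSOR_WEIGHTS.get(sensor, 0.0)
        for cls, p in probs.items():
            fused[cls] = fused.get(cls, 0.0) + weight * p
    total = sum(fused.values()) or 1.0
    return {cls: p / total for cls, p in fused.items()}

readings = {
    "camera": {"pedestrian": 0.90, "snowman": 0.10},  # shape looks human-like
    "radar":  {"pedestrian": 0.30, "snowman": 0.70},  # no motion return
    "lidar":  {"pedestrian": 0.60, "snowman": 0.40},
}
print(fuse(readings))  # camera's vote dominates: pedestrian ~0.69, snowman ~0.31
```

In this made-up example, the camera’s human-like-shape reading outweighs the radar’s no-motion reading, so the fused result still leans toward pedestrian, which is exactly the sort of outcome the snowman scenario contemplates.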
Upon collecting the sensory data, the AI driving system determines that a thing consisting of puffy white balls (of snow), topped with a hat and having what seem to be arms (made of sticks), might be a pedestrian.
Most people do not realize that these kinds of AI-based classifications are usually assigned a probability or, if you will, an uncertainty value. Perhaps the AI classifier has computed a 90% chance that this is a pedestrian or maybe only a 5% chance. Depending upon the threshold devised for the AI driving system, and the nature of the object as it is estimated to be, the result can be that the AI stipulates that the object is a pedestrian, though with an assigned chance that it is and an assigned chance that it is not.
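One plausible way such a threshold might look in code, with the cutoff value being purely an assumed number for illustration, is sketched below.

```python
# Hypothetical decision threshold; the 0.50 cutoff is an assumption for
# illustration, and real systems would tune such values per object type.
PEDESTRIAN_THRESHOLD = 0.50

def label_object(p_pedestrian):
    """Stipulate a label while retaining the chance that it is wrong."""
    label = "pedestrian" if p_pedestrian >= PEDESTRIAN_THRESHOLD else "not_pedestrian"
    return {"label": label, "p_is": p_pedestrian, "p_is_not": 1.0 - p_pedestrian}

print(label_object(0.90))  # labeled a pedestrian, with ~10% retained doubt
print(label_object(0.05))  # labeled not a pedestrian, yet a 5% chance is retained
```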
Anyway, let’s assume that the snowman has been classified, or more worrisome, misclassified as a pedestrian.
Your first thought is that this is funny and not at all a concern. It is seemingly a lot better to misclassify a snowman as a pedestrian than to do the opposite of misclassifying a pedestrian as a snowman. If an AI-based classifier mistook a pedestrian to be a snowman, and if the AI system was devised to assume that snowmen do not move and otherwise are not to be a matter of attention, this could lead to some rather unfortunate and possibly ugly consequences.
Hopefully, even in this reversal of a misclassification, once the pedestrian (“snowman”) started to walk or move, the sensors would detect the action, and the AI classifier would reclassify the object to be considered a pedestrian. That doesn’t quite solve the issue though. Perhaps, if the AI had correctly classified the pedestrian at the start of the process, it would have slowed down the self-driving car sooner since there appeared to be a pedestrian near the roadway. Now, somewhat after-the-fact, having reclassified, the available time to take an avoiding driving action might be diminished, thus increasing the risks associated with the existing driving scene.
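A hedged sketch of that reclassification-upon-movement idea follows, showing how a late reclassification leaves less time to react; the motion threshold, speeds, and distances are all illustrative assumptions.

```python
# Sketch of motion-triggered reclassification: a supposed "snowman" that moves
# gets upgraded to "pedestrian", but with less remaining time to react.
def reclassify_on_motion(label, object_speed_mps, distance_m, vehicle_speed_mps):
    """Promote a moving 'static' object to pedestrian; report time to react."""
    if label == "snowman" and object_speed_mps > 0.2:  # assumed motion threshold
        label = "pedestrian"
    seconds_to_react = distance_m / max(vehicle_speed_mps, 0.1)
    return label, seconds_to_react

# Correct classification at 60 m out versus late reclassification at 20 m out:
print(reclassify_on_motion("pedestrian", 1.0, 60, 15))  # ('pedestrian', 4.0)
print(reclassify_on_motion("snowman", 1.0, 20, 15))     # ('pedestrian', ~1.33)
```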
Back to the situation of misclassifying the snowman as a pedestrian.
You are perhaps now thinking that it is “safest” to have made the misclassification in that direction rather than somehow doing the reverse misclassification. All told, this would seem to be a “get out of jail free” card, namely that it is better to misclassify (if misclassification is inevitable) toward being a human than being a non-human (i.e., a pedestrian in lieu of a snowman).
Yes and no.
It partially depends upon what actions the AI driving system has either previously derived via the use of ML/DL or been explicitly programmed to take when encountering a pedestrian.
Suppose the AI determines that since this does seem to be a pedestrian, the self-driving car should slow down. This seems quite prudent. The pedestrian is standing near the curb. There is a possibility that the pedestrian might opt to suddenly leap into the street or dart across the road. Jaywalking happens all the time.
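For instance, a simple rule-based response might look like the following sketch; the slowdown factor and floor speed are invented values for illustration, not an actual driving policy.

```python
# Illustrative rule for responding to a classified pedestrian near the curb;
# the 0.6 slowdown factor and 5.0 m/s floor are assumptions, not real policy.
def plan_speed(label, current_speed_mps, near_roadway):
    """Slow down for a pedestrian near the roadway; otherwise hold speed."""
    if label == "pedestrian" and near_roadway:
        return max(current_speed_mps * 0.6, 5.0)  # ease off in case of a dart-out
    return current_speed_mps

print(plan_speed("pedestrian", 15.0, True))  # 9.0 m/s: the prudent slowdown
print(plan_speed("snowman", 15.0, True))     # 15.0 m/s: no change needed
```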
Admittedly, this pedestrian is not moving around, nor crouched as though about to lunge into the street. By all appearances, the pedestrian seems to be at a standstill and not an immediate threat to the path of the self-driving car. But, better to be safe than sorry, as they say in self-driving cars.
The self-driving car slows down.
Meanwhile, the sensors continue to feed data about the object (and the other myriad of objects in the scene), just in case this particular object (which is now assumed to be a pedestrian), makes any sudden moves.
You could argue that the act of slowing down, when slowing isn’t required, would be a somewhat unintended and possibly adverse consequence of this misclassification. Perhaps a human driver in a car behind the self-driving car is caught off-guard. There doesn’t seem to be any reason whatsoever for the sudden slowing. The human driver wouldn’t even imagine that the snowman is the culprit in this case. Human drivers see snowmen all the time and realize right away that it is a snowman, mindfully classifying the snowman as indeed being a snowman.
You can still assert that the slowing down is fine, and though perhaps disturbing to the human driver in the car behind, nonetheless not a big deal.
Let me take you on a slippery slope about this. Assume that there are lots and lots of self-driving cars on the roadways. Envision that they are all using the same AI-based classifiers (at least for a given brand and model). Whenever they spot a snowman, each of them, of its own accord, will slow down. This happens by the thousands upon thousands of those self-driving cars. None of them classifies the snowman as indeed a snowman (well, except in some instances), and each opts to slow down under the misclassification of the snowman-as-pedestrian.
If the world only consisted of self-driving cars, perhaps this would be dandy. But the reality is that there will be a mix of self-driving cars and human-driven cars for quite a while, likely decades (there are about 250 million conventional cars in the U.S. alone, and they are not going to be replaced overnight by self-driving cars). These “safety first” self-driving cars are going to disrupt the human-driving population on a large scale. In theory, this could end up leading to human drivers rear-ending those self-driving cars (being caught off-guard by the slowing action) or lead to road rage against self-driving cars (we’ll get in a moment to the counterargument about the nature of human drivers, hold on).
I don’t want to extend that futuristic vision very far since it does fall apart rather quickly.
Presumably, the automakers and self-driving tech firms would get feedback about the exasperating misclassifications and take action to enhance the classifier for dealing with the “snowman apocalypse” if you will.
And, for those seeking to ultimately ban human driving, under the assumption that AI driving systems will be safer as drivers (not drinking and driving, not driving distracted, etc.), they would undoubtedly use this snowman-as-pedestrian reaction by human drivers as yet additional evidence that human drivers need to go (though some human drivers insist you will only take away their driving when you pry their cold dead hands from the wheel).
Conclusion
There is a slew of other considerations on this rather simple but telling snowman-as-pedestrian dilemma.
Suppose someone opts to purposely build a snowman in the street as a joke or prank on self-driving cars, which is distinctly not a good idea, and I’ve discussed repeatedly in my columns that people ought not to prank self-driving cars. By the way, in case you are worried that I’ve just let the cat out of the bag, please know that people do sometimes build snowmen in the street just for fun, not due to self-driving cars, and hence this is something that self-driving cars need to be prepared for.
What does the self-driving car do?
Human drivers would presumably ascertain that it is a snowman, and in a civil manner drive slowly around the obstruction. Some self-driving cars of today would do the same, while other brands or models might get logically jammed-up about what to do and send an alert to the fleet operator. And so on.
One argument is that self-driving cars ought to not be on our public roadways until they have been taught or “learned” how to deal with all these various roadway aspects. Others argue that the only viable way for the AI driving systems to be readied involves being on public roadways, rather than relying solely on simulations and special closed training tracks. This is an ongoing and at times acrimonious debate.
Here’s another twist on the snowman-as-pedestrian.
Even if you believe that defaulting to the snowman-as-pedestrian is a safer way to treat the matter, nonetheless the public at large might become concerned that self-driving cars cannot seem to differentiate between the likes of a snowman and a pedestrian.
Say what?
This to most humans is a rather obvious and ordinary form of classification. If self-driving cars cannot figure this out, it raises some grave concerns.
Furthermore, those same qualms might be extended further, leading to the trepidation that maybe there are lots of other misclassifications going on. Maybe fire hydrants are being classified as pedestrians. Maybe small trees are being classified as pedestrians. Where does this end? Indeed, maybe the AI-based classifier is classifying all objects as pedestrians.
This potentially opens a Pandora’s box that unleashes the viral idea that where there is one “error” there are indubitably many more to be found (the tip of the iceberg, so to speak).
Those in the self-driving car industry would say that kind of thinking is misguided and outright hysteria. Maybe so, but it is useful to keep in mind that the public at large is the determiner of whether self-driving cars will be on our roadways, doing so via their elected officials and the regulations that are ultimately put in place or as laws are adjusted based on public opinion.
There is also the ever-present specter of lawsuits that might one day be launched against those that make self-driving cars. Suppose a self-driving car gets into a car accident, one that the AI arguably ought to have avoided. An astute attorney during the trial might cleverly make a showy indication to the jury that this AI was (by implication) so bad that it couldn’t even identify a snowman. Unfair, you say? Well, the counterargument is that it is true, though perhaps meanwhile your side is counterarguing that it is a misleading and poisoning insinuation, depending upon the court case issues at hand.
All because of a normally joyous and completely uneventful snowman.
Despite all of that bit of an icy tale about snowmen, one might say that we ought not to take this instance and turn it into a snowball that runs down a snowy hill and becomes a larger issue than it deserves (let’s avoid making a mountain out of a molehill, one would contend).
Wait a second, here’s another viewpoint: maybe tell children they can no longer make snowmen near the street. This comports with the belief by some that the real world will need to conform to what self-driving cars can do (such as revamping lane markers, curb sizes, etc.), rather than self-driving cars being sufficiently improved to handle the real world that they are immersed in.
One shudders to think that kids would no longer be able to make snowmen out in front of their homes; that assuredly is not the spirit of the snowy season and is an absurdly upside-down way of solving things.
As they say, snowmen aren’t forever, but their memories are.