
Artificial Stupidity Could Be The Crux To AI And Achieving True Self-Driving Cars


When someone says that another person is intelligent, you pretty much assume this is praise for how smart or bright the other person might be.

In contrast, if someone is labeled as stupid, there is a reflexive notion that the person is essentially unintelligent. The common definition of stupidity is simply a lack of intelligence.

This brings up a curious aspect.

Suppose we somehow had a bucket filled with intelligence. Pretend that intelligence is akin to something tangible, a substance we can pour into, and possibly out of, a bucket that we happen to have handy.

Upon pouring this bucketful of intelligence onto, say, the floor, what do you have left?

One answer is that the bucket is now entirely empty, vacuous, containing absolutely nothing.

Another answer is that the bucket upon being emptied of intelligence has a leftover that consists of stupidity. In other words, once you’ve removed so-called intelligence, the thing that you have remaining is stupidity.

I realize this is a seemingly esoteric discussion but, in a moment, you’ll see that the point being made has rather significant ramifications for many important things, particularly the development and rise of Artificial Intelligence (AI).

Ponder these weighty questions:

·        Can intelligence exist without stupidity, or, in a practical sense, must some amount of stupidity always exist wherever intelligence exists?

Some assert that intelligence and stupidity are a zen-like yin and yang.

In this perspective, you cannot grasp the nature of intelligence unless you also have a semblance of stupidity as a kind of measuring stick.

It is said that humans become increasingly intelligent over time, and thus are reducing their levels of stupidity. You might suggest that intelligence and stupidity are playing a zero-sum game, namely that as your intelligence rises, your stupidity simultaneously lowers (and, if perchance your stupidity rises, your intelligence correspondingly lowers).

·        Can humans arrive at 100% intelligence and zero stupidity, or are we fated to always have some amount of stupidity, no matter how hard we might try to become fully intelligent?

Returning to the bucket metaphor, some would claim that you will never be completely and exclusively intelligent, having expunged stupidity altogether. There will always be some amount of stupidity sitting in that bucket.

If you are clever and try hard, you might be able to narrow down how much stupidity you have, though some amount will still remain in that bucket, albeit perhaps at some minimal level.

·        Does having stupidity help intelligence or is it harmful to intelligence?

You might be tempted to assume that any amount of stupidity is a bad thing and therefore we must always be striving to keep it caged or otherwise avoid its appearance.

But we need to ask whether that simplistic view, tossing stupidity into the “bad” category and placing intelligence into the “good” category, is potentially missing something more complex. You could argue that being stupid, at times and in limited ways, offers a means for intelligence to get even better.

When you were a child, suppose you stupidly tripped over your own feet and realized afterward that you were not carefully lifting your feet. Henceforth, you became more mindful of how to walk and thus became intelligent at the act of walking. Maybe later in life, while walking on a thin curb, you managed to save yourself from falling off the edge, partially due to that early lesson that was sparked by stupidity and became part of your intelligence.

Of course, stupidity can also get us into trouble.

Despite having learned via stupidity to be careful as you walk, one day you decide to strut on the edge of the Grand Canyon. While doing so, oops, you fall off and plunge into the chasm.

Was it an intelligent act to perch yourself on the edge like that?  

Apparently not.

As such, we might want to note that stupidity can be a friend or a foe, and it is up to the intelligence portion to figure out which is which in any given circumstance and any given moment.

You might envision that there is an eternal struggle going on between the intelligence side and the stupidity side.

On the other hand, you might equally envision that the intelligence side and the stupidity side are pals, each tugging at the other, such that it is not so much a fight as a delicate dance, a form of tension over which should prevail (at times) and how each can moderate or even aid the other.

This preamble provides a foundation to discuss something increasingly worthy of attention, namely the role of Artificial Intelligence and (surprisingly) the role of Artificial Stupidity.

Thinking Seriously About Artificial Stupidity

We hear every day about how our lives are being changed via the advent of Artificial Intelligence.

AI is being infused into our smartphones, our refrigerators, our cars, and so on.

If we are intending to place AI into the things we use, it raises the question of whether we need to consider the yang of the yin, specifically: do we need to be cognizant of Artificial Stupidity?

Most people snicker upon hearing or seeing the phrase “Artificial Stupidity,” and they assume it must be some kind of insider joke to refer to such a thing.

Admittedly, the conjoining of the words artificial and stupidity seems, well, perhaps stupid in and of itself.

But, by going back to the earlier discussion about the roles of intelligence and stupidity as they exist in humans, you can recast your viewpoint and likely see that whenever you carry on a discussion about intelligence, one way or another you inevitably need to also consider the role of stupidity.

Some suggest that we ought to use another way of expressing Artificial Stupidity to lessen the amount of snickering. Floated phrases include Artificial Unintelligence, Artificial Humanity, Artificial Dumbness, and others, none of which has caught on as yet.

Please bear with me and accept the phrasing of Artificial Stupidity and also go along with the belief that it isn’t stupid to be discussing Artificial Stupidity.

Indeed, you could make the case that not discussing Artificial Stupidity is itself the stupid approach: by being unwilling to accept that stupidity exists in the real world, you leave the artificial world of computer systems, in which we are attempting to recreate intelligence, ignorant of or blind to what is essentially the other half of the overall equation.

In short, some say that true Artificial Intelligence requires combining the “smart” or good AI that we think of today with Artificial Stupidity (warts and all), though the inclusion must be done in a smart way.

Indeed, let’s deal with the immediate knee-jerk reaction many have to this notion by dispelling the argument that including Artificial Stupidity in Artificial Intelligence inherently and irrevocably introduces stupidity and, presumably, therefore aims to make AI stupid.

Sure, if you stupidly add stupidity, you have a solid chance of undermining the AI and rendering it stupid.

On the other hand, in recognition of how humans operate, the inclusion of stupidity, when done thoughtfully, could ultimately aid the AI (think about the story of tripping over your own feet as a child).

Here’s something that might really get your goat.

Perhaps the only means to achieve true and full AI, which to date is nowhere near human intelligence levels, consists of infusing Artificial Stupidity into AI; thus, as long as we keep Artificial Stupidity at arm’s length or treat it as a pariah, we trap ourselves into never reaching the nirvana of utter and complete AI that is seemingly as intelligent as humans are.

Ouch, by excluding Artificial Stupidity from our thinking, we might be damning ourselves to never arriving at the pinnacle of AI.

That’s a punch to the gut and so counter-intuitive that it often stops people in their tracks.

There are emerging signs that revealing and harnessing artificial stupidity (or whatever it ought to be called) can be quite useful.

At a recent talk sponsored by the Simons Institute for the Theory of Computing at the University of California, Berkeley, I chatted with MIT Professor Andrew Lo about his clever inclusion of artificial stupidity to improve financial models, done in recognition that human foibles need to be appropriately recognized and contended with in the burgeoning field of FinTech.

His fascinating co-authored book A Non-Random Walk Down Wall Street is an elegant look at how human behavior is composed of both rationality and irrationality, giving rise to his theory, coined the Adaptive Markets Hypothesis. His insightful approach goes beyond the prevailing bounds of how financial trading marketplaces do and can best operate.

Are there other areas or applications in which artificial stupidity might come into play?

Yes.

One such area, I assert, involves the inclusion of artificial stupidity in the development of true self-driving cars.

Shocking?

Maybe so.

Let’s unpack the matter.

Exploiting Artificial Stupidity For Gain

When referring to true self-driving cars, I’m focusing on Level 4 and Level 5 of the standard scale used to gauge autonomous cars. These are self-driving cars that have an AI system doing the driving, with no need and typically no provision for a human driver.

The AI does all the driving and any and all occupants are considered passengers.

On the topic of Artificial Stupidity, it is worthwhile to quickly review the history of how the terminology came about.

In the 1950s, the famous mathematician and pioneering computer scientist Alan Turing proposed what has become known as the Turing test for AI.

Simply stated, suppose you could interact with a computer system imbued with AI and, at the same time, separately interact with a human, without being told beforehand which was which (let’s assume both are hidden from view). Upon making inquiries of each, you are tasked with deciding which one is the AI and which one is the human.

We could then declare the AI a winner, as exhibiting intelligence, if you could not distinguish between the two contestants. In that sense, the AI is indistinguishable from the human contestant and must ergo be considered its equal in intelligent interaction.

There are some holes in this logic, of which I provide a detailed analysis here; in any case, the Turing test is widely used as a barometer for measuring whether or when AI might truly be achieved.

There is a twist to the original Turing test that many don’t know about.

One qualm expressed was that you might be smarmy and ask the two contestants to calculate, say, pi to the thousandth digit.

Presumably, the AI would do so wonderfully, readily telling you the answer in the blink of an eye, precisely and entirely correctly. Meanwhile, the human would struggle, taking quite a while to answer if using paper and pencil to make the laborious calculation, and would likely introduce errors into the answer.

Turing realized this aspect and acknowledged that the AI could be essentially unmasked by asking such arithmetic questions.

He then took the added step, one that some believe opened a Pandora’s box, and suggested that the AI ought to avoid giving the right answers to arithmetic problems.

In short, the AI could try to fool the inquirer by appearing to answer as a human might, including incorporating errors into the answers given and perhaps taking the same length of time that doing the calculations by hand would take.

Starting in the early 1990s, a competition akin to the Turing test was launched, offering a modest cash prize; it has become known as the Loebner Prize. In this competition, the AI systems are typically infused with human-like errors to help fool the inquirers into believing the AI is the human. There is controversy underlying the competition, but I won’t go into that herein. A now-classic article about it appeared in The Economist in 1991.

Notice that once again we have a bit of irony that the introduction of stupidity is being done to essentially portray that something is intelligent.

This brief history lesson provides a handy launching pad for the next elements of this discussion.

Let’s boil down the topic of Artificial Stupidity into two main facets or definitions:

1)     Artificial Stupidity is the purposeful incorporation of human-like stupidity into an AI system, done to make the AI seem more human-like, not to improve the AI per se but to shape humans’ perception of the AI as seemingly intelligent.

2)     Artificial Stupidity is an acknowledgment of the myriad human foibles and the potential inclusion of such “stupidity” into or alongside the AI, in a conjoined manner that can potentially improve the AI when properly managed.

One common misconception that I’d like to dispel about the first definition involves the somewhat false assumption that the computer is going to purposefully miscalculate something.

Some shriek in horror and disdain at the suggestion that the computer would intentionally do a calculation incorrectly, such as figuring out pi in a manner that is inaccurate.

That’s not what the definition necessarily implies.

It could be that the computer correctly calculates pi to the thousandth digit, then opts to tweak some of the digits (keeping track of which ones it altered), does all of this in the blink of an eye, and then waits to display the result until an amount of time equivalent to a human-by-hand calculation has elapsed.

In that manner, the computer has the correct answer internally and has only displayed something that seems to have errors.

Now, that certainly could be bad for the humans relying upon what the computer has reported, but note that this is decidedly not the same as the computer having in fact miscalculated the number.
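To make that mechanism concrete, here is a minimal Python sketch of the idea, purely illustrative; the function name, the error rate, and the delay are my own assumptions, not anyone’s actual implementation:

```python
import random
import time
from math import pi

def artificially_stupid_pi(digits: int = 12, error_rate: float = 0.2,
                           human_delay_s: float = 2.0):
    """Compute pi correctly, then display a deliberately perturbed copy.

    The correct value is retained internally; only the displayed string
    contains "errors". The delay mimics human by-hand calculation time.
    """
    correct = f"{pi:.{digits}f}"  # internally held, fully correct
    shown = list(correct)
    for i, ch in enumerate(shown):
        if ch.isdigit() and random.random() < error_rate:
            shown[i] = random.choice("0123456789")  # tweak a digit
    time.sleep(human_delay_s)  # wait a human-plausible amount of time
    return correct, "".join(shown)

internal, displayed = artificially_stupid_pi()
print("displayed (with errors):", displayed)
```

The key point the sketch illustrates is that the system knows the right answer all along and merely chooses what to display.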

There’s more to be said about such nuances, but for now let’s continue forward.

Both of those definitional variants of Artificial Stupidity can be applied to true self-driving cars.

Doing so carries a certain amount of angst but is worthwhile to consider.

Artificial Stupidity And True Self-Driving Cars

Today’s self-driving cars being tried out on our public roadways have already gained a somewhat muddled reputation for their driving style. Overall, driverless cars to date are akin to a novice teenage driver, timid and somewhat hesitant about the driving task.

For example, when you encounter a self-driving car, it will often try to create a large buffer zone between itself and the car ahead, attempting to abide by the car-lengths rule of thumb that you were taught when first learning to drive.

Human drivers generally don’t care about the car-lengths safety zone and edge up on other cars, doing so to their own endangerment.

Here’s another example of such driving practices.

Upon reaching a stop sign, a driverless car will usually come to a full and complete stop. It will wait to see that the coast is clear, and then cautiously proceed. I don’t know about you, but I can say that where I drive, nobody makes complete stops anymore at stop signs. A rolling stop is the norm nowadays.

You could assert that humans are driving in a reckless and somewhat stupid manner.

By not keeping enough car lengths between your car and the one ahead, you increase your chances of a rear-end crash. By not fully stopping at a stop sign, you increase your risk of colliding with another car or a pedestrian.
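To put rough numbers on that first point, consider a quick back-of-the-envelope calculation in Python; the speed, reaction time, and car length below are assumed round figures for illustration, not measured data:

```python
# Why a tight following gap is risky: at highway speed, driver (or system)
# reaction time alone consumes far more distance than a tight gap provides.
speed_mps = 29.0        # ~65 mph in meters per second (assumed)
reaction_time_s = 1.5   # commonly cited human reaction time (assumed)
car_length_m = 4.5      # rough passenger-car length (assumed)

reaction_distance = speed_mps * reaction_time_s  # ground covered before braking starts
tight_gap = 1 * car_length_m                     # an "edged up" human-style gap
rule_gap = (speed_mps / 4.47) * car_length_m     # one car length per 10 mph (4.47 m/s)

print(f"Distance covered during reaction: {reaction_distance:.0f} m")  # ~44 m
print(f"Tight human-style gap:            {tight_gap:.0f} m")          # ~4 m
print(f"Car-lengths rule-of-thumb gap:    {rule_gap:.0f} m")           # ~29 m
```

Even the textbook car-lengths gap is slimmer than the distance covered during the reaction alone, which is why edging up even closer compounds the risk.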

In a Turing test manner, you could stand on the sidewalk and watch cars going past you, and by their driving behavior alone you could likely ascertain which are the self-driving cars and which are the human-driven cars.

Does that sound familiar?

It should, since this is roughly the same as the arithmetic-precision issue raised earlier.

How to solve this?

One approach would be to introduce Artificial Stupidity as defined above.

First, you could have the onboard AI purposely shorten the car’s buffer-distance settings, causing it to drive in a manner similar to humans (butting up to other cars). Likewise, the AI could be modified to roll through stop signs. This is all rather easily arranged.
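As a rough illustration of how easily arranged this is, such mimicry could amount to little more than parameter settings in a driving-policy module. The sketch below is hypothetical; the class, field names, and values are my own, not any actual self-driving stack’s API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingStyle:
    """Hypothetical tunable knobs for a driving-policy module."""
    following_gap_s: float          # time gap kept to the car ahead, seconds
    stop_sign_min_speed_mps: float  # creep speed at stop signs; 0.0 = full stop

# Cautious defaults, akin to today's driverless-car behavior.
BY_THE_BOOK = DrivingStyle(following_gap_s=3.0, stop_sign_min_speed_mps=0.0)

# "Artificially stupid" settings that mimic typical human drivers:
# a tighter following gap and a rolling stop.
HUMAN_LIKE = DrivingStyle(following_gap_s=1.0, stop_sign_min_speed_mps=2.0)
```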

Humans watching a driverless car and a human-driven car would no longer be able to discern one from the other, since both would be driving in the same error-laden way.

That seems to solve one problem as it relates to the perception that we humans might have about whether the AI of self-driving cars is intelligent or not.

But, wait a second, aren’t we then making the AI into a riskier driver?

Do we want to replicate and promulgate these crash-causing, risky human driving behaviors?

Sensibly, no.

Thus, we ought to move to the second definition of Artificial Stupidity, namely incorporating these “stupid” ways of driving into the AI system in a substantive way that allows the AI to leverage those aspects when applicable, yet be aware enough to avoid or mitigate them when needed.

Rather than having the AI blindly drive in human, error-laden ways, the AI should be developed to be well-equipped to cope with human driving foibles, detecting those foibles and being a proper defensive driver, while also leveraging those foibles itself when the circumstances make sense (for more on this, see my posting here).
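Continuing the hypothetical DrivingStyle sketch from earlier, this second, managed approach might look like a selector that deploys the human-like settings only when conditions are benign; the function, inputs, and thresholds are again illustrative assumptions:

```python
def choose_style(traffic_density: float, pedestrians_nearby: bool,
                 visibility_good: bool) -> DrivingStyle:
    """Pick a driving style, mimicking human foibles only when judged safe.

    This mirrors the second definition of Artificial Stupidity: the
    foibles are available to the AI but applied selectively, never blindly.
    """
    if pedestrians_nearby or not visibility_good:
        return BY_THE_BOOK  # never mimic foibles in risky settings
    if traffic_density > 0.7:
        # In dense traffic, matching surrounding human behavior (e.g., a
        # tighter gap) can reduce being cut off or abruptly braked upon.
        return HUMAN_LIKE
    return BY_THE_BOOK
```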

Conclusion

One of the most unspoken secrets about today’s AI is that it has no semblance of common-sense reasoning and in no manner whatsoever has the capability of overall human reasoning (AI that does have such capabilities is often referred to as Artificial General Intelligence, or AGI).

As such, some would suggest that today’s AI is closer to the Artificial Stupidity side of things than it is to the true Artificial Intelligence side of things.

If there is a duality of intelligence and stupidity in humans, presumably a similar duality will be needed in an AI system if it is to exhibit human intelligence (though some say that AI might not have to be so duplicative).

On our roads today, we are unleashing so-called AI self-driving cars, yet the AI is not sentient and not anywhere close to being sentient.

Will self-driving cars only be successful if they can climb further up the intelligence ladder?

No one yet knows, and it’s certainly not a stupid question to ask.


