AI Ethics And The Acclaimed AI Uncanny Valley, Which Also Rattles AI-Based Self-Driving Cars


Sometimes slightly odd things catch your attention and get your intuitive juices going, hinting that there is somehow something amiss. The oddness is not blatant, not at all flagrantly in your face. You might not even be able to immediately put your finger on what the incongruity is or why your proverbial spidey sense is tingling.

Perhaps subtle telltale clues are being sensed. Maybe in your gut you realize a dissonance exists. I guess you could say that there is just the slightest hint of understated eeriness and your delicate human radar is picking up on otherwise seemingly hidden signals.

Welcome to the uncanny valley.

If you’ve never heard of the uncanny valley, a topic popularized in the field of AI and especially robotics, you are in for a bit of a treat since that’s the subject matter I’m going to be closely discussing and analyzing herein.

The overarching notion can apply to a lot of things that we experience in life, though the keystone principles and originating definition entail AI systems and robots. We’ll first explore the origins and initial meaning of the uncanny valley and then broaden the view to see how the phenomenon seems applicable in larger contexts.

I might also add that we’ll consider whether the uncanny valley exists at all.

You see, some skeptics and cynics argue that the whole matter is a bit of a shenanigan and does not hold water. Be careful when bringing up the topic to those that are in the know. Some will smile gleefully and clap you on the back that you are keenly familiar with the uncanny valley, while others will stridently lecture you that it is a smorgasbord of hogwash and you need to summarily clean out your mind with a sudsy bar of mental cleansing soap.

The good news here is that you get to decide whether the uncanny valley is real or not, along with whether it has merits for mindful application or instead should be tossed unceremoniously onto the techno-ideas junkheap. In that sense, you are in the driver’s seat.

All of this also pertains closely to the burgeoning field of Ethical AI and the rising realization that society has to seriously and soberly pay attention to the ethics of AI. We’ll make that tie-in momentarily.

The best place to start is by directly quoting the professor who came up with the uncanny valley concept and outright named this proclaimed phenomenon. In 1970, Professor Masahiro Mori at the Tokyo Institute of Technology published a rather petite article in a somewhat lesser-known journal called Energy (not particularly a hotbed for AI and robotics per se), and said this:

“I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.”

Please note that the above phrasing is shown in English, though the original paper was in Japanese. The English translation was overseen by the author and later published in the IEEE Spectrum in 2012, credited to Masahiro Mori as the author. You can read the paper yourself since it is openly and freely available online. It is a decidedly quick read of maybe ten minutes or so and does not contain any techie-heavy terminology.

That being said, it is interesting and somewhat amazing that such a brisk article, published in 1970, eventually started an entire arena of inquiry and launched a myriad of related studies, projects, research, and at times a firestorm of controversy about whether the introduced concept of an uncanny valley truly exists. I suppose this showcases that intriguing and at times viewpoint-changing ideas do not have to be massively convoluted or overstuffed with jargon and loftiness. A succinct idea can be just as powerful as, if not more powerful than, a sprawling and intricate one.

I trust that will encourage you to try to foster your new ideas, doing so with the realization that sweet and simple can be as magnificent as, and sometimes more so than, convoluted and complex.

Let’s jump back into the uncanny valley elucidation.

You come across a robotic system that has a face akin to a human face. Imagine that this robotic face has been devised through numerous iterations. AI developers putting together the robot head have been incrementally striving to make the robotic facial portion appear more and more like an actual human face.

Their first try was exceedingly primitive. The robot face had that same look you’ve seen in sci-fi movies: entirely metallic and showcasing gears and wires. You know instantly when you gaze at the contraption that it is a robot. No question in your mind about it.

The next try by the AI developers involved wrapping some plastic materials around the metallic pieces. Though this looks somewhat friendlier, you still instantly know that it is a robot head and a robotic face. Again, easy-peasy to detect.

Those AI developers are determined to keep this going. They sculpt the plastic and give it skin tones. They add features that seem very similar to a human face, such as moles, hair, blemishes, and the like.

At first glance, you might be led into believing that this is a human face. If a picture were taken of the robotic face and you were asked to identify whether the picture depicted a person versus a robotic face, you might be stymied, unable to immediately say which it was. On the other hand, if you were standing next to the device, you likely would be able to discern upon close inspection that it is not a human but instead a robotic contrivance.

The thing is, before you got that chance to do closeup scrutiny, there was something about the face that didn’t seem to entirely add up. It sure looked like a human face. But there was something amiss. You had to keep intently staring, over and over, to put your finger on what didn’t look quite right. Maybe it was a real face. Then again, maybe it wasn’t. Your mind roils accordingly.

A semblance of eeriness enters your mind.

You did not have that same semblance of eeriness when you saw the two earlier versions. You could without pause or hesitation detect that the robot was a robot. Only a child might get fooled into believing that either of those versions was a real person.

This latest version though was different. It wasn’t yet perfected so as to appear fully like a human face. Nor was it so far off from the real thing that it was obviously a robot. A kind of muddled middle ground had been reached.

Suppose the developers pushed further into their research efforts and tidied everything up so that the robotic face was almost indistinguishable from a human face. No matter how long you stare at the thing, you are unsure whether or not it is human. When informed that it is the robotic face, you are taken aback. Gosh, they’ve done a great job of making it look real.

Notice that so far you’ve only been considering the robotic aspects based on appearance alone.

We could add movement to the equation. This adds an additional dimension for discerning whether the robot is a robot versus a human. I don’t want to fully entertain this as a multi-dimensional kind of problem in this discussion since it makes an elucidation of the topic more intricate (though multiple dimensions are inevitably intertwined). In any case, imagine that you not only saw the robotic face but also could watch as the robot moved its facial features, such as the mouth, the eyes, the nose, etc. Obviously, those could be giveaways too about whether this is a robot or a human.

One vital aspect to keep at the forefront of the uncanny valley is that the original conception emphasizes human affinity. The claimed phenomenon is that your affinity increases as you see the incrementally improved robot faces, until the point at which the uncanny variant arises. At that juncture, your sense of affinity is said to drop dramatically, plummeting down into an affinity chasm or valley.

For the particular version that caused you to suspect something was amiss, your affinity has allegedly radically fallen. Furthermore, according to the theory, your affinity can skyrocket back up once you encounter the more advanced version that is nearly identical to a truly human form.

Here’s more about what the author stated about our normal inclination to assume that aspects of life are smoothly increasing: “The mathematical term monotonically increasing function describes a relationship in which the function y = ƒ(x) increases continuously with the variable x. For example, as effort x grows, income y increases, or as a car’s accelerator is pressed, the car moves faster. This kind of relation is ubiquitous and very easily understood. In fact, because such monotonically increasing functions cover most phenomena of everyday life, people may fall under the illusion that they represent all relations. Also attesting to this false impression is the fact that many people struggle through life by persistently pushing without understanding the effectiveness of pulling back. That is why people usually are puzzled when faced with some phenomenon this function cannot represent.” This is quoted per the IEEE Spectrum translated paper.

This nearly universal assumption about always increasing can be overturned when we encounter something amiss. The eeriness and suspicion will cause a relatively abrupt and dramatic drop in affinity, the theory goes, such as when you opt to shake a robotic hand and cannot feel the bony characteristics of a human hand: “When this happens, we lose our sense of affinity, and the hand becomes uncanny. In mathematical terms, this can be represented by a negative value.”
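
To give the claimed shape a concrete feel, here is a minimal illustrative sketch in Python. To be clear, the curve is a toy formula of my own devising and not Mori’s actual data: a monotonically increasing baseline minus a sharp dip centered just short of full human likeness, with the center, width, and depth of the dip being purely illustrative assumptions.

```python
import numpy as np

def affinity(likeness):
    # Monotonically increasing baseline, per the y = f(x) intuition quoted above.
    baseline = likeness
    # Hypothetical valley: a sharp dip centered at 85% human likeness.
    # The center (0.85), width (0.05), and depth (1.4) are illustrative guesses.
    dip = 1.4 * np.exp(-((likeness - 0.85) ** 2) / (2 * 0.05 ** 2))
    return baseline - dip

for x in (0.2, 0.5, 0.85, 1.0):
    print(f"likeness={x:.2f} -> affinity={affinity(x):+.3f}")
# At likeness=0.85 the affinity lands near -0.55, echoing the negative value
# Mori describes; by likeness=1.0 it has recovered to nearly +1.
```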

If you accept the premise that there is this phenomenon of an uncanny valley, I’m sure you are wondering what good it does you to know that the uncanny valley apparently exists.

That’s the all-time classic “so what?” test of practicality.

Turns out that a lot of people have come up with a lot of interpretations of what we should or can do about the uncanny valley. There are tons of opinions. I’ll be addressing some of that shortly.

Meanwhile, here’s what Masahiro Mori proffered: “We hope to design and build robots and prosthetic hands that will not fall into the uncanny valley. Thus, because of the risk inherent in trying to increase their degree of human likeness to scale the second peak, I recommend that designers instead take the first peak as their goal, which results in a moderate degree of human likeness and a considerable sense of affinity. In fact, I predict it is possible to create a safe level of affinity by deliberately pursuing a nonhuman design. I ask designers to ponder this.”

A quick condensation on my part of twelve handy seat-of-the-pants rules about what to do concerning the uncanny valley goes like this for AI developers in particular:

1) Be aware of the uncanny valley and be on your toes accordingly

2) You presumably want to attain human affinity for your AI as much as possible

3) Be prepared for a loss of human affinity if your AI falls into the uncanny valley

4) Seek to avoid the uncanny valley by devising your AI thusly

5) It is respectable to have a goal that is short of the uncanny valley

6) Sidle up to the edge of the uncanny valley but don’t fall over the cliff

7) Do not be obsessed with going beyond the uncanny valley

8) There is though a chance that you can leap past the uncanny valley

9) Do not become preoccupied with the leap since you might fall into the valley anyway

10) Maximum human affinity would admittedly be attained by getting past the uncanny valley

11) Nonetheless, there is adequate and suitable affinity found prior to the uncanny valley

12) Be continually mindful of the uncanny valley and do not let it slip your mind

These dozen are all general precepts that can be considered keystone or anchoring points for knowing about the uncanny valley. I’ll right away acknowledge that there are other points not listed in those scant dozen that might be argued as equally important. I will also readily acknowledge that there is bound to be disagreement over each of the points identified, and a lengthy heated debate can ensue on each point made.

More so, some would say the twelve points are altogether rubbish because they are based on a falsehood to begin with. There is no such thing as an uncanny valley, they would argue. It is all merely chicanery and a made-up contrivance that solely and sadly appeals to weak minds (ouch, that earnestly hurts!). Any attention to the uncanny valley is a wasted breath of air and someone should come along and put a sharp wooden theoretical stake in the heart of the matter (some researchers have tried doing so).

For the sake of discussion, let’s go with the flow and assume that there is an uncanny valley and that it generally matches what I’ve indicated so far. Those that disagree with the conceptualization of an uncanny valley are welcome to zone out or continue reading with their teeth gritted and their intellectual anger brewing and boiling (sorry about that).

Here’s how Ethical AI and the focus of devising and fielding ethical AI come to bear. By the way, for my ongoing and in-depth explorations of AI ethics, see my discussion at this link here and this link here, just to name a few.

The uncanny valley is a decidedly love-hate affair for those into Ethical AI.

First, some needed background. One of the most hair-raising ethical AI-related qualms is that humans can be fooled into believing that an AI system is sentient. Please be aware that there isn’t any AI of today that gets anywhere remotely close to being sentient. It just isn’t happening at this time. My ostensibly “brazen” assertion is made despite those incessant and blaring headlines that declare this AI or that AI is either sentient or close enough to be regarded as such. Malarkey. We aren’t at AI sentience.

We don’t know how to get there. We don’t know if it will happen. AI sentience is a worthwhile dream and aspiration, though do not jump the gun and think we are on the cusp of achieving it.

Of course, many are fervently warning that if we somehow do manage to pull off AI sentience, whether we do so by design or by pure accident, we will confront an existential risk. In that manner of thinking, maybe seeking AI sentience is not quite so worthwhile. The risk is that this sentient AI might determine that humans aren’t worth having around. We could get squished like a bug. Or become enslaved to AI. This could occur by the AI overtly choosing to do so, or the AI might end up being our own doomsday machine that destroys us by our own ineptitude. For my coverage on those troubling outcomes of AI sentience or singularity, see the link here.

A crucial Ethical AI concern is that the developers of AI and those that field AI are at times suckering humans into thinking that the AI is sentient. The manner in which the AI exhibits itself, such as by a robotic formulation or by its conversational interactivity, can insidiously spur people into assuming that the AI is sentient. This in turn leads you down a potentially foul primrose path.

If you fall into the mental trap of thinking that an AI system is sentient, you are likely going to rely on it to do things that sentient beings would do. But there isn’t as yet any human-like common sense built into any of today’s AI. The AI we experience currently is extremely brittle and shallow when it comes to human-like capacities. You could get yourself into some unsavory and dangerous waters by believing that an AI system is sentient.

How does that connect with the uncanny valley?

Here’s the deal.

Recall that the uncanny valley seems to tell us that human affinity will be gradually rising as an AI system or a robot gets closer and closer to a human-like formulation. At the juncture whereby the AI system is nearing the point of being pretty close, yet still not quite there, we get an eerie feeling that something is amiss. Up until then, we knew the AI wasn’t human. Now we aren’t sure. Our human affinity drops. Only once the AI or robot gets utterly convincing as to human-like capacities do we regain our semblance of affinity for the device.

AI developers that take this to heart would presumably intentionally strive to keep their AI out of the uncanny valley, aiming to come to a stop, in terms of the features of the AI, just before falling into the uncanny abyss (recall, that’s also what Masahiro Mori emphasized). The developers would apparently do so by making sure that abundant telltale clues still existed to make it rather clear-cut that the AI is less than sentient and ergo not a human nor the equal of one.

AI ethicists would generally welcome that heartfelt effort.

The reasoning is straightforward. AI developers so informed and embracing of this are trying to make sure that the AI system does not mislead people into falsely ascribing human-like faculties to the AI. That is assuredly good news. The developers will purposely craft the AI to prevent a dive into the uncanny valley. Humans will readily realize that the AI is not sentient.

Trying to get AI developers to embrace such an approach is not easy. Indeed, it can be counterintuitive to their usual instincts and driving ambitions.

Many claims are made that AI developers and techies in general are consumed with goals. They see a goal and often will blindly pursue it with great gusto. No time to stop and smell the roses. Off to the races we go. In the AI field, the normative goal would be an idealized AI that is indistinguishable from humans in that the AI would be on intelligent parity with us. But we aren’t there yet. As such, the uncanny valley provides a secondary goal, landing before the otherwise damning uncanny valley, and becomes a goal that is nonetheless acceptable. Sure, it is not the prized golden ring, but the idea is that this “secondary” prize is fine, thank you very much, and you can be proud of it.

We have altered the ruinous goal-seeking topmost ambition and harnessed it into a logical-sounding reasoned basis to do the right thing, as it were.

Hurrah!

Score a win for AI ethics.

But wait a second, spoiler alert, there’s something else that we need to equally consider.

Now that those savvy AI developers know about the uncanny valley, they might turn their wits and techie prowess toward purposely leaping over the abyss and yet do so with a semblance of deception in mind. Make the AI look and appear to be entirely human-like, even though the developers know this to be untrue.

The roguish thinking goes like this. Do not let your AI system tip its hand and cause people to get that elusive undercurrent of eeriness. Excise the facets that might give any hint or clue that the AI is not of human capability. Do this while secretly realizing and inexcusably knowing that the AI isn’t of human capability, all in service of hiding that truth from those that interact with or are dependent upon the AI.

What devilish plans.

Ironically, the uncanny valley could be a kind of wake-up call for AI developers that if they want to really fool people, they must be clever enough to escape the abyss. They aren’t doing so by attaining complete AI, but instead by erecting smoke and mirrors to misleadingly make the AI seem as though it is human. Had the AI developers not realized that this uncanny valley exists, they by and large would have fallen into it. That would have been good for humankind, since humans would then lose their affinity toward the AI and not over-rely on today’s quality of AI.

Lamentably, by knowing that the trap exists, AI developers that want to sneak around it are going to find perniciously clever ways to do so.

Score a hit against the precepts of Ethical AI.

Do you see how this creates a love-hate relationship for AI ethicists about the uncanny valley?

Darned if you do, darned if you don’t.

I realize this has been a somewhat highbrow examination of the uncanny valley and you might be hankering for some day-to-day examples. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI, including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the uncanny valley, and if so, what does this inform us to do?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
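
If it helps to see that taxonomy spelled out, here is a minimal sketch in Python. The numeric values mirror the SAE level numbers just mentioned, though the names and the helper function are my own illustrative labels rather than official SAE terminology:

```python
from enum import IntEnum

class DrivingLevel(IntEnum):
    # Values mirror the SAE level numbers; the names are illustrative labels.
    PARTIAL = 2      # human co-shares the driving (ADAS add-ons)
    CONDITIONAL = 3  # human must remain ready to take over
    HIGH = 4         # AI drives on its own within a limited operational domain
    FULL = 5         # AI drives on its own everywhere (not yet achieved)

def is_true_self_driving(level: DrivingLevel) -> bool:
    # Per the framing here, Levels 4 and 5 count as true self-driving.
    return level >= DrivingLevel.HIGH

print(is_true_self_driving(DrivingLevel.PARTIAL))  # False
print(is_true_self_driving(DrivingLevel.HIGH))     # True
```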

There is not yet a true self-driving car at Level 5; we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And The Uncanny Valley

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and ethical AI questions entailing the uncanny valley.

There are four aspects pertaining to this matter that will be covered herein:

1. The overall look of self-driving cars

2. The question of where self-driving cars are “looking”

3. AI driving actions of self-driving cars

4. Robots that drive as a means of attaining self-driving cars

Additional facets could also viably be encompassed, but due to space constraints these four topics will be sufficient to illuminate the bearing of the uncanny valley as related to AI-based self-driving cars.

1. Overall Look Of Self-Driving Cars

I’m betting that you have seen pictures or videos of today’s tryouts of self-driving cars. As such, you might have noticed that most of the vehicles are conventional-looking cars that are outfitted with additional specialized equipment. For example, there might be a rooftop rack that contains a slew of electronic sensors. The sensors sometimes include video cameras, radar units, LIDAR devices, ultrasonic sensors, and the like.

Futuristic designs tend to suggest that we might depart from the conventional-looking car to instead redesign cars on both the interior and exterior for being more slick-looking autonomous vehicles. Right now, the general thinking is that it is simpler to use conventional cars and not expend energy trying to stretch the endeavor by simultaneously tinkering with unconventional-looking cars (there are some exceptions to this general viewpoint, see my coverage at the link here).

The gist right now is that if you are driving on the roadway and encounter a nearby self-driving car, you can nearly always immediately discern that it probably is a self-driving car by merely noticing the smattering of sensors mounted on the autonomous vehicle. This is a quick visual giveaway. Of course, you don’t know for sure it is self-driving per se since at this juncture the driving controls are usually still intact and a human backup driver might be at the wheel.

In one manner of thinking, you could suggest that it is especially handy that self-driving cars do physically stand out and are readily visually spotted by human drivers in nearby human-driven cars and by pedestrians too. The realization that a self-driving car is roaming nearby can be a handy clue to be on your guard, prompting you to be watchful and mindful that the AI is or might be driving the vehicle.

Suppose self-driving cars looked identical to conventional human-driven cars. This is realistically feasible in at least two ways. First, the sensors could potentially be hidden or shaped to not be so obvious to casual visual inspection. Second, it could be that all cars, including conventional human-driven cars, are gradually outfitted with akin sensors, even if the vehicle is nonetheless going to remain a predominantly human-driven car. See my further coverage at this link here.

If you mindfully ponder this consideration about whether self-driving cars can or ought to be identical in appearance to conventional human-driven cars, you might come to the thinking that an uncanny valley might be lurking in this stew.

You see, cars that blatantly look like self-driving cars might be typified as being at a juncture that is just short of the uncanny valley. Basically, you “know” it is a robot or a robotic kind of system. That’s a judgment you can almost immediately leap to.

When self-driving cars look identical to human-driven cars, perhaps this suggests that the autonomous vehicles have leaped past the uncanny valley as to their robotic-like appearance. Is there though a middle ground between those two physical appearances that lands us in the uncanny valley?

Perhaps you spot a self-driving car coming down the street and it kind of seems like it probably is a self-driving car; on the other hand, the appearance is neither strictly autonomous-looking nor strictly human-driven-looking. You could argue that the self-driving car is now in an eerie or unsettling state of appearance.

The self-driving car has ostensibly gotten itself immersed into the uncanny valley.

That being said, not everyone would concur with that categorization. Some would claim that the physical look has nothing to do with the uncanny valley. Some of course also assert that there isn’t anything realistically known as the uncanny valley.

As mentioned earlier, you are welcome to make your own decision on this.

2. Question Of Where Self-Driving Cars Are Looking

A looming concern that many have about self-driving cars is that they usually lack any human driver in the driver’s seat and therefore it is difficult to figure out where “the driver” is looking while in the act of driving the car.

You normally glance at human drivers to spy where they are looking. For example, you might be a pedestrian at a crosswalk and a car is approaching the crossing. You look intently at the person sitting in the driver’s seat and try to discern where their head is turned and where their eyes are looking. If you believe that the human driver has seen you, you might be more comfortable with crossing the street. In contrast, if the human driver hasn’t seemed to see you, you are rightfully worried about crossing.

In some cities, there is a kind of cat and mouse gambit on these aspects. A particular cultural norm in a given city might be that if you make eye contact with a driver, the driver “wins” and they have the seeming right to proceed, regardless of the legality of the driving situation. Other cities might be entirely the opposite, namely that the cultural norm is that when eye contact is made the pedestrian “wins” and the human driver is supposed to defer to the actions of the pedestrian.

We seem to have adopted this custom over the relatively long time that cars have been in the midst of our cities and communities. The problem with the advent of self-driving cars is that there isn’t a human driver in the driver’s seat, and thusly any pedestrian or nearby human driver that normally uses the head and eyes of car drivers as a cultural indicator of driving intention is now out of luck.

Automakers and self-driving car developers are keenly aware of this emerging issue. One proposed solution consists of the self-driving car blinking the headlights of the autonomous vehicle or possibly tooting the horn. Another notion is that the self-driving car might have a variant of a loudspeaker and tell those nearby what the “intentions” of the AI driving system are. Those ideas each have significant downsides.

Yet a different proposal entails doing something that you might at first believe to be ridiculous. The proposal consists of placing eyeball-like orbs on the exterior of the autonomous vehicle. These orbs would pretty much look like human eyes and would be able to pivot back and forth, giving you an immediate indication that the AI “has seen you” (you would interpret this by the eyeballs looking in your particular direction). I’ve analyzed this approach at the link here.
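
For a sense of the underlying mechanics, here is a minimal hypothetical sketch of the geometry involved; it does not reflect any actual vendor’s implementation. The function simply computes the angle an orb would pivot to so that it appears to gaze at a detected pedestrian:

```python
import math

def orb_yaw_degrees(vehicle_xy, pedestrian_xy):
    # Hypothetical helper: the yaw angle (in degrees, measured from the
    # car's forward axis) that an exterior eyeball-like orb would pivot to
    # so that it appears to be looking at a detected pedestrian.
    dx = pedestrian_xy[0] - vehicle_xy[0]
    dy = pedestrian_xy[1] - vehicle_xy[1]
    return math.degrees(math.atan2(dy, dx))

# A pedestrian 10 meters ahead and 4 meters to the left of the vehicle:
print(f"{orb_yaw_degrees((0.0, 0.0), (10.0, 4.0)):.1f} degrees")  # ~21.8
```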

What would be your reaction to seeing a self-driving car coming down the roadway with these outsized, oddball, eyeball-like orbs mounted on the hood or rooftop?

I suppose you might think it is eerie, maybe creepy.

Some would suggest that the eeriness arises from the so-outfitted self-driving car being in the uncanny valley. Others would vehemently argue that this has nothing whatsoever to do with the uncanny valley. Of those pundits, some would say that eeriness can exist without being entrenched in the uncanny valley (i.e., the uncanny valley seemingly always produces eeriness, but not all eeriness is produced solely via the uncanny valley). The other angle is that the orbs could presumably be designed to look less like eyeballs and more robotic, or that we will all inevitably accept the appearance of these orbs and the initial startled reaction will subside.

3. AI Driving Actions Of Self-Driving Cars

Many of today’s tryouts of self-driving cars have showcased that the existing AI driving systems tend to be programmed to drive in rather tepid and strictly law-abiding ways. The AI driving system typically brings the self-driving car to a full stop at Stop signs. The AI driving system doesn’t do daring runs through intersections when the traffic signal is imminently going to be red. Those wayward driving practices are the province of human drivers.

In a manner of speaking, you could almost guess that a self-driving car is a self-driving car by the style of driving it exhibits. Even if the autonomous vehicle visually appeared to be a conventional human-driven car, you might observe the driving actions and logically deduce that it is probably being driven by an AI system.

Some believe that we will need to make AI driving systems more akin to the antics of human drivers so that they will effectively blend into the normative approaches to driving. I suppose you could construe this as fighting fire with fire.

Does that make sense to do?

Be aware that outspoken skeptics and critics abhor the idea. They would strenuously argue that we want AI driving systems to drive properly and gingerly. Adding potentially millions of self-driving cars to the roadways that are programmed to be like errant human drivers would seem a colossal nightmare. I’ve discussed this controversial proposition at the link here.

Let’s recast the dilemma by leveraging the uncanny valley.

When the AI driving system is strictly legal in its driving actions, this is perhaps a telltale clue that it is likely a robotic system (notwithstanding human drivers that admittedly do this, though in today’s world they seem few and far between). If AI driving systems are to drive as quirkily as human drivers, does this leap across the uncanny valley or fall into the uncanny valley?
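
To make the contrast concrete, here is a minimal hypothetical sketch of the two styles of stop-sign policy. The thresholds and wording are illustrative assumptions of mine and are not drawn from any actual AI driving system:

```python
def stop_sign_action(approach_speed_mps: float, human_like: bool) -> str:
    # Strictly legal policy: always come to a full stop, the tepid,
    # law-abiding behavior described above.
    if not human_like:
        return "full stop"
    # "Human-like" policy: mimic the rolling stop many drivers do.
    # The 2.0 m/s threshold is an illustrative assumption.
    if approach_speed_mps < 2.0:
        return "rolling stop"
    return "brake first, then rolling stop"

print(stop_sign_action(1.5, human_like=False))  # full stop
print(stop_sign_action(1.5, human_like=True))   # rolling stop
```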

Mull that one over.

4. Robots That Drive As A Means Of Attaining Self-Driving Cars

This last item for coverage is the most startling of these four.

You might be entirely unaware that some AI developers are trying to craft robots that would drive cars. The robot would tend to look like a human in various respects, having robotic legs and robotic arms as limbs. When you want any conventional human-driven car to be a self-driving car, you merely put this specialized AI driving robot into the driver’s seat of your car. See my analysis of this notion at the link here.

Why would we want driving robots?

The beauty of such a robot is that all of today’s human-driven cars could in a manner of interpretation become self-driving cars, nearly overnight. You merely buy, lease, or somehow get yourself a driving robot. You put the robot into your driver’s seat when going on a driving journey. The robot drives you to your destination. If you want to switch to doing human driving, you remove the robot from the vehicle, maybe stow it into the trunk for later usage.

There are about 250 million conventional cars in the United States today. Some believe that those will eventually be junked as self-driving cars come into existence. Rather than junking those conventional cars, perhaps we might try to retrofit them into becoming self-driving cars, though this is possibly quite a costly idea. The seemingly more prudent approach would be to make available driving robots.

If you saw a conventional-looking car coming down your neighborhood street and it had a robot at the wheel, what would be your reaction?

Likely eeriness.

One admittedly arguable claim is that this eeriness is due to the robot-driven conventional car dipping down into the famous or infamous uncanny valley.

Conclusion

From an Ethical AI perspective, the uncanny valley presents an intriguing conundrum.

There are some in AI that fully believe in the uncanny valley and some that do not. But whether you believe in the uncanny valley or not, the topic itself is nonetheless bandied around. You cannot hide your head and pretend that the construct per se is nonexistent. The construct as an idea lives on and in some semblance is virally powerful. Hate it or love it, the darned or maybe exalted topic persists.

Per my earlier discourse on the merits of the uncanny valley from the ethics of AI angle, there is a dueling love and hate relationship therein. Should those in the ethical AI realm embrace the uncanny valley, summarily reject it, or remain somewhat neutral about its veracity and instead focus on the impact that accrues due to the ongoing divergent beliefs about it?

This challenge brings to mind the preeminent economist Adam Smith, who once said (paraphrasing) that on the road from the City of Skepticism, you have to pass through the Valley of Ambiguity.


