Transportation

Believing Your Self-Driving Car Or Your Lyin’ Eyes


You might be somewhat familiar with the expression “lyin’ eyes,” which can be used in a variety of interesting and useful ways.

Especially popularized by the famous song that the Eagles brought to the world in the mid-1970s, the most straightforward meaning of lying eyes is that your eyes have the potential to give away your true intent, in spite of actions that might suggest some other purpose or goal.

Lore has it that Don Henley and Glenn Frey of the Eagles were inspired to use the expression as the title of their song to describe beautiful women cheating on their husbands, women whose lyin’ eyes apparently gave away their unfaithful efforts.

There are, though, other variations on the meaning and use of lying eyes.

Another interpretation for the notion of lyin’ eyes is that sometimes your eyes see one thing, but what you thought you saw is not what was actually there.

That’s a mouthful, so let me elaborate.

The other day I was walking in downtown San Francisco and caught a glimpse of someone on the crowded sidewalk who looked just like a college chum of mine whom I hadn’t seen in many years. I did one of those double takes whereby you look, become momentarily startled, and look again, staring intently to figure out whether what you thought you saw was true.

I even tried to hurriedly catch up with the person but gave up when I realized, upon further visual inspection, that it wasn’t my friend from long ago.

Did my eyes lie to me?

Maybe, though one has to also add the brain into the equation of what you see versus what you think that you see.

In many ways, your eyes are relatively simple, mainly serving as a sensory device to capture images, which are then relayed to the brain to figure out what they might mean. You could say that the eyes are like a camera, taking a picture that has no particular meaning until the mind works its magic by analyzing what was visually captured.

A twist on this version of lyin’ eyes is that you can potentially convince yourself that you saw something that wasn’t there at all.

Witnesses called into a courtroom are notorious for oftentimes misremembering things that they believe they saw. Yes, your honor, the man was carrying a gun, I’m sure of it, someone might testify.

Shockingly, suppose it is later proven, say via a video recording, that the accused was not carrying a gun.

Why did the witness so confidently and assuredly believe that the person had a gun?

Assuming that the witness is being sincere, it is quite feasible that, in the moment, the witness perceived the person as threatening looking, which could have invoked a mental context encompassing weapons as part of the dangerous moment, ultimately causing their mind to imagine a weapon into the scene, though it wasn’t really there.

As you can plainly now see, lyin’ eyes is a handy expression that offers plenty of options for its usage.

Here’s an interesting facet to consider: While riding in a true self-driving car, what if you see something occurring in traffic that is not in alignment with what the driverless car is “seeing” or that the driverless car is seemingly not reacting to?

Which should you then believe, the self-driving car or perhaps what might be your lyin’ eyes?

Let’s unpack this question and consider the ramifications of the matter.

Human Lyin’ Eyes And Semi-Autonomous Cars

Before we can closely examine the question, it is important to clarify that I am going to primarily concentrate on fully autonomous cars, often considered at a Level 4 or Level 5.

A true self-driving car is one in which the AI is completely driving the car and there is no human driving involved.

In contrast, a Level 2 or Level 3 car is known as a semi-autonomous car, meaning that there must be a human driver present and that the automation and the human driver are co-sharing the driving task.

During the emergence of Level 3 cars, there is admittedly a possibility of having lyin’ eyes issues.

You and the automation are co-sharing the driving task in a Level 3 car, which sets up a dangerous gambit: the automation might try to do one thing, while your eyes and your mind believe that something else should be undertaken.

Similarly, you might be trying to take a particular action with the driving controls, and yet the automation might “disagree” and try to assert that you should be doing something else instead.

We have yet to universally figure out the balance between when the automation wins out versus when the co-sharing driving human wins out. You might be tempted to claim that the human should always win out, especially since they are considered the legally responsible driving party, but this is not such an easy and always appropriate answer.

Consider a situation whereby a Level 3 car is about to crash into a truck that has come to a halt in front of it, and suppose the human driver is pushing on the gas pedal because they haven’t yet realized that they are about to crash head-on into the truck.

Should the automation invoke the brakes of the car?

Well, of course it should, you would say, but notice that at this juncture of the scenario the human driving action is completely contrary to the action that the automation is seeking to undertake.

The human is accelerating, yet the automation wants to hit the brakes. I’ve set up the situation so that it would seem obvious that the human is “wrong” and the automation is “right” in this use case, but I can readily provide other scenarios that are just the opposite.

I hope you can now realize why it is difficult to predetermine, carte blanche, whether the human wins or the automation wins when they are at cross-purposes with each other (I can provide you with numerous other such indeterminable examples).
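To make the arbitration dilemma concrete, here is a minimal sketch of the truck scenario as a toy decision rule. Everything here is hypothetical (the names, the rule, the inputs); real Level 3 systems are vastly more complex, and this is merely an illustration of why a fixed "who wins" rule is hard to get right:

```python
from dataclasses import dataclass

@dataclass
class DriverInput:
    throttle: float   # 0.0 (none) to 1.0 (full)
    brake: float      # 0.0 (none) to 1.0 (full)

def arbitrate(human: DriverInput, automation: DriverInput,
              collision_imminent: bool) -> DriverInput:
    """Toy arbitration rule: the automation overrides the human only
    when its sensors report an imminent collision and it is braking
    harder than the human; otherwise the human's input wins."""
    if collision_imminent and automation.brake > human.brake:
        return automation
    return human

# The truck scenario: human accelerates, automation slams the brakes.
human_cmd = DriverInput(throttle=0.8, brake=0.0)
auto_cmd = DriverInput(throttle=0.0, brake=1.0)
chosen = arbitrate(human_cmd, auto_cmd, collision_imminent=True)
```

In this toy rule the automation wins whenever its sensors flag danger, but if those sensors are wrong (a false positive) the same rule would wrongly override a human who is correctly accelerating out of trouble, which is exactly the indeterminacy at issue.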

Semi-autonomous cars are opening a can of worms due to the co-sharing actions of a human driver and an automated driver, forcing the two “drivers” to jointly collaborate during the driving act, a real-time life-or-death act that at times is (sadly) going to be an untenable collaboration.

Let’s shift now to focus on the lyin’ eyes when you are inside a fully autonomous self-driving car.

Human Lyin’ Eyes And Fully Autonomous Cars

It’s a nice sunny day.

You are enjoying riding in your fancy new self-driving car.

What a wonderful world we live in that you no longer need to take the wheel of the car, nor do you need to listen to a tiresome human driver, as you would when getting a ridesharing lift in a conventional car.

As the driverless car makes its way down the street of your neighborhood, you are looking out the windows of the car, noticing the other homes on your block and the magnificent trees that look so majestic and offer cool shade on this hot and muggy summertime day.

All of a sudden, a ball bounces out into the street.

You look quickly to see if little Joey or little Samantha are maybe playing in their front yard and have inadvertently let their ball get past them and into the roadway.

We humans know the age-old logic that where there’s a bouncing ball there is a good chance that a child will soon be running out into the street to try and retrieve the wayward ball.

Here’s the rub.

Does the AI of the self-driving car also “know” that a bouncing ball implies a soon to be appearing child into the roadway and possibly into the path of the oncoming car?

Maybe yes, maybe no.

It could be that the AI has simply detected the ball as an object that perchance is now in the street.

Upon detecting the ball, the AI might determine that the ball is small enough to not pose a threat to the car.

The AI might opt to continue straight ahead, maybe doing so because a mathematical calculation has indicated that the ball won’t hit the car on its current path, thus, there’s no special action needed by the self-driving car.

Or, maybe the AI opts to slightly swerve the car to avoid hitting the ball, if it seems that mathematically the path of the ball and the heretofore path of the driverless car are going to intersect. The swerve is calculated as feasible since there is nothing else in the street at this moment in time that would get hit by the change in the path of the self-driving car.

This all could occur without the AI in any manner whatsoever doing any kind of prediction about what the ball means and what might happen next. Instead, the AI might be programmed to deal with whatever happens to happen at whatever moment it happens, lacking any predictive feature to anticipate what might happen in the near future.
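The purely reactive geometry just described can be sketched as a toy calculation. The function name, the simplified 1-D lane geometry, and the numbers are all hypothetical, not any actual self-driving stack; the point is that nothing in it reasons about a child who might chase the ball:

```python
def ball_threatens_car(car_pos: float, car_speed: float,
                       ball_pos: float, ball_lateral: float,
                       ball_lateral_speed: float,
                       lane_half_width: float = 1.8) -> bool:
    """Purely reactive check: will the ball still be inside the lane
    when the car reaches it? Positions along the road are in meters,
    lateral offsets are measured from the lane center. There is no
    prediction whatsoever about a child following the ball."""
    if car_speed <= 0 or ball_pos <= car_pos:
        return False  # car stopped, or ball already behind the car
    time_to_reach = (ball_pos - car_pos) / car_speed
    lateral_then = ball_lateral + ball_lateral_speed * time_to_reach
    return abs(lateral_then) <= lane_half_width

# A ball 20 m ahead of a car doing 10 m/s, rolling toward the lane:
swerve = ball_threatens_car(car_pos=0.0, car_speed=10.0,
                            ball_pos=20.0, ball_lateral=5.0,
                            ball_lateral_speed=-2.0)
```

If the check returns True, the reactive logic might swerve or brake purely to miss the ball; if False, it continues straight ahead. In neither case has the system anticipated anything beyond the ball itself.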

You’ve perhaps seen novice teenage drivers do the same thing, namely, they act upon whatever they see in real-time and fail to consider the future ramifications of a driving scene that is unfolding or step-wise revealing itself.

Meanwhile, what about you, the human passenger inside the driverless car?

You are now personally alert to the possibility that at any moment a child might dart into the street, but you have no idea whether the AI is considering the same possibility.

If the driverless car continues straight ahead and does not change its course, you might assume that the AI hasn’t figured out that a child could be soon in the path of the car.

On the other hand, if the self-driving car does a slight swerving, you won’t know whether the swerving action is merely to avoid hitting the ball, or whether it might be an anticipatory move to be ready in case a child does run into the street.

All in all, should you believe your lyin’ eyes, which are telling you that a ball is in the street and might be a precursor to a child suddenly running into the street (an idea that’s in your mind, at that juncture), or should you merely assume and hope and pray that the AI is ready for the chance of a child darting in front of the self-driving car?

Your peaceful ride in the self-driving car has just transformed into one of grave concern and anxiousness.

What should you do?

Dealing With Your Lyin’ Eyes

So far, we have a ball loose in the street; your eyes see the ball, and let’s assume there is no disputing the fact that there is a ball there in the roadway.

In your mind’s eye, you also envision a child soon to follow that ball. The speed of the driverless car and its ongoing direction are going to make for a rather sour outcome if a child does dart into the street.

I realize that some AI developers would say that the AI would be scanning the side of the road to detect whether there is a child nearby. Yes, that could be possible, but let’s assume that there are those majestic trees blocking the view of the area where a child might be, and thus the AI cannot readily spot any children.

You as a passenger have no driving controls. You are completely dependent upon whatever the AI decides to do about the driving of the self-driving car.

Presumably, you could try to ask the AI whether it is going to get ready in case a child darts into the street. That is the kind of conversation you might have with a human driver in a conventional car, wherein you have deduced the chances of a child appearing and want to make sure that the human driver is thinking the same thing.

Well, even though self-driving cars are going to have Natural Language Processing (NLP) capabilities, akin to the likes of Alexa and Siri, it is as yet unknown how sophisticated the AI NLP will be.

Early versions of self-driving cars might have quite simplistic AI NLP and therefore your attempt to discuss or dialogue with the driverless car won’t be viable.

Even if you could converse with the AI, what should the AI do about whatever you might say regarding the driving task?

In this use case about the ball, I realize it seems obvious that if you alert the AI that presumably the AI should follow your instructions and get ready for a child darting into the street. On the other hand, suppose you are drunk and tell the self-driving car to go toward the curb, making things even worse if a child does come out into the street.

A human driver would have overall common sense and be able to figure out the nuances of a myriad of situations, while the AI system is unlikely to be able to do so. Indeed, please be aware that there is not as yet anything close to any kind of common-sense reasoning capability for AI systems.

I know that some AI developers will carp that I’m implying that human drivers are fail-proof and will always do the right thing, which is not at all what I am saying.

What I am saying is that AI systems are going to be a far cry from being a human driver, which means that in some respects maybe the AI will do a better job (presumably not getting drunk, not getting distracted, etc.), while in many other respects, a whole lot of respects, the AI will be many times less capable than a human driver (at least for now and the foreseeable future).

Conclusion

There are going to be situations in which humans riding in a self-driving car have lyin’ eyes, seeing something that they believe is happening, or might happen, without the human rider knowing whether the AI is “thinking” the same thing or not.

Some might argue that this is easily solved by having the AI continually report to the rider about the driving scene.

I doubt doing so is much of a solution.

Riders might get inundated by this kind of continual reporting, eventually either ignoring it or becoming numb to it.

Also, suppose a child is riding alone in the self-driving car; what good is it to be telling a young child about the driving actions?

Furthermore, even if an adult passenger stays on the edge of their seat as a kind of imaginary backseat driver, keep in mind that the human passenger won’t have driving controls, so whatever they might want the car to do can only be accomplished by trying to convince the AI to take that action.

Anyway, when you get into a true self-driving car, don’t be surprised if you hear the song Lyin’ Eyes playing on the radio, which might be a sneaky trick by the AI to subliminally convince you not to believe your eyes and instead believe the “eyes” (cameras) of the AI system.

Whether you should or should not try to hide your lying eyes from the inward facing camera of the AI system, you’ll need to decide.


