
Those Slain Florida Dolphins Were Inadvertently Undermined By Friendly Humans, Providing Harsh Lessons For AI Self-Driving Cars


Recent news reports indicate that several dolphins in the waters of Florida have been killed by some kind of sharp object or possibly even by bullets (forensic pathologists are still studying the deceased marine mammals).

There is now a $20,000 reward being offered by the National Oceanic and Atmospheric Administration (NOAA) for anyone who can lead authorities to the dastardly scoundrels who undertook the killings.

Let’s all hope that the dolphin killings will come to a stop and that the culprits will be found and brought to justice.

There is an added twist to the story that provides additional controversy.

It appears that the dolphins were harmed while in a posture known as the dolphin begging position.

Essentially, when a dolphin is seeking to get fed by a human, it will typically turn onto its side with its head slightly above the water, a stance that has been dubbed the begging position.

Consider this facet for a moment.

Imagine a dog that has been fed by humans and how it would over time adopt a posture or positioning that it knew was most satisfying to the humans feeding it. Perhaps the dog would go into a sitting stance, panting and expectantly eyeballing a human holding out scrumptious dog food.

Of course, the dog might go into the same position even if a human did not have food at-hand, doing so in hopes of sparking the human to provide something to eat.

Or, the human might trick the dog into getting into its eating posture, simply by appearing to have dog food and making motions to suggest as such.

In terms of the dolphins, since the points of attack on their bodies suggest that the mammals were in the begging position, it seems plausible that the dolphins were reacting to the presence of humans that the animals assumed were likely to feed them.

All in all, the dolphins might have readily approached their attackers under the assumption of being fed, leaving themselves quite vulnerable to attack by getting close to the attackers and showcasing their underbellies.

Experts on bottlenose dolphins say that the begging position is a learned behavior based on wild dolphins having encountered humans that seek to feed the animals.

Furthermore, such experts repeatedly exhort that humans should not be feeding wild dolphins.

First, it undermines the natural efforts of the dolphins, causing the dolphins to seek out humans for survival and to get food, rather than foraging in their own habitat. They become less capable of getting food on their own, plus they often expend precious energy trying to reach humans since doing so provides a presumed “easy” food source.

Second, the dolphins approach humans with an expectation of getting fed, which can cause a kind of vicious cycle whereby humans feed the dolphins and reinforce the unfortunate behaviors involved (thus, a human that perhaps wasn’t contemplating feeding a wild dolphin, does so reactively when the dolphin comes up to them and appears to be requesting food).

Though many people think they are doing a kind thing by feeding wild dolphins, it is said that a fed dolphin ultimately and regrettably becomes a dead dolphin, meaning that the dolphins lose their natural competitive edge and are more likely to die sooner for one related reason or another.

This is reminiscent of those who go to our national parks and try to feed the wild deer. Signs are usually posted by the park rangers telling you not to feed the animals, but people do so anyway.

Each person doing so seems to think that a handed-over morsel or other edible isn’t much of an issue. They don’t realize, though, that this act is repeated by a multitude of visitors, thousands upon thousands over time, and gradually, from the perspective of the deer, it “teaches” them that humans have food and will readily provide it.

In many cases, the act of feeding a wild animal can get you into a lot of trouble with the law.

People frequently assume that feeding a wild animal is perhaps ill-advised, but they figure doing so is really at their own personal discretion.

Not so.

For example, per the U.S. Marine Mammal Protection Act, it is against the law to feed those wild dolphins in Florida, and it is illegal to hunt or kill the dolphins, all of which are acts that can lead to jail time and hefty fines.

The point or added twist to the dolphin killings, then, is that the wild dolphins may have been fed by other humans from time to time, making the animals more complacent and actually eager to approach the reprobates that decided to kill them.

Had the dolphins been wary of humans, perhaps the mammals might have stayed far enough away to avoid being harmed or might not have made themselves visible to the killers.

To some degree, those who earlier had befriended the dolphins were inadvertently “training” the animals toward behavior that would likely undermine their survival.

Those who feed wild animals think it is an act of kindness, when in reality it most likely has adverse consequences and undermines the animals’ natural instincts.

Why bring up this topic?

Believe it or not, there is a similar kind of unintended adverse consequence that currently is possibly impacting the advent of AI-based true self-driving cars.

Say what?

Yes, let’s consider this question: Is human behavior around and toward true self-driving cars potentially undermining the realization of appropriate AI-based driving capabilities?

Time to unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Human Behavior

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One of the most important aspects of the AI driving system is that by-and-large it is being trained via Machine Learning (ML) and Deep Learning (DL).

In essence, rather than laboriously instructing the AI about how to drive a car step-by-step, the use of ML/DL involves feeding lots of data into a complex mathematical and computational algorithm that tries to find patterns and then utilize those patterns for subsequent actions.
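To make that abstract description a bit more tangible, here is a minimal sketch in Python (using scikit-learn, with entirely invented features, data, and labeling rule) of the feed-in-data-and-spot-patterns idea: the model learns when to brake from labeled examples rather than from hand-written rules.

```python
# A minimal, hypothetical sketch of "feed data, find patterns."
# Features, data, and the labeling rule are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)

# Each row is one driving snapshot:
# [distance to nearest object (m), own speed (m/s), pedestrian near curb (0/1)]
n = 1000
X = np.column_stack([
    rng.uniform(1, 60, n),
    rng.uniform(0, 25, n),
    rng.integers(0, 2, n),
])

# Label: whether a (simulated) human driver braked in that snapshot.
y = ((X[:, 0] < 2.0 * X[:, 1]) | (X[:, 2] == 1)).astype(int)

# The "learning" step: extract the pattern from examples, not from rules.
model = LogisticRegression().fit(X, y)

# The fitted model then generalizes to snapshots it never saw verbatim.
print(model.predict_proba([[10.0, 15.0, 1.0]])[0, 1])  # high brake probability
print(model.predict_proba([[55.0, 5.0, 0.0]])[0, 1])   # low brake probability
```

Of course, a production driving system is vastly more complex, but the underlying dynamic is the same: whatever patterns live in the data are the patterns that get learned.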

I’ll compare this to how a novice teenager learns to drive, though I don’t want you to over-interpret the analogy and somehow believe that today’s AI systems have human-thinking qualities (they do not, despite whatever else you might have read or heard about such systems, so please don’t be fooled or misled otherwise).

Do not anthropomorphize today’s AI and assume that it has anything nearing human intelligence.

In any case, indulge a quick analogy, albeit with limitations.

For a novice teenage driver, you might get into a car with them and explain inch-by-inch how to drive. Besides instructions on the use of the pedals and steering wheel, you might have the teenager start the car and take the vehicle for a drive in your local neighborhood.

As you do so, a caring parent is apt to say things like watch out for that pedestrian that appears to be nearing the street, or keep the car in the lane and don’t veer out of your lane, or watch the car ahead of you that might at any moment hit its brakes.

Those are all explicit instructions about driving.

Another approach would be to have the teenager observe the roadway while driving, and potentially notice that sometimes pedestrians get near to the street, in which case it might be sensible to slow down and get ready to stop, just in case the person darts into the roadway.

In fact, for my kids, we used to play a game while they were younger that had them try to guess various roadway behaviors while sitting in the passenger seat. Though they weren’t yet behind the wheel, observing traffic situations was priming their minds for driving, enabling them to spot patterns in what happens around a moving car.

To a great extent, the AI-based true self-driving cars are using collected data about driving that becomes fodder for “learning” how to drive.

Feed in lots of data, apply ML/DL, algorithmically spot patterns, and voila, you then potentially have guidance about how to drive a car.

One qualm about this approach is whether the AI is figuring out the right or correct ways to drive, plus if the data is rather homogeneous it might not have unusual driving circumstances and thus there’s no pattern to be formulated for what is referred to as “edge cases” (those are rare or less commonly encountered cases, see my detailed explanation at the link here).
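As a hypothetical illustration of that qualm, consider simply tallying the scenario types in a training dataset. The labels and counts below are invented, but they show how an edge case can be so scarce that no usable pattern could plausibly be extracted from it.

```python
# Tally how often each scenario type appears in a (made-up) training set
# and flag "edge cases" too rare for any pattern to be learned from.
from collections import Counter

scenario_labels = (
    ["normal_following"] * 9500
    + ["lane_change"] * 430
    + ["pedestrian_waiting_politely"] * 65
    + ["pedestrian_darting_into_road"] * 5   # the edge case
)

counts = Counter(scenario_labels)
total = len(scenario_labels)
MIN_FRACTION = 0.01  # an arbitrary threshold, purely for this sketch

for label, count in counts.most_common():
    share = count / total
    flag = "  <-- too rare to learn from?" if share < MIN_FRACTION else ""
    print(f"{label:30s} {count:5d} ({share:.2%}){flag}")
```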

What in the world does this have to do with wild dolphins, you might be wondering?

Here’s the connection.

Suppose that pedestrians, upon seeing a self-driving car coming down the street, react in a manner unlike how they normally react to human-driven cars.

For example, in some areas, pedestrians opt to stay clear of the driverless cars that are being tried out on their neighborhood and downtown streets. The logic seems to be that the self-driving cars are “special” and worthy of gawking at, but that you don’t want to disturb the vehicle or upset whatever is taking place (and, admittedly, a bit of fear might be involved too, being rightfully wary of what the darned thing is going to do).

In contrast, for human-driven cars, those same pedestrians are oftentimes willing to play chicken with the drivers, stepping off the curb and daring the driver to proceed. In some cities, human drivers will accede to a jaywalker, while in other cities it becomes a do-or-die challenge as to whether the human gets across the street before the maddened car driver runs them over (a veritable game of Frogger).

Let’s tie this back to the manner in which someone or something learns to drive.

If you were driving a car and never had any pedestrian that had tried to leap off the curb and dart into traffic, what might you “learn” about driving?

You could interpret this overarching pedestrian behavior to imply that you don’t need to worry about pedestrians coming into your way. The lack of pedestrians willing to challenge the car inadvertently sets a pattern or tone of driving in which you expect no pedestrian wackiness.

In a manner of speaking, the “kindness” of pedestrians who purposely hold back and don’t challenge a driverless car is essentially allowing the AI to “learn” that pedestrians are passive and not a likely threat to the act of driving.
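A toy calculation (with made-up numbers) shows how quickly such politeness can skew what gets learned: if the logged encounters around driverless-car trials almost never include a pedestrian stepping out, a naive frequency-based estimate of incursion risk collapses toward zero.

```python
# All numbers are invented for illustration.
encounters_logged = 20_000      # pedestrian encounters near the trial cars
incursions_observed = 2         # pedestrians almost always held back

# The naive frequency estimate a pattern-matcher might internalize.
p_learned = incursions_observed / encounters_logged
print(f"Learned incursion probability: {p_learned:.4%}")    # 0.0100%

# A hypothetical rate around ordinary human-driven cars, where
# pedestrians routinely play chicken with drivers.
incursions_around_humans = 600
p_reality = incursions_around_humans / encounters_logged
print(f"Hypothetical real-world rate:  {p_reality:.4%}")    # 3.0000%
```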

You could liken this to the “training” of wild dolphins by those who “befriend” the mammals, inadvertently undermining what the animals should actually know about the real world.

Of course, we know what kind of adverse outcomes can befall the dolphins, so it would be useful to consider the kind of adverse outcomes that could befall AI self-driving cars.

Ponder the matter.

What might happen when a pedestrian suddenly and unexpectedly opts to run into the street and in front of an oncoming self-driving car?

The AI is likely to try to come to a halt, though it is conceivable that it will be caught somewhat off-guard and not halt as soon as it otherwise could have. Without having built up a predictive expectation of such pedestrian behavior, the AI might be reliant predominantly on the sensors of the vehicle, which would detect the pedestrian only once they are already in the middle of the street.

It might be too late to come to a stop in time.

Had instead the AI been predicting that the pedestrian was likely to come into the street, it could have already begun to slow down or assess other means of avoiding the pedestrian.

Do not misunderstand and somehow assume that I’m suggesting the AI wouldn’t likely try to come to a stop. The odds are that the overall AI system would try to do so.

The key in this example is that the AI might either not be anticipating what’s going to happen or has no base of patterns to draw on because the data used for ML/DL lacked sufficient instances of such behavior.

When we humans drive, we anticipate the future. We don’t just drive by whatever happens to come in front of the car. Most of us, at least those driving sensibly, scan back-and-forth, looking for a pedestrian that might come into traffic, or a dog that’s running loose and might run in front of the car, or a tree that’s ready to fall onto the roadway.

If AI-based true self-driving cars are going to drive at least as well as humans, there is a need for a predictive quality that can anticipate what might happen and then begin, on a timely basis, to prepare for that potential action in advance of the action itself playing out.
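Standard stopping-distance kinematics make the value of anticipation concrete. The parameter values below are invented for the sake of the sketch, but the physics is real: braking distance grows with the square of speed, so pre-emptive slowing buys considerable margin.

```python
# Back-of-the-envelope stopping distances; parameter values are invented.

def stopping_distance_m(speed_mps: float,
                        reaction_s: float = 0.5,
                        decel_mps2: float = 7.0) -> float:
    """Distance covered during the reaction delay plus hard braking:
    d = v * t_react + v^2 / (2 * a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

# Caught off-guard: still doing ~13.4 m/s (about 30 mph) when the
# pedestrian is first detected already in the street.
print(f"No anticipation: {stopping_distance_m(13.4):.1f} m to stop")  # ~19.5 m

# Anticipating: the AI noticed the pedestrian near the curb and had
# already eased down to ~8 m/s before anything happened.
print(f"Anticipating:    {stopping_distance_m(8.0):.1f} m to stop")   # ~8.6 m
```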

Pedestrians that purposely avoid a self-driving car rolling down the street are potentially undermining the development of driving behaviors that we would want the AI to acquire.

Conclusion

Some smarmy pedestrian is going to read this and in a distorted manner think that I am suggesting that you ought to jump in front of an oncoming driverless car, as though that’s a better way to train the AI.

No, that’s plain stupid.

On a related matter, there are some who prank driverless cars, intentionally trying to mess with the AI system by doing something rash toward a self-driving car.

The most notable are those who, while driving in traffic and upon spotting a driverless car, maneuver to get in front of the self-driving car and then hit their brakes.

Apparently, those nutty drivers believe it’s fun and challenging to see what happens to the self-driving car when they pull these kinds of dangerous stunts. They also seem too obtuse to realize that doing this to any driver, whether of another human-driven car or a self-driving car, is an illegal driving act that can have a terrible outcome for all parties.

Do not play pranks on self-driving cars.

The key to making sure that the AI driving systems are capable drivers consists of the AI developers and self-driving tech makers ensuring that the use of ML/DL is being undertaken in a sound manner.

As I’ve exhorted repeatedly, ML/DL for driverless cars can inadvertently become imbued with various biases as a result of the data being used to train them (see the link here).

We rightfully should expect those crafting these AI systems to be scrutinizing both the data and the patterns being surfaced, doing so to ensure that the data is representative and that the discovered patterns are sensible.
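One way such scrutiny might look in practice, sketched here with invented categories and numbers, is a simple audit comparing how often critical situations appear in the training data versus a reference estimate of how often they arise on real roads.

```python
# A hypothetical representativeness audit; all shares are invented.
training_share = {
    "pedestrian_steps_into_road": 0.0002,
    "cyclist_swerves": 0.0010,
    "vehicle_cuts_in_and_brakes": 0.0008,
}
real_world_share = {
    "pedestrian_steps_into_road": 0.0150,
    "cyclist_swerves": 0.0040,
    "vehicle_cuts_in_and_brakes": 0.0060,
}

for situation, real in real_world_share.items():
    trained = training_share.get(situation, 0.0)
    if trained < 0.5 * real:  # arbitrary threshold for this sketch
        print(f"UNDERREPRESENTED: {situation} "
              f"(training {trained:.2%} vs. real-world {real:.2%})")
```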

Today’s AI systems do not have any robust form of common-sense reasoning, so put aside any false assumption that the AI will magically realize that what it is doing is somehow wrong or inappropriate.

The act of learning is a lot more sophisticated than we generally give it credit for.

As the sad tale of the wild dolphins illustrates, sometimes what is learned does not bode well.

Automakers and self-driving tech developers would be wise to keep those dolphins in mind, serving as a reminder that AI “learning” needs human guidance and must be done in a manner befitting the survival of us all, especially when dealing with multi-ton vehicles being allowed to rove on our busy and human-inhabited highways and byways.


