Transportation

Heroic Kansas Soldier Runs Over Active Shooter, Saves Lives, Vital Implications For AI And Self-Driving Cars


Quite a harrowing moment occurred recently when a man with a rifle started randomly shooting at people while standing on the Centennial Bridge in Leavenworth, Kansas.

A Kansas soldier who happened to be in his car on the bridge, waiting in traffic, realized that an active shooter was ahead of him, so he navigated his car out of traffic and steered directly toward the shooter, ramming smack-dab into him.

The shooter was knocked over and no longer able to shoot.

When police arrived, they and responding medical personnel extracted the shooter from underneath the soldier’s car and took the injured shooter to a nearby Kansas City hospital.

There is no doubt that the soldier saved lives.

He is a hero.

His quick response and alert mindfulness stopped the shooter and prevented further potential carnage.

It was a prudent act and one that required quick thinking.

One supposes that some people in a similar circumstance would have tried to duck down inside their car to avoid getting shot. Or maybe they would have thrown the car into reverse and backed up to try to get away from the scene. Perhaps they would even have gotten out of the car and run desperately for safety. For an explanation of ways to deal with outdoor active shooters while you are inside a vehicle such as a car or a bus, see my discussion at the link here.

In this case, the soldier reasoned that he could take swift action to end the shooting spree, doing so by using whatever resource might be readily available in the situation, which was his car.

Not everyone would have had such a first thought, namely that their car could be used as a type of defensive weapon to defuse the situation.

Most of us instinctively do everything possible to avoid hitting anyone with our vehicle.

If a jaywalking pedestrian suddenly darts into the street, you will instantly hit the brakes and come to a screeching halt, a type of reflexive action based on years of driving. Similarly, when a bicyclist veers in front of your car at an intersection, you likely would radically swerve to avoid hitting the interloper.

In short, we have been trained and have experience in seeking to avoid ramming into people, a habit that is reinforced nearly every day we take our cars for a drive.

Again and again, even on a routine daily journey, you have moments of intentionally avoiding people who happen to be in the vicinity of your car, whether they are in the middle of the street, stepping off the sidewalk, or jogging through an unmarked crosswalk.

Of course, there are those crazed drivers who get into a road-rage mindset and decide to smash other cars or attempt to run over people.

There are also those rather rare circumstances when the brakes on a car suddenly fail and the driver is unable to control their vehicle, sadly sometimes bashing into an innocent bystander.

And there are way too many instances of drunk drivers who smash into pedestrians or who otherwise drive out of control and injure or kill people who happen to be in the wrong place at the wrong time.

The overall point, though, is that on a societal and cultural basis we are instinctively geared toward not hitting people, and we are aware of criminal laws that make doing so a harshly punishable act. Thus, on a normal daily basis, we pretty much do not run over people (statistically, that is true, given that we drive some 3.2 trillion miles annually in the U.S. and have relatively few pedestrian deaths or injuries for all of that voluminous driving, though each such injury or death is certainly tragic and ought to be avoided).

Would you have made a real-time instantaneous decision to ram the shooter?

I dare say that most of us would like to think we would have, but years of ingrained habit about not striking people are not so easily overcome, especially without any warning that such an act is suddenly warranted and imminently needed.

The soldier indicated to a reporter that he had prior training in active shooter situations, which likely helped his quick decision-making, and yet you never know whether prior training will kick in and motivate sufficiently at the requisite moment.

There is a bit of a twist to this overall topic about the use of our cars.

We are supposed to not use our vehicles to harm others, a seemingly sacrosanct principle.

But I think we all would agree that the soldier did the right thing by using his car to harm the active shooter.

In short, the hard-and-fast rule about never using your car to strike others is actually more bendable than might otherwise seem to be the case.

It is societally agreeable and legally acceptable to use your car as a means to harm someone in certain kinds of situations, as this example of stopping the active shooter instructively showcases.

Ponder that for a moment.

We are in the midst of AI-based self-driving cars being readied for use on our public roadways, and one facet that is not yet resolved involves how the AI is supposed to act in the myriad driving situations that might arise.

This brings up an intriguing question: Would an AI-based true self-driving car have been “savvy” enough to run over an active shooter, while at the same time not running over people at large whenever doing so merely seemed like the right thing to do and yet might not really be?

In other words, do we want AI systems to “decide” to run people over, injuring or potentially killing those human beings?

That’s a tough nut to crack.

Let’s unpack the matter and see.

The Role of AI-Based Self-Driving Cars

True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. Cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
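To make the taxonomy concrete, here is a minimal sketch, in Python, of how these levels are sometimes represented in software; the enum names and the helper function are purely illustrative assumptions on my part, not any automaker's actual code.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Illustrative encoding of the SAE driving-automation levels."""
    NO_AUTOMATION = 0           # Human does all the driving
    DRIVER_ASSISTANCE = 1       # A single assist feature (e.g., adaptive cruise)
    PARTIAL_AUTOMATION = 2      # ADAS co-shares; human must stay fully attentive
    CONDITIONAL_AUTOMATION = 3  # AI drives; human must take over when requested
    HIGH_AUTOMATION = 4         # AI drives entirely, within a limited domain
    FULL_AUTOMATION = 5         # AI drives entirely, anywhere, in all conditions

def requires_human_driver(level: SAELevel) -> bool:
    """Levels 0 through 3 still depend on an attentive, licensed human."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```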

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we must not be misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And The Law

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One of the biggest worries that the public has about AI-based self-driving cars is whether those vehicles will be driven in a safe manner (for my analysis of surveys and polls about public perception of self-driving cars, see the link here).

Generally, the hope is that the AI driving systems will be as safe as or even safer than human drivers.

Currently, in the United States alone, there are some 40,000 deaths and about 2.3 million injuries annually due to car crashes. Human drivers have known foibles such as driving while distracted, driving while intoxicated, and so on. The assumption is that the AI will not be drinking and will not be distracted, so overall it ought to be a safer driver than humans.

We do not yet know whether this will be the case.

One thing that we do know is that a key precept for AI driving systems is that they are not supposed to ram into people.

Via various sensory devices, including cameras, radar, LIDAR, ultrasonic units, and the like, the AI driving system is supposed to scrutinize all the data about what surrounds the vehicle. From that morass of info, the AI has to tease out that there are pedestrians nearby, that a bicyclist is getting close to the car, and so on. And, upon making such detections, not hit those people.
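As a rough illustration of that detect-and-avoid logic, consider this hypothetical Python sketch; the object categories, the two-second threshold, and the maneuver labels are assumptions made for discussion, not any vendor's real pipeline.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str                 # e.g., "pedestrian", "bicyclist", "vehicle"
    distance_m: float         # distance from the vehicle, in meters
    closing_speed_mps: float  # positive when the gap is shrinking

SAFE_STOP_MARGIN_S = 2.0  # hypothetical time-to-collision threshold, seconds

def time_to_collision(obj: DetectedObject) -> float:
    """Seconds until impact if neither party changes speed."""
    if obj.closing_speed_mps <= 0:
        return float("inf")  # the object is not closing on the vehicle
    return obj.distance_m / obj.closing_speed_mps

def plan_response(tracked: list[DetectedObject]) -> str:
    """Choose the most conservative maneuver warranted by any nearby person."""
    decision = "PROCEED"
    for obj in tracked:
        if obj.kind in ("pedestrian", "bicyclist"):
            if time_to_collision(obj) < SAFE_STOP_MARGIN_S:
                return "EMERGENCY_BRAKE"  # most urgent response wins outright
            decision = "SLOW_AND_YIELD"
    return decision

# Example: a pedestrian 8 meters ahead, closing at 5 m/s, triggers braking.
print(plan_response([DetectedObject("pedestrian", 8.0, 5.0)]))
```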

Presumably, the AI has been intentionally crafted to avoid striking any of those nearby humans.

That seems prudent.

This sensible principle might remind you of the famous Three Laws of Robotics devised by science fiction writer Isaac Asimov in 1942, “laws” that have since become lore, including the first rule, which says this: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

We have all grown accustomed to the Asimov rules and have seen them energetically portrayed in numerous big-time movies and TV shows.

A reality check is needed about those so-called laws.

They are not laws in any conventional meaning of the word, so please do not be misled.

For example, some people at times think that an AI driving system will never ever hit a person because doing so would break the Asimov “law” about not injuring a human being. In essence, this is the false belief that the Asimov rule is akin to a law of physics or a law of chemistry, as though there are nature-imposed limitations that cannot be exceeded or broken, because, well, that is the way nature is.

There is absolutely nothing at all that prevents an AI system from ramming into a person, other than the coding or programming of the AI to try and prevent such an act.
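To underscore the point, a never-strike-a-human rule inside a driving system amounts to ordinary code, along these hypothetical lines; delete or bypass the check and nothing else in the machine enforces it.

```python
from dataclasses import dataclass

@dataclass
class TrackedPerson:
    x_m: float  # position relative to the vehicle, in meters
    y_m: float

def path_is_clear(planned_path: list[tuple[float, float]],
                  people: list[TrackedPerson],
                  clearance_m: float = 1.5) -> bool:
    """The entire 'law': a plain geometric check written by programmers.

    Rejects any path with a waypoint closer than clearance_m to a
    tracked person. It is not a law of physics; it holds only as
    long as this function exists and is actually invoked.
    """
    for px, py in planned_path:
        for person in people:
            dist = ((px - person.x_m) ** 2 + (py - person.y_m) ** 2) ** 0.5
            if dist < clearance_m:
                return False
    return True
```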

Also, let’s cover the topic of sentience while we are on this matter.

The goal of AI is to ultimately achieve the equivalent of human intelligence, which, notably, we are nowhere near accomplishing.

In addition, some assert that the vaunted version of AI will become sentient, thus instilling the same ill-defined spark that humanity and living creatures appear to embody. In theory, this sentience might arrive at the moment of the singularity, a point in time at which AI is conjectured to suddenly attain that revered status (for my explanation of these aspects, see the link here).

We do not know whether sentience is possible for computational, machine-based AI.

Nor does anyone know whether this notion of a singularity will occur, or when it might.

In any case, those who believe in sentience and the singularity at times suggest that once this happens, the AI will “know” that killing people is wrong. Prior to that point, presumably, the AI was (hopefully) programmed not to commit such acts, but it did not “understand” the provision and was merely abiding programmatically.

I don’t want to go too far on this tangent, but we ought to at least agree that even if the AI reaches sentience, there is no particular reason it will opt to not run into people. Humans run into people. If AI is going to be akin to human intelligence, doesn’t it seem plausible that AI might run into people?

There is both the smiley face version of the singularity outcome, in which AI is a great benefactor to humanity, and the sad face version, in which AI is a humanity-crushing and enslaving evildoer, along with numerous variants in between those polar extremes (for more on such speculation, see my analysis here).

Moving on, another variant involved in this discussion is the role of intent.

If an AI driving system runs into someone, did it intend to do so?

That is a murky topic.

Some would say that if the AI was programmed to allow for ramming into people, then it evidently was built with the “intention” of being able to do so and therefore unequivocally exhibits intent.

Others would decry this kind of argument and proffer that intent is only possible with sentient beings. Thus, until or if AI crosses over into sentience, it never has any semblance of “intent” per se.

The matter of intention is a whole other can of worms to be dealt with regarding AI (see my discussion here).

AI And The Run-Over-Someone Dilemma

Return to the story of the soldier who ran over the active shooter.

In that situation, we applaud the human driver for doing so.

Imagine that you were sitting in a self-driving car, while on that bridge, while the active shooter was shooting his rifle.

Would you want the AI to proceed to ram into the active shooter?

It seems like we would want that kind of action, since, as we now know, it stopped the active shooter from his dastardly act.

But, this would quite obviously violate the science fiction rule of a robot not causing injury to a human being.

Oops, a contradiction.

You might counterargue that the same rule also offers that through inaction the robot is not allowed to have a human come to harm.

Thus, if the AI driving system opted to not run over the shooter, and assuming the shooter kept shooting and harming people, the inaction of the AI would lead to humans being harmed.

Therefore, the AI indeed should run over the shooter.

That seems to clear up the matter.

Does it?

You are opening a Pandora’s box, some would say.

If you believe that the AI driving system should run over the shooter, you are indicating that it is okay for the AI to use the self-driving car to harm humans.

Now that you’ve opened that door, the question arises about how far to keep it open and what will be the means of trying to shut that door.

We all agree in retrospect that the active shooter was doing a wrongful act and needed to be stopped. That, though, involves quite a sizable amount of logic and interpretation.

Consider the myriad circumstances that might arise regarding when we would agree that the use of a car to harm someone was or was not warranted.

An AI system that has been programmed or coded, and presumably not yet sentient, would have to embody a slew of facets about human ethics, human laws, societal norms, and the rest.

Keep in mind that there is currently no such thing as common-sense reasoning for AI, other than prototypes and small examples (see my discussion here), and thus you cannot just load in an AI-based common-sense component to help make these kinds of decisions (some lump this into the re-phrased and reborn notion of AI known as Artificial General Intelligence, or AGI; see my explanation at this link here).

Note too that such a decision was made in real time.

The soldier did not call others to confer about what to do. He made a choice on the spot. The point being that if you suggest we can allow AI to proceed on such matters but first require it to seek advice or permission, the question arises of how that would work in any real-time situation.

Some might point out that if there were a passenger in the self-driving car during the unfolding of the shooting event, the AI could simply ask the human passenger what it ought to do.

Indeed, this brings up Asimov’s second rule: “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”

Okay, so assume that the human passenger tells the AI to ram the shooter, or maybe the AI offers that it would be willing to ram the shooter and wants approval from the human passenger.

The human passenger yells out, yes, run over that no-good son-of-a-gun, and the AI obediently proceeds.

Well, we already have a conflict with respect to the Asimov rules, since the command by the human is telling the AI to harm another human, which seems like a violation.

Though there is the angle that running over the shooter will potentially save other humans (a somewhat problematic aspect to calculate in any formulaic manner), that seems like quite a judgment to make (in this case, the right one, but not necessarily always so).
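To see the bind in concrete terms, here is a deliberately naive, hypothetical Python encoding of the First Law; in the bridge scenario, every available option trips one clause or the other.

```python
def first_law_permits(action_harms_human: bool,
                      inaction_allows_harm: bool) -> bool:
    """First Law: may not injure a human, nor through inaction allow harm."""
    return not action_harms_human and not inaction_allows_harm

# Option A: ram the shooter -- the action itself injures a human.
print(first_law_permits(action_harms_human=True,
                        inaction_allows_harm=False))  # False

# Option B: sit in traffic -- inaction allows the shooter to keep harming.
print(first_law_permits(action_harms_human=False,
                        inaction_allows_harm=True))   # False

# Both options violate the rule as literally stated: the "law" yields
# no executable guidance, which is precisely the dilemma.
```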

It seems that asking the human passenger might have resolved the dilemma.

Unfortunately, no, it does not.

First, suppose the human passenger was a child riding alone in the self-driving car (this will indeed happen, likely quite often); do you want a child making such a decision, or even being placed in the posture of having to make it?

Second, suppose there weren’t any passengers in the self-driving car, which will likely be the case much of the time as self-driving cars roam around looking to be available for those who need a lift. In that circumstance, there is no human available in the car to provide direction on the matter.

Third, you might argue that a remote assistant could be accessed, such as an OnStar-like human agent, but do you want a person who is not at the scene making such a decision? Plus, the remote access might not happen at all or might be delayed due to connectivity issues.

Here’s the part that will likely send chills down your back.

If we did decide that a human passenger could make such decisions, how far might that be stretched, and veer perilously into a decidedly undesirable realm?

Imagine that a human passenger sees someone they’ve always disliked, a person merely walking across the street, no weapons, purely innocently out for a stroll, and the rider in the self-driving car falsely urges the AI driving system to run over that person, claiming the pedestrian is a threat.

Now what?

Do we really expect the AI to discern which situations are valid for a runover and which are not?

Conclusion

Bringing things back to today’s reality, most AI self-driving cars being formulated currently would not on their own seek to run over that active shooter; in fact, the AI driving system is likely to have various programmatic precautions that vigorously attempt to avoid hitting any person at all.

Nor would the Natural Language Processing (NLP) in-car interactive component somehow allow for a human rider to instruct the AI to proceed to ram the shooter.
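As a sketch of why, an in-car voice interface would plausibly screen parsed passenger requests against a fixed deny-list, along these hypothetical lines (the intent labels and canned responses are my own assumptions, not any vendor's actual NLP design).

```python
# Hypothetical command filter for an in-car voice interface.
DENIED_INTENTS = {"ram_object", "strike_pedestrian", "leave_roadway"}

def handle_passenger_command(intent: str, slots: dict) -> str:
    """Route a parsed voice command, refusing anything on the deny-list."""
    if intent in DENIED_INTENTS:
        return "I cannot do that. Contacting emergency services instead."
    if intent == "set_destination":
        return f"Routing to {slots.get('destination', 'an unknown place')}."
    return "Sorry, I did not understand that request."

# A rider demanding that the car ram the shooter is simply refused.
print(handle_passenger_command("ram_object", {"target": "active shooter"}))
```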

In short, the self-driving car and the AI would have done nothing in that situation, other than wait for the traffic to proceed.

Admittedly, this whole discussion can be labeled as an edge case or corner case, meaning it will only happen rarely and for the automakers and self-driving tech makers it just isn’t something on their busy plate right now (i.e., the aim at this time is fundamentally getting a self-driving car that can safely navigate everyday street driving scenarios).

At some point, as a society, we are going to need to have a really serious talk about what we expect AI and especially self-driving cars to do, including the act of running over humans, which seems like an ironclad no-no, but the real world is a lot grayer and more confounding than it might seem.

Meanwhile, for that heroic soldier, thanks immensely for your service and mindfulness.


