Those amazing dancing four-legged robots.
I’m sure that you’ve seen those viral videos of four-legged robotic systems that dance and prance in seemingly delightful and beloved dog-like ways. We seem to relish seeing those AI-driven robots as they climb over obstacles and appear to gain a delicate foothold when perched on top of boxes or after having been precariously placed on the tippy tops of cabinets. Their human handlers will at times poke or push the frolicking four-legged robots, which seems exasperatingly unfair and can readily raise your ire at the bratty treatment by those mastermind humans.
I’m wondering though if you’ve seen the not-so-viral videos of an entirely different caliber.
Prepare yourself.
There are readily found videos showing the same kind of four-legged robots that have been outfitted with pronounced weaponry of one kind or another.
For example, a machine gun or similar firearm is mounted on top of an otherwise familiar dancing-and-prancing robot. There is an electronic linkage between the four-legged robot and the firing mechanism of the weapon. The now weaponized computer-walking contraption is shown striding over to where a bullseye target has been placed, and the machine gun is rat-a-tat fired fiercely at the target. After nailing the subsequently partially destroyed target, the four-legged robot dances and prances around nearby obstacles and lines up to repeat the same action over and over again at other fresh targets.
Not quite what you might have expected to see. This certainly takes the lightheartedness and relative joy out of watching those cuddly four-legged robots do their thing. Welcome to the harsh reality of seemingly innocuous autonomous systems being converted or transformed into being sharply weaponized. With nary much of an effort, you can overnight have a seemingly “non-weaponized” autonomous system retrofitted to contain full-on weaponry.
It is almost easy-peasy in some circumstances.
I’m going to discuss this heatedly controversial topic and cover the rather hefty AI Ethics qualms that arise. We will take a journey into autonomous systems, AI, autonomous vehicles, weaponization, and a slew of related Ethical AI combative issues. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
Let’s start with some foundational keystones.
For sake of discussion, accept that there are two major ways to categorize autonomous systems that have been armed with weaponry:
1) Autonomous Weapons Systems (by design)
2) Autonomous Systems that are Weaponized (after-the-fact)
There is an important difference between the two categories.
In the first instance, we will define an autonomous weapons system to be, from the get-go, a ground-up computerized creation that is purposefully intended to be a weapon. The developers had in mind that they wanted to devise a weapon. Their explicit quest is to produce a weapon. They knew that they could integrally combine weaponry with the latest in autonomous systems technologies. This is a weapon that rides upon the wave of high-tech that attains autonomous movement and autonomous actions (I’ll elaborate more fully, shortly).
In contrast, in the second instance, we will consider the matter of autonomous systems that have no particular slant toward weaponry at all. These are autonomous systems being developed for other purposes. Envision an autonomous vehicle such as a self-driving car that will be used to provide mobility-for-all and aid in reducing the thousands of annual fatalities that occur due to human-driven and human-operated automobiles (see my in-depth coverage at the link here). No weapon seems to be in consideration for those upbeat, societally boosting efforts to get humans out from behind the wheel and put AI into the driver’s seat instead.
But those innocuous autonomous systems can be weaponized if humans want to do so.
I refer to this second category then as autonomous systems that are weaponized. The autonomous system was originally and devoutly crafted for a presumably non-weaponry purpose. Despite that, the dreamy altruistic hope gets upended by somebody somewhere that gets the conniving idea that this contrivance could be weaponized. All of a sudden, the autonomous system that seemed cuddly has become a lethal weapon by tagging on some form of weapons capabilities (such as the four-legged dog-like robots mentioned earlier that have a machine gun or similar firearm added onto their features).
Sadly, the two categories end up somewhat at the same place, namely providing a capacity of utilizing autonomous systems for weaponry on a potentially lethal basis.
The process of getting to that endpoint is likely to be different.
For the outright weaponized autonomous systems that were pegged as weapons, the weaponry aspects are typically front and center. You might say that the facets of the autonomous system are wrapped around the cornerstone of whatever weapon is being considered. The thinking process by the developers is somewhat along the lines of how a weapon can exploit the advent of autonomous systems.
The other perspective is usually not at all of that mindset. The developers want to derive a state-of-the-art autonomous system, perhaps for the betterment of humankind. These developers put their sincerest heart and soul into making the autonomous system. This is the core of their invention. They might not imagine that anyone would usurp or subvert their miraculously beneficial device. They are blissfully enamored with the societal benefits associated with the autonomous system being shaped and produced.
At some point, let’s say that third parties realize that the autonomous system can be rejiggered to be weaponized. Maybe they trick the developers into letting them have the autonomous system for what is claimed to be lofty purposes. Behind closed doors, these evildoers opt to sneakily add a weaponization capacity to the autonomous system. Voila, innocence turned into outright weaponry.
Things don’t have to proceed in that manner.
Perhaps the developers were told that they were developing an innocent autonomous system, yet those funding or directing the effort had other purposes in mind. Maybe the autonomous system effort indeed started innocently, but then when bills had to be paid, the leadership cut a deal with a funding source that wants the autonomous systems for nefarious reasons. Another possibility is that the developers knew that a later use might be for weaponization but that they figured that they would cross that harrowing bridge when or if it ever arose. Etc.
There are lots and lots of varying paths on how this all plays out.
You might find of interest that I’ve described in prior columns that there is a rising AI Ethics awareness regarding the dual-use elements of contemporary AI, see the link here. Let me briefly bring you up to speed.
An AI system that is envisioned for the good can sometimes be on the verge of the bad, perhaps via some rather simple changes, and correspondingly is regarded as being of a dual-use nature. In the news recently was an AI system that was built to discover chemicals that might be killers, which the developers devised so that we could avoid or be wary of such ill-conceived chemicals. It turns out that the AI could be somewhat easily tweaked to devotedly uncover those killer chemicals and thus potentially allow baddies to know what kinds of chemicals to cook up for their abysmal evil plans.
Autonomous systems can demonstrably fit into that dual-use envelope.
To some degree, that is why AI Ethics and Ethical AI are such a crucial topic. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset, and doing so integrally to AI development and fielding, is vital for producing appropriate AI, including (perhaps surprisingly or ironically) the assessment of how AI Ethics gets adopted by firms.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.
Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be undercutting the golden goose by clamping down on advances in AI that proffer immense societal advantages.
The Autonomy And The Weapon As Two Precepts
There is a catchphrase that some are using to forewarn about weaponized autonomous systems of all kinds, which are being coined collectively as slaughterbots.
This brings up an additional aspect that we ought to mull over.
Does an autonomous weapons system and/or an autonomous system that has been weaponized have to be lethal, in a killer-robot sense?
Some would argue that we can have decidedly non-lethal weaponized autonomous systems too. Thus, in that viewpoint, it would seem profoundly inappropriate to use phrasings such as slaughterbots or killer robots. A non-lethal variant would presumably be able to subdue or enact harm that is not of a lethal result. Such systems are not killing; they are of a lesser injury-generating capacity. Do not overstate the abilities, say those that insist we don’t have to be preoccupied with an outright killing machine trope.
Thus, we might have this:
- Lethal autonomous weapons systems
- Lethal autonomous systems that have been weaponized
- Non-lethal autonomous weapons systems
- Non-lethal autonomous systems that have been weaponized
Of course, the counterargument is that any autonomous system that has been weaponized would seem to have the potential for sliding into the lethality realm, even if supposedly only envisioned for use on a non-lethal basis. The incremental two-step of going from non-lethal to lethal is going to be quickly undertaken once you’ve already got in hand a weapon amidst an autonomous system. You would be hard-pressed to provide an ironclad guarantee that the non-lethal will not have sashayed into the lethal arena (though some are trying to do so, via mathematical proofs of a sort).
Before we get much further into this overall topic of autonomous systems and weaponization, it might be handy to point out something else that, though perhaps obvious, is not necessarily top of mind.
Here it is:
- There is an AI aspect that is part and parcel of the autonomous system
- There is a weaponry aspect that is the weaponry side of this equation
- The AI might also be interconnected with the weaponry
Let’s unpack that.
We shall assume that today’s autonomous systems require AI as the underlying computerized means of bringing forth the autonomous facets. I mention this because you could try to argue that we might use non-AI-related technologies and techniques to craft autonomous systems, which, though true, would seem less and less likely. Basically, AI tends to allow for greater levels of autonomy, and most are leveraging AI high-tech accordingly.
Okay, so we’ve got an AI-based capability that is infused somehow within the autonomous system and acts to guide and control the autonomous system.
Keep that at your fingertips as a rule of thumb.
It seems readily apparent that we also need to have some form of weaponry, else why would we be discussing the topic of autonomous systems and weapons herein. So, yes, obviously, there is a weapon of one kind or another.
I am not going to delve into the type of weaponry that might be used. You can simply substitute whatever weaponry comes to mind. There might be pinpoint weaponry. There might be mass-oriented destructive weaponry. It could be something with bullets or projectiles. It might be something that has chemical or volatile atomic components. The list is endless.
The additional consideration is whether or not AI is interconnected with weaponry. The AI might be merely taking the weaponry for a ride. In the case of the four-legged robot that was shooting a gun, perhaps the gun is being fired by a human that has a remote control connected to the triggering of the weapon. The dog-like robot navigates a scene and then it is up to a remote human to pull the trigger.
On the other hand, the AI might be the trigger puller, as it were. The AI might have been devised not only to navigate and maneuver but also to activate the weapon. In that sense, the AI is doing everything from A to Z. There is no dependence upon a remote human to perform the weaponry side of things. The AI is programmed to do that instead.
To clarify then in this particular use case of autonomous systems that are weaponized, we have these kinds of possibilities:
- Autonomous System: AI runs the autonomous system entirely on its own
- Autonomous System: AI runs the autonomous system, but a human-in-the-loop can also intervene
- Weaponry: Remote human runs the weaponry (the AI does not)
- Weaponry: AI runs the weaponry, but a human-in-the-loop can also intervene
- Weaponry: AI runs the weaponry entirely on its own
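The combinations above can be sketched, purely for illustration, as a small classification of control modes. To be clear, this is my own hypothetical labeling for discussion purposes, not any standard or official taxonomy:

```python
from enum import Enum
from dataclasses import dataclass

class ControlMode(Enum):
    AI_ONLY = "AI runs it entirely on its own"
    AI_WITH_HUMAN_IN_LOOP = "AI runs it, but a human-in-the-loop can intervene"
    HUMAN_ONLY = "A remote human runs it (the AI does not)"

@dataclass
class SystemProfile:
    navigation: ControlMode   # who controls the autonomous system's movement
    weaponry: ControlMode     # who controls the weaponry component

    def fully_autonomous(self) -> bool:
        # The most contested case discussed later in this column:
        # the AI does everything from A to Z, with no human-in-the-loop anywhere.
        return (self.navigation is ControlMode.AI_ONLY
                and self.weaponry is ControlMode.AI_ONLY)

# Example: the AI navigates on its own, but a remote human operates the weaponry.
profile = SystemProfile(ControlMode.AI_ONLY, ControlMode.HUMAN_ONLY)
print(profile.fully_autonomous())  # False
```

The point of the sketch is merely that the navigation question and the weaponry question are separate axes, and the debates below hinge on which cell of that grid a given system lands in.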
I’ve previously covered the variations of having a human-in-the-loop concerning autonomous systems and autonomous vehicles, see the link here.
When you watch those fun-oriented videos of the dancing and prancing four-legged robots, they are usually supposed to be robots that are being run exclusively by the AI for navigation (well, that is the custom or considered proper etiquette amongst those that are deeply into these matters). That’s what you might also rightfully assume. Of course, you don’t know that for sure. It could be that a remote human operator is guiding the robots. There is also a possibility that the AI does part of the guiding, and a remote human operator also does so, perhaps aiding the AI if the robot gets into a tough position and cannot computationally calculate a viable means to wiggle itself free.
The gist here is that there are numerous flavors of how AI and autonomous systems and weaponization are able to be intermixed. Some have AI that runs the autonomous system but does not run the weaponry. A human perhaps remotely runs the weaponry. Another angle is that the weaponry is maybe activated beforehand, and the autonomous system delivers the activated weapon, thus the AI did not partake directly in the triggering of the weapon per se and was instead acting as a delivery vehicle. And it could be that the AI is a proverbial jack-of-all-trades and does the entire gamut of autonomous system aspects to the weaponry utilization too.
Take your pick.
Meanwhile, please know that the human-in-the-loop is a big factor when it comes to debates on this topic.
A dividing line by some is that if the AI is doing the targeting and shooting (or whatever the weapon entails) then the whole kit and caboodle has crossed over into no-no land. This seemingly differs from the conventional fire-and-forget weapons that have a pre-targeted human-determined selection, such as a patrolling drone that has a missile ready for firing at a target that was already chosen by a human.
Some wonder why autonomous systems that are weaponized would not always include a human-in-the-loop throughout the process of the autonomous system being actively underway. It seems we might be better off if a strident requirement was that all such weaponized autonomous systems had to have a human-in-the-loop, either for the operation of the autonomous system or for operating the weaponry (or for both). Keeping an assumed sure-and-steady human hand in this AI mix might seem altogether shrewd.
Get ready for a lengthy list of reasons why this is not necessarily feasible.
Consider these difficulties:
- Human-in-the-loop might not be fast enough to respond in a timely manner
- Human-in-the-loop might not have sufficient info to dutifully respond
- Human-in-the-loop might not be available at the time needed
- Human-in-the-loop might be undecided and won’t act when needed
- Human-in-the-loop might make the “wrong” decision (relatively)
- Human-in-the-loop might not be accessible from the system at the needed time
- Human-in-the-loop might get confused and be overwhelmed
- Etc.
You are undoubtedly tempted to look at that list of human frailties and limitations and then come to the solemn conclusion that it makes apparent sense to excise the human-in-the-loop and always instead use AI. This could either be to the exclusion of the human-in-the-loop or maybe have the AI be able to override an ingrained human-in-the-loop design. See my analysis of how disagreements between AI and a human-in-the-loop can lead to precarious situations, covered at the link here.
Oftentimes, a belittling list of these kinds of real-time human-focused downsides is left to stand on its own and leaves a lingering impression that the AI must somehow be a leaps-and-bounds wiser choice than having a human-in-the-loop. Do not fall into that treacherous trap. There are sobering tradeoffs involved.
Consider these ramifications of the AI:
- AI might encounter an error that causes it to go astray
- AI might be overwhelmed and lock up unresponsively
- AI might contain developer bugs that cause erratic behavior
- AI might be corrupted with a virus implanted by evildoers
- AI might be taken over by cyberhackers in real-time
- AI might be considered unpredictable due to complexities
- AI might computationally make the “wrong” decision (relatively)
- Etc.
I trust that you can see that there are tradeoffs between using a human-in-the-loop versus being reliant solely on AI. In case you are tempted to suggest that the ready solution is to use both, I’d just like to emphasize that you can get the best of both worlds, but you can also get the worst of both worlds. Do not assume that it will always and assuredly be the best of both worlds.
You might have been somewhat surprised by one of the above-listed downsides about AI, specifically that the AI might be unpredictable. We are used to believing that AI is supposed to be strictly logical and mathematically precise. As such, you might also expect that the AI will be fully predictable. We are supposed to know exactly what AI will do. Period, end of story.
Sorry to burst that balloon, but this vaunted predictability is a myth. The size and complexity of modern-day AI is frequently a morass that defies being perfectly predictable. This is being seen in the Ethical AI uproars about some Machine Learning (ML) and Deep Learning (DL) uses of today. I’ll explain a bit more momentarily.
Also, you might want to take a look at my latest analysis of the upcoming trends toward trying to ensure verifiable and mathematically provably correct AI systems via the latest in AI safety, at the link here.
Ruminating On The Rules Of The Road
I had mentioned the notion of targets and targeting, which is a rather heavily laden piece of terminology that deserves keen attention.
We can ponder this:
- Targets that are humans
- Targets that aren’t humans but are living creatures
- Targets that are construed as property
Suppose that we have an autonomous system that has been weaponized. The AI is used to guide the autonomous system and used for weaponry. The AI does everything from A to Z. There is no provision for a human-in-the-loop. In terms of targeting, the AI will choose the targets. There isn’t a pre-targeting that has been established by humans. Instead, the AI has been programmed to generally ascertain whether there are humans that are to be targeted (maybe scanning for hostile actions, certain kinds of uniforms, and so on).
With me on this so far?
This scenario is pretty much the one that causes the most outcry about weaponized autonomous systems.
The stated concern is that the AI is doing (at least) three things that it ought to not be permitted to do:
- Targeting humans as the targets
- Targeting without the use of a human-in-the-loop
- Potentially acting unpredictably
Notice that there is a pointed mention of worries about the AI being unpredictable. It could be that though the AI was programmed to target certain kinds of humans, the AI programming is not what we thought it was, and the AI ends up targeting “friendlies” in addition to those that the AI was supposed to construe as “hostiles” (or perhaps in lieu of them). On top of this, even if we opt to include a human-in-the-loop provision, the unpredictability of the AI might mean that when the AI is supposed to confer with the human-in-the-loop, it fails to do so and acts without any human intervention.
You might find of interest that the International Committee of the Red Cross (ICRC) has proffered a three-point overarching position about autonomous weapons systems that elaborates on these types of concerns (per the ICRC website):
1. “Unpredictable autonomous weapon systems should be expressly ruled out, notably because of their indiscriminate effects. This would best be achieved with a prohibition on autonomous weapon systems that are designed or used in a manner such that their effects cannot be sufficiently understood, predicted, and explained.”
2. “In light of ethical considerations to safeguard humanity, and to uphold international humanitarian law rules for the protection of civilians and combatants hors de combat, use of autonomous weapon systems to target human beings should be ruled out. This would best be achieved through a prohibition on autonomous weapon systems that are designed or used to apply force against persons.”
3. “In order to protect civilians and civilian objects, uphold the rules of international humanitarian law and safeguard humanity, the design and use of autonomous weapon systems that would not be prohibited should be regulated, including through a combination of: limits on the types of target, such as constraining them to objects that are military objectives by nature; limits on the duration, geographical scope and scale of use, including to enable human judgement and control in relation to a specific attack; limits on situations of use, such as constraining them to situations where civilians or civilian objects are not present; requirements for human-machine interaction, notably to ensure effective human supervision, and timely intervention and deactivation.”
On a related outlook, the United Nations (UN) via the Convention on Certain Conventional Weapons (CCW) in Geneva had established eleven non-binding Guiding Principles on Lethal Autonomous Weapons, as per the official report posted online (encompassing references to pertinent International Humanitarian Law or IHL provisos):
(a) International humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems;
(b) Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system;
(c) Human-machine interaction, which may take various forms and be implemented at various stages of the life cycle of a weapon, should ensure that the potential use of weapons systems based on emerging technologies in the area of lethal autonomous weapons systems is in compliance with applicable international law, in particular IHL. In determining the quality and extent of human-machine interaction, a range of factors should be considered including the operational context, and the characteristics and capabilities of the weapons system as a whole;
(d) Accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control;
(e) In accordance with States’ obligations under international law, in the study, development, acquisition, or adoption of a new weapon, means or method of warfare, determination must be made whether its employment would, in some or all circumstances, be prohibited by international law;
(f) When developing or acquiring new weapons systems based on emerging technologies in the area of lethal autonomous weapons systems, physical security, appropriate non-physical safeguards (including cyber-security against hacking or data spoofing), the risk of acquisition by terrorist groups and the risk of proliferation should be considered;
(g) Risk assessments and mitigation measures should be part of the design, development, testing and deployment cycle of emerging technologies in any weapons systems;
(h) Consideration should be given to the use of emerging technologies in the area of lethal autonomous weapons systems in upholding compliance with IHL and other applicable international legal obligations;
(i) In crafting potential policy measures, emerging technologies in the area of lethal autonomous weapons systems should not be anthropomorphized;
(j) Discussions and any potential policy measures taken within the context of the CCW should not hamper progress in or access to peaceful uses of intelligent autonomous technologies;
(k) The CCW offers an appropriate framework for dealing with the issue of emerging technologies in the area of lethal autonomous weapons systems within the context of the objectives and purposes of the Convention, which seeks to strike a balance between military necessity and humanitarian considerations.
The Quandary We Find Ourselves In
These various laws of war, laws of armed conflict, or IHL (International Humanitarian Law) serve as a vital and ever-promising guide to considering what we might try to do about the advent of autonomous systems that are weaponized, whether by keystone design or by after-the-fact methods.
We can sincerely wish that a ban on lethal weaponized autonomous systems would be strictly and obediently observed. The problem is that a lot of wiggle room is bound to slyly be found within any of the most sincere of bans. As they say, rules are meant to be broken. You can bet that where things are loosey-goosey, riffraff will ferret out gaps and try to wink-wink their way around the rules.
Here are some potential loopholes worthy of consideration:
- Claims of Non-Lethal. Make non-lethal autonomous weapons systems (seemingly okay since they are outside of the ban boundary), which you can then on a dime shift into becoming lethal (you’ll only be crossing the ban at the last minute).
- Claims of Autonomous System Only. Uphold the ban by not making lethal-focused autonomous systems, meanwhile, be making as much progress on devising everyday autonomous systems that aren’t (yet) weaponized but that you can on a dime retrofit into being weaponized.
- Claims of Not Integrated As One. Craft autonomous systems that are not at all weaponized, and when the time comes, piggyback weaponization such that you can attempt to vehemently argue that they are two separate elements and therefore contend that they do not fall within the rubric of an all-in-one autonomous weapon system or its cousin.
- Claims That It Is Not Autonomous. Make a weapon system that does not seem to be of autonomous capacities. Leave room in this presumably non-autonomous system for the dropping in of AI-based autonomy. When needed, plug in the autonomy and you are ready to roll (until then, seemingly you were not violating the ban).
- Other
There are plenty of other expressed difficulties with trying to outright ban lethal autonomous weapons systems. I’ll cover a few more of them.
Some pundits argue that a ban is not especially useful and instead there should be regulatory provisions. The idea is that these contraptions will be allowed but stridently policed. A litany of lawful uses is laid out, along with lawful ways of targeting, lawful types of capabilities, lawful proportionality, and the like.
In their view, a straight-out ban is like putting your head in the sand and pretending that the elephant in the room doesn’t exist. This contention though gets the blood boiling of those that counter with the argument that by instituting a ban you are able to dramatically reduce the temptation otherwise to pursue these kinds of systems. Sure, some will flout the ban, but at least hopefully most will not. You can then focus your attention on the flouters and not have to splinter your attention to everyone.
Round and round these debates go.
Another oft-noted concern is that even if the good abides by the ban, the bad will not. This puts the good in a lousy posture. The bad will have these kinds of weaponized autonomous systems and the good won’t. Once it is revealed that the bad have them, it will be too late for the good to catch up. In short, the only astute thing to do is to prepare to fight fire with fire.
There is also the classic deterrence contention. If the good opt to make weaponized autonomous systems, this can be used to deter the bad from seeking to get into a tussle. Either the good will be better armed and thusly dissuade the bad, or the good will be ready when the bad perhaps unveil that they have surreptitiously been devising those systems all along.
A counter to these counters is that by making weaponized autonomous systems, you are waging an arms race. The other side will seek to have the same. Even if they are technologically unable to create such systems anew, they will now be able to steal the plans of the “good” ones, reverse engineer the high-tech guts, or mimic whatever they seem to see as a tried-and-true way to get the job done.
Aha, some retort, all of this might lead to a reduction in conflicts by a semblance of mutual deterrence. If side A knows that side B has those lethal autonomous weapons systems, and side B knows that side A has them, they might sit tight and not come to blows. This has the distinct aura of mutually assured destruction (MAD).
And so on.
The AI In The Autonomy
Let’s make sure we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
Let’s keep things more down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
Be very careful of anthropomorphizing today’s AI.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.
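To make that mimicry concrete, here is a minimal sketch in Python using entirely made-up toy data. The "model" is just a frequency-based pattern matcher, far simpler than real ML/DL, but it shows the core point: if the historical decisions were biased, the learned patterns dutifully reproduce that bias.

```python
# Toy illustration (hypothetical data): a pattern matcher "trained" on
# historically biased loan decisions simply reproduces the bias.
from collections import defaultdict

# Historical decisions: (group, income_band, approved).
# In this fabricated history, group "B" applicants were denied
# even at the same income level as approved group "A" applicants.
history = [
    ("A", "high", True), ("A", "high", True), ("A", "low", True),
    ("B", "high", False), ("B", "high", False), ("B", "low", False),
]

# "Training": tally the historical outcome for each observed pattern.
counts = defaultdict(lambda: [0, 0])  # pattern -> [denied, approved]
for group, income, approved in history:
    counts[(group, income)][int(approved)] += 1

def predict(group: str, income: str) -> bool:
    """Majority vote over historical outcomes for this pattern."""
    denied, approved = counts[(group, income)]
    return approved >= denied

# Two new applicants, identical except for group membership:
print(predict("A", "high"))  # True  -- approved
print(predict("B", "high"))  # False -- denied; the bias is mimicked
```

There is no malice anywhere in this code, which is precisely the worry: the mathematics faithfully encodes whatever inequities the data contains.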
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.
You could somewhat invoke the famous or infamous adage of garbage-in, garbage-out. The thing is, this is more akin to biases-in that insidiously become infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
Not good.
With that added foundational background, we turn once again to the autonomous systems and weaponization topic. We earlier saw that the AI enters into the autonomous system component and also can enter into the weaponization component. The AI of today is not sentient. This is worth repeating, and I will highlight it for added insight into these matters.
Let’s explore some scenarios to see how this is a crucial consideration. I will momentarily switch out of a wartime orientation on this topic and showcase how it permeates many other social milieus. Steady yourself accordingly.
An AI-based autonomous system such as an autonomous vehicle that we shall say has nothing whatsoever to do with weapons is making its way throughout a normal locale. A human comes along to make use of the autonomous vehicle. The person is armed with a foreboding weapon. Assume for sake of discussion in this particular scenario that the person has something untoward in mind. The person gets into the autonomous vehicle (carrying their weapon, concealed or not concealed, either way).
The autonomous vehicle proceeds to whatever destination that the rider has requested. In this case, the AI is simply programmatically carrying this passenger from one pickup location to a designated destination, just as it has been doing for possibly dozens or hundreds of trips, each day.
If this had been a human driver and a human-driven vehicle, presumably there is some chance that the human driver would realize that the passenger is armed and seems to have untoward intentions. The human driver might refuse to drive the vehicle. Or the human driver might drive to the police station. Or maybe the human driver might try to subdue the armed passenger (reported instances exist) or dissuade the passenger from using their weapon. It is quite complicated, and any number of variations can exist. You would be hard-pressed to assert that there is only one right answer to resolving such a predicament. Sadly, the situation is vexing and obviously dangerous.
The AI in this case is unlikely to be programmed for any of those kinds of possibilities. In short, the armed passenger might be able to use their weapon, doing so from within the autonomous vehicle, during the course of the driving journey. The AI driving system will continue to travel along and the autonomous vehicle will keep heading to the stated destination of the passenger (assuming that the destination was not otherwise considered out-of-bounds).
Most contemporary AI driving systems would only be computationally focusing on the roadway and not on the efforts of the rider.
Things can get worse than this.
Suppose someone wants to have a bunch of groceries transported over to a place that takes extra food for the needy. The person requests an autonomous vehicle and places the bags of groceries into the backseat of the vehicle. They aren’t going to go along for the ride and are merely using the autonomous vehicle to deliver the food bags for them.
Seems perfectly fine.
Envision that a dastardly person opts instead to place some form of weaponization into the autonomous vehicle rather than the more peaceful notion of grocery bags. I think you can guess what might happen. This is a concern that I have raised repeatedly in my columns, forewarning that we need to cope with it sooner rather than later.
One proffered response to these types of scenarios is that perhaps all autonomous vehicles could be programmed to make use of their cameras and other sensors to try and detect whether a potential passenger is armed and has nefarious intentions. Maybe the AI would be programmed to do this. Or the AI could electronically and silently alert a remote human operator, who would then, via the cameras, visually and otherwise examine and possibly interact with the passenger. It is all part of a complex and potentially intractable can of worms, such that it raises intense privacy issues and a plethora of other potential Ethical AI concerns. See my coverage at the link here.
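The escalate-to-a-human idea can be sketched in a few lines of Python. Everything here is hypothetical, including the class names, the confidence score from onboard perception, and the threshold value; the point is only the shape of the policy: the AI never confronts or refuses on its own, it silently routes the judgment call to a remote human.

```python
# Hypothetical sketch: routing a sensor-based "possible weapon" detection
# to a remote human operator instead of having the AI decide anything.
# All names, fields, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiderScan:
    weapon_confidence: float  # 0.0-1.0, from onboard perception (assumed)

ALERT_THRESHOLD = 0.6  # illustrative; tuning this raises its own ethics issues

def handle_scan(scan: RiderScan, notify_operator) -> str:
    """Silently escalate to a human reviewer; otherwise proceed normally."""
    if scan.weapon_confidence >= ALERT_THRESHOLD:
        notify_operator(scan)  # remote human reviews the camera feed
        return "escalated_to_human"
    return "proceed_normally"

alerts = []
print(handle_scan(RiderScan(0.9), alerts.append))  # escalated_to_human
print(handle_scan(RiderScan(0.1), alerts.append))  # proceed_normally
```

Even this tiny sketch exposes the privacy tension noted above: every rider is being scanned and scored, whether or not anything is ever escalated.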
Another somewhat akin alternative is that the AI contains some kind of embedded ethics programming that tries to enable the AI to make ethical or moral judgments that normally are reserved for human decision-makers. I’ve examined these brewing kinds of AI-embedded computational ethics prognosticators, see the link here and the link here.
Going back to a battlefield scenario, envision that a lethal autonomous weapons system is cruising overhead of a combat zone. The AI is operating the autonomous system. The AI is operating the weapons onboard. We had earlier conceived of the possibility that the AI might be programmed to scan for seemingly hostile movements or other indicators of human targets deemed as valid combatants.
Should this same AI have some kind of ethical-oriented component that strives to computationally consider what a human-in-the-loop might do, acting in a sense in place of having a human-in-the-loop?
Some say yes, let’s pursue this. Some recoil in horror and say it is either impossible or otherwise violates the sanctity of humanness.
Yet another can of worms.
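Purely as a thought experiment, the computational stand-in for a human-in-the-loop might look like a deny-by-default gate: no action is permitted unless every programmed check passes. All of the function names, checks, and thresholds below are hypothetical, and whether any such gate could ever substitute for human judgment is exactly the open question.

```python
# Thought-experiment sketch (all names and thresholds hypothetical):
# an "ethics gate" that defaults to taking no action unless every
# programmed condition is satisfied.
def ethics_gate(target_is_valid_combatant: bool,
                collateral_risk: float,
                roe_permits: bool) -> bool:
    """Deny by default; permit only when every check passes."""
    if not roe_permits:                 # rules-of-engagement flag (assumed)
        return False
    if not target_is_valid_combatant:
        return False
    if collateral_risk > 0.05:          # illustrative risk threshold
        return False
    return True

print(ethics_gate(True, 0.01, False))  # False: rules of engagement say no
print(ethics_gate(True, 0.01, True))   # True: every check passed
```

The deny-by-default structure is the one design choice nearly everyone agrees on; the disagreement is over whether the checks themselves can ever be computationally adequate.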
Conclusion
For those of you interested in this topic, there is a lot more to be discussed.
I’ll give you a quick taste of one nagging conundrum.
We normally expect that a human will be ultimately held accountable for whatever occurs during wartime. If AI is controlling an autonomous weapon system or controlling an autonomous system that has perchance been weaponized, and this system does something on the battlefield that is believed to be unconscionable, who or what is to be blamed for this?
You might argue that AI should be held accountable. But, if so, what does that exactly mean? We don’t yet consider today’s AI to be the embodiment of legal personhood, see my explanation at the link here. No pinning the tail on the donkey in the case of the AI. Perhaps if AI someday becomes sentient, you can try to do so. Until then, this is a bit of a reach (plus, what kind of penalties or repercussions would the AI be subject to, see my analysis at the link here and the link here, for example).
If the AI is not the accountable suspect, we might then naturally say that whatever human or humans devised the AI should be held accountable. Can you do this if the AI was merely running an autonomous system and some humans came along who coupled it with weaponization? Do you go after the AI developers? Or those that deployed the AI? Or just the weaponizing actor?
I trust that you get the idea that I’ve only touched the tip of the iceberg in my hearty discussion.
For now, let’s go ahead and wrap up this discourse. You might recall that John Lyly in Euphues: The Anatomy Of Wit in 1578 memorably stated that all is fair in love and war.
Would he have had in mind the emergence of autonomous weapons systems and the likewise advent of autonomous systems that are weaponized?
We certainly need to put this at the top of our minds, right away.