Bad blood.
Yes, we are going to be discussing bad blood.
The context will be within the arena of Ethical AI and the ethics of AI systems, a burgeoning and vital set of topics that I’ve been covering extensively, such as at the link here and the link here, just to name a few. We’ll start here by first laying out the foundations of bad blood and then wrap into the discourse the perhaps startling dilemma of how AI will correspondingly impact humankind. Get yourself ready for an enthralling ride.
I believe that you would reasonably concur that bad blood is a popular phrase in our society.
You undoubtedly know that there have been noteworthy songs with that famous phrasing (consider Taylor Swift and the lyrics from the same-named song that opens with “’Cause, baby, now we got bad blood”), plus movies and cable shows so coined, and many a written narrative including great works of poetry that have invoked the longstanding notion of bad blood.
We usually associate the eyebrow-raising catchphrase with a bitter hatred that simmers between two different groups. Sometimes the bad blood fosters a dire feud. The feud can metastasize into endless and potentially mutually destructive outcomes for the parties involved. All because of that bad blood, baby.
Historians tend to indicate that the initial phrasing was “ill blood” and that it was later immortalized as “bad blood” via the famous English essayist Charles Lamb. Thus, one of the first well-known uses, if not perhaps the foremost one that sparked our viral use of bad blood, seems to come from his 1833 book The Last Essays of Elia and can be found in his essay entitled Poor Relations.
He recounts the tale of hostilities between two factions that live near each other, one called the Above Boys, who reside on a hill, and the other referred to as the Below Boys, who live down below in a valley.
I knew you’d like to see the historical keystone use of bad blood with your own eyes, so here is the passage from his Poor Relations essay (I’ve highlighted in bold the use of bad blood therein): “The houses of the ancient city of Lincoln are divided (as most of my readers know) between the dwellers on the hill and in the valley. This marked distinction formed an obvious division between the boys who lived above (however brought together in a common school) and the boys whose paternal residence was on the plain; a sufficient cause of hostility in the code of these young Grotiuses. My father had been a leading Mountaineer and would still maintain the general superiority in skill and hardihood of the Above Boys (his own faction) over the Below Boys (so were they called), of which party his contemporary had been a chieftain. Many and hot were the skirmishes on this topic—the only one upon which the old gentleman was ever brought out—and **bad blood** bred; even sometimes almost to the recommencement (so I expected) of actual hostilities. But my father, who scorned to insist upon advantages, generally contrived to turn the conversation upon some adroit by-commendation of the old Minster; in the general preference of which, before all other cathedrals in the island, the dweller on the hill, and the plain-born, could meet on a conciliating level, and lay down their less important differences.”
I purposely included the sentence that comes after the bad blood indication, doing so to showcase that despite bad blood being predominant, some will attempt to reduce the enmity and seek to keep things on an even keel (the father appears to have that aspirational hope in mind). That puts a modicum of a happy face on something that otherwise is decidedly a sad and exasperating face.
What causes bad blood?
The amazing aspect of bad blood is that there can be explicit reasons for it, yet there can also be no semblance of known reasons for it at all.
We might harbor bad blood due to:
- Specific identifiable reasons
- Vague and unspecified reasons
- Reasons that were once known and no longer are (but bad blood lingers nonetheless)
- No reasons of any viable consequence, or a seemingly exceedingly trivial basis
- Possibly by randomness alone
One interesting theory is that we innately find ourselves tending to form into groups. Once you’ve gotten into a group, the potential is sizable that you might realize that your group is in direct competition with a different group, possibly sorely grappling over the same scarce or limited resources. In that manner of thinking, you might argue that the bad blood makes sense for survival purposes, perhaps.
Even if there isn’t any common basis for contention, we might instinctually assume that there is a chance of contentions eventually arising. In that case, we might preemptively favor our chosen group over the other group. You can’t put your finger on why your group is somehow better than the other or in disagreement with the other. So, you blindly shrug your shoulders and assume that it is logically prudent to cling to your group and denigrate the other group. No sense in taking any chances, you impulsively conclude.
Ethicists have lots of fascinating theories about this chunk of human behavior.
Consider this insightful assessment of humankind’s situation by these researchers: “The evolution of our moral and social mind took place in the context of relatively small and competitively antagonistic social groups. This is considered to be at least a part of the explanation for the unique and thoroughly social nature of the human mind. One of the mental tendencies stemming from this social origin is the human tendency to split people into in-groups and out-groups. Individuals categorized in the in-group are perceived as more valuable than those categorized as out-group members” (as published in AI And Ethics, “Socio-Cognitive Biases In Folk AI Ethics And Risk Discourse” by co-authors Michael Laakasuo, Volo Herzon, Silva Perander, Marianna Drosinou, Jukka Sundvall, Jussi Palomäki, Aku Visala).
You have almost certainly experienced being part of an in-group or being part of an out-group. I’d bet that you’ve at times been a member of an in-group. Other times you’ve been a member of an out-group. We shuffle between joining a group that is the in-group and joining other groups that are so-called out-groups.
It can be dizzying. An in-group might subsequently no longer be the in-group and have become the out-group. Similarly, an out-group might get anointed as the in-group. Further confounding the matter is what gets a group labeled as an in-group versus an out-group. Perhaps the in-group considers all other groups to be the out-group. Meanwhile, the out-group believes they are the in-group and all other groups are the out-group.
Round and round we go.
I find myself drifting to the cleverly devised Dr. Seuss story of the Sneetches and their stars upon thars, published in 1961, which forever reminds us of the in-group versus out-group struggles of society: “Now, the Star-Belly Sneetches had bellies with stars. The Plain-Belly Sneetches had none upon thars. Those stars weren’t so big. They were really so small. You might think such a thing wouldn’t matter at all. But, because they had stars, all the Star-Belly Sneetches would brag, ‘We’re the best kind of Sneetch on the beaches.’ With their snoots in the air, they would sniff and they’d snort, ‘We’ll have nothing to do with the Plain-Belly sort!’”
The easiest and perhaps most popular way to describe this phenomenon would be the remarkably straightforward pronouncement of us versus them. Just as Charles Lamb discussed the Above Boys and the Below Boys, and Dr. Seuss brought us the Star-Belly Sneetches and the Plain-Belly Sneetches, groupings like this often dovetail into a fierce loyalty and group attraction for those who are members of the respective groups.
You might at times exhort loudly that you are a member of your chosen group, which you vociferously and gleefully declare; thus, you are part of the us, while those reprehensible members of the other despicable group are the appalling them.
In shorthand: Us-versus-them.
I am about to introduce you to an us-versus-them that I am guessing you haven’t put much thought toward. I say this because we customarily think of the us-versus-them in human terms. Humans of one group are an us, and humans of another group are them. Easy-peasy.
Are you ready for the shocker?
Consider the circumstance of us-versus-them in terms of humans versus Artificial Intelligence (AI).
I could phrase this as humans versus AI, or as AI versus humans, since in theory this could be a two-way street. The normative viewpoint would be that it is humans versus AI, namely that the in-group consists of humans and the out-group consists of AI. All I’m saying is that we might stretch things to ponder the AI being the in-group and humans being the out-group, which would be the AI versus humans variation of the phrasing.
Right now, we don’t have any AI that is sentient. We don’t know if AI sentience is possible. It might be, it might not be. No one can say for sure. No one can even state with certainty whether AI sentience will happen, nor when it might arise. For those reasons, we’ll concentrate on the humans versus AI and put to the side the AI versus humans (though this other angle is something I’ve covered variously previously, even in a non-sentient context, see the link here).
We need to take a reflective moment to examine an us-versus-them in the realm of humans versus AI.
There are variations worthy of distinct analysis.
The most usual variant is that we will perceive our fellow humans as being in an in-group or an out-group as shaped around the use of AI. There might be some people who are actively using AI. We could label them as the in-group. Other people who aren’t using AI are by default members of the out-group.
For completeness, there could of course be those not using AI considered to be the in-group, while the out-group consists of those that are using AI. I’d hazard a guess that this is a far less common way to think of things. It is though another worthy topic of discussion.
You might right away quibble that the phrasing of humans versus AI is not quite on-target if we are going to be splitting humans into varying groups of using or not using AI. This would almost seem to be humans versus humans, and we are merely identifying that the claimed reason for humans to be divided into two groups is based on AI as the factor for doing so. All in all, it is a bit disingenuous perhaps to say that this is humans versus AI.
I do though want to keep that humans versus humans divisional grouping due to AI usage as something that we’ll be getting to further in a moment. Please keep it in the back of your mind.
The blatantly obvious connotation of humans versus AI is that we humans are in one group and AI is in the other group. Humans versus AI. I suppose that sci-fi movies have well-prepared us for that eventuality. You know how the sordid plotline goes. Humans make AI systems. AI systems decide they want to no longer be subservient to humans. AI enslaves humankind, or maybe wipes humankind from existence.
That latter result is what many refer to as the existential risk of AI. We craft AI with the best of intentions. This is the AI For Good mantra. AI helps to solve the world’s most challenging problems. Yay, we are wise to have built AI. Oops, AI turns on us. The feared AI For Bad becomes real. Humankind laments that Pandora’s box of AI was ever opened. End of story.
To clarify, such a malevolent AI doesn’t necessarily have to be sentient. The typical assumption is that AI will turn on us once it has reached sentience, presumably via the emergence of the singularity (see my discussion at this link here). We could still get ourselves into an existential risk even if the AI isn’t sentient. Imagine a type of doomsday setting whereby we have automated mechanisms that control global nuclear weapons via plain-old non-sentient AI, and we have managed to remove ourselves from being in the loop on crucial life-or-death defining decisions.
Lots of movies and coverage of this kind of situation took place during the Cold War era. With today’s non-sentient AI involved as slick algorithmic decision making (ADM) automation, we are stepping up the chances of cataclysms happening. You don’t need sentience per se to get the cascading effects of planetary-wide destruction. Automation such as everyday AI can be a catalyst and doesn’t have to be “thinking” to enable the possibility of such an apocalypse.
The gist is that I want to discuss herein the humans versus plain-old non-sentient AI. I’m trying to keep the focus on a real-world conundrum rather than a more outstretched one. We should certainly be worried about what will happen if sentient AI ever does poke out its head; meanwhile, we have plenty to consider with the non-sentient AI that we have today and that will continue to advance.
Here’s what I mean.
Humans are apt to attribute anthropomorphic properties to AI. AI systems of today can lead us down a primrose path of thinking that AI does have a sentient quality to it. Consider the use of Alexa or Siri. These are AI-based systems using Natural Language Processing (NLP) and are often augmented with the use of Machine Learning (ML) and Deep Learning (DL). Keep in mind that all of those technologies and techniques are composed of general computing capabilities. They can do some nifty computational pattern matching. They are not at all sentient.
When you speak with Alexa or Siri in conversational interaction, you can readily start to converse as though you are dialoguing with another human. You aren’t sure how far the AI can go in terms of language fluency. Maybe the AI can interact and comprehend as well as another human can. The fact that the AI is speaking with you gives it that anthropomorphic aura.
Of course, any attempt at a detailed “conversation” with the likes of Alexa or Siri of today will pretty quickly get you into the zone of irksome irritation. That AI is relatively brittle and narrow. You cannot carry on a truly fluent discussion. As a result, you find yourself resorting to curt commands and dropping the whole harrowing effort of trying to talk with your computing buddy, as it were.
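To make that brittleness concrete, here’s a minimal sketch in Python of surface-level keyword matching. To be clear, this toy code is purely illustrative and is in no way how Alexa or Siri are actually built; the intents and keywords are invented. It shows how pattern matching can look conversational right up until the phrasing strays off-pattern:

```python
# Purely illustrative toy code: a keyword-based intent matcher.
# NOT how Alexa or Siri actually work; the intents and keywords
# are invented to show how surface-level pattern matching can
# appear conversational yet remain brittle.

INTENT_KEYWORDS = {
    "weather": {"weather", "forecast", "rain"},
    "music": {"play", "song", "music"},
    "timer": {"timer", "alarm", "remind"},
}

def match_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    words = set(utterance.lower().replace("?", "").split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:  # any keyword overlap counts as a "match"
            return intent
    return "unknown"

print(match_intent("Will it rain tomorrow?"))             # weather
print(match_intent("Play my favorite song"))              # music
print(match_intent("Is an umbrella a good idea today?"))  # unknown (off-pattern)
```

The third utterance plainly concerns the weather, yet the matcher shrugs, which is roughly the experience that drives users back to curt, canned commands.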
I don’t want to have our attention preoccupied with the way that such AI exists today. Be assured that it is incrementally being improved. Fluency will get better. Robots too that look entirely like robots will gradually be built to more closely resemble humans. The march of technological progress is going to continue unabated.
Returning to the us-versus-them, we can contemplate how it is that humans will embark upon an “us” way of thinking and ergo are going to place AI into the “them” categorization.
Some people (not all!) will falsely ascribe human sentience capacities to AI. Once you’ve made that initial leap, the rest of the jump is kind of effortless. You might begin to hate AI for what it is doing or what it is not doing. For example, you make use of an AI-based online mortgage granting app. After entering some data about yourself, the AI system tells you that you have been turned down for your desired home loan. What do you do? You get angry at the AI. That no-good dastardly AI has ruined your life.
You begin to hate AI. Other ills of society that you find troubling are also then laid at the feet of AI. Everywhere you go and whatever you do, you are on edge that AI is out to get you. On top of this, you slide further in this direction by deducing that AI hates you too. You hate AI, and it in return hates you. Maybe the AI hated you all along, rather than the AI hating you because you hated it.
I have somewhat dramatically overstated the human versus AI aspects to try and showcase in a vivid way how far this can proceed. Not all people will take that path. Some people will take a much milder path. Some won’t go in that direction at all.
From an Ethical AI viewpoint, we do need to put on the table that there is going to be some semblance of bad blood between humans and AI. The us-versus-them that already seems to be in our blood is bound to be triggered in some manner toward the blood of AI. That last reference to the blood of AI is not meant to suggest that AI will be flesh-and-blood. Maybe we can say the oil of AI if that clarifies.
As a side tangent, there are ongoing efforts of combining the biology of living creatures with the technology of AI. In that sense, you could suggest that we are likely to eventually have a “blood” coursing through the computing capabilities of AI. For more on that future, see my analysis at the link here.
All right, we’ve got on the horizon or possibly already percolating an us-versus-them divide of humans versus AI. Humans in one camp, and AI in the other. We are taking the perspective of humans as the in-group and the AI as the out-group, doing so with the realization of the other variations that we could also dive into.
What can we make of this us-versus-them dilemma?
We ought to see if some use cases can help further illuminate the matter. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about an us-versus-them of humans versus AI, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
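For those who think in code, here’s a minimal illustrative sketch (in Python) of the level taxonomy just described. The summaries paraphrase the standard SAE J3016 levels, while the structure and names are merely my own hypothetical encoding:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Simplified summary of the SAE J3016 driving automation levels."""
    NO_AUTOMATION = 0       # human does all the driving
    DRIVER_ASSISTANCE = 1   # a single assistance feature (e.g., adaptive cruise)
    PARTIAL_AUTOMATION = 2  # ADAS co-shares; the human must stay fully engaged
    CONDITIONAL = 3         # system drives in limited settings; human on standby
    HIGH_AUTOMATION = 4     # true self-driving within a bounded operating domain
    FULL_AUTOMATION = 5     # true self-driving anywhere, in all conditions

def requires_human_driver(level: SAELevel) -> bool:
    """Levels 0 through 3 keep a human responsible; Levels 4 and 5 do not."""
    return level <= SAELevel.CONDITIONAL

print(requires_human_driver(SAELevel.PARTIAL_AUTOMATION))  # True
print(requires_human_driver(SAELevel.HIGH_AUTOMATION))     # False
```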
There is not yet a true self-driving car at Level 5, and we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And AI Us-Versus-Them
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad aspects that come into play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into self-driving cars and ethical AI questions entailing the eyebrow-raising notion of us-versus-them when it comes to humans versus AI.
Let’s use a straightforward example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.
Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.
Probably the main reason right now that they might notice the autonomous vehicles is because of the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars at times get irked when stuck behind the strictly law-abiding AI-based self-driving cars.
That’s something we might all need to get accustomed to, rightly or wrongly.
Back to our tale. One day, suppose a self-driving car in your town or city approaches a Stop sign and does not stop. The autonomous vehicle plows right past the Stop sign. Whoa, dangerous and illegal! A bike rider nearly got sideswiped. A human-driven car that was in the intersection almost got hit by the self-driving car. Luckily, no one was actually harmed.
The actions of the AI-based self-driving car were caught on video. Social media turned the video into a viral sensation. Some people reacted to the errant self-driving car by claiming that the AI purposely ran the Stop sign. You see, the AI was trying to be harmful. Those ardent believers that immoral AI is out to get us were up in arms.
Finally, it is boldly proclaimed, AI is starting to show itself for what it really is and wants to do.
Never mind that the issue was traced to a seemingly plausible and sensible basis for the wayward action. The automaker and self-driving car tech firm noted that the Stop sign was quite obscured by an overhanging tree that had not been cut back. During the training of the Machine Learning and Deep Learning algorithm that detects Stop signs, the initial training set did not include sample pictures of Stop signs that were so notably visually obscured. As such, the AI driving system inadvertently failed to properly classify the sign as a Stop sign (the posted sign got assigned a low probability, plus the algorithm estimated that the sign was perhaps just a red sign that resembled a Stop sign and not an actual Stop sign).
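As a hypothetical sketch of the kind of confidence-threshold logic just described (the labels, scores, and threshold are all invented for illustration; real perception stacks are vastly more elaborate), consider:

```python
# Hypothetical sketch of the confidence-threshold failure described above.
# The labels, scores, and threshold are invented for illustration; real
# perception stacks are vastly more elaborate.

STOP_SIGN_CONFIDENCE_THRESHOLD = 0.80

def classify_sign(detection_scores: dict) -> str:
    """Pick the highest-scoring label, acting only on a confident match."""
    label, score = max(detection_scores.items(), key=lambda kv: kv[1])
    if label == "stop_sign" and score < STOP_SIGN_CONFIDENCE_THRESHOLD:
        return "uncertain_red_sign"  # a low-probability match gets demoted
    return label

# A clearly visible Stop sign: high confidence, classified correctly.
print(classify_sign({"stop_sign": 0.97, "other_red_sign": 0.02}))

# A sign heavily occluded by a tree, unlike anything in the training set:
# the Stop sign score sinks, and the system concludes it is merely some
# red sign that resembles a Stop sign.
print(classify_sign({"stop_sign": 0.41, "other_red_sign": 0.52}))
```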
The company updated its AI accordingly.
In any case, the incident raised awareness about AI as a driving system. Those that already suspected that AI is part of the “them” were waiting for a slipup on the part of AI. This was the breakthrough instance. A kind of outspoken protest movement began to take shape. Even people who had not previously thought of AI in us-versus-them terms joined the growing movement.
In some areas where the AI-based self-driving cars were being tried out, protestors positioned themselves where the autonomous vehicles typically roamed. When they saw a self-driving car approaching, they would hurl rocks at the vehicles and toss sharp spikes into the street. The idea was that they wanted to make it clear that AI driving is not welcome in their town.
As an aside, there have been previously reported instances of people throwing rocks and trying to dissuade the use of self-driving cars, see my coverage at the link here, an activity that has thankfully nearly completely subsided (plus, keep in mind that this is a highly unsafe activity that can imperil passengers inside the self-driving car and can lead to an injurious car crash).
The tipping point came when regrettably an AI-based self-driving car rammed into the rear end of a human-driven car. This caused some modest damage to the human-driven car and luckily only slightly jarred the driver and a front-seat passenger. Nonetheless, those advocating the us-versus-them viewpoint were fast to emphasize that AI is getting worse and worse. The AI is “obviously” deciding to tip its hand and escalating its attacks upon humans.
With social media echoing this vitriolic sentiment, the advocates for AI-based self-driving cars start to become outshouted. Whereas the hope is that such autonomous vehicles will dramatically reduce the number of annual car crashes and ergo reduce the nearly 40,000 yearly human fatalities and 2.5 million human injuries from car collisions (that’s in the United States alone, see my stats analysis at this link here), the emboldened opponents of AI in general begin to overwhelm those that favor self-driving cars.
Bowing to widespread viral-boosted condemnations, automakers and AI developers opt to curtail many of their tryouts. Within the autonomous vehicles industry, there is concern that this is a major disruption in progress. The pullback will significantly delay the gradual advancement toward AI-based self-driving cars. For each day delayed, the implication is that we are allowing more injuries and fatalities to occur due to conventional human-based driving (as rough arithmetic on the figures above, 40,000 fatalities per year works out to roughly 110 per day in the U.S. alone). The avid crowds of us-versus-them are drowning out the calls for looking at the benefits of AI driving systems.
Imagine that this is one of those dam-breaking circumstances whereby the us-versus-them mindset about humans versus AI spreads like wildfire. As per the earlier sketch of how far the AI loathing can go, envision that people convert over to passionately hating AI of any kind. They believe that AI is out to get us all. Humans must rebel. The masses are convinced that hating AI is entirely sensible, especially since they also stridently believe that AI now hates us.
AI systems of all kinds are pulled out of service. We revert back to not using AI. This causes all sorts of other adverse consequences as older forms of automation are put back into use. Meanwhile, wanting to retain the utility of AI, many firms and developers try to rebrand the AI moniker by coming up with different terminology. The hope is that those of the us-versus-them persuasion won’t realize that AI is in fact still being put to use. But the odds are this will create an even greater adverse reaction once the us-versus-them crowds find out that the insiders are trying to fool those of their own kind (fellow humans).
Whew, that is all a notably gloomy and rather sad state of affairs.
Let’s hope we don’t see that type of unsavory scenario arise.
Conclusion
I had earlier noted that another variant of the us-versus-them of humans versus AI dealt with the notion of some humans using AI and other humans not using AI. Though that is not purely humans versus AI, we can give it some attention anyway.
Here’s a means by which that could play out.
AI-based self-driving cars are rolled out into your town or city. The deployment algorithms are based on seeking revenue maximization. This makes good sense. The automakers and self-driving car tech firms are desirous of finally making money off their deep investments into the technology. The fleet operator that is running the self-driving cars and maintaining them wants to earn dough too. Profitability makes the world go around.
After a while, the AI detects a pattern. More money is to be made by being in the wealthy part of town rather than the impoverished parts of town. This is strictly based on the number of rides and the monies collected. There is no realization per se that there are areas of the town that are wealthier than others. The whole aspect is solely based on the raw numbers and driving outcomes.
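As a hypothetical sketch of how such a pattern could arise (in Python; the zone names, ride counts, and fares are entirely invented), consider a greedy dispatch rule that stages idle vehicles wherever historical revenue is highest:

```python
# Hypothetical sketch of a revenue-maximizing staging rule. The zone
# names, ride counts, and fares are entirely invented. Note that the
# data contains no notion of "wealthy" or "impoverished" whatsoever;
# any service skew emerges from the raw revenue numbers alone.

ride_history = {
    "north_side": {"rides": 420, "avg_fare": 18.50},
    "downtown":   {"rides": 380, "avg_fare": 14.00},
    "south_side": {"rides": 95,  "avg_fare": 9.25},
}

def expected_revenue(zone_stats: dict) -> float:
    """Historical revenue proxy: ride volume times average fare."""
    return zone_stats["rides"] * zone_stats["avg_fare"]

def choose_staging_zone(history: dict) -> str:
    """Greedily stage idle vehicles in the highest-revenue zone."""
    return max(history, key=lambda zone: expected_revenue(history[zone]))

print(choose_staging_zone(ride_history))  # north_side, every single time
```

A feedback loop then does the rest: fewer vehicles staged in a low-revenue zone means fewer rides there, which further depresses that zone’s numbers in the next round of the history.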
That kind of scenario has been oft identified by AI ethicists and others, see my discussion at the link here. Worries are that AI-based self-driving cars will ultimately only be accessed by those with wealth and be essentially denied to those that do not have wealth. We will further cleave society. The advantages of AI in the context of self-driving cars will selectively arise for one segment of society and will not be borne by another segment.
Do you see how that illustrates an us-versus-them of the human versus human variety?
In that use case, AI was a factor in the spurring of such divisions. You could readily argue that the AI was not at fault, in the sense that the AI had no intention and was not sentient in bringing forth the divisional separations. We might though incur bad blood and a type of feud between those that are using AI self-driving cars and those that seem to not be able to use AI-based self-driving cars. The same overarching phenomenon could arise in a wide variety of other uses of AI, far beyond just the narrower scope of self-driving cars.
I don’t want this discussion to entirely seem downbeat and disheartening. There is enough of that already in daily activities. The good or heartening news is that if we realize that there is a possibility of an us-versus-them due to AI, we can presumably take overt and sensible actions to avoid falling into that ugly trap. Ethical AI is pounding away at trying to increase awareness of these issues and get us all to be cognizant of detrimental and potentially disastrous AI-related outcomes. By being forewarned, we can seek to avert these storied calamities.
One last comment for now.
In Time magazine, former U.S. Secretary of State Madeleine Albright proffered an editorial opinion piece last year that decried the rising tide of an us-versus-them mindset all told. She emphasized that the two most dangerous words in the human vocabulary are “us” and “them.”
Let’s keep the us-versus-them on a tight rein. If we don’t do so, the other side of the coin might come to fruition, namely the somewhat outstretched possibility of AI versus humans, whereby the AI “decides” to go the route of us-versus-them.
Just to clarify, AI in that scenario is in the in-group, and we, as humans, are in the out-group. Be careful of the AI-related life-endangering perils of us-versus-them when we are the out-group.