
AI Ethics Unnervingly Asking Whether AI Biases Are Insidiously Hiding Societal Power Dynamics, Including For AI Self-Driving Cars


Hiding in plain sight.

Have you ever heard that line before?

Probably so.

It can have various meanings, though the most common understanding seems to be that sometimes we fail to notice something that is right there before our very eyes. You might have experienced this phenomenon.

For example, maybe you recently mildly panicked that you couldn’t find your smartphone, and then suddenly and embarrassingly realized that it was being held firmly in your own hand, and yet you overlooked it. This happened because you were anxiously glancing all around the room to find where you had last left your smartphone, which distracted you from realizing that the elusive and treasured electronic device was within your own grasp.

It happens to all of us.

A related and quite legendary psychological experiment in the late 1990s vividly documented this type of human oversight and has periodically regained public attention as to this ostensibly mindless or shall we say mind-distracted vulnerability. In modern times, there are plenty of online videos that showcase the essence of the experiment and how easily we can forgo seeing something that is “hiding” in plain sight. On a somewhat formal basis, the matter overall is often referred to as inattentional blindness, though other characterizations and variations have been extensively described.

I’d like to tell you about the experiment and the videos depicting it, though I must first alert you that if you read my elucidation about this then the fun of the trick is lamentably lessened. You might compare this to a magic act. Once you know how the trick works, you never quite see it in the same light. Just wanted to give you a fair heads-up.

Are you ready?

It usually goes like this. You are asked to watch a video containing several people that are loosely grouped together. They are somewhat idly and randomly tossing a ball or some such object back and forth to each other. There is no particular pattern to the tossing and they are merely having an enjoyable time doing some lighthearted fare.

So far, so good.

The twist comes next.

You are asked to count how many times the ball or object is being tossed around. Usually, this task is given a bit more specificity. For example, count how many times the members of the group that are wearing a purple shirt toss the ball, while not counting the instances of those in a white shirt catching the ball. As you’ll see in a moment, this is the “sneaky” means of distracting you, doing so by engaging your cognitive processes on a narrowed task.

You are asked to earnestly perform the counting task, giving it your most heartfelt try. It is admittedly somewhat hard to do and requires intense concentration because the ball is moving quickly and the people are seemingly randomly choosing where and when to toss the ball. Your mind tries to focus on the purple-shirted participants. Your mind watches for the ball. Your mind simultaneously tries to ignore or blot out the white-shirted members. They are getting almost confoundingly in your mental way. You don’t want to inadvertently include them in your count (an exasperating mistake if you do).

At the end of the sequence of tosses, you are asked how many times the purple-shirted members tossed the ball. You are smug in your personal confidence and profound assurance that you accurately and precisely made the exact count. Indeed, you unquestionably must be absolutely right about the count.

The experiment though is not finished (here’s the magic trick revealed, so continue reading only if you want to or are willing to know).

You are asked whether you perchance noticed that a gorilla had walked amongst the participants that were tossing the ball or other objects.

Say what?

Huh?

Come again?

Most people laugh or scoff at the question of whether there was a gorilla in the scene. The knee-jerk response consists of stubbornly insisting that no such gorilla ever appeared. In fact, the question seems like a quirky query, entirely and fantastically made up.

I’ve had people that were willing to put a ton of dough on their contention that no gorilla of any kind was in that tossing scene that they closely and attentively watched. I sometimes wish that I had taken those people up on that bet since I would be rolling in cash today. It would though be an entirely unfair bet and likely become a lamented friend-turns-to-foe kind of hollow misstep.

You see, there was indeed a gorilla that walked in that scene (a person in a gorilla suit, which for all practical purposes is essentially a gorilla per se when it comes to the nature of the experiment).

Upon watching the same recorded scene again, the person that failed to spot the gorilla is gobsmacked. They will typically and within a few seconds cycle frantically through a series of emotional reactions. First, they are stunned in disbelief. Second, they will claim that you are showing them a doctored version of the scene and that you are not showing them the one that they watched. Third, they will reluctantly acknowledge that you are showing them the actual scene but will insist that you never told them to watch for a gorilla. In other words, they obediently did what they were asked to do. Nothing more, nothing less. As such, whether they happened to notice a gorilla or not is utterly immaterial and has no bearing or merit on the sobering matter at hand.

I caution you that the bottom line here is that you risk losing a friend or possibly forever irking an acquaintance if you try this on them without any semblance of a clue or at least a wink-wink helpful tip of what is about to happen. Be careful trying this out. I’ve done my duty and given you a suitable precautionary warning, so it’s up to you to decide whether to spring this on anyone that you might know.

I do though have to mention how powerful an exercise this is.

For those caught unawares, there is almost a surefire guarantee that the person will forever remember the “experiment” and will always recall it, not necessarily fondly but they will remember it quite strongly. If you have never seen such a video, and if you go watch one now, I doubt that you will have as vivid a recall about the matter. There is a notable difference between being caught completely unawares about the intriguing notion of such a cognitive blind spot and merely learning about it secondhand.

We all tend to pride ourselves on our keen cognitive capacities. Getting snared in having missed seeing the gorilla is a huge takedown in our cognitive confidence. That is also why some are so adamant in refusing to accept what occurred and will fight tooth and nail against the experimental result. If you tell someone beforehand they are about to witness a bit of magic, they are usually fine afterward if they get tricked. When you ask someone to seemingly do a serious task, and then out of the blue opt to pull the rug out from under them, the reaction can be emotionally and cognitively explosive.

For those of you of a literary bent, you might have mildly experienced a similar example of this mind-oriented distractedness when reading Edgar Allan Poe’s famous story entitled “The Purloined Letter” about an amateur sleuth bucking to be a full-fledged detective. I don’t want to give too much away, especially since I’ve already potentially spoiled the gorilla experiment for you, so let’s just say that this wonderful tale as published in 1844 is wholly relevant to the idea of hiding something in plain sight (maybe that will pique your curiosity to read the delightful story).

Let’s do a quick recap on the hiding in plain sight mantra.

Here are some key takeaways:

  • We can at times not notice things that are right in front of our eyes
  • This is especially possible when you are cognitively preoccupied
  • The cognitive preoccupation might have nothing to do with the hidden in plain sight aspect
  • There is also a chance that the cognitive preoccupation might be related to the “hidden” aspect

A few additional corollaries are worthwhile to consider:

  • If you are on the alert to such a condition you might be able to overcome the misperceived hiddenness
  • The hiddenness should be reasonably out in the open else the hiddenness is truly hidden (a rigged and unfair test of this specific phenomenon)
  • By cognitively being prepared it is feasible to cope with the situation, though surprises can still indeed occur (various other cognitive factors arise)
  • This phenomenon is cognitively pervasive and generally the case for basically everyone

With that keystone foundation established on this fascinating matter, I’d like to shift gears and share with you a perhaps surprising angle to it, including that all of this ties in an extraordinary way to the advent of Artificial Intelligence (AI).

In brief, some are asserting that the emergence of AI biases is hiding something else in veritably plain sight. The argument goes that AI biases are often more indicators of societal power dynamics than mere data-misshapen oddities requiring a techno-focused resolution alone.

Allow me a moment to elaborate on this.

The detection of biases within AI systems is often seen as a type of whack-a-mole gambit. When you manage to find an embedded bias, you stomp it out, hopefully so. Of course, many more such imbued biases might still exist and ultimately be revealed or worse perhaps never see the light of day. All of this though is predicated on the riveting of attention to the AI biases themselves. Rarely do those that seek to expunge those AI biases take a much deeper look underneath to get a broader semblance of what might be happening.

What some are strenuously claiming is that those AI biases could readily be a telltale clue of underlying societal power dynamics. In a sense, the shiny nature of the AI biases distracts many from taking a closer look. If you did take a more in-depth and telling assessment, you might very well discover that all manner of societal power arrangements exists at the root of the embodied AI biases.

You were watching the ball being tossed back and forth, counting the tosses in a somewhat mechanical fashion, and failed to see the gorilla because you were otherwise distracted or had your mind elsewhere and not on the root cause.

Yikes, some respond, this is alarming. Others contend that the asserted hidden factor of societal power dynamics is not really there at all. Or, if it is there, you aren’t tasked with counting gorillas and instead are tasked with counting and eviscerating those AI biases, thus the whole gorilla (“power dynamics”) doesn’t come into play anyway.

Now then, those that are outspoken proponents of the societal power dynamics posture would likely clamor that such reactions to this surprising and generally unrealized notion of AI biases being the canaries in the coal mine are on par with the reactions of those that get caught off-guard by the gorilla experiment. Thus, the typical reaction entails oscillating among disbelief, rejection of the premise, and becoming upset that perhaps they have been “tricked” all along and embarrassingly blind to the true import of the AI biases.

Others that come across this slowly trickling-in theory about AI biases are often intrigued by it. We are all bound to find a heretofore “hadn’t noticed that before” possibility to be overtly curious and potentially mind-expanding. Maybe this is an example of something hiding in plain sight. For those that are toiling away on trying to find and eradicate AI biases, it might be of value to find a more macroscopic way to assess and seek to somehow resolve or prevent such biases.

Could this claimed underpinning of societal power dynamics be a handy means of relooking at and rethinking the decidedly growing problem of AI biases?

As I will explain in a moment, there are lots of AI Ethics and Ethical AI ramifications pertaining to AI biases. This has given rise to AI Ethics principles such as trying to make sure that AI is devised and acting with a semblance of fairness, accountability, responsibility, and the like. I will cover more on this in a moment. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

We can delve for now into the AI biases and societal power dynamics with a keen sense of interest and inquisitiveness.

In an article published in the journal Nature, entitled “Don’t Ask If AI Is Good Or Fair, Ask How It Shifts Power” the author Pratyusha Kalluri proffers a crucial point about the possibility of AI biases hiding in plain sight as a telltale smorgasbord of societal power dynamics: “It is not uncommon now for AI experts to ask whether an AI is ‘fair’ and ‘for good’. But ‘fair’ and ‘good’ are infinitely spacious words that any AI system can be squeezed into. The question to pose is a deeper one: How is AI shifting power?”

To make this even more abundantly clear, the researcher makes this somewhat stunning (for some) suggestion about where the AI field might at times be fooling itself in the (shall we say) preoccupation with AI biases as a standalone consideration: “When the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful.”

Again, some are caught by surprise by these suggestions, or what perhaps could be construed as allegations or a theory of sorts, while others are saying that they had a sneaking suspicion all along and someone has finally put their finger on what is actually awry. Meanwhile, you’ve also got some that react by saying that this is not especially earth-shattering. There is a kind of “so what?” reaction and a feeling that nothing special or insightful is being gleaned by this.

To showcase why societal power dynamics is indeed a genuine and vital hidden factor regarding AI biases, there are various efforts underway that lay out concrete constructive steps that can be undertaken as a result of seeing, or agreeing, that the gorilla exists.

Consider a recent study by Milagros Miceli, Julian Posada, and Tianling Yang entitled “Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power?” which explores these probing concerns: “With the present commentary, we aim to contribute to the discussion around data bias, data worker bias, and data documentation by broadening the field of inquiry: from bias research towards an investigation of power differentials that shape data. As we will argue in the following sections, the study of biases locates the problem within technical systems, either data or algorithms, and obscures its root causes. Moreover, the very understanding of bias and debiasing is inscribed with values, interests, and power relations that inform what counts as bias and what does not, what problems debiasing initiatives address, and what goals they aim to achieve. Conversely, the power-oriented perspective looks into technical systems but focuses on larger organizational and social contexts. It investigates the relations that intervene in data and system production and aims to make visible power asymmetries that inscribe particular values and preferences in them” (Proceedings of the ACM on Human-Computer Interaction, January 2022).

Take as another example an article by Jonne Maas entitled “Machine Learning And Power Relations” covering the moral dilemma of AI that has biases and might run counter to the AI Ethics precepts of an explicit need for accountability: “Indeed, to be increasingly dependent on such an unaccountable exercise of power is not just problematic when the system proves to be incorrect in its results, it is problematic more generally as it opens up the possibility for a moral wrong, limiting human flourishing by establishing a power dichotomy between the developers and users, on the one hand, and the end-users, on the other. We should therefore seriously consider the potential political asymmetry that the increased use of ML applications bring to society, where developers and users–in combination with the ML system itself–increasingly gain more power over a system’s end users due to inadequate accountability mechanisms” (in AI & Society, February 2022).

All of these emerging efforts seem to be contending that the underlying societal power dynamics are making a substantive difference in what we do (or opt to not do) about AI biases, including:

  • Societal power dynamics can lead to the production of AI biases
  • Societal power dynamics can essentially define what we believe AI biases to be
  • Societal power dynamics can shape how we deal with AI biases
  • Societal power dynamics can guide how the debiasing of AI will take place
  • Societal power dynamics can impact all stakeholders as it relates to AI and AI biases
  • Societal power dynamics can especially adversely impact the most vulnerable stakeholders
  • And so on.

As a quick aside, you might be wondering what it means to refer to societal power dynamics and where that concept comes from.

Turn back the clock to the year 1640. You might vaguely know that Thomas Hobbes famously laid out some essential groundwork for analyzing and comprehending the nature of societal power, which is still utilized avidly to this day. He is indubitably best known for Leviathan, which was published in 1651 and set forth the crux of his theories about power and the dynamics of power. Earlier on, in 1640, he had stated this cornerstone remark about the essence of power in his earlier work entitled The Elements of Law (I’ve shown the passage in somewhat of its original form, consisting of archaic English spellings): “… because the power of one man resisteth and hindreth the effects of the power of another, Power simply is noe more, but the excesse of the Power of one above that of another. For equall powers opposed destroye one another and such theire opposition is called Contention.”

In a modernistic, straightforward fashion, contemporary dictionaries tend to say that power is the capacity of one person to get another to do their bidding, something that the tasked person would otherwise not wish to do. In that sense, it is definitionally suggested that power usually creates an asymmetrical relationship. One person has power over another. We can refer to the parties as being agents. Power is asymmetrically distributed among the agents.

The shorthand is labeled as power-over.

There is heated debate in the field of societal power dynamics about whether this is the only form of such power. For example, another viewpoint is that power is more about garnering a desired outcome, known as power-to. You are said to make use of power to affect a particular sought outcome. The outcome is likely to involve agents, though this is pretty much a means of more broadly attaining the outcome that you seek.

The shorthand version for this is power-to.

If you want to dig more deeply into the societal power dynamics topic, there is plenty of beneficial material out there. As an example, one researcher brings up a taste of the debate taking place: “If for the first tradition power consists in an agent’s power to affect certain outcomes, for the second it consists in an agent’s power over other agents. Proponents of the former tradition frequently argue that power-over is merely a species of power-to; proponents of the latter, by contrast, often hold that all relevant social power consists in power-over, so that social power-to reduces to power-over” and the author then proposes seven variations including power-over-others, power-to-effect-outcomes, power-despite-resistance, power-with-others, and so on (per the journal Political Studies, “The Grammar Of Social Power: Power-to, Power-with, Power-despite and Power-over” by Arash Abizadeh, 2021).

On a housekeeping note, I normally try to carefully state that the shorthand wording of “power” used herein is referring to societal power dynamics (all three words in combination together). I do this because otherwise, if I merely use the word “power,” you might be thinking of electrical power or maybe computer power, which consists of computational processing capabilities. I trust that for the remainder of this discussion, I can simply use the word “power” and you’ll know that I am alluding to societal power dynamics, thanks.

Before getting into some more meat and potatoes about the role of power (societal power dynamics) and the hiding in plain sight pertaining to AI biases, let’s establish some additional fundamentals on some profoundly integral topics. Much of the handwringing about AI biases is predominantly taking place in the realm of AI known as Machine Learning (ML) and Deep Learning (DL).

We ought to take a breezy dive into the AI Ethics and ML/DL arena.

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we can set the stage further by exploring what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
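To give a flavor of what such an overseer arrangement might look like in practice, here is a minimal sketch in Python. Everything in it is an illustrative assumption on my part, including the made-up loan-decision function, the group labels, and the simple approval-rate gap check; it is not a depiction of any particular vendor’s AI Ethics monitoring product.

```python
# Minimal sketch of an AI Ethics "monitor" that oversees another system's
# decisions at runtime. The loan-decision function, the group labels, and
# the approval-rate gap check are all illustrative assumptions, not a
# depiction of any real product or deployed system.
from collections import defaultdict

def hypothetical_loan_ai(applicant: dict) -> bool:
    # Stand-in for the AI being monitored; it approves purely on income.
    return applicant["income"] > 50_000

class EthicsMonitor:
    """Tracks approval rates per group and flags large gaps as they emerge."""

    def __init__(self, gap_threshold: float = 0.2):
        self.gap_threshold = gap_threshold
        self.stats = defaultdict(lambda: {"approved": 0, "total": 0})

    def record(self, group: str, approved: bool) -> None:
        self.stats[group]["total"] += 1
        self.stats[group]["approved"] += int(approved)

    def disparity_detected(self) -> bool:
        rates = [s["approved"] / s["total"] for s in self.stats.values() if s["total"]]
        return len(rates) > 1 and (max(rates) - min(rates)) > self.gap_threshold

monitor = EthicsMonitor()
applicants = [
    {"income": 62_000, "group": "A"},
    {"income": 41_000, "group": "B"},
    {"income": 70_000, "group": "A"},
    {"income": 45_000, "group": "B"},
]
for applicant in applicants:
    decision = hypothetical_loan_ai(applicant)
    monitor.record(applicant["group"], decision)
    if monitor.disparity_detected():
        print("Ethics monitor alert: approval-rate gap across groups exceeds threshold.")
```

The design notion being illustrated is that the monitor sits outside the decision-making AI and only observes outcomes, so in principle it could be bolted onto an existing system without modifying the underlying model.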

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the norms of Ethical AI that are being established. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As the saying goes, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If the humans that have been making the decisions being patterned upon have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects within the AI-crafted modeling per se.
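As a way of making the mechanics tangible, here is a deliberately tiny sketch in Python using entirely made-up historical decisions (the zip codes, the approvals, and the crude frequency-based “model” are all assumptions of mine). The point is simply to show how a pattern-matcher that mimics past human choices will reproduce whatever slant those choices carried, even though no line of code ever says to discriminate.

```python
# Toy illustration with synthetic, made-up data: a pattern-matching "model"
# fit to historical human decisions inherits whatever bias those decisions
# contained, even though nothing in the code mentions the bias explicitly.
from collections import Counter

# Historical records: (zip_code, applicant_is_qualified, human_approved)
history = [
    ("90001", True,  False), ("90001", True,  False), ("90001", False, False),
    ("90210", True,  True),  ("90210", False, True),  ("90210", True,  True),
]

# "Training" step: tally the historical approval rate per zip code, which is
# the mathematical pattern that best mimics the old decisions.
approved = Counter()
total = Counter()
for zip_code, _qualified, was_approved in history:
    total[zip_code] += 1
    approved[zip_code] += int(was_approved)

def model_decides(zip_code: str) -> bool:
    # The learned pattern keys off zip code and ignores qualifications.
    return approved[zip_code] / total[zip_code] > 0.5

# A qualified applicant from the historically disfavored zip code is still
# rejected, because the model mimics the past rather than the merits.
print(model_decides("90001"))  # False
print(model_decides("90210"))  # True
```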

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

Let’s now return to the topic of power (societal power dynamics).

Recall that I earlier proclaimed these hypotheses about the intermixing of power and the realm of AI biases:

  • Power can lead to the production of AI biases
  • Power can essentially define what we believe AI biases to be
  • Power can shape how we deal with AI biases
  • Power can guide how the debiasing of AI will take place
  • Power can impact all stakeholders as it relates to AI and AI biases
  • Power can especially adversely impact the most vulnerable stakeholders
  • And so on.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about how AI biases are integrally intertwined with potential hidden societal power dynamics, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And The Role of Societal Power Dynamics

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

We shall consider next how AI biases and societal power dynamics might come to play in this context of self-driving cars. Let’s start with one of the most transparent and frequently cited “power plays” regarding autonomous vehicles and especially self-driving cars.

Some pundits are worried that self-driving cars will be the province of only the wealthy and the elite. It could be that the cost to use self-driving cars will be prohibitively expensive. Unless you’ve got big bucks, you might not ever see the inside of a self-driving car. Those that will be utilizing self-driving cars are going to have to be rich, it is purportedly contended (I don’t agree, see my discussion at the link here).

This could be construed as a form of societal power dynamics that will permeate the advent of AI-based self-driving cars. The overall autonomous vehicle industrial system as a whole will keep self-driving cars out of the hands of those that are poor or less affluent. This might not necessarily be by overt intent; it might just turn out that the only believed way to recoup the burdensome costs of having invented self-driving cars will be to charge outrageously high prices.

Is this a societal power dynamic consideration or not?

You be the judge.

Moving on, we can next consider the matter of AI-related statistical and computational biases.

Contemplate the seemingly inconsequential question of where self-driving cars will be roaming to pick up passengers. This seems like an abundantly innocuous topic.

At first, assume that AI self-driving cars will be roaming throughout entire towns. Anybody that wants to request a ride in a self-driving car has essentially an equal chance of hailing one. Gradually, the AI begins to primarily keep the self-driving cars roaming in just one section of town. This section is a greater money-maker and the AI has been programmed to try and maximize revenues as part of the usage in the community at large.

Community members in the impoverished parts of the town turn out to be less likely to be able to get a ride from a self-driving car. This is because the self-driving cars are farther away, roaming in the higher-revenue part of the town. When a request comes in from a distant part of town, any other request from a closer location gets a higher priority. Eventually, the availability of getting a self-driving car in any place other than the richer part of town is nearly impossible, exasperatingly so for those that live in those now resource-starved areas.
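To make the roaming scenario concrete, here is a deliberately simplified dispatch sketch in Python. The zones, fares, and deadhead penalty are invented numbers of mine, and no actual fleet operator’s dispatching logic is being portrayed; the sketch merely shows how a revenue-maximizing ranking, with no rule that ever mentions neighborhoods, still ends up serving the lower-fare part of town last.

```python
# Simplified dispatch sketch with invented zones and fares. A purely
# revenue-oriented priority score, with no explicit mention of any
# neighborhood, consistently ranks the lower-fare area behind the rest.

ZONES = {
    "uptown":    {"avg_fare": 30.0, "cars_already_nearby": True},
    "outskirts": {"avg_fare": 12.0, "cars_already_nearby": False},
}

def priority(zone_name: str) -> float:
    # Expected revenue, discounted by the cost of driving over empty
    # ("deadheading") when no car is already roaming in that zone.
    zone = ZONES[zone_name]
    deadhead_penalty = 0.0 if zone["cars_already_nearby"] else 8.0
    return zone["avg_fare"] - deadhead_penalty

ride_requests = ["outskirts", "uptown", "outskirts", "uptown"]
serving_order = sorted(ride_requests, key=priority, reverse=True)
print(serving_order)  # ['uptown', 'uptown', 'outskirts', 'outskirts']
```

Notice the feedback loop hiding in that tiny score: because the fleet keeps ending up uptown, the cars_already_nearby flag keeps favoring uptown, and the outskirts fall further behind over time.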

Out goes the vaunted mobility-for-all dreams that self-driving cars are supposed to bring to life.

You could assert that the AI pretty much landed on a form of statistical and computational biases, akin to a form of proxy discrimination (also often referred to as indirect discrimination). The AI wasn’t programmed to avoid those poorer neighborhoods. Instead, it “learned” to do so via the use of the ML/DL. See my explanation about proxy discrimination and AI biases at the link here. Also, for more on these types of citywide or township issues that autonomous vehicles and self-driving cars are going to encounter, see my coverage at this link here, describing a Harvard-led study that I co-authored.

Do societal power dynamics come to play in this instance?

You be the judge.

For the third instance of AI biases, we turn to an example that involves the AI determining whether to stop for awaiting pedestrians that do not have the right-of-way to cross a street.

You’ve undoubtedly been driving and encountered pedestrians that were waiting to cross the street and yet they did not have the right-of-way to do so. This meant that you had discretion as to whether to stop and let them cross. You could proceed without letting them cross and still be fully within the legal driving rules of doing so.

Studies of how human drivers decide on stopping or not stopping for such pedestrians have suggested that sometimes the human drivers make the choice based on untoward biases. A human driver might eye the pedestrian and choose to not stop, even though they would have stopped had the pedestrian had a different appearance, such as based on race or gender. I’ve examined this at the link here.

Imagine that AI-based self-driving cars are programmed to deal with the question of whether to stop or not stop for pedestrians that do not have the right-of-way. Here’s how the AI developers might decide to program this task. They collect data from video cameras that are placed all around the town. The data showcases human drivers that do stop for pedestrians that do not have the right-of-way and also captures human drivers that do not stop. This is all collected into a large dataset.

By using Machine Learning and Deep Learning, the data is modeled computationally. The AI driving system then uses this model to decide when to stop or not stop. Generally, the idea is that whatever the local custom consists of, this is how the AI is going to direct the self-driving car.

To the surprise of the city leaders and the residents, imagine their shock when the AI opts to stop or not stop based on the age of the pedestrian.

How could that happen?

Upon a closer review of the ML/DL training data and videos of human driver discretion, it turns out that many of the instances of not stopping entailed pedestrians that had a walking cane such as might be customary for an elder. Human drivers were seemingly unwilling to stop and let an elder cross the street, presumably due to the anticipated length of time that it might take for someone to make the journey. If the pedestrian looked like they could quickly dart across the street and minimize the waiting time of the driver, the human drivers were more amenable to letting the person cross.

This got deeply buried into the AI driving system via the ML/DL having “discovered” this pattern in the training data and having dutifully modeled upon it computationally. Once the ML/DL was downloaded into the self-driving cars via the use of OTA (Over-The-Air) updates, the onboard AI driving system went ahead and used the newly provided ML/DL models.

Here’s how it would work. The sensors of the self-driving car would scan the awaiting pedestrian, feed this data into the ML/DL model, and the model would emit to the AI whether to stop or continue. Any visual indication that the pedestrian might be slow to cross, such as the use of a walking cane, was mathematically being used to determine whether the AI driving system should let the awaiting pedestrian cross or not.
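For the sake of illustration only, here is a toy version of that pipeline in Python using scikit-learn. The training rows are synthetic and the features (a walking-cane indicator and a rain indicator) are my own stand-ins for whatever perception outputs a real system would provide; this is not any automaker’s actual model. It simply shows how a classifier fit to the human stop/no-stop choices can latch onto the cane cue as its deciding factor.

```python
# Toy sketch with synthetic data: a classifier trained on human drivers'
# stop/no-stop choices picks up the "walking cane" cue as a proxy, mirroring
# the bias described above. Not a depiction of any real driving stack.
from sklearn.tree import DecisionTreeClassifier

# Features per awaiting pedestrian: [has_walking_cane, is_raining]
X_train = [
    [1, 0], [1, 1], [1, 0], [1, 1],   # cane present: drivers rarely stopped
    [0, 0], [0, 1], [0, 0], [0, 1],   # no cane: drivers usually stopped
]
y_train = [0, 0, 0, 0, 1, 1, 1, 1]    # 1 = stop and let the pedestrian cross

model = DecisionTreeClassifier().fit(X_train, y_train)

# At runtime the onboard system would feed perceived features to the model.
print(model.predict([[1, 0]]))  # [0] -> keep driving past the cane user
print(model.predict([[0, 0]]))  # [1] -> stop for the pedestrian without a cane
```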

Do you think that this AI bias is hiding a societal power dynamic?

You be the judge.

Conclusion

Some final thoughts for now.

Let’s cover a few high-level tips and important caveats.

First, for clarification, no one is particularly arguing that we should somehow ignore AI biases and jump over into examining solely the societal power dynamics. I mention this because there are some that seem to toss the baby out with the bathwater and leap entirely into the power elements as though the AI biases are inconsequential in comparison.

Do not make that Grand Canyon kind of oversized leap.

Second, we do need to focus on AI biases. And, meanwhile, we should also be keeping our eyes and ears open for the societal power dynamics.

Yes, we can do both at the same time, especially now that we know to be on the watch. I assure you that people experiencing the gorilla experiment are apt to be on their toes whenever they next see anything that might challenge their cognitive agility. Knowing that a gorilla can be there, though it might not necessarily appear, will ensure that you are watching for the potential appearance anyway.

Here are some worthy rules of thumb (this raises some controversy, just to let you know):

  • An AI bias can be construed as “hiding” a societal power dynamics issue (the AI bias isn’t the hider; it is the person preoccupied with looking only at the surface of the AI bias that does the overlooking)
  • But not all AI biases are necessarily hiding a societal power dynamics issue
  • Nor are all societal power dynamics issues necessarily embedded only into an AI bias
  • An AI bias can be part of a collective of AI biases that are all related to a particular societal power dynamics issue
  • A litany of societal power dynamics issues can be riddling just about any AI system, since AI systems are, all told, a reflection of society and its power dynamics

Ruminate on those for a moment.

I’ll wait.

Okay, here are a few more rules of thumb that might also get the mental juices going:

  • When encountering AI biases, ask whether there is any connection to societal power dynamics
  • When devising an AI system, ask whether and in what ways the societal power dynamics are shaping the AI
  • Try to examine how societal power dynamics are impacting the AI stakeholders
  • Consider what mitigations might be appropriate regarding AI and societal power dynamics
  • Be always watchful for the gorilla and don’t forget that gorillas can exist (I’d say that sparks a life-hack rule of thumb: never get blindsided by a gorilla, it could be painful and you’ll wish you had seen it coming!).

Talking about power has reminded me of the impacts that societal power can engender. I would wager that you are familiar with one of the most famous or infamous lines about power, which per Lord Acton goes like this: “Power tends to corrupt, and absolute power corrupts absolutely.”

A breathtaking concern.

Society has to wise up to the fact that AI is going to have a demonstrable impact on societal power dynamics. This seems blatantly obvious, perhaps, and yet there isn’t as much outward debate and outspoken discourse on the matter as you might think would seem warranted. The rise of AI Ethics considerations is, fortunately, moving society in that direction. Inexorably, if begrudgingly, and an excruciating inch at a time.

We can give the final word for now to the great Leonardo da Vinci and take a reflective moment to ponder these astutely wise words: “Nothing strengthens authority so much as silence.”

Those that are venturing into the AI biases and societal power dynamics arena are trying to bring direct sunlight and rapt attention to an otherwise quiet or unassuming matter that would seemingly silently lie in wait. Let’s start talking, as Leonardo da Vinci would presumably be urging us to do.


