Moral Judgments About AI Will Shape Legal And Ethical Assignment Of Blame, Including The Particular Use Case Of AI-Based Self-Driving Cars


Who are you going to blame?

That question comes up quite a bit when talking about AI.

You see, if an AI system goes awry and causes some form of harm or damage, a somewhat vexing or open-ended question arises as to who or what ought to garner the blame for the adverse action. The range of harm can run anywhere from a mild annoyance to an outright severe injury or even a devastating fatality. Think about AI systems that answer trivia questions such as what a state capital is (an oft-asked Alexa or Siri query), perhaps answering wrongly and prodding you into irritation, versus the more sobering life-or-death decisions of an autonomous vehicle such as an AI-based self-driving car that gets into a car crash.

A lot of hand-wringing is taking place concerning where the blame should be pinned. There are deep ethical AI-related considerations; see for example my extensive Ethical AI coverage at this link here and this analysis at the link here. You can argue that AI advances are bringing society a slew of great benefits and therefore you have to expect that there are bound to be some added “costs” associated with gleaning those advantages. Presumably, if you try to stifle the adoption of AI, you’ll be undercutting a ton of vital societal gains. Don’t mess with the golden goose. Accept that from time to time there will be shortfalls.

Maybe we ought to let AI off the hook?

The other side of that debate says that we are recklessly roaring toward an AI-enmeshed societal nightmare. All kinds of untoward AI are going to be unleashed and we will allow it to happen with nary a precautionary word. We are like the proverbial frog in the slowly boiling water that doesn’t realize the heat is getting worse and worse, until the very end (an unpleasant end).

In short, some loudly proclaim we must hold AI accountable, or else chaos and disaster will ensue.

An altogether different viewpoint exhorts that this talk about holding AI responsible is utterly foolish and misguided. Today’s AI is not sentient. Not even close to sentience. We also do not know when we will achieve sentient AI, plus nobody can say for sure that it will ever happen. The notion of placing blame on AI is an absurdity and you might as well try to affix blame to your household toaster or refrigerator.

The blame game has lots of other potential targets.

An obvious choice is that whatever company or entity unleashed the harmful AI system should be the culprit for blame. The firm that did the release should have been more circumspect and made sure that the AI would not go faulty. Also, it doesn’t matter whether the company made the AI or merely passed it along, such as by licensing the AI from someone else, since any firm that put the AI into the wild should be the one to shoulder the responsibility for promulgating the AI.

Wait a second, goes the rejoinder, the maker of the AI is the true holder of the blame. Those AI developers that dared to build an AI that went amiss are the proper focus of blame. Why didn’t they do a better job of putting guardrails on the AI? Their slipshod practices were the origin of the AI going bad. If they had not been so careless, the rest of the dominoes would not have fallen.

I dare say we can even mull over whether those that use AI are somehow within the scope of blame. Yes, the potential “victim” of an AI that has gone off-kilter could get some finger-pointing too, one supposes. If the person that was notably harmed had not used the AI, there presumably would have been zero chance of their being harmed. When you play with matches, you have to expect that you might get burned.

Of course, not everyone is going to accept the theory that the user of the AI ought to be the target of blame. An innocent that perchance used AI and didn’t know they were relying upon AI seems to be free of the burdensome notion of responsibility (or are they?). We can extend that same leeway to an innocent that knew they were relying upon AI but had been given assurances that the AI was in good shape and wouldn’t produce harmful results. Unless the user went out of their way to subvert the AI, we might normally assess that the user was beyond blame and should not be unfairly accused of holding it.

This is not only an ethical consideration. We need to think about the legal implications too (see my coverage on the legal ramifications, such as at this link here and this link here). It is one thing to generally discuss the assignment of blame, but for some, the assignment of legal culpability is equally or even more important. The law can provide quite a demonstrative hammer to ensure that the blame holder will be held legally liable, bringing forth justice for those that were harmed by the AI.

Many efforts are underway to forge new laws that will encompass AI that has done some form of wrongdoing. Some legal experts would argue that existing laws are suitable to cover AI-related issues. Other legal beagles would counterargue that prevailing laws are either bereft of provisions that would cover AI or are extraordinarily hard to stretch into the AI justice-seeking realm. Rather than having to twist and turn the existing laws, the argument is that you can put in place AI-specific laws that will grease the skids toward holding AI accountable.

I’ve previously covered in my columns the thorny topic of ethical AI and the aims of legal provisions about AI, such that at times the two are not fully aligned. In essence, we might have moral views about AI that are at odds with what the law stipulates. And we can also have laws that upon further reflection perhaps do not seem to comport with our ethical stances. One of my favorite historical quotes comes from The Law by Frederic Bastiat in 1850, which laments this notable qualm: “When law and morality contradict each other, the citizen has the cruel alternative of either losing his moral sense or losing his respect for the law.”

There is a decided push-and-pull tug-of-war that often occurs between morality and the law. We might be spurred to enact new laws due to societally emerging ethical concerns that rise to a level of prominence and drive lawmakers to take action. You can also claim that laws put onto the books can shape our moral sensibilities, perhaps guiding society in the direction of those laws, or instead stoking ire about the laws and getting them repealed as a result of societal pressures based on ethical mores.

In case this discussion seems a bit heavy to contemplate, the gist of the focus herein will be on a somewhat simpler and laser-focused matter, namely who should get the blame when AI has gone askew?

I’ll put aside the blaming of the user for now and give you three other choices to pick from. I offer them in no particular order. You get to decide, without any subtle or hidden hint from me as to the answer.

Which gets the blame:

  • Company that released the AI
  • AI developers that crafted the AI
  • The AI itself

As a kind of recap, let’s consider why each should either get the blame or should be exempted from getting the blame.

In terms of blaming the company that releases AI:

  • Blame the company: The company should get the blame since they anointed the AI as usable by promulgating it. Had they not put the AI into the world, no adverse consequence could have occurred via that AI.
  • Don’t blame the company: The company should not get the blame since they merely relied upon the developers of the AI to ensure that the AI would work without untoward results. Put the blame where it belongs, at the feet of the developers or at the bits-and-bytes of the AI itself.

In terms of blaming the AI developers:

  • Blame the AI developers: Those that develop AI ought to garner the rewards and yet also retain the responsibility for what they have invented or devised. You can’t create a Frankenstein and then try to walk away from the wreckage that arises.
  • Don’t blame the AI developers: AI developers are trying to bring AI into use that can provide all manner of good for society. These innovations are sorely needed to solve many of the globe’s toughest problems. You will squelch all of those wonderments by pinning the blame on the lowly everyday AI developers. Aim instead at the mighty companies that promulgate adverse AI, or, if you prefer, aim your glaring eyes at the AI which of its own accord goes adrift as being the blameworthy instigator.

In terms of blaming the AI:

  • Blame the AI: AI systems are increasingly becoming autonomous. As such, the AI itself should be held responsible. If we don’t hold AI accountable, we are going to gradually have AI embodied in all facets of our lives that can do whatever it wants, scot-free. Do not let that horse out of the barn.
  • Don’t blame the AI: Blaming AI is entirely nonsensical at this time. AI is not sentient. Until or if AI ever reaches sentience, we can relook at blaming AI. You might as well try to blame a fire hydrant for spilling water or a lamppost for failing to light your pathway. Put the blame where it belongs, either at the front porch of the companies or the desktops of the AI developers.

To clarify, there are a bunch of other variations of how to couch arguments regarding blaming or not blaming each of those indicated potential blame holders. You can also come up with a myriad of additional blame holders, such as top management, boards of directors, lawmakers, evildoers, politicians, and so on. Due to space limitations, I’ve tried to provide just a highlighted sampling. There are lots of other angles and the whole morass is much more complicated than it might seem at a cursory glance.

Let’s next take a look at a quite interesting experiment that was recently published and sought to explore the AI blame game sphere.

In a noteworthy study published in the Journal of Business Ethics entitled “Moral Judgments in the Age of Artificial Intelligence,” co-authors Yulia Sullivan and Samuel Wamba performed a series of experiments to examine the matter of who we perceive ought to get the blame for harmful AI-related actions. They set the stage in this manner: “Understanding the psychological process of how people assign blame to various entities in the age of AI helps explain what capacity would render an AI system a natural target of moral judgments. As AI is becoming more autonomous, one primary concern is the possibility of humans putting the entire blame on AI in case of harm caused by such systems. It is a societally relevant question how we should deal with such moral issues, not only from legal and financial perspectives but also from the social and technology perspective—how people tend to take responsibility for their interaction with an AI system.”

The researchers carefully explored the existing literature on this topic. From that, they conceived salient hypotheses worthy of empirical exploration.

Here are their hypotheses (for those curious about what statistical mediation entails in H2a and H2b, an illustrative sketch follows this list):

  • H1: “People will attribute higher blame judgments toward AI when a violation is perceived to be intentional than when it is perceived to be accidental.”
  • H2a: “Perceived agency in AI mediates the relationship between perceived intentional harm (directed to humans) and blame judgments toward AI.”
  • H2b: “Perceived experience in AI mediates the relationship between perceived intentional harm (directed to humans) and blame judgments toward AI.”
  • H3: “People will attribute higher blame judgments toward companies when a violation involving AI is perceived to be intentional than when it is perceived to be accidental.”
  • H4: “People will attribute higher blame judgments toward developers when a violation involving AI is perceived to be intentional than when it is perceived to be accidental.”
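
To give a concrete sense of what “mediates” means in H2a and H2b, here is a minimal, purely illustrative sketch of how a simple mediation check is often structured. The synthetic data, variable names, and the Baron-Kenny-style regression approach are my own assumptions for illustration, not the authors’ actual dataset or analysis:

```python
# Purely illustrative mediation sketch (Baron-Kenny style), NOT the authors' analysis.
# All data and column names below are synthetic and invented for demonstration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=42)
n = 500
intentional = rng.integers(0, 2, size=n)                         # 0 = accidental, 1 = intentional
perceived_agency = 2.0 * intentional + rng.normal(size=n)        # hypothetical mediator
blame_ai = 1.5 * perceived_agency + 0.5 * intentional + rng.normal(size=n)

df = pd.DataFrame({
    "intentional": intentional,
    "perceived_agency": perceived_agency,
    "blame_ai": blame_ai,
})

total  = smf.ols("blame_ai ~ intentional", data=df).fit()                     # path c (total effect)
a_path = smf.ols("perceived_agency ~ intentional", data=df).fit()             # path a
direct = smf.ols("blame_ai ~ intentional + perceived_agency", data=df).fit()  # paths b and c'

print("path a              :", round(a_path.params["intentional"], 2))
print("total effect (c)    :", round(total.params["intentional"], 2))
print("direct effect (c')  :", round(direct.params["intentional"], 2))
print("mediator path (b)   :", round(direct.params["perceived_agency"], 2))
# Mediation is suggested when c' shrinks relative to c while paths a and b stay meaningful.
```

The gist is that perceived agency (or experience) “mediates” if the link between intentional harm and blame toward the AI flows largely through that perception rather than directly.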

The research study then proceeded to devise several experiments involving human subjects that were asked to voluntarily assess scenarios in which AI produced untoward actions.

As you might recall, I earlier herein mentioned that the harm generated by an AI system could range from mild annoyance to outright fatalities. This brings up the important realization that the assignment of blame might vary by the substantive nature of the harm produced. Three scenarios sketched by the researchers involved varying outcomes, including one entailing the AI causing the death of a human, another that led to the injury of a human, and the third one that involved damages to property and business value (but no personal injury or death).

The Bottom-Line On The AI Blame Game

With all of that as background about the study, we can now consider the results of their analysis. I would forthrightly add that the research paper results are quite detailed and nuanced, thus I urge caution in interpreting the results out of context. For the sake of discussion and limited space, I am going to highlight some key points, but ask that you be mindful of not overstating these short snippets.

Turning to the hypotheses, here’s what the authors indicated: “In all three studies, we found most of our hypotheses were supported.”

I had asked you whether you would blame the companies, the AI developers, or the AI. Here’s what the study suggested as based on the sample of people used in the experiments: “We also find people blame developers the most in all scenarios, followed by companies and AI.”

Those of you that are AI developers might want to devoutly ponder that finding.

If you are currently thinking that you are free of blame when your AI goes askew, apparently societal perceptions do not match that sense of armored protection from responsibility. Thus, that clever programming code you are laying out for the AI system you are devising might come back to haunt you, at least that’s what society might well believe. Whether you are proud to be ranked above blaming the company or blaming the AI is up to you, though I’d guess most AI developers would not relish that topmost slot.

Regarding the legal ramifications of the experimental results, here’s a bottom-line vital notion: “Without a legal framework to deal with an AI system’s liability, a victim can easily place the blame on the nearest responsible parties involved in an AI life-cycle. Regulators should adopt standards that would help distribute responsibility fairly. For example, it could be accomplished by developing standards specifying the characteristics AI systems should have, such as being limited to specified activities.”

Finally, as a wrap-up of the study pertaining to the elements of ethical and moral judgment concerning AI, particularly regarding those that are devising AI systems: “As they put more autonomy on an AI system, they should carefully consider the implication of their design decision on morality. To the extent that scientists and policymakers are concerned with public opinion, they may have to be prepared to face ethical and legal issues that humanity has never faced before. Based on our research findings, we argue that our immediate goal is to design and create AI systems that are more sensitive to ethically important aspects of their tasks.”

You might find of added interest that this study included a look at the anthropomorphic aspects entailing people perceiving AI as having sentience or some kind of human-like mind (e.g., perceived agency, perceived experience). This is a topic I’ve covered extensively in my column and it relates integrally to theories of mind.

As with any such studies about these kinds of topics, please realize that numerous considerations go into interpreting the results. You need to examine the design of the experiments, the scenarios, and how they were phrased, you need to consider the human subjects utilized and their motivations to participate, etc.

These are all part and parcel of mindfully gauging any scientific or technological efforts and what they signify.

Shifting gears, we can explore the AI blame game in the context of the rising use of AI that is going to have a huge impact on our daily lives. I’m referring to the emergence of AI-based true self-driving cars. This will serve as a handy use case or exemplar of where the visible and highly touted AI blame assignment will indubitably arise.

Here’s then a noteworthy question that is worth contemplating: Who should we blame when AI-based true self-driving cars somehow get enmeshed in car crashes or other car-related upending incidents?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
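
If it helps to see that level taxonomy spelled out programmatically, here is a rough illustrative sketch; the class, function, and one-line descriptions are my own paraphrasing for this column, not official SAE J3016 wording:

```python
# Rough, informal sketch of the driving-automation levels discussed above.
# One-line descriptions are paraphrased for illustration, not official SAE J3016 text.
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    LEVEL_0 = 0  # No automation: the human does all the driving
    LEVEL_1 = 1  # Driver assistance: a single automated aid (e.g., adaptive cruise control)
    LEVEL_2 = 2  # Partial automation (ADAS combos): the human must stay fully engaged
    LEVEL_3 = 3  # Conditional automation: the human must take over when the system requests
    LEVEL_4 = 4  # High automation: no human driver needed within a bounded operating domain
    LEVEL_5 = 5  # Full automation: no human driver needed anywhere, under any conditions

def is_true_self_driving(level: DrivingAutomationLevel) -> bool:
    """Per this column's usage, 'true self-driving' means Level 4 or Level 5."""
    return level >= DrivingAutomationLevel.LEVEL_4

print(is_true_self_driving(DrivingAutomationLevel.LEVEL_2))  # False (semi-autonomous)
print(is_true_self_driving(DrivingAutomationLevel.LEVEL_4))  # True
```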

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And The Blame Game

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and the blame assignment conundrum.

Let’s quickly identify some plausible scenarios of AI-related troubling encounters:

  • Imagine that an AI-based true self-driving car clips a pedestrian, doing mild injury to the person but not causing any sustained or permanent harm. Who is to blame?
  • Envision that an AI-based self-driving car rams into the rear of a car ahead of the autonomous vehicle. No one is hurt. There is though some considerable damage done to both cars. Who is to blame?
  • A self-driving car makes a right turn and comes to a sudden stop, for which a human-driven car behind the autonomous vehicle then slams into the halted self-driving car. Who is to blame?
  • While navigating a construction zone, an AI self-driving car strikes several red cones and then swerves into a standing pole that (ironically) has a sign that warns to be cautious in the marked zone. Who is to blame?

These kinds of scenarios can readily be concocted. They are all realistic. That being said, some pundits insist we will never ever have any self-driving cars get into car crashes or car collisions of any kind. The ardent belief is that self-driving cars will be uncrashable. As such, those pundits would deny that any of those scenarios could happen. Per their strident viewpoint, since self-driving cars are uncrashable it ergo logically must be the case that there will not be any crashes or collisions. Full stop, period.

I’ve debunked this ridiculous notion, see my coverage at the link here.

We are not going to have uncrashable self-driving cars. It makes no sense to speak of it. Worse still, it sets completely unrealistic and overblown expectations.

Some say that if we had only self-driving cars on our roadways we would not have to worry about any more car crashes or collisions. The AI would always be attentive. The AI won’t drink and drive. All in all, you won’t need to be concerned with human drivers anymore.

First, it is not going to be the case that we will suddenly have only self-driving cars and do away with human-driven cars. There are about 250 million or more human-driven cars in the United States alone and they aren’t going to be replaced overnight via the advent of self-driving cars. There will be a mixture of human-driven cars and self-driving cars for many decades to come. Indeed, there is a question of whether we will ever give up the use of human-driven cars at all; some say you will only take away their steering wheel when you pry it from their cold, dead hands.

Second, even if we had locales that strictly limited the roadways to self-driving cars, you are still going to have circumstances arise that entail a self-driving car getting into a crash or a collision. A bike rider might dart into traffic and the self-driving car cannot stop in time or swerve to avoid colliding with the cyclist. A child runs out between two parked cars, doing so as hidden from view, and the self-driving car strikes the youngster, coming to an immediate stop thereafter.

You might be tempted to argue that the bike rider is to blame and the child is to blame in those examples. When discussing the uncrashable nature of self-driving cars, the pundits are not talking about blame. They are overtly contending that no matter what happens, the self-driving car won’t collide or crash.

Somehow, self-driving cars will defy the laws of physics. It is a miracle! Of course, it also is not in the cards.

I hope that that bit of a side tangent sets the stage for agreeing that self-driving cars can get into car crashes and collisions of one kind or another. I wanted to make sure we acknowledge the possibility of it happening. Put aside the uncrashable notion and keep it locked away.

Well, on second thought, we ought to keep the uncrashable concept in the forefront of this discussion about the AI blame game since it has an influencing propensity, despite its absurdity. If enough pundits keep clamoring that self-driving cars are uncrashable, what might that do to the perceptions associated with AI-based self-driving car crashes?

Get this.

Let us grandly assume that you’ve whole-hog bought into the vacuous idea that self-driving cars are uncrashable. Let’s also say that, for a time, the early use of self-driving cars produces no crashes or collisions (slim chance, but we’ll go with the scenario). Some segment of society now ardently believes that self-driving cars are uncrashable. This is due to the pundits saying it, and due to the apparent fact that there haven’t been any car crashes or collisions (or at least none that got widespread public attention).

Out of the blue, as though a lightning bolt startlingly appears from an otherwise cloud-free blue sky, a self-driving car gets into a crash or collision, akin to those other examples I earlier outlined.

Who is to blame?

Will it be:

  • Company that is operating the AI self-driving car
  • AI developers that crafted the AI self-driving car
  • The AI itself that is driving the self-driving car

Take your pick.

You can bet that some members of the public will say that it is the company operating the AI self-driving car that must be the focus of the blame. The rationale for this though is not what you might assume. It could be that since some of the public has fallen for the uncrashable self-driving car mantra, they will think that it must be the company that runs the autonomous vehicle that is to blame. In some unknown or as-yet-unidentified manner, the company undercut the uncrashable capabilities and ergo the company takes the blame.

Another segment of society will put the blame at the feet of the AI developers. Heck, AI-based self-driving cars are supposed to be uncrashable. Since that’s an asserted fact, the humans underlying the crafting of the AI must be to blame. Those darned humans caused the AI to slip up.

And yet a different segment of society might put the blame on AI. Here’s their somewhat tortured logic. Self-driving cars are said to be uncrashable. But, a self-driving car crashed or had a collision. This must suggest that the AI turned bad. Like a bad apple in a barrel of otherwise good apples, this particular AI of that specific self-driving car went sour. Bad AI ought to be hung out to dry, as it were.

Where does that take us?

As we earlier covered, moral judgments about AI can be part and parcel of where we decide to assign blame for AI-related actions that go awry. The uncrashable theorists are distorting or shaping the moral judgments that people will make. Let’s hope that this does not become a long-lasting or pervasive morally shaping pronouncement.

Conclusion

For blaming self-driving car crashes on someone or something, the list of candidates is quite extensive.

Take a gander at these possibilities:

  • The company that devised the AI driving system could be a culprit.
  • A separate company might own or run a fleet of self-driving cars, operating the autonomous vehicles in a particular locale, and as the fleet operator might be to blame (perhaps they’ve not provided proper upkeep or failed to undertake other proper precautions).
  • The AI developers that crafted the AI driving system could be the blameworthy holders.
  • Don’t forget about the automaker that made the car. It could be that the autonomous vehicle had some defect or issue that led to the crash or collision.
  • If a city has allowed self-driving cars on its public streets, you could presumably argue that the city is to blame for any autonomous vehicle crashes or collisions. Were it not for its approval of the roaming vehicles, no such calamity could have arisen.
  • Etc.

I assume you get the drift of this.

If you are interested in a rather robust scenario encompassing a wide array of autonomous vehicles that we’ll soon see in our cities and towns, such as self-driving robo-taxis, self-driving delivery vehicles, self-driving shuttles, etc., you might find value in this near-term exploration of what we might be facing, see the link here.

Get ready for the tremendous Ethical AI ramifications that lie ahead, including the advent of AI-based self-driving cars. Lawyers and lawmakers are also going to be part of this equation, such that we will have laws that cover AI and perhaps also have gaps in the laws that need to be closed.

Throughout this winding tale about who is to blame, we admittedly employed a simplified assessment by seeking to find just one actor to bear the blame. One thing you can say about AI adoption that seems like a nearly absolute truism is that there will be plenty of blame to go around. Just about anyone, everyone, and possibly even some things will be taking on the blame, and likely too will be tagged with the legal liability that will accordingly ride along as AI permeates our everyday lives.


