AI Ethics And The Quagmire Of Whether You Have A Legal Right To Know Of AI Inferences About You, Including Those Via AI-Based Self-Driving Cars


Inferences, love them or hate them.

You decide.

One thing that seems abundantly clear is that we cannot live without inferences. As eloquently stated by the renowned 19th-century philosopher Friedrich Nietzsche: “The greatest progress that the human race has made lies in learning how to make correct inferences.”

It is worth carefully and diligently noticing that this sage wisdom emphasizes the importance of making correct inferences. This seems like a quite notable make-or-break kind of criterion. We can all render inferences, but the question remains as to which of the devised inferences are right and which ones are wrong. You can probably concoct inferences until the cows come home, though if the inferences are wholly inaccurate or wayward, the result is disconcerting and possibly even endangering.

Suppose I see a car coming down the street. I want to make my way across the street and do so without getting run over. Based on the speed of the automobile and other roadway-related factors, I make a mental inference that I can safely cross.

Does this inference guarantee that I will assuredly and without incident successfully get across the roadway?

Of course not.

The driver of the car might suddenly and unexpectedly speed up. Or I might trip over my own feet and fall while trying to cross (the driver might not be able to stop the vehicle in time to prevent hitting me). There are zillions of intervening and uncalculated possibilities that could block or disturb my attempts to cross the street.

My inference is essentially a guess. You could say that there is a probability or certainty level associated with the inference that I made. When I first conceived of the mental inference that I could proceed to cross, perhaps my internal assessment was that I had a pretty good chance of making it across. If I then observed that the road was wet and slick from heavy rains, I might adjust my perceived probability regarding the inference and decide that the chances of crossing quickly enough were a lot lower than my original estimation.

You can prudently contend that inferences entail an entanglement of the known and the unknown. John Dewey, the revered American psychologist and philosopher, proffered this insight: “Inference is always an invasion of the unknown, a leap from the known.” We customarily start toward an inference by mentally assembling what we know or seem to know, and then make a hopefully sensible effort to gauge what the future or unknown is going to bring forth.

Why all this chitchat and chatter about the nature of inferences?

Because there are Artificial Intelligence (AI)-devised inferences being made about you all day long, and the societal concern is whether you have any particular rights regarding how AI devises those inferences, and even whether you ought to be able to know what, if any, such inferences exist about you.

I am betting that you probably assumed that, of course, you have full rights to know all about any AI inferences that pertain to you. That might appear to be obvious at face value. Shouldn’t you be able to know what AI is guessing about you?

Furthermore, we already noted earlier that any inferences made by humans can sometimes be right and sometimes be wrong. An AI system could decidedly be making incorrect inferences about you too. Though some people hazily think that AI is never wrong, which is a mindbogglingly crazy and completely unsupported notion, there is a strong chance that there are AI inferences sitting here and there on computers and networks that are utterly off-target about who you are and what you are aiming to do. For my close look at the false belief about AI being infallible, see the link here.

How might AI be making inferences about you?

Based on your social media postings, an AI system might have been programmed to grab up your tweets and other posts, doing so to make inferences about you. Aha, you made several posts that mentioned a recent trip to Las Vegas. The AI infers that you are an avid gambler and this inference is used to feed you ads about online gambling or perhaps send you mailers about how to break the onerous habit of addictive gambling. Meanwhile, other posts that you have made indicate that you like to drink beers when vacationing and also tend to have a glass of wine or two with your dinners.

The AI generates an inference that you are an alcoholic.

Fair or unfair?
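
To make concrete just how crude this kind of labeling can be, here is a minimal and purely hypothetical Python sketch of keyword-driven inference; the keyword lists, labels, and threshold are invented for illustration and do not come from any actual system:

```python
# Hypothetical keyword-driven inference from social media posts.
# Keyword lists, labels, and the threshold are invented for illustration.

KEYWORD_LABELS = {
    "avid_gambler": {"las vegas", "casino", "slots", "poker"},
    "alcoholic": {"beer", "beers", "wine", "cocktail"},
}

def infer_labels(posts, min_hits=2):
    """Attach an inference label once enough keyword hits accumulate."""
    hits = {label: 0 for label in KEYWORD_LABELS}
    for post in posts:
        text = post.lower()
        for label, keywords in KEYWORD_LABELS.items():
            if any(word in text for word in keywords):
                hits[label] += 1
    return {label for label, count in hits.items() if count >= min_hits}

posts = [
    "Great trip to Las Vegas last weekend!",
    "Hit the casino twice, so much fun.",
    "Nothing beats a couple of beers on vacation.",
    "A glass of wine with dinner, as usual.",
]
print(infer_labels(posts))  # e.g., {'avid_gambler', 'alcoholic'}
```

Notice that a single vacation recap and an ordinary dinner habit are enough to trip both labels; nothing in the mechanism asks whether the inference is actually correct.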

I dare say that most of us would be outraged and totally enraged if we could see all the AI-derived inferences that exist on planet Earth about us (well, computing is heading to outer space too, so those AI inferences might be sitting out there as well).

Speaking of sitting, please sit down for the next eye-opening statement about this. You might be unpleasantly surprised to know that those AI inferences are not readily or customarily a core part of your legal rights per se, at least not as you might have naturally presumed they were. An ongoing legal and ethical debate is still underway about the nature of AI-based inferences, with some experts insisting that AI inferences are emphatically a central aspect of your personal data, while other experts strenuously counterargue that AI inferences are assuredly not at all in the realm of so-called personal data (the catchphrase of “personal data” is usually the cornerstone around which data-related legal rights are shaped).

All of this raises a lot of societal challenges. AI is becoming more and more pervasive throughout society. The odds are that AI is making lots and lots of inferences about us all. Are we allowing ourselves to be vulnerable and at the mercy of these AI-based inferences? Should we be clamoring to make sure that AI inferences are within our scope of data-related rights? These questions are being bandied around by experts in the law and likewise by expert ethicists. For my ongoing coverage of AI Ethics and Ethical AI topics, see the link here and the link here, just to name a few.

Some people react to the AI inferences topic with a sense of indifference and merely shrug their shoulders. I don’t care what kind of nutty inferences an AI system fabricates, say those that are sanguine about it. Go at it. AI will do what AI does. Let AI be.

Well, there are plenty of reasons to get riled up about AI inferences. Researchers point out, for example, some of the disconcerting and sobering qualms: “The concern is that individuals will be put into categories that are for one inaccurate and secondly, hard to break out of. This can lead to individuals being discriminated and social inequality being amplified, by categorizing individuals based on inferences, that cannot be evaluated for accuracy or are not subject to checks for up-to-datedness. Individuals might therefore get judged, not based on what they’ve done, or will do in the future, but because inferences or correlations drawn by algorithms suggest they may behave in ways that make them poor credit or insurance risks” (Celin Fischer, “The Legal Protection Against Inferences Drawn By AI Under The GDPR,” July 2020, Tilburg University Law School).

Imagine that an AI inference is made about you that you are an addictive gambler and an alcoholic, per the scenarios depicted a moment ago. This can be spread from computer to computer. Eventually, databases everywhere might have this AI inference in them. You are unaware of this. Turns out that whenever you do things like apply for a job or try to get a mortgage, an online query related to approving those life-desired choices will be confronted by AI inferences that could insidiously sink your aspirations and dreams.

Again, you might not even know that those AI inferences are waiting out there like hidden online digital landmines waiting to destroy your life.

The usual way in which data-related rights are arranged tends to include “personal data” as the hallmark of what constitutes something that you legally can do something about. Three types normally are encompassed, and then there is a fourth one that might or might not be in the rubric.

Consider these four types of personal data:

  • Provided Data – data that you gave to the AI in one fashion or another
  • Observed Data – data that the AI somehow noted about you
  • Derived Data – data that the AI calculated about you based on available data
  • Inferred Data – data that consists of AI inferences about you

Many of the laws regarding data privacy and data protection will typically include the Provided Data, Observed Data, and Derived Data. The facets of Inferred Data might or might not be included.
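
To keep the four categories straight, here is a minimal Python sketch that tags records by category; the record fields and example values are entirely hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto

class DataCategory(Enum):
    PROVIDED = auto()   # data that you gave to the AI
    OBSERVED = auto()   # data that the AI somehow noted about you
    DERIVED = auto()    # data that the AI calculated from available data
    INFERRED = auto()   # AI inferences (guesses) about you

@dataclass
class PersonalDataRecord:
    subject_id: str
    category: DataCategory
    value: str

records = [
    PersonalDataRecord("user-123", DataCategory.PROVIDED, "home address from signup form"),
    PersonalDataRecord("user-123", DataCategory.OBSERVED, "visited the pub on Tuesday evening"),
    PersonalDataRecord("user-123", DataCategory.DERIVED, "averages two pub visits per week"),
    PersonalDataRecord("user-123", DataCategory.INFERRED, "possible habitual drinker"),
]

# Many data-privacy laws squarely cover the first three categories; whether
# the INFERRED records enjoy the same rights is the open question at hand.
covered = [r for r in records if r.category is not DataCategory.INFERRED]
```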

The attention overall seems to pretty much concentrate on the first three and not as much on the fourth element, the Inferred Data. Consider a prime example of this. You might be aware that the EU has the GDPR (General Data Protection Regulation), which is one of the most comprehensive legal underpinnings established for data-related rights and protections. According to these researchers, data-related inferences are said to be second-class or less than prized in contrast to the other types of personal data: “Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (‘GDPR’). Data subjects’ rights to know about (Art. 13–15), rectify (Art. 16), delete (Art. 17), object to (Art. 21), or port (Art. 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with the controller’s interests (e.g., trade secrets or intellectual property) than would otherwise be the case” (Sandra Wachter and Brent Mittelstadt, “A Right To Reasonable Inferences: Re-Thinking Data Protection Law In The Age Of Big Data And AI,” Columbia Business Law Review).

Those AI inferences could be biased, discriminatory, and have incorporated an untold number of falsehoods about your race, gender, and the like.

This brings up a whack-a-mole problem. Suppose that there are stringent rights about the Provided Data, the Observed Data, and the Derived Data. You manage to stay on top of how that kind of data is being represented about you. Leveraging various data-related privacy and protection rights, you ensure that the data does not contain biases and discriminatory indications (that alone would be an incredible feat, by the way).

An AI system uses the Provided Data, Observed Data, and Derived Data to contrive an inference about you that turns out to be absolutely biased and discriminatory. Yikes! Even though you did your best to scour the source data, the AI opts to use it anyway to derive an inference that is in fact of a biased or discriminatory nature. Exasperating. Exhausting. Enraging.

Here is how this research paper describes the situation: “These inferences draw on highly diverse and feature-rich data of unpredictable value and create new opportunities for discriminatory, biased, and privacy-invasive profiling and decision-making. Inferential analytics methods are used to infer user preferences, sensitive attributes (e.g., race, gender, sexual orientation), and opinions (e.g., political stances), or to predict behaviors (e.g., to serve advertisements). These methods can be used to nudge or manipulate us, or to make important decisions (e.g., loan or employment decisions) about us” (as cited above, “A Right To Reasonable Inferences Re-Thinking Data Protection Law In The Age Of Big Data And AI,” Columbia Business Law Review).

Before getting into some more meat and potatoes about the wild and woolly considerations underlying AI inferences, let’s establish some additional fundamentals on profoundly integral topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), which my coverage explores at the link here, and which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the evolving norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As the saying goes, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
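
To illustrate that flow, here is a small sketch that trains a logistic regression on fabricated historical loan decisions and then applies the learned pattern to a new applicant; the features, data, and task are invented purely to show the mechanics:

```python
from sklearn.linear_model import LogisticRegression

# Fabricated historical data: [income_in_thousands, years_at_current_job]
X_hist = [[40, 1], [85, 6], [30, 2], [95, 10], [50, 3], [120, 8]]
y_hist = [0, 1, 0, 1, 0, 1]  # past human decisions: 1 = approved, 0 = denied

# The model mathematically mimics whatever patterns the old decisions
# contain, including any biases that were baked into them.
model = LogisticRegression().fit(X_hist, y_hist)

# A "current decision" rendered from patterns in the old data.
new_applicant = [[60, 4]]
print(model.predict(new_applicant))
```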

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing, there will be biases still embedded within the pattern-matching models of the ML/DL.
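
One rudimentary (and by no means sufficient) probe is to compare the model’s favorable-outcome rates across groups, loosely in the spirit of the four-fifths rule used in employment contexts; the group labels and predictions below are fabricated:

```python
from collections import defaultdict

# Fabricated model outputs: (group_label, predicted_approval)
predictions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
               ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in predictions:
    totals[group] += 1
    approvals[group] += approved

# Approval rate per group, plus the ratio of worst to best rate.
rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")  # below 0.8 is a red flag
```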

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

Let’s now return to the topic of AI inferences.

There are a handful of proposed AI-inference-pertinent data-related legal rights that some assert should be codified into our laws. This is being considered, or in some cases enacted, via laws on a national and international basis. A lot of gray area still exists. In the United States, the various state laws vary quite a bit on the AI inference legalities. The whole kit and caboodle has yet to be resolved across the board.

Those of you in California might be familiar with the California Consumer Privacy Act (CCPA). A recently expressed opinion by the California Office of the Attorney General (OAG) proffered indications about consumer rights regarding AI and digitally derived inferences. In short, it was interpreted that the CCPA seems to provide consumers with a limited sort of right to know about such inferences. A mainstay focuses on the CCPA language that refers to “… inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.” It seems that the OAG textually analyzes this and other associated CCPA language to conclude that the right arises when the inference is based on personal data as further defined within the CCPA and when a business opts to create a profile about the consumer.

Take a look at this excerpt from the formal opinion issued by the OAG: “Under the California Consumer Privacy Act, does a consumer’s right to know the specific pieces of personal information that a business has collected about that consumer apply to internally generated inferences the business holds about the consumer from either internal or external information sources? Yes, under the California Consumer Privacy Act, a consumer has the right to know internally generated inferences about that consumer, unless a business can demonstrate that a statutory exception to the Act applies” (OAG, State of California, March 10, 2022).

As with any such legal matter, you would be wise to consult with a versed attorney for insights on the topic. There are also plenty of online analyses about the published OAG opinion. The responses range from buoyant optimism to disappointment that the allowance for exceptions and other conditions could water things down.

Those that stridently argue for outright and fully explicit rights associated with AI inferences will usually want these kinds of provisions (a hypothetical sketch of how such rights might be operationalized appears after the list):

  • Right to know what AI inferences exist about you
  • Right to ensure that the AI inferences about you are “reasonable”
  • Right to be able to rectify or fix AI inferences that you believe to be improper
  • Right to have AI inferences about you get deleted or expunged
  • Right to challenge decisions made about you based on AI inferences about you
  • Etc.
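
As a purely hypothetical sketch of how the first few of those provisions might be operationalized, consider this minimal Python handler; the data store, function names, and stored inference are all invented for illustration and do not reflect any statute:

```python
# Hypothetical inference-rights handler; everything here is illustrative.
INFERENCE_STORE = {
    "user-123": [{"label": "possible habitual gambler",
                  "basis": "two pub visits per week"}],
}

def right_to_know(subject_id):
    """Right to know: list every AI inference held about the subject."""
    return INFERENCE_STORE.get(subject_id, [])

def right_to_rectify(subject_id, old_label, corrected_label):
    """Right to rectify: fix an inference the subject believes is improper."""
    for inference in INFERENCE_STORE.get(subject_id, []):
        if inference["label"] == old_label:
            inference["label"] = corrected_label

def right_to_delete(subject_id):
    """Right to delete: expunge all inferences held about the subject."""
    INFERENCE_STORE.pop(subject_id, None)

print(right_to_know("user-123"))  # the consumer sees what is held about them
```

Even this toy version hints at the hard parts: the trade-secret objections discussed below arise exactly at the right-to-know step, and deciding what counts as an “improper” inference arises at the right-to-rectify step.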

There are a bunch of thorny considerations involved in the rights underlying AI inferences. For example, one contention is that any AI inferences about you must be “reasonable” and cannot or should not be unreasonable, perhaps falsely or misleadingly inferring something about you. Another perspective is that AI inferences ought to be classified as to whether they are advantageous for you versus being adversely against you.

Trying to iron out those details is a rough road, for sure.

You might be curious as to why there would be an acrimonious debate about having rights to AI inferences. It seems like one of those open and shut cases. Everyone would almost universally seem to want to have rights associated with AI inferences.

Companies that use AI for devising such inferences would at times beg to differ on this suggestion that the matter is open and shut. Likewise, firms that rely upon AI inferences would also question the arguable claim that it is simple and cleanly figured out. Things are more complex than a cursory glance suggests.

One argument is that the AI inferences are potentially trade secrets of a given firm. If a company has to reveal the AI inferences, it might be unduly divulging its Intellectual Property (IP). This divulging might expose not just the inferred data but also reveal how the AI inferences are being computationally devised altogether, perhaps giving up valued methods and procedures to competitors. The secret sauce, as it were, could be reverse engineered by the mere showing of the AI inference results.

On top of that, some firms might wish to treat the AI inferences as a form of asset that they can then monetize. A company might wish to sell its AI inferences. A company might be willing to trade AI inferences for other goods or services. The belief is that by revealing AI inferences, the cost of having created the means to devise them will not be recouped and an unfair “taking” of profits associated with the AI inferences will occur.

Counterarguments abound. Companies can potentially be sloppy and create harmful AI inferences without any repercussions if there aren’t suitable and sufficient laws on this. Consumers can get sorely dinged. Expecting firms to voluntarily do the right thing about AI inferences is considered by some as dreamy or unrealistic.

Welcome to the morass underpinning the lofty and at times the acrid matter of AI inferences.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the use of AI inferences, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI Inferences

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

We shall begin by heaping praise upon the use of ML/DL in the realm of bringing forth AI-based self-driving cars. Several key aspects of self-driving cars have come to fruition as a result of using Machine Learning and Deep Learning. For example, consider the core requirement of having to detect and analyze the driving scene that surrounds an AI-based self-driving car.

You’ve undoubtedly noticed that most self-driving cars have a myriad of mounted sensors on the autonomous vehicle. This is often done on the rooftop of the self-driving car. Sensor devices such as video cameras, LIDAR units, radar units, ultrasonic detectors, and the like are typically included on a rooftop rack or possibly affixed to the car top or sides of the vehicle. The array of sensors is intended to electronically collect data that can be used to figure out what exists in the driving scene.

The sensors collect data and feed the digitized data to onboard computers. Those computers can be a combination of general-purpose computing processors and specialized processors that are devised specifically to analyze sensory data. By and large, most of the sensory data computational analysis is undertaken by ML/DL that has been crafted for this purpose and is running on the vehicle’s onboard computing platforms.

The ML/DL computationally tries to find patterns in the data such as where the roadway is, where pedestrians are, where other nearby cars are, and so on. All of this is crucial to being able to have the self-driving car proceed ahead. Without the ML/DL performing the driving scene analysis, the self-driving car would be essentially blind as to what exists around the autonomous vehicle.
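
In rough terms, that perception-and-planning loop might be sketched as follows; the detector, its canned outputs, and the distance threshold are stand-ins rather than any automaker’s actual stack:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g., "pedestrian", "vehicle", "lane_marking"
    distance_m: float  # estimated distance from the autonomous vehicle

def analyze_frame(sensor_frame):
    """Stand-in for the onboard ML/DL driving-scene analysis."""
    # A real system would run trained models over fused camera/LIDAR/radar
    # data; here we simply return canned detections for illustration.
    return [Detection("pedestrian", 12.5), Detection("vehicle", 40.0)]

def plan_motion(detections):
    """Crude driving decision based on what the scene analysis reported."""
    if any(d.kind == "pedestrian" and d.distance_m < 15.0 for d in detections):
        return "slow_down"
    return "proceed"

print(plan_motion(analyze_frame(b"raw-sensor-bytes")))  # slow_down
```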

In brief, you can readily make the case that the use of ML/DL is essential to the emergence of AI-based self-driving cars. It all seems a heralded matter. But that is until you start to look into some of the details and possibilities of what the future might hold.

I’ve repeatedly been warning about the massive potential of privacy intrusions due to the advent of self-driving cars. Few are giving much attention to the matter at this time because it hasn’t yet risen as a realized problem. I assure you that it will ultimately be a humongous problem, one that I’ve labeled as the veritable “roving eye” of self-driving cars (see the link here).

Your first thought might be that this is assuredly solely about the privacy of passengers. The AI could monitor what riders are doing while inside a self-driving car. Sure, that’s a definite concern. But I’d argue it pales in comparison to the less obvious qualm that has even larger ramifications.

Simply stated, the sensors of the self-driving car are collecting all kinds of data as the autonomous vehicles roam throughout our public roadways. They capture what is happening on the front lawns of our houses. They capture what people are doing when walking from place to place. They show when you entered an establishment and when you left it. This data can be easily uploaded into the cloud of the fleet operator. The fleet operator could then connect this data and essentially have the details of our day-to-day lives that occur while outdoors and throughout our community.
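
In data terms, the linking step is trivial once the sightings land in one place; this sketch groups fabricated timestamped sightings into a per-person timeline:

```python
from collections import defaultdict

# Fabricated sightings uploaded from a fleet: (person_id, timestamp, location)
sightings = [
    ("person-42", "2022-05-03T18:05", "Main St pub"),
    ("person-42", "2022-05-03T21:40", "Main St pub exit"),
    ("person-42", "2022-05-05T18:10", "Main St pub"),
]

# A few lines of code suffice to assemble a per-person outdoor itinerary.
timelines = defaultdict(list)
for person, when, where in sightings:
    timelines[person].append((when, where))

for person, events in sorted(timelines.items()):
    print(person, sorted(events))
```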

We don’t seem to care about that privacy intrusion potential right now because there are only handfuls of self-driving cars being tried out in relatively few locales. Someday, we’ll have hundreds of thousands or millions of self-driving cars that are crisscrossing all of our major cities and towns. Our daily outdoor lives will be recorded non-stop, 24×7 and we can be tracked accordingly.

You could suggest that this data about our lives is within the three categories of personal data that I earlier discussed, namely Provided Data, Observed Data, and Derived Data.

Would AI inferences also come to play here?

Undoubtedly.

An AI system is set up to examine the vast collection of data about your comings and goings. The AI ferrets out from the data that you visit the local pub on Tuesday and Thursday evenings. You stay there for several hours. The AI has been programmed to make inferences from the data that is being examined.

As a result, the AI infers that you are an alcoholic.

Turns out that the pub also allows legalized gambling via the use of slot machines and the like. The AI infers that you are an addictive gambler.
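
Here is a hedged sketch of how such a visit log might be mechanically reduced to those two inferences; the thresholds and labels are invented, which is precisely the worry:

```python
# Fabricated visit records distilled from roadway sightings over two weeks:
# (weekday, hours_at_pub, pub_offers_gambling)
visits = [("Tue", 3.0, True), ("Thu", 2.5, True),
          ("Tue", 3.5, True), ("Thu", 3.0, True)]

inferences = set()
visits_per_week = len(visits) / 2  # two weeks of data in this toy log
avg_hours = sum(hours for _, hours, _ in visits) / len(visits)

if visits_per_week >= 2 and avg_hours >= 2:
    inferences.add("alcoholic")           # arbitrary rule, recorded as fact
if visits_per_week >= 2 and all(gambling for _, _, gambling in visits):
    inferences.add("addictive gambler")   # merely being near the slot machines

print(inferences)  # e.g., {'alcoholic', 'addictive gambler'}
```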

These AI inferences are then kept online and stored for further use. The automaker or fleet operator opts to sell these AI inferences to a wide array of other companies that want to know about you. All in all, these AI inferences are completely unknown to you, yet they are being shared, sold, and traded endlessly while you are carrying on your daily life.

Seems a bit daunting.

It is a boatload of AI Ethics, legal AI issues, and societal challenges.

Conclusion

Those that are in favor of AI inferences point out that without those inferences the world might not go around. In addition, if we are aiming to make AI as “intelligent” as possible, it would seem that we have no choice but to devote attention to crafting AI that can make inferences.

Remember the words of Friedrich Nietzsche that I quoted at the start of this discussion: “The greatest progress that the human race has made lies in learning how to make correct inferences.” The implication is that humans rely on making inferences. Ergo, you could try to claim that AI of any semblance will also likely need to rely on making inferences. If you try to stop or block the use of AI inferences, you are said to be trying to turn back the clock on making progress in AI. This in turn would seem detrimental since you can readily make the case that the advent of AI has been beneficial in numerous glorious respects to society (though, the tradeoff consists too of the bad sides of AI).

Within the quotation by Nietzsche is the crucial point about making correct inferences. In that manner of consideration, rather than trying to prevent AI inferences, the argument goes that we should be attending to making sure that the AI inferences are correct. We should all agree that AI inferences are needed and must be permitted, while the contentious angle is whether the AI inferences are correct or not.

That leads us to the open-ended question about what in fact is or is not a correct AI inference.

The right to know about AI inferences might be a means of keeping this ship on course. We can have AI inferences and eat our cake too. Will the right AI tech and the right set of laws about AI inferences and our rights as humans ensure that we have a cake worth eating?

Time will tell.


