Plato famously stated that a good decision is based on knowledge and not on numbers.
This keen insight seems amazingly prescient about today’s Artificial Intelligence (AI).
You see, despite the blaring headlines currently proclaiming that AI has somehow reached sentience and embodies human knowledge and reasoning, please be aware that this overstated AI hyperbole is insidious prevarication. We are still relying upon number-crunching in today’s algorithmic decision-making (ADM) as undertaken by AI systems. Even the vaunted Machine Learning (ML) and Deep Learning (DL) consist of computational pattern matching, meaning that numbers are still at the core of the exalted use of ML/DL.
We do not know whether AI reaching sentience is possible. Could be, might not be. No one can say for sure how this might arise. Some believe that we will incrementally improve our computational AI efforts such that a form of sentience will spontaneously occur. Others think that the AI might go into a kind of computational supernova and reach sentience pretty much of its own accord (typically referred to as the singularity). For more on these theories about the future of AI, see my coverage at the link here.
So, let’s not kid ourselves and falsely believe that contemporary AI is able to think like humans. I suppose the question that then comes to the forefront, in light of Plato’s remark, is whether we can have good decisions based on computational AI rather than on sentient AI. You might be surprised to know that I would assert that we can indeed have good decisions being made by everyday AI systems.
The other side of that coin is that we can also have everyday AI systems that make bad decisions. Rotten decisions. Decisions that are rife with untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.
My extensive coverage of AI Ethics and Ethical AI can be found at this link here and this link here, just to name a few.
For this herein discussion, I’d like to bring up an especially worrying aspect about AI that those in the AI Ethics arena are rightfully bemoaning and trying to raise apt awareness about. The sobering and disconcerting matter is actually quite straightforward to point out.
Here it is: AI has the real-world potential of promulgating AI-steeped biases at an alarming global scale.
And when I say “at scale” this demonstrably means worldwide massive scale. Humongous scale. Scale that goes off the scale.
Before I dive into how this scaling of AI-steeped biases will take place, let’s make sure we all have a semblance of an understanding of how AI can incorporate undue biases and inequities. Recall again that this is not of a sentient variety. This is all of a computational caliber.
You might be perplexed as to how AI could imbue the same kinds of adverse biases and inequities that humans do. We tend to think of AI as being entirely neutral, unbiased, simply a machine that has none of the emotional sway and foul thinking that humans might have. One of the most common ways that AI falls into the dourness of biases and inequities happens when using Machine Learning and Deep Learning, partially as a result of relying upon collected data about how humans have been making decisions.
Allow me a moment to elaborate.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If the humans whose decisions are being patterned upon have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. The Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.
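To make this tangible, here’s a tiny illustrative sketch in Python (the numbers, the synthetic data, and the use of an off-the-shelf scikit-learn classifier are all my own assumptions for demonstration purposes, assuredly not any actual lender’s system). It shows how a model fit to historical human loan decisions can quietly absorb and reproduce a disparity that was baked into those decisions:

```python
# Illustrative only: synthetic "historical" loan decisions with a baked-in bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(60_000, 15_000, n)      # hypothetical applicant income
credit = rng.normal(680, 50, n)             # hypothetical credit score
group = rng.integers(0, 2, n)               # a protected attribute (0 or 1)

# Simulated past human decisions: mostly income/credit driven, but with an
# unfair penalty quietly applied to group 1 (the "biases-in" part).
score = 0.00003 * income + 0.01 * credit - 0.5 * group
approved = (score + rng.normal(0, 0.3, n) > 8.5).astype(int)

# The ML step: pattern-match the historical decisions.
X = np.column_stack([income / 1_000, credit / 100, group])
model = LogisticRegression(max_iter=1_000).fit(X, approved)

# The fitted model reproduces the disparity when scoring applicants.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g} predicted approval rate: {preds[group == g].mean():.2%}")
```

Notice that nobody wrote a line of code saying “be biased.” The approval-rate gap was sitting in the historical data, and the computational pattern matching dutifully carried it forward.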
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.
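For those wondering what even a rudimentary probe for buried biases might look like, here is a hedged sketch of one commonly cited heuristic, the so-called four-fifths disparate-impact ratio (the function name, the toy predictions, and the 0.8 threshold are illustrative assumptions of mine; a genuine bias audit would entail far more than this single check):

```python
# Illustrative only: the "four-fifths" disparate-impact ratio as one crude probe.
import numpy as np

def disparate_impact_ratio(preds: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lower group's approval rate to the higher one (1.0 = parity)."""
    rate_a = preds[group == 0].mean()
    rate_b = preds[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model outputs for ten applicants, five per group.
preds = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(preds, group)
print(f"disparate impact ratio: {ratio:.2f}", "-> FLAG" if ratio < 0.8 else "-> ok")
```

Passing such a check is not proof of fairness, and failing it is not automatically proof of wrongdoing, which is partly why the testing is trickier than it might seem.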
You could somewhat invoke the famous or infamous adage of garbage-in, garbage-out. The thing is, this is more akin to biases-in, whereby the biases insidiously get infused and submerged within the AI. The algorithmic decision-making (ADM) of the AI axiomatically becomes laden with inequities.
Not good.
This brings us to the matter of AI-steeped biases when at scale.
First, let’s take a look at how human biases might be creating inequities. A company that makes mortgage loans decides to hire a mortgage loan agent. The agent is supposed to review requests from consumers that want to get a home loan. After assessing an application, the agent renders a decision to either grant the loan or deny the loan. Easy-peasy.
For sake of discussion, let’s imagine that a human loan agent can analyze 8 loans per day, taking about one hour per review. In a five-day workweek, the agent does about 40 loan reviews. On an annual basis, the agent typically does around 2,000 loan reviews, give or take a bit.
The company wants to increase its volume of loan reviews, thus the firm hires 100 additional loan agents. Let’s assume they all have about the same productivity and that this implies we can now handle about 200,000 loans per year (at a rate of 2,000 loan reviews per annum per agent). It seems that we’ve really ramped up our processing of loan applications.
Turns out that the company devises an AI system that can essentially do the same loan reviews as the human agents. The AI is running on computer servers in the cloud. Via the cloud infrastructure, the company can readily add more computing power to accommodate any volume of loan reviews that might be needed.
With the existing AI configuration, they can do 1,000 loan reviews per hour. This also can happen 24×7. There is no vacation time needed for the AI. No lunch breaks. The AI works around the clock with no squawking about being overworked. We’ll say that at that approximate pace, the AI can process nearly 9 million loan applications per year.
Notice that we went from having 100 human agents that could do 200,000 loans per year and jumped many times over to the much-heightened number of 9 million reviews per year via the AI system. We have dramatically scaled up our loan request processing. No doubt about it.
Get ready for the kicker that will perhaps make you fall off your chair.
Assume that some of our human agents are making their loan decisions on the basis of untoward biases. Perhaps some are giving racial factors a key role in the loan decision. Maybe some are using gender. Others are using age. And so on.
Of the 200,000 annual loan reviews, how many are being done under the wrongful gaze of adverse biases and inequities? Perhaps 10%, which is around 20,000 of the loan requests. Worse still, suppose it is 50% of the loan requests, in which case there are a quite troubling 100,000 annual instances of loan decisions wrongly decided.
That’s bad. But we have yet to consider an even more frightening possibility.
Suppose the AI has a hidden bias that consists of factors such as race, gender, age, and the like. If 10% of the annual loan analyses are subject to this unsavoriness, we have 900,000 loan requests that are being improperly handled. That’s a lot more than the human agents could possibly do, primarily due simply to the volume involved. Those 100 agents, even if every one of them were doing inequitable reviews, could at most do so on the 200,000 annual loan reviews. The AI could do the same on the much larger scale of the 9,000,000 annual reviews.
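If you’d like to verify the back-of-the-envelope arithmetic, here’s a quick sketch using the rounded figures assumed above (the precise totals shift a bit depending on how you round the AI’s yearly operating hours):

```python
# Back-of-the-envelope arithmetic for the loan-review scaling comparison.
HOURS_PER_YEAR = 24 * 365                 # the AI runs around the clock

per_agent = 8 * 5 * 50                    # 8 reviews/day, 5 days/week, ~50 weeks
humans_total = 100 * per_agent            # 100 agents -> about 200,000 reviews/year
ai_total = 1_000 * HOURS_PER_YEAR         # 1,000 reviews/hour -> nearly 9 million/year

for bias_rate in (0.10, 0.50):
    print(f"{bias_rate:.0%} tainted: humans ~{int(humans_total * bias_rate):,}"
          f" vs AI ~{int(ai_total * bias_rate):,} loan decisions per year")
```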
Yikes!
This is truly AI-steeped bias at a tremendous scale.
When untoward biases are entombed within an AI system, the same scaling that seemed advantageous is now turned on its head and becomes a monstrously beguiling (and disturbing) scaling outcome. On the one hand, the AI can beneficially ratchet up to handle more people that are requesting home loans. On the surface, that seems a tremendous AI For Good. We ought to pat ourselves on the back for presumably expanding the chances of humans getting needed loans. Meanwhile, if the AI has embedded biases, the scaling is going to be a tremendously rotten result and we find ourselves lamentedly mired in AI For Bad, at a truly massive scale.
The proverbial dual-edged sword.
AI can radically increase the access to decision-making for those that are seeking desired services and products. No more human-constrained labor bottleneck. Outstanding! The other edge of the sword is that if the AI contains badness such as hidden inequities, the very same massive scaling is going to promulgate that untoward behavior on an unimaginable scale. Exasperating, wrongful, shameful, and we can’t allow society to fall into such an ugly abyss.
Anyone that has been puzzled as to why we need to pound away at the importance of AI Ethics should by now be realizing that the AI scaling phenomenon is a darned important reason for pursuing Ethical AI. Let’s take a moment to briefly consider some of the key Ethical AI precepts to illustrate what ought to be a vital focus for anyone crafting, fielding, or using AI.
For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
- Reliability: AI systems must be able to work reliably
- Security and privacy: AI systems must work securely and respect the privacy of users.
As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
I’ve also discussed various collective analyses of AI ethics principles, including a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature Machine Intelligence), which my coverage explores at the link here, and which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.
The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life-cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. Please be aware that it takes a village to devise and field AI, and the entire village has to be keeping on its toes about AI Ethics.
How The Scaling Of AI-Steeped Biases Works
Now that I’ve put on the table the fact that AI can contain biases, we are ready to examine some of the reasons why the scaling of AI-steeped biases is so pernicious.
Consider this keystone list of ten underlying reasons:
- Easily replicated
- Minimal cost to scale
- Abhorrently consistent
- Lack of self-reflection
- Blind obedience
- Doesn’t tip its hand
- Recipient unsuspecting
- Tends not to spur provocation
- False aura of fairness
- Hard to refute
I’ll briefly explore each of those crucial points.
When you try to scale up with human labor, the odds are that doing so will be enormously complicated. You have to find and hire the people. You have to train them to do the job. You have to pay them and take into account human wants and needs. Compare this to an AI system. You develop it and put it into use. Other than some amount of ongoing upkeep of the AI, you can sit back and let it process endlessly.
This means that AI is easily replicated. You can add more computing power as the task and volume might so require (you aren’t hiring or firing). Global use is done with the push of a button and attained via the worldwide availability of the Internet. The scaling up comes at minimal cost in comparison to doing likewise with human labor.
Human labor is notoriously inconsistent. When you have large teams, you have a veritable box of chocolates in that you never know what you might have on your hands. The AI system is likely to be highly consistent. It repeats the same activities over and over, each time being essentially the same as the last.
Normally, we would relish AI consistency. If humans are prone to biases, we will always have some portion of our human labor that is going astray. The AI, if purely unbiased in its construction and computational efforts, would by far be more consistent. The problem though is that if the AI has hidden biases, the consistency now is painfully abhorrent. The odds are that the biased behavior is going to be consistently carried out, over and over again.
Humans would hopefully have some inkling of self-reflection and maybe catch themselves making biased decisions. I’m not saying all would do so. I’m also not saying that those that catch themselves will necessarily right their wrongs. In any case, at least some humans would sometimes correct themselves.
The AI is unlikely to have any form of computational self-reflection. This means that the AI just keeps on doing what it is doing. There would seemingly be zero chance of the AI detecting that it is running afoul of equity. That being said, I have described some efforts to deal with this, such as building AI Ethics components within AI (see the link here) and devising AI that monitors other AI to discern unethical AI activities (see the link here).
Lacking any kind of self-reflection, the AI is also likely to have essentially blind obedience to whatever it was instructed to do. Humans might not be so obedient. The chances are that some humans that are doing a task will question whether they are perhaps being guided into inequity territory. They would tend to reject unethical commands or perhaps go the whistleblower route (see my coverage at this link here). Do not expect everyday contemporary AI to somehow question its programming.
We next turn to those that are using AI. If you were seeking a home loan and spoke with a human, you might be on the alert as to whether the human is giving you a fair shake. When using an AI system, most people seem to be less suspicious. They often assume that the AI is fair and ergo do not get as quickly riled up. The AI appears to lull people into an “it’s just a machine” trance. On top of this, it can be hard to try and protest the AI. In contrast, protesting how you were treated by a human agent is a lot easier and much more commonly accepted and assumed to be viably possible.
All told, AI that is steeped in biases has a dishonorable leg-up over humans steeped in biases, namely in terms of being able to have the AI massively deploy those biases on a gigantic scale, doing so without as readily getting caught or having consumers realize what is disturbingly taking place.
At this juncture of this discussion, I’d bet that you are desirous of some additional examples that might showcase the conundrum of AI-steeped biases at scale.
I’m glad you asked.
There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI steeped biases at scale, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And AI Biases At Scale
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the exploration of AI-steeped biases that are promulgated at a large scale.
Let’s use a readily straightforward example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.
Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.
Probably the main reason right now that they might notice the autonomous vehicles is because of the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. For hectic human drivers in their traditional human-driven cars, you get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars.
That’s something we might all need to get accustomed to, rightfully or wrongly.
Back to our tale.
Turns out that two unseemly concerns start to arise about the otherwise innocuous and generally welcomed AI-based self-driving cars, specifically:
a. Where the AI was roaming the self-driving cars to pick up riders was looming as a vocalized concern
b. How the AI was treating awaiting pedestrians that do not have the right-of-way was rising up as a pressing issue
At first, the AI was roaming the self-driving cars throughout the entire town. Anybody that wanted to request a ride in the self-driving car had essentially an equal chance of hailing one. Gradually, the AI began to primarily keep the self-driving cars roaming in just one section of town. This section was a greater money-maker and the AI system had been programmed to try and maximize revenues as part of the usage in the community.
Community members in the impoverished parts of the town were less likely to be able to get a ride from a self-driving car. This was because the self-driving cars were further away and roaming in the higher revenue part of the locale. When a request came in from a distant part of town, any request from a closer location that was likely in the “esteemed” part of town would get a higher priority. Eventually, the availability of getting a self-driving car in any place other than the richer part of town was nearly impossible, exasperatingly so for those that lived in those now resource-starved areas.
You could assert that the AI pretty much landed on a form of proxy discrimination (also often referred to as indirect discrimination). The AI wasn’t programmed to avoid those poorer neighborhoods. Instead, it “learned” to do so via the use of the ML/DL.
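To see how such proxy discrimination can arise without anyone explicitly coding it, consider this deliberately simplified sketch (the zone names, fare figures, and greedy dispatch rule are hypothetical stand-ins of my own devising, not how any actual fleet operator or AI driving system works):

```python
# Illustrative only: a revenue-chasing dispatch loop that drifts into proxy
# discrimination even though no neighborhood demographics are coded anywhere.
import random

ZONES = {"uptown": 1.8, "midtown": 1.4, "riverside": 0.9, "southside": 0.7}  # avg fare

def choose_zone(learned_value: dict) -> str:
    # Greedy rule: send an idle car to the zone with the highest revenue estimate.
    return max(learned_value, key=learned_value.get)

learned_value = {z: 2.0 for z in ZONES}     # optimistic start: all zones look equal
placements = {z: 0 for z in ZONES}

random.seed(42)
for trip in range(5_000):
    zone = choose_zone(learned_value)
    placements[zone] += 1
    fare = random.expovariate(1.0 / ZONES[zone])                # simulated fare revenue
    learned_value[zone] += 0.1 * (fare - learned_value[zone])   # running revenue estimate

print(placements)   # the dispatching overwhelmingly favors the highest-fare zone
```

Even though no neighborhood demographics appear anywhere in the code, the revenue-chasing loop ends up concentrating nearly all of the service in the highest-fare zone, starving the rest.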
The thing is, ridesharing human drivers were known for doing the same thing, though not necessarily exclusively due to the money-making angle. There were some of the ridesharing human drivers that had an untoward bias about picking up riders in certain parts of the town. This was a somewhat known phenomenon and the city had put in place a monitoring approach to catch human drivers doing this. Human drivers could get in trouble for carrying out unsavory selection practices.
It was assumed that the AI would never fall into that same kind of quicksand. No specialized monitoring was set up to keep track of where the AI-based self-driving cars were going. Only after community members began to complain did the city leaders realize what was happening. For more on these types of citywide issues that autonomous vehicles and self-driving cars are going to present, see my coverage at this link here and which describes a Harvard-led study that I co-authored on the topic.
This example of the roaming aspects of the AI-based self-driving cars illustrates the earlier indication that there can be situations entailing humans with untoward biases, for which controls are put in place, and that the AI replacing those human drivers is left scot-free. Unfortunately, the AI can then incrementally become mired in akin biases and do so without sufficient guardrails in place.
This also showcases the AI-steeped biases at scale issue.
In the case of human drivers, we might have had a few here or there that were exercising some form of inequity. For the AI driving system, it is usually one such unified AI for an entire fleet of self-driving cars. Thus, we might have begun with, say, 50 self-driving cars in the town (all run by the same AI code), and gradually increased to, let’s say, 500 self-driving cars (all being run by the same AI code). Since all of those five hundred self-driving cars are being run by the same AI, they are correspondingly all subject to the same derived biases and inequities embedded within the AI.
Scaling hurts us in that regard.
A second example involves the AI determining whether to stop for awaiting pedestrians that do not have the right-of-way to cross a street.
You’ve undoubtedly been driving and encountered pedestrians that were waiting to cross the street and yet they did not have the right-of-way to do so. This meant that you had discretion as to whether to stop and let them cross. You could proceed without letting them cross and still be fully within the legal driving rules of doing so.
Studies of how human drivers decide on stopping or not stopping for such pedestrians have suggested that sometimes the human drivers make the choice based on untoward biases. A human driver might eye the pedestrian and choose to not stop, even though they would have stopped had the pedestrian had a different appearance, such as based on race or gender. I’ve examined this at the link here.
Imagine that the AI-based self-driving cars are programmed to deal with the question of whether to stop or not stop for pedestrians that do not have the right-of-way. Here’s how the AI developers decided to program this task. They collected data from the town’s video cameras that are placed all around the city. The data showcases human drivers that stop for pedestrians that do not have the right-of-way and human drivers that do not stop. It is all collected into a large dataset.
By using Machine Learning and Deep Learning, the data is modeled computationally. The AI driving system then uses this model to decide when to stop or not stop. Generally, the idea is that whatever the local custom consists of, this is how the AI is going to direct the self-driving car.
To the surprise of the city leaders and the residents, the AI was evidently opting to stop or not stop based on the appearance of the pedestrian, including their race and gender. The sensors of the self-driving car would scan the awaiting pedestrian, feed this data into the ML/DL model, and the model would emit to the AI whether to stop or continue. Lamentedly, the town already had a lot of human driver biases in this regard and the AI was now mimicking the same.
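As one more illustrative sketch (again with wholly synthetic data and a simple off-the-shelf model, assuredly not anyone’s actual AV software), note that merely dropping the explicit appearance attribute from the training features doesn’t necessarily cure matters if a correlated proxy remains:

```python
# Illustrative only: a proxy feature can carry the stop/no-stop bias into the
# learned model even when the appearance attribute itself is excluded.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 20_000

appearance = rng.integers(0, 2, n)                 # protected attribute in the camera data
district = (appearance + (rng.random(n) < 0.2)) % 2   # proxy: part of town, correlated with appearance
gap_seconds = rng.normal(6, 2, n)                  # time gap to oncoming traffic

# Simulated historical human drivers: they stop more readily for one group.
p_stop = 0.2 + 0.08 * gap_seconds - 0.25 * appearance
stopped = (rng.random(n) < np.clip(p_stop, 0, 1)).astype(int)

# Train WITHOUT the appearance column, using only the "neutral-looking" features.
X = np.column_stack([district, gap_seconds])
model = DecisionTreeClassifier(max_depth=4).fit(X, stopped)

# The learned stopping behavior still differs by appearance, via the proxy.
preds = model.predict(X)
for a in (0, 1):
    print(f"appearance {a}: model stop rate {preds[appearance == a].mean():.2%}")
```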
This example illustrates that an AI system might merely duplicate the already preexisting untoward biases of humans. Furthermore, it does so at scale. Individual human drivers might have sometimes been taught to do this untoward form of selection or maybe personally chosen to do so, but the chances are that the bulk of the human drivers are probably not doing this en masse.
In strident contrast, the AI driving system that is being used to drive self-driving cars is likely to carry out the derived bias consistently and assuredly, abhorrently so.
Conclusion
There is a multitude of ways to try and avoid devising AI that has untoward biases or that over time gleans biases. As much as possible, the idea is to catch the problems before you go into high gear and ramp up for scaling. Hopefully, biases don’t get out the door, so to speak.
Assume though that one way or another biases will arise in the AI. Once the AI is deployed at massive scale, you can’t just lean on one of those oft-proclaimed techie “fire and forget” notions. You have to diligently keep on top of what the AI is doing and seek to detect any untoward biases that need to be corrected.
As earlier pointed out, one approach involves ensuring that AI developers are aware of AI Ethics and thus spur them to be on their toes to program the AI to avert these matters. Another avenue consists of having the AI self-monitor itself for unethical behaviors and/or having another piece of AI that monitors other AI systems for potentially unethical behaviors. I’ve covered numerous other potential solutions in my writings.
A final thought for now. Having started this discussion with a quote by Plato, it might be fitting to close the discourse with yet another shrewd utterance by Plato.
Plato stated that there is no harm in repeating a good thing.
The ease of going at scale with AI is certainly a viable means of attaining such an upbeat aspiration when the AI is of the AI For Good variety. We savor repeating a good thing. When AI is the AI For Bad and replete with untoward biases and inequities, we might lean on Plato’s remarks and say that there is abundant harm in repeating a bad thing.
Let’s listen carefully to Plato’s wise words and devise our AI accordingly.