There seems to be quite a rush to get AI systems out the door these days.
Unfortunately, there is a devilish price to be paid in such a rush to judgment and in the associated misguided assumption that the soonest-delivered AI is the best kind of AI.
For example, some have put together contact tracing apps that have AI capabilities to bolster the intended functionality of diminishing or preventing the spread of the infectious virus, but distressingly those same apps oftentimes are privacy intrusion nightmares (see my coverage at this link here).
Had there been concentrated attention toward incorporating AI Ethics precepts, perhaps those scurrying to push out a beneficial AI application would have been more likely to consider the downsides of their creation too.
For clarification, AI Ethics is a set of ethics-oriented guidelines or principles that offer guidance when designing, building, testing, fielding, and maintaining an AI system (for examples, see my analysis of the set by the Vatican and another similar one by the U.S. DoD at this link here). There is currently a multitude of such proposed AI Ethics sets, and no single one has become the globally accepted standard, though they all pretty much adhere to a similar theme and core values.
In that sense, those that use the lack of an anointed, universal standard as an excuse are disingenuous at best; by readily embracing nearly any of the bona fide sets, they would be doing well and could proceed unabated.
Okay, so let’s hope that entities developing AI will see the light and opt to adopt a viable AI Ethics code set. Doing so is intended to make them aware of the qualms about AI such as the potential for privacy intrusions, the potential for inherent hidden biases in AI-based decision making, and so on.
One assumes that such awareness will spark AI development efforts that foundationally detect those kinds of problematic and endangering issues beforehand, rather than after shipping the AI into the hands of the public.
Letting the horse out of the barn prematurely is a bad idea and the goal ought to be making sure first and foremost that the horse is ready to be let loose.
But what happens to AI Ethics when it is crunch time and a kind of panic occurs to roll out AI?
While in the throes of a crisis, it can seem daunting to keep AI Ethics at the forefront, since the usual perception is that worrying about those AI Ethics codifications is bound to stymie the pace of development and undercut the urgent tempo of getting AI into the hands of end-users.
That is a decidedly short-sighted viewpoint, and though being quick to market might seem like a worthwhile trade-off, the result can reverberate for a long time to come. The public will not especially remember that there was a presumed urgency; they will only tend to know that the AI foisted onto the world was insidious and ultimately nefarious.
If enough of those kinds of haphazard make-and-bake AI apps are tossed into the market, we will inevitably witness a colossal backlash, one that will turn the so-far optional AI Ethics principles and guidelines into mandatory and potentially overly onerous requirements, as a reaction to those that scrimped at the beginning.
Maybe AI will get banned entirely.
Somehow, a balance needs to be found that appropriately makes use of the AI Ethics precepts and yet allows for flexibility when there is a real and fully tangible basis to partially cut corners, as it were.
Of course, some would likely abuse the possibility of a slimmer version and always go that route, regardless of any truly needed urgency of timing. Thus, there is a chance of opening a Pandora’s box whereby a less-than-full AI Ethics protocol becomes the default norm, rather than serving as a break-glass exception for those rare occasions when it is genuinely needed.
It can be hard to put the Genie back into the bottle.
In any case, there are already some attempts at trying to craft a fast-track variant of AI Ethics principles.
We can perhaps temper those that leverage the urgent version with both a stick and a carrot.
The carrot is obvious: they are seemingly able to get their AI completed sooner, while the stick is that they will be held wholly accountable for not having gone the whole nine yards on the use of the AI Ethics. This is a crucial point that might be used against those taking such a route, serving as a means to extract penalties via a court of law, along with penalties in the court of public opinion.
Let’s next take a look at a well-known set of AI Ethics, promulgated by the OECD, and see if we can shape those into offering a fast-track variant.
The OECD has proffered these five foundational precepts as part of undertaking AI efforts (the source document is at this link here):
1) AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being.
2) AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
3) There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
4) AI systems must function in a robust, secure, and safe way throughout their life cycles and potential risks should be continually assessed and managed.
5) Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.
For ease of reference, this is the official shorthand list of the five AI Ethics principles:
1) Inclusive growth, sustainable development, and well-being
2) Human-centered values and fairness
3) Transparency and explainability
4) Robustness, security, and safety
5) Accountability
What can be done to fast-track this set of AI Ethics?
Some might leap to the easy way out by simply lopping one or more from the list.
In other words, just cut the stack down to four, or maybe three, or heaven forbid even just one or two of the precepts.
It isn’t apparent which of the five would be “best” to ditch.
You could liken this to having to cut off a limb and arguing about which appendage is the less vital one.
No, I don’t think this will do.
Each of the five has a distinct purpose. None of the remaining four can make up for an omitted precept. Knocking out any of them is akin to removing the leg of a stool, whereupon the stool no longer properly stands.
For those that have suggested chopping out one or more of the five, the idea seems to defy any reasonable logic; it amounts to arbitrarily weakening and undermining the concrete foundation upon which any AI has to be properly constructed.
Case rejected.
That might seem stubborn and unyielding, unless there are some other means to turn these precepts into a slimmer variant.
Consider these possibilities:
· Prioritize the five and weight your effort accordingly – they are all essential, but you could argue that in a pinch they are not all equal; thus, for a given AI system, assign weights to each of the five and then structure your AI development effort according to those with the greater importance or weighting (see the sketch after this list).
· Take the shortened version of one or more of the precepts – for each of the five there are many subordinate items normally to be considered; in a bona fide crunch you could shorten some of those underlying elements and knowingly skip a few, keeping track as you do so and recognizing what you are missing accordingly.
· Double up on the precepts – the principles do not necessarily need to be considered one at a time; there are synergies to be had by considering more than one simultaneously while undertaking your AI effort, and thus doubling up is bound to reduce the time and effort, though be careful that it does not become a morass and a gigantic blur.
· Borrow from a similar AI app – many of those embarking upon developing AI are doing so for the first time and have little or no prior base to build upon, yet those actively in AI with other AI systems under their belt can readily reuse prior aspects that already passed the AI Ethics precepts (just be mindful and do not blindly borrow something unlike what you are building now).
· Parcel out the precepts wisely – in some instances an AI effort places the rigors of the AI Ethics precepts onto the shoulders of one person who becomes the AI Ethics guru; this might be handy, but unfortunately it can also become a bottleneck that slows down the entire effort, so reconsider whether there is a single weak link in the chain and how to better parcel out the workload.
· Cautiously crowdsource the precepts – some AI efforts relish using crowdsourcing in an open-source manner, either to perform the AI development or to scrutinize and test it; in that case you could have the AI Ethics work likewise crowdsourced, just as long as the crowd knows what it is doing and this is not an unbridled castoff that undermines the precepts.
· Combine these fast-tracks – the aforementioned fast-tracks can be combined in various sensible ways to produce even larger shortcuts, but do not cut to the bone and end up losing the core by taking too many slim-downs at once.
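To make the first two of those fast-tracks concrete, here is a minimal Python sketch, purely illustrative and using names of my own invention (such as plan_fast_track), that weights the five OECD precepts for a given AI system and logs any knowingly skipped sub-items so that nothing falls away silently:

```python
from dataclasses import dataclass, field

# The five OECD AI Ethics precepts in their shorthand form, as listed above.
PRECEPTS = [
    "Inclusive growth, sustainable development, and well-being",
    "Human-centered values and fairness",
    "Transparency and explainability",
    "Robustness, security, and safety",
    "Accountability",
]

@dataclass
class PreceptPlan:
    """Tracks one precept's weight and any knowingly skipped sub-items."""
    name: str
    weight: float                                 # relative emphasis for this AI system
    skipped: list = field(default_factory=list)   # sub-items deferred during the crunch

def plan_fast_track(weights):
    """Build a weighted plan; all five precepts stay in -- none are lopped off."""
    if set(weights) != set(PRECEPTS):
        raise ValueError("All five precepts must be weighted; none may be dropped.")
    plans = [PreceptPlan(name, weights[name]) for name in PRECEPTS]
    # Work the heaviest-weighted precepts first.
    return sorted(plans, key=lambda p: p.weight, reverse=True)

# Example: an urgent contact tracing app might weight fairness and safety highest.
plan = plan_fast_track({
    "Inclusive growth, sustainable development, and well-being": 0.15,
    "Human-centered values and fairness": 0.25,
    "Transparency and explainability": 0.20,
    "Robustness, security, and safety": 0.25,
    "Accountability": 0.15,
})
plan[0].skipped.append("full third-party bias audit (deferred, logged)")
for p in plan:
    print(f"{p.weight:.2f}  {p.name}  skipped or deferred: {p.skipped}")
```

Note the deliberate design choice in this sketch: the function refuses to accept fewer than all five precepts, echoing the earlier point that lopping one off the list is a non-starter; fast-tracking adjusts emphasis, not coverage.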
As mentioned, you can use one or more of the fast-track facets at the same time.
How will you know that you have gone too far?
Here are the key supplemental safeguards when taking a fast-track approach with the AI Ethics principles:
· Establish internal checks-and-balances – make sure to assign the responsibilities for the AI Ethics precepts and include numerous checks-and-balances to gauge how they are proceeding.
· Potentially use an external third-party auditor – in addition to internal checks-and-balances, it can be helpful to use an external auditor who presumably does not feel attached to what has been done and therefore will independently and forthrightly conduct their review.
· Explicitly plan to include the AI Ethics considerations – sometimes the AI Ethics considerations are belatedly shoved into an AI effort as though a second-class citizen, worthy of neither resources nor priority; rate them instead as primary for all AI implementations and include them dutifully in all of your AI planning accordingly.
· Boost the fail-safe catchall of the AI – properly devised AI should already have fail-safe catchalls so that if it goes awry in the field the AI will presumably realize troubles are afoot and do something prudent as a contingency; this is an even more crucial component when taking the fast-track (a minimal sketch follows this list).
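To illustrate that last supplemental, here is a minimal, hypothetical Python sketch of one kind of fail-safe catchall: a wrapper that assumes the underlying model reports a confidence score and that falls back to a prudent contingency when the model errs or is unsure. The names (failsafe_guard, hand_off_to_human) and the confidence-threshold approach are my own assumptions for illustration, not a prescribed or industry-standard design:

```python
import logging

logger = logging.getLogger("ai_failsafe")
logging.basicConfig(level=logging.INFO)

def failsafe_guard(decide, fallback, min_confidence=0.9):
    """Wrap an AI decision function with a fail-safe catchall.

    If the wrapped call raises an error or reports low confidence, a
    prudent contingency action is taken instead of blindly proceeding.
    """
    def guarded(*args, **kwargs):
        try:
            action, confidence = decide(*args, **kwargs)
        except Exception as exc:
            logger.error("AI decision failed (%s); using fallback", exc)
            return fallback(*args, **kwargs)
        if confidence < min_confidence:
            logger.warning("Confidence %.2f below %.2f; using fallback",
                           confidence, min_confidence)
            return fallback(*args, **kwargs)
        return action
    return guarded

# Hypothetical usage: the decide function returns an (action, confidence) pair.
safe_decide = failsafe_guard(
    decide=lambda obs: ("proceed", 0.62),        # stand-in for a real model
    fallback=lambda obs: "hand_off_to_human",    # prudent contingency action
)
print(safe_decide({"sensor": "reading"}))        # prints: hand_off_to_human
```

When fast-tracking, the threshold and the fallback deserve extra scrutiny, since they become the last line of defense for whatever corners are being cut elsewhere.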
All told, I’ve provided a workable semblance of how to fast-track an AI effort as it pertains to using AI Ethics guidelines during the project.
Again, do not interpret this as a license to wildly proceed and merely try to skate through the AI Ethics precepts.
Your AI is going to be better if it undergoes a robust and complete AI Ethics effort; likewise, those using your AI will benefit, and the overall adoption of AI will be kept enlivened.
Self-Driving Cars and AI Ethics Principles
You might be wondering how the AI Ethics principles come to play in real-world AI applications.
A handy exemplar would be the development of AI-based true self-driving cars.
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out, see my indication at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
Many of the automakers and self-driving tech firms have been adopting the AI Ethics precepts (this varies from firm-to-firm).
One can argue that all of them ought to be doing so, which is worthwhile to mention herein as an impetus to get some of them off the dime and in gear with including AI Ethics facets.
Please do so.
Let’s next consider an example of how AI Ethics applies to self-driving cars.
Besides the bevy of sensors facing outward to detect the roadway, self-driving cars are likely to have a camera or two that are facing inward too.
Why so?
It is assumed that self-driving cars will be mainly used in fleets for purposes of ridesharing. The fleet owners will seemingly be large firms like automakers, car rental agencies, ridesharing firms, and the like (though, I am somewhat of a contrarian and argue that individuals will also own self-driving cars).
With those self-driving cars roaming around and taking on passengers, there is quite obviously the chance that a rowdy rider will decide to scrawl graffiti on the interior or maybe rip up the seats. By having a camera that points inward, the actions of those unruly people can be tracked. The AI might be trained to look for such behavior and immediately warn or scold the person, potentially stopping the miscreant from doing any further damage.
The inward-facing cameras are readily recording whatever happens inside the vehicle.
Seems innocent and sensible.
Consider another perspective.
Suppose you went to the bar after work and opted to take a self-driving car home, doing so rightfully rather than driving yourself in your conventional car while drunk. During the journey home, you blabber about your work, your personal life, and all sorts of confidential commentary that is spilling out of you while intoxicated.
It is all captured on video.
Who owns that video?
What will they do with it?
Currently, this is an open question and each self-driving car owner is presumably going to decide whatever they believe is appropriate to do about those videos.
Meanwhile, you personally have inadvertently created a personal privacy exposure for yourself and are potentially left in the lurch.
How does this relate to AI Ethics precepts?
Take a look again at the OECD five.
You might hopefully realize that at least one of the AI Ethics principles applies, and indeed it could readily be argued that all five apply to this scenario.
Some of the automakers and self-driving tech firms would currently say that the inward-facing camera is merely a piece of technology and they would absolve themselves of any duty related to how it might be used.
Likewise, if they have an AI monitoring system that can examine the video in real-time or after-the-fact to catch those that are doing something untoward, the makers of that AI would almost surely claim that however it might be used is not their concern.
If those organizations were fully embracing the AI Ethics guidelines, they would instead be having serious and contemplative discussions about the societal implications of that feature. They would also aim to figure out how to best put the feature into use, including for those that buy their self-driving cars and opt to use them as ridesharing vehicles.
Without the nudge of the AI Ethics precepts, it can be quite easy to overlook important and societally impactful facets such as this and meanwhile naively profess a type of head-in-the-sand rationale for not dealing with thorny issues.
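To ground that, here is a hypothetical Python sketch, emphatically not any automaker’s actual design, of how a privacy-minded in-cabin camera policy might encode the OECD precepts: on-device processing and no audio capture speak to human-centered values, rider disclosure to transparency, and short, event-only retention to accountability:

```python
from dataclasses import dataclass

@dataclass
class CabinCameraPolicy:
    """Hypothetical settings a firm embracing the AI Ethics precepts might
    adopt for an inward-facing camera (illustrative only)."""
    process_on_device: bool = True   # never stream raw video off the vehicle
    record_audio: bool = False       # a tipsy rider's blabbering is not captured
    retain_raw_video: bool = False   # keep only clips tied to flagged events
    retention_days: int = 7          # and even those expire quickly
    rider_notice: bool = True        # transparency: disclose recording to riders

def handle_frame(frame, detector, policy):
    """Retain a clip only when the detector flags damage-like behavior."""
    event = detector(frame)          # e.g., returns "vandalism" or None
    if event is not None:
        return {"event": event, "clip": frame,
                "expires_in_days": policy.retention_days}
    return None                      # unflagged frames are discarded, not stored

# An uneventful ride home from the bar leaves no recording behind.
policy = CabinCameraPolicy()
print(handle_frame("frame-bytes", detector=lambda f: None, policy=policy))  # None
```

The point is not this particular configuration; it is that a firm seriously weighing the precepts would be making these choices deliberately, rather than shrugging that the camera is merely a piece of technology.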
Conclusion
AI Ethics is only now becoming prominent, and we will need to wait and see how those involved in making and fielding AI opt to adopt such guidelines.
The excuse that incorporating AI Ethics is too much effort or will needlessly delay getting AI onto the streets is false and, frankly, hogwash.
You can potentially fast-track the AI Ethics facets when (and only when) the AI might be instrumental to the benefit of humanity and the timing genuinely presupposes an urgency; but that is not the norm, and those doing a wink-wink to undercut the AI Ethics elements are doing a disservice to themselves and the rest of us.
Maybe there ought to be an inward-facing camera watching those making AI and we could catch those kinds of scoundrels in the act.