You might not realize that you are already sitting on a million dollars and merely need to embark on a relatively modest effort to turn the hidden treasure trove into an in-your-hands pile of cash.
And at the same time be benefiting humanity.
Well, the second part, benefiting humanity, is a core requirement for getting the money and will likely be followed immediately by fame and acclaim, if you like that kind of thing.
How can you snag the dough?
There is a contest underway that promises a prize of $1,000,000 to someone or some entity that has managed to innovatively perform an outstandingly good deed with AI that demonstrably benefits humanity.
It is legit and on the up-and-up.
The cool million dollars will be awarded at the next instance of the annual conference by the esteemed Association for the Advancement of Artificial Intelligence (AAAI).
As a long-time AI expert who has served as both a scholarly researcher and an AI practitioner, I’ve been an active participant in the AAAI for decades and can attest to the seriousness and devotion that the AAAI has toward the advancement of AI (the AAAI was originally founded in 1979).
To make things fully apparent, I’ve been a speaker at their conferences and symposia, along with having served on various committees over the years, and thus I stalwartly believe in this non-profit scientific society and its stated mission, namely “advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.”
I recently mentioned to some of my fellow colleagues both inside of AI and at the outskirts of the AI realm that there is a million-dollar contest underway; lamentably, many had not heard or seen any news headlines or media coverage about the matter. That is a darned shame since there are many slaving away on AI systems that might fit the criteria of the competition yet are unaware of the within-their-grasp beefy reward accruing to their late-night, coffee-laden humanitarian toils.
Even if you aren’t sitting on an AI system that might qualify, the event nonetheless might be of interest to you if you are wondering what kinds of AI systems are being built and fielded, especially for those of you focusing on AI For Good, which is a rising locution for the crafting of AI that benefits the world in some manner or another.
In case you are wondering if there is a counterpart, such as AI For Bad, yes, regrettably there is such a thing.
There are lots of unsavory types out there in the devilish zone that are creating AI systems to crack into our everyday computers, steal our info, or ruin our privacy. There are also gaggles of evildoers that want to use AI to take down the electrical grid, mess up our traffic lights, or otherwise aid in creating chaos, starting wars, and undermining society.
AI is most certainly a two-sided coin.
Let’s hope the AI For Good is able to outweigh and overpower the AI For Bad.
In that vein, it’s heartwarming that some are choosing to be on the side of good, and they are putting together AI systems that will make life better and improve our living conditions.
Besides being heartwarming, it would be nice to add some sweetener for those pursuing that line of work, and perhaps the million-dollar competition provides that icing on the cake.
For AI-related startups, the million dollars might be more than the icing, and instead might be the cake, meaning that with the prize money they could afford to keep going ahead on their AI For Good, whatever it might be, and use the funds to further their compassionate goals.
Those seeking to apply must do so by May 24, 2020, by filling out an online nomination form (see this link here); the prize will be awarded at the AAAI conference slated for February 2021.
Here are some important housekeeping details:
· This AI for the Benefit of Humanity contest is intended to recognize “positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects.”
· This is the first time the award is being given.
· The prize is being administered by the AAAI, along with support from the European Artificial Intelligence Association (EurAI), the Chinese Association for Artificial Intelligence (CAAI), and via financial support provided by Squirrel AI.
· Applicants can be individuals, groups, or organizations, as long as the applicant(s) were the main contributors toward the aspects described in the nomination submission form.
· There are various conflict-of-interest rules that need to be observed and could potentially limit or preclude some applicants.
· If you submit this year and don’t win, you may resubmit the following year, and continue to do so annually, but only for three consecutive years.
· And so on (make sure to read the instructions carefully).
You might be generally wondering what constitutes an AI system that benefits humanity.
There is a lot of leeway within that overall notion.
Per the award instructions, here is what the official description of intent consists of:
· Implementations of artificial intelligence techniques that improve how critical resources or infrastructure are managed
· Applications of artificial intelligence to support disadvantaged or marginalized populations
· Learning tools that significantly improve access and quality of education
· Intelligent systems that improve the quality of life for their users
Consider an example of an AI system that might fit into that rubric.
One firm has decided to submit its AI-based self-driving car project as an indication of using AI to support those of a disadvantaged or marginalized population (the second bullet point in the above list).
You might be wondering how a self-driving car could relate to a humanitarian purpose.
The aim of their AI-driven self-driving cars is to allow those that are today mobility disadvantaged to gain access to mobility, via appropriately designed self-driving cars, a topic of rising awareness and importance (see my coverage at this link here of the annual Princeton summit that examines such designs and uses of self-driving cars).
Overall, some assert that if we are able to produce safe and reliable AI-based self-driving cars, there will be a transformative impact on society, and we will reach a vaunted mobility-for-all achievement.
By the way, yes, I realize that I’ve let the cat out of the bag as to their submission, which might seem somewhat untoward on my part, but I asked them beforehand whether it was okay to mention their intention, and they said they welcomed my doing so (without naming them per se) and that perhaps the mere generic mention of their AI For Good effort might inspire others accordingly.
There is an award committee that will ultimately be deciding upon the winner for the competition.
One does not envy the difficulty they will face, since there are likely to be lots of valid submissions, each with its own heart-tugging and bona fide use of AI For Good. That being said, the upside is indeed the chance to discover the variety and vibrancy of AI benefiting humanity being worked on worldwide, and to become bedazzled and elated knowing that so many such efforts are underway.
The official award committee, as indicated and described in the posting about the contest, consists of (listed in alphabetical order by last name):
· Yoshua Bengio is a professor in the Department of Computer Science and Operations Research at the Universite de Montreal and holds the Canada Research Chair in Statistical Learning Algorithms.
· Tara Chklovski is CEO and founder of global tech education nonprofit Technovation (formerly Iridescent).
· Edward A. Feigenbaum is Kumagai Professor of Computer Science Emeritus at Stanford University.
· Yolanda Gil (Award Committee Chair) is Director of Knowledge Technologies at the Information Sciences Institute of the University of Southern California, and Research Professor in Computer Science and in Spatial Sciences.
· Xue Lan is Cheung Kong Chair Distinguished Professor and Dean of Schwarzman College, and Dean Emeritus, School of Public Policy and Management at Tsinghua University.
· Robin Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M and directs the Center for Robot-Assisted Search and Rescue.
· Barry O’Sullivan holds the Chair in Constraint Programming at University College Cork in Ireland.
Coming Up With AI For Humanity Ideas
Let’s shift gears and move on to another topic, albeit a related matter that underlies the overarching theme of AI For Good.
If you are an AI developer or perhaps an investor in AI systems, you might be thinking about trying to aim toward undertaking an AI project that would be considered an AI system for the benefit of humanity, and yet not have any immediate ideas of what such an endeavor might be focused on.
Sometimes, one of the hardest parts of pursuing an AI system is the identification of what the AI will be intended to accomplish.
This might seem surprising to those that aren’t into AI, but keep in mind that oftentimes there are AI specialists that are akin to the classic line about having a hammer and wanting to use it on everything you see. In other words, you might know how to craft an AI system, yet not be especially sure of where and what to focus on, meanwhile poised to apply AI to something that hopefully has merit and gumption.
In mentoring those that have chosen to become AI-versed, I like to urge those bravely bent on AI For Good to consider the nature of the world’s pressing problems. It seems likely beneficial to try to solve a global issue via AI.
Of course, one AI system alone is not going to suddenly and miraculously “solve” an entire planetary difficulty. Let’s not kid ourselves and over-inflate what might be done via AI. Nonetheless, it would be handy to start chipping away at the corners and edges of worldwide issues, hoping that AI will become a means to gradually and inexorably reduce or mitigate those problems.
We can hope so.
One handy source of worldwide global risks is provided by an annual survey conducted by the World Economic Forum (WEF).
Here’s an abbreviated listing from the WEF’s Global Risks Report 2020:
· Economic
o Asset bubbles
o Deflation
o Failure of major financial mechanisms
o Failure of critical infrastructure
o Fiscal crises
o High structural unemployment
o Illicit trade
o Severe energy price shock
o Unmanageable inflation
· Environmental
o Extreme weather
o Failure of climate-change mitigation
o Major biodiversity loss
o Major natural disaster
o Human-made environmental damage
· Geopolitical
o Failure of national governance
o Failure of global governance
o Interstate conflict
o Large-scale terrorist attacks
o State collapse
o Weapons of mass destruction
· Societal
o Failure of urban planning
o Food crises
o Large-scale involuntary migration
o Profound social instability
o The rapid spread of infectious disease
o Water crises
· Technological
o Adverse consequences of technological advances
o Breakdown of critical infrastructure
o Large-scale cyberattacks
o A massive incident of data fraud or theft
As you can see, the list is rather daunting.
Per the WEF, each of those aspects represents an uncertain event or condition that, if it occurs, could cause a significant and severe negative impact within and among countries over the next 10 years.
You might have keenly noticed that one of the listed items is the rapid spread of infectious diseases, notably appearing on the list as published prior to the present-day pandemic.
Here’s how to make use of the list.
Ask yourself these questions:
· Is there an item on the list that resonates as a particular focus or interest to you?
· Could AI be devised to reduce the chances of that item occurring?
· Could AI be devised to mitigate the impacts if the item does arise?
· What would the AI do and is it feasible for AI to perform such tasks?
· How large an effort would be required to craft AI to do so?
· If such AI existed, who would want to use it and how would they do so?
· Could the AI be combined with other AI systems tackling the same item?
· Could the AI be intermixed with AI that tackles akin items on the list?
· Are there barriers to devising and fielding such AI?
· Is the envisioned AI reasonably feasible or a pipe-dream?
Those are a lot of hard-hitting questions, but it makes sense to give them due consideration.
There is no point in embarking down a path that will be a dead-end or that might divert your attention from some other AI project with a greater chance of reaching fruition.
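For those who like to tinker, here is a minimal, hypothetical sketch (in Python) of one way to triage candidate project ideas against the risk areas and screening questions above. The risk categories and questions are drawn from this article; the 0-to-3 scoring scale, names, and example idea are purely my own illustrative assumptions and are not part of the contest criteria.

```python
# A toy triage of AI For Good project ideas against the WEF risk areas and the
# screening questions above. Illustrative only; the scoring scheme is assumed.

from dataclasses import dataclass, field

# Abbreviated WEF Global Risks 2020 categories (from the list above).
WEF_RISK_AREAS = {
    "economic": ["asset bubbles", "deflation", "fiscal crises"],
    "environmental": ["extreme weather", "major biodiversity loss"],
    "geopolitical": ["interstate conflict", "state collapse"],
    "societal": ["food crises", "water crises", "infectious disease"],
    "technological": ["large-scale cyberattacks", "data fraud or theft"],
}

# The screening questions, each answered on a rough 0 (no) to 3 (strong yes) scale.
SCREENING_QUESTIONS = [
    "Does the risk item resonate as a focus or interest for you?",
    "Could AI reduce the chances of the item occurring?",
    "Could AI mitigate the impacts if the item does arise?",
    "Is it feasible for AI to perform the needed tasks?",
    "Is the required effort within your reach?",
    "Is there a clear user who would adopt the AI?",
    "Could it combine with other AI tackling the same or akin items?",
    "Are the barriers to fielding it surmountable?",
]

@dataclass
class ProjectIdea:
    name: str
    risk_area: str                                 # one of WEF_RISK_AREAS keys
    answers: list = field(default_factory=list)    # one score per question

    def score(self) -> float:
        """Average the 0-3 answers; higher means a more promising idea."""
        if len(self.answers) != len(SCREENING_QUESTIONS):
            raise ValueError("Answer every screening question before scoring.")
        return sum(self.answers) / len(self.answers)

if __name__ == "__main__":
    idea = ProjectIdea(
        name="Flood-forecasting aid for disaster relief planners",
        risk_area="environmental",
        answers=[3, 2, 3, 2, 1, 2, 2, 2],
    )
    print(f"{idea.name}: {idea.score():.2f} / 3.00")
```

Nothing about such a spreadsheet-style triage is required, of course; it merely keeps the soul-searching honest by forcing an answer to every question before you fall in love with an idea.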
AI Aimed At AI
Let’s next take a macro-view of the matter.
There is AI that you might craft for a particular purpose, such as the aforementioned global risks that could be possibly mitigated via AI.
There is also the AI that can help AI that is seeking to help the world.
Say what?
Well, if you take a big picture perspective, one interesting angle involves trying to make sure that AI is made and deployed in an AI For Good manner, and not for an AI For Bad fashion.
Thus, you could use AI toward that overarching aim, steering AI efforts that might otherwise be headed off-road and into the never-never land of malevolent pursuits.
As readers know, I’ve been covering societal and AI ethics topics for quite a while (see the links here). Besides asking people to be mindful of their AI, there is also the added bolstering of using AI to guide those crafting AI and their resultant AI systems (this almost seems recursive, for those of you that relish software development and programming).
Some have been calling for a kind of AI International Treaty, governing the direction and future of AI and its implementations.
One such discussion, by Oren Etzioni, CEO of the Allen Institute for AI (AI2) and a professor at the University of Washington, along with Nicole DeCario, his senior assistant at the famed AI2, offered these commonly cited principles that tend to be bandied around on this topic:
· Uphold human rights and values
· Promote collaboration
· Ensure fairness
· Provide transparency
· Establish accountability
· Limit harmful uses of AI
· Ensure safety
· Acknowledge legal and policy implications
· Reflect diversity and inclusion
· Respect privacy
· Avoid concentrations of power
· Contemplate implications for employment
Whatever such a list might ultimately contain, the point here is that there is an opportunity for those that know AI to use AI for the sake of aiding the future of AI and its societal implications (including the DoD-related AI ethics considerations that I analyze at this link here).
If you are an AI developer or investor who doesn’t know much about the global risk items and is unsure of how you could aid in, say, mitigating climate risks or financial instability, you might nonetheless be sufficiently versed in AI to look inward at the AI field itself.
In short, could you devise AI that would help other AI stay within the guardrails of the someday-to-be-formed principles of AI?
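To make the notion a bit more concrete, here is a deliberately simplistic, hypothetical sketch of “AI aimed at AI”: a toy checker that flags which of the commonly cited principles a project write-up has not yet addressed. The principle names come from the list above; the keyword heuristics, function names, and sample text are my own assumptions, and any real AI-governance tooling would need to be vastly more sophisticated (think natural language understanding, audits, and formal verification rather than keyword matching).

```python
# A toy "principles gap-finder": given a project write-up, report which of the
# commonly cited AI principles it mentions. Keyword lists are assumed and
# intentionally crude; this is an illustration of the idea, not real tooling.

PRINCIPLE_KEYWORDS = {
    "Uphold human rights and values": ["human rights", "values"],
    "Ensure fairness": ["fairness", "bias"],
    "Provide transparency": ["transparency", "explainab"],
    "Establish accountability": ["accountab", "audit"],
    "Ensure safety": ["safety", "fail-safe"],
    "Respect privacy": ["privacy", "consent", "data protection"],
    "Limit harmful uses of AI": ["misuse", "dual-use", "harm"],
}

def review_writeup(writeup: str) -> dict:
    """Return each principle mapped to True if the write-up mentions it."""
    text = writeup.lower()
    return {
        principle: any(keyword in text for keyword in keywords)
        for principle, keywords in PRINCIPLE_KEYWORDS.items()
    }

if __name__ == "__main__":
    sample = (
        "Our system forecasts water shortages. We audit model decisions, "
        "publish explainability reports, and apply strict data protection."
    )
    for principle, covered in review_writeup(sample).items():
        print(f"{'OK ' if covered else 'GAP'} {principle}")
```

Even a crude gap-finder like this hints at the recursive opportunity: software that nudges AI builders toward the guardrails before their systems wander off-road.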
This far-reaching notion can be characterized via the Upstream Parable (see my analysis at this link here): rather than waiting until the horse is already out of the barn, you can potentially do as much good by keeping the horse in the barn, or, once the horse gets out, by guiding where it goes; otherwise the situation becomes a monster of a problem due to the lack of upfront steps that should have been undertaken to begin with.
AI, as they say, might be used to heal itself.
Or, bring itself to heel when veering over into the AI For Bad encampment.
Conclusion
It can be hard to be altruistic and seek to devise AI that is for the benefit of humanity. Sure, there is pride to be had and it offers a means to make the world a better place.
Meanwhile, you’ve got to have food on your plate and sufficient sustenance to devote your energies and efforts toward that altruistic AI goal.
Why not win a million dollars?
And, in terms of whether to submit your own nomination, it’s like buying a lottery ticket in that if you don’t play, you don’t have a chance of winning.
Best of luck, and I’ll be reporting on the winner, perhaps contributing toward the fame and acclaim that rightfully accrues to those seeking to make AI for the benefit of humanity.
You are all a quite treasured lot.