- The Vatican called for stronger regulation of the use of artificial intelligence in a plan announced on Friday, Reuters first reported.
- The document also said AI tools should work fairly, transparently, reliably, and with respect for human life and the environment.
- Microsoft and IBM joined Pope Francis in endorsing the document, according to Reuters.
- This isn’t the first time Francis has weighed in on the moral and ethical issues that come with new technologies.
Pope Francis wants to see facial recognition, artificial intelligence, and other powerful new technologies follow a doctrine of ethical and moral principles.
In a joint document made public on Friday, the pope, along with IBM and Microsoft, laid out principles for the emerging technologies and called for new regulations, Reuters first reported.
The Vatican’s “Rome Call for AI Ethics” said that AI tools should be built “with a focus not on technology, but rather for the good of humanity and of the environment” and consider the “needs of those who are most vulnerable.”
The “algor-ethics” outlined in the document included transparency, inclusion, responsibility, impartiality, reliability, security, and privacy, alluding to debates that have emerged around topics like algorithmic bias and data privacy.
Along those lines, it called for new regulations around “advanced technologies that have a higher risk of impacting human rights, such as facial recognition.” Facial-recognition technology in particular has sparked concerns in recent years, thanks to research showing its problems with racial bias and the lack of transparency from companies that develop it.
The document, which was endorsed by Microsoft and IBM, is not the first time Francis has weighed in on ethical issues surrounding technology. At a Vatican conference in September, the pontiff warned that technological progress, if not kept in check, could lead society to “an unfortunate regression to a form of barbarism.”
Others, both within and outside the tech community, have rolled out plans to address the side effects of AI. In January, the Trump administration unveiled a binding set of rules that federal agencies must follow when designing AI policies, while the European Union announced its own nonbinding principles in April.
Prominent figures and organizations in the tech industry have also called for AI regulation, including Tesla CEO Elon Musk and Alphabet CEO Sundar Pichai, as well as AI ethics groups like AI Now and OpenAI.