AI weapons pose threat to humanity, warns top scientist


The computer scientist Stuart Russell met with officials from the UK’s Ministry of Defence in October to deliver a stark warning: building artificial intelligence into weapons could wipe out humanity.

But the pioneering artificial intelligence researcher, who has spent the past decade trying to ban AI from being used to locate and kill human targets, was not able to extract any promises from those present at the meeting.

This week, Russell, a British professor at the University of California, Berkeley, who co-wrote one of the seminal textbooks on AI more than 25 years ago, will use BBC radio’s annual Reith Lectures to press his case further.

His calls for an international moratorium on lethal autonomous weapons have been echoed across the academic community. Last week, more than 400 German AI researchers published an open letter to the German government asking it to stop the development of these systems by its armed forces.

“The killing of humans should never be automated based on algorithmic formulas,” the letter said. “Such dehumanisation of life and death decision-making by autonomous weapons systems must be outlawed worldwide.”

Russell, who regularly meets governments internationally, said the US and Russia, together with the UK, Israel and Australia, were still against a ban.

“There is still a communication failure; a lot of governments and militaries are not understanding what the objection is,” Russell said in an interview with the Financial Times. “Put very simply, we don’t sell nuclear weapons in Tesco, and with these weapons it will be exactly like that.”

Lethal AI weapons, he said, were “small, cheap, and easily manufactured”. With no checks, they could soon be as ubiquitous as automatic rifles, more than 100m of which are in private hands.

In the second of his four Reith lectures on “Living with Artificial Intelligence”, to be broadcast on BBC radio from Wednesday, Russell warned that AI weapons were no longer science fiction, but were developing apace, completely unregulated. “You can buy them today. They are advertised on the web,” he said.

In November 2017, the Turkish arms manufacturer STM announced the Kargu, a fully autonomous killer drone the size of a rugby ball that could perform targeted hits based on image and facial recognition. The drone was used in the Libyan conflict in 2020 to home in selectively on targets, despite an arms embargo on Libya, according to the United Nations.

“STM is a relatively small manufacturer in a country that isn’t a leader in technology. So you have to assume that there are programmes going on in many countries to develop these weapons,” Russell said.

He also described the Israeli government’s Harpy, a 12-foot-long fixed-wing aircraft that carries a 50-pound explosive payload, and its descendant, the Harop. The aircraft can be flown remotely or can operate autonomously once a geographical area and a target type have been specified by a human operator.

The Harop may have been sold to India and Azerbaijan, where it was spotted in a video produced by the country’s army. A press release from Israel Aerospace Industries at the time said “hundreds” of the aircraft had been sold.

Russell warned that the proliferation of AI weapons posed an imminent and existential threat. “A lethal AI-powered quadcopter could be as small as a tin of shoe polish . . . about three grammes of explosive are enough to kill a person at close range. A regular container could hold a million lethal weapons, and . . . they can all be sent to do their work at once,” he said in his lecture. “So the inevitable endpoint is that autonomous weapons become cheap, selective weapons of mass destruction.”

In the absence of diplomatic action, academics are banding together to design their ideal version of an AI weapons ban treaty. In 2019, a handful of computer scientists, engineers and ethicists met to hash one out at the Boston home of MIT professor Max Tegmark, co-founder of the Future of Life Institute.

The two-day meeting included roboethicist Ron Arkin from Georgia Tech; Paul Scharre, a former US Army officer who studies the future of war; and Russell, among others. Eventually, they agreed that a treaty should mandate a minimum weight and explosive payload for autonomous weapons, so that they cannot be deployed cheaply in vast swarms. “What you’re trying to avoid is two blokes in a truck launching a million weapons,” Russell said.

Ultimately, he believes the only way to convince governments such as the US, Russia and the UK, which are still resisting a ban, is to appeal to their sense of self-preservation. As he said: “If the technical issues are too complicated, your children can probably explain them.”


