'Ambiguities and risks': Pentagon adopts new AI battlefield ethics

The new principles were recommended to the US Defence Department in October 2019 by the Defence Innovation Board, after the board spent more than a year gathering feedback and analysis from a wide range of leading AI experts.

US Secretary of Defence Mark Esper on Monday officially approved a set of new ethical principles for the use of artificial intelligence technology on the battlefield. The move comes as the Pentagon prepares to expand its use of AI, in place of humans, in warfighting operations, according to a US Department of Defence (DoD) statement.

The new principles call for DoD personnel to “exercise appropriate levels of judgment and care” when deploying or using AI systems, including testing and verifying decisions made by automated systems.

“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” said Esper.

The Pentagon noted that “the use of AI raises new ethical ambiguities and risks” and said the new principles will ensure “the responsible use of AI by the department”.

A 2012 US military directive requires that automated weapons be controlled by a human, but that directive does not address the use of AI.

“Secretary Esper's leadership on AI and his decision to issue AI Principles for the Department demonstrates not only to DoD, but to countries around the world, that the US and DoD are committed to ethics, and will play a leadership role in ensuring democracies adopt emerging technology responsibly,” said Dr. Eric Schmidt, chairman of the Defence Innovation Board and a former Google CEO.

Lt. Gen. Jack Shanahan, head of the department's Joint Artificial Intelligence Center, said the new principles would help regain support from the tech industry.

In 2018, the Pentagon's AI efforts hit an impasse when Google withdrew from Project Maven, which uses algorithms to interpret aerial images from conflict zones, after its employees refused to let the company take part. Other companies quickly filled the vacuum left by the tech giant.

The principles recommended to the Pentagon by the Defence Innovation Board were adopted after lengthy consultations with AI experts and extensive feedback from commercial industry, government, academia and the American public, according to reports.
