UN seeks ban on artificial intelligence without safeguards
The UN Human Rights chief Wednesday said that a ban on the sale and use of artificial intelligence (AI) systems that pose a severe risk to human rights is urgently needed until sufficient safeguards are in place.
Michelle Bachelet released a report titled “The right to privacy in the digital age” that she will also present at the 48th UN Human Rights Council session on Sept. 22.
The UN High Commissioner for Human Rights also called for AI applications that do not comply with international human rights law to be banned.
“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared, and used is one of the most urgent human rights questions we face,” Bachelet said.
“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times.”
She warned, however, “AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”
World relies on AI tools
At a press conference, Peggy Hicks, director of the thematic engagement division of the rights office, said the world relies on many AI tools that draw on large data sets, including personal data.
These data sets can embed biases that lead to discrimination when AI is used to draw inferences about people or to forecast future behavior.
“There is a lack of transparency regarding artificial intelligence systems,” said Hicks, explaining that the report does not examine specific countries.
“The report goes on to look at how these issues play out in practice by examining how AI is having an impact on human rights,” she said.
She said AI is relied on in crucial areas such as law enforcement, national security, criminal justice, and border management.
Hicks was asked which organization could impose a moratorium on AI sales.
“There’s no place that is the magical spot we could turn to that would put in place a moratorium that would be effective across all the jurisdictions where this technology is being used,” but there is a debate on the question, she said, citing the European Union.
In her statement, Bachelet said that the higher the risk to human rights, the stricter the legal requirements for using AI technology should be.
It will, however, take time for countries to assess and address AI risks; until then, they can “place moratoriums on the use of potentially high-risk technology.”
The report analyzes how AI, including profiling, automated decision-making, and other machine-learning technologies, can affect people’s right to privacy and other rights.
They include the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.
“AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” said Bachelet.
The report examines how states and businesses alike have often rushed to incorporate AI applications while failing to do due diligence.
There have already been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition.
Bachelet also said the use of biometric technologies, increasingly adopted by states, international organizations, and technology companies, needs guidance “urgently.”