Google Promises Its AI Will Not Be Used for Weapons
Posted June 7, 2018 3:38 p.m. EDT
SAN FRANCISCO — Google, reeling from an employee protest over the use of artificial intelligence for military purposes, said Thursday it would not use AI for weapons or for surveillance that violates human rights. But it will continue to work with governments and the military.
The new rules were part of a set of principles Google unveiled relating to the use of artificial intelligence. In a company blog post, Sundar Pichai, the chief executive, laid out seven objectives for its AI technology, including “avoid creating or reinforcing unfair bias” and “be socially beneficial.”
Google also detailed applications of the technology that the company will not pursue, including AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and “technologies that gather or use information for surveillance violating internationally accepted norms of human rights.”
But Google said it would continue to work with governments and military using AI in areas including cybersecurity, training and military recruitment.
“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come,” Pichai wrote.
Concern over the potential uses of artificial intelligence bubbled over at Google when the company secured a contract to work on the Pentagon’s Project Maven program, which uses AI to interpret video images and could be used to improve the targeting of drone strikes.
More than 4,000 Google employees signed a petition protesting the contract, and a handful of employees resigned. In response, Google said it would not seek to renew the Maven contract when it expires next year and pledged to draft a set of guidelines for appropriate uses of AI.
Pichai did not address the Maven program or the pressure from employees. It’s not clear whether these guidelines would have precluded Google from pursuing the Maven contract, since the company has insisted repeatedly that its work for the Pentagon was not for “offensive purposes.”
Google promotes the benefits of artificial intelligence for tasks like early diagnosis of diseases and the reduction of spam in email. But it has also experienced some of the perils associated with AI, including YouTube recommendations pushing users to extremist videos or Google Photos image-recognition software categorizing black people as gorillas.
While most of Google’s AI guidelines are unsurprising for a company that prides itself on altruistic goals, the list also included a noteworthy rule about how its technology could be shared outside the company.
“We will reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles,” the company said.
Like most of the top corporate AI labs, which are laden with former and current academics, Google openly publishes much of its AI research. That means others can recreate and reuse many of its methods and ideas. But Google is joining other labs in saying it may hold back certain research if it believes others will misuse it.
DeepMind, a top AI lab owned by Google’s parent company, Alphabet, is considering whether it should refrain from publishing certain research because it may be dangerous. OpenAI, a lab founded by Tesla chief executive Elon Musk and others, recently released a new charter indicating it could do much the same — even though it was founded on the principle that it would openly share all its research.