Google has removed from its website its commitment not to use AI for weapons or mass surveillance. It is the biggest policy change in this area since the company published its "AI Principles", and it is linked to the new administration in the United States, or "Trump 2.0".
The story goes back to May 2018, when 4,000 Google employees signed an internal petition demanding that the company exit Project Maven, a military artificial intelligence and machine learning program promoted by the United States Department of Defense. The employees were backed by scientists and academics, and Google was forced to cancel the contract and publish commitments against the use of AI in military weaponry.
The original principles, outlined by CEO Sundar Pichai in mid-2018, included a section on "AI applications we will not pursue". At the top of the list was the commitment not to design or deploy artificial intelligence for "technologies that cause or are likely to cause overall harm", along with a promise to weigh the risks and "proceed only where we believe that the benefits substantially outweigh the risks".
Specifically, Google promised to avoid:
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate harm to people.
- Technologies that gather or use information for surveillance in violation of internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
These are the commitments that Google has now removed, clearing the way for the use of its AI technologies in weaponry or mass surveillance.
More information: MC.