A cross-party group of 60 U.K. parliamentarians has accused Google DeepMind of violating international pledges to safely develop artificial intelligence, in an open letter shared exclusively with TIME ahead of publication. The letter, released on August 29 by activist group PauseAI UK, says that Google's March release of Gemini 2.5 Pro without accompanying details on safety testing "sets a dangerous precedent." The letter, whose signatories include digital rights campaigner Baroness Beeban Kidron and former Defense Secretary Des Browne, calls on Google to clarify its commitment.
For years, experts in AI, including Google DeepMind's CEO Demis Hassabis, have warned that AI could pose catastrophic risks to public safety and national security, for example by helping would-be bioterrorists design a new pathogen, or hackers take down critical infrastructure. In an effort to manage those risks, at an international AI summit co-hosted by the U.K. and South Korean governments in February 2024, Google, OpenAI, and others signed the Frontier AI Safety Commitments. Signatories pledged to "publicly report" system capabilities and risk assessments, and to explain if and how external actors, such as government AI safety institutes, were involved in testing. Without binding regulation, the public and lawmakers have relied on information stemming from voluntary pledges to understand AI's emerging risks.
Yet when Google released Gemini 2.5 Pro on March 25, which it said beat rival AI systems on industry benchmarks by "meaningful margins," the company failed to publish details on its safety tests for over a month. The letter says this not only reflects a "failure to honor" Google's international safety commitments, but threatens the fragile norms promoting safer AI development. "If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards," Browne wrote in a statement accompanying the letter.
"We're fulfilling our public commitments, including the Seoul Frontier AI Safety Commitments," a Google DeepMind spokesperson told TIME in an emailed statement. "As part of our development process, our models undergo rigorous safety checks, including by UK AISI and other third-party testers, and Gemini 2.5 is no exception."
The open letter calls on Google to establish a specific timeline for when safety evaluation reports will be shared for future releases. Google first published the Gemini 2.5 Pro model card, a document where it typically shares information on safety tests, 22 days after the model's release. However, the eight-page document included only a brief section on safety tests. It was not until a month after the model was made publicly available that Google published an update stating that Gemini 2.5 Pro shows "significant," though not yet dangerous, improvements in domains including hacking. The update also mentioned the use of "third-party external testers," but did not disclose which ones, or whether the UK AI Security Institute had been among them, which the letter says is a violation of Google's pledge.
After previously failing to address a media request for comment on whether it had shared Gemini 2.5 Pro with the UK AI Security Institute, Google said in its updated model card that the model was tested by a "diverse group of external experts," including Apollo Research, Dreadnode, and Vaultis. However, Google says it only shared the model with the UK AI Security Institute after Gemini 2.5 Pro was released on March 25.
On April 3, shortly following Gemini 2.5 Pro's release, Google's Senior Director and Head of Product for Gemini, Tulsee Doshi, told TechCrunch that the reason the model lacked a safety report was because it was an "experimental" release, adding that it had already run safety tests. She said the aim of these experimental rollouts is to release the model in a limited way, collect user feedback, and improve it prior to a production launch, at which point the company would publish a model card detailing the safety tests it had already conducted. Yet, days earlier, Google had rolled the model out to all of its hundreds of millions of free users, saying "we want to get our most intelligent model into more people's hands," in a post on X.
The open letter says that "labeling a publicly accessible model as 'experimental' does not absolve Google of its safety obligations," and additionally calls on Google to establish a clearer definition of what counts as deployment. "Companies have a greater public responsibility to test new technology and not involve the public in experimentation," says the Bishop of Oxford, Steven Croft, who signed the letter. "Just imagine a car manufacturer releasing a vehicle saying, 'we want the public to experience it and (give) feedback when they crash or when they bump into pedestrians and when the brakes don't work.'"
Croft questions the constraints on providing safety reports at the time of release, boiling the issue down to a matter of priorities: "How much of (Google's) huge investment in AI is being channeled into public safety and reassurance, and how much is going into huge computing power?"
To be sure, Google isn't the only industry titan to seemingly flout safety commitments. Elon Musk's xAI has yet to release any safety report for Grok 4, an AI model released in July. Unlike GPT-5 and other recent launches, OpenAI's February release of its Deep Research tool lacked a same-day safety report. The company said it had done "rigorous safety testing," but didn't publish the report until 22 days later.
Joseph Miller, director of PauseAI UK, says the organization is concerned about other instances of apparent violations, and that the focus on Google was due to its proximity. DeepMind, the AI lab Google acquired in 2014, remains headquartered in London. The U.K.'s now Secretary of State for Science, Innovation and Technology, Peter Kyle, said on the campaign trail in 2024 that he would "require" leading AI companies to share safety tests, but in February it was reported that the U.K.'s plans to regulate AI were delayed as the government sought to better align with the Trump administration's hands-off approach. Miller says it's time to swap company pledges for "real regulation," adding that "voluntary commitments are just not working."