Top AI companies’ security practices are falling short, according to a new report

By News Room | Published 3 December 2025 (last updated 1:38 PM)

As leading artificial intelligence companies release increasingly capable AI systems, a new report is sounding the alarm on what it says are some of those companies’ lagging security practices.

The Winter 2025 AI Safety Index, which examines the safety protocols of eight leading AI companies, found that their approaches “lack the concrete safeguards, independent oversight and credible long-term risk management strategies that such powerful systems require.”

Sabina Nong, an AI safety researcher at the nonprofit Future of Life Institute (FLI), which produced the report and works to address large-scale risks from technologies such as nuclear weapons and AI, said in an interview at the San Diego Alignment Workshop that the analysis revealed a gap in how companies approach safety.

“We see two clusters of companies in terms of their safety promises and practices,” Nong said. “Three companies are leading: Anthropic, OpenAI, Google DeepMind, in that order, and five other companies are at the next level.”

The bottom tier of five companies includes xAI and Meta, along with the Chinese AI companies Z.ai, DeepSeek and Alibaba Cloud. Chinese models are increasingly being adopted in Silicon Valley as their capabilities have improved rapidly, and they are readily available because they are largely open source.

Anthropic, the highest-ranked company on the list, received a C+ grade, while Alibaba Cloud, the lowest-ranked, received a D-.

The index examined 35 safety indicators across six domains, including companies’ risk assessment practices, information-sharing protocols and whistleblower protections, as well as their support for AI safety research.

Eight independent AI experts, including Professor Dylan Hadfield-Menell of the Massachusetts Institute of Technology and Yi Zeng, a professor at the Chinese Academy of Sciences, assessed the extent to which companies met the safety indicators.

FLI President Max Tegmark, an MIT professor, said the report provides clear evidence that AI companies are quickly heading into a dangerous future, in part because of a lack of regulation around AI.

“The only reason there are so many C’s, D’s and F’s in the report is because there is less regulation on AI than there is on sandwich making,” Tegmark told NBC News, citing the continued lack of adequate AI laws and the established nature of food safety regulation.

The report recommends that AI companies share more information about their internal processes and reviews, use independent safety assessors, increase efforts to prevent AI psychosis and harm, and reduce lobbying, among other measures.

Tegmark, Nong and FLI are particularly concerned about the potential for AI systems to cause catastrophic damage, especially given calls from AI leaders like Sam Altman, the CEO of OpenAI, to build AI systems that are smarter than humans – known as artificial superintelligence.

“I don’t think companies are prepared for the existential risk of the super-intelligent systems they are about to create and so ambitiously want to march towards,” Nong said.

The report, released Wednesday morning, follows several groundbreaking AI model launches. Google’s Gemini 3 model, released in late November, has set performance records in a series of tests designed to measure the capabilities of AI systems.

In a statement, a Google representative said: “Our Frontier Safety Framework outlines specific protocols for identifying and mitigating serious risks from powerful frontier AI models before they manifest. As our models become more sophisticated, we will continue to innovate in safety and governance at pace with capabilities.”

On Monday, one of China’s leading AI companies, DeepSeek, released an advanced model that appears to match Gemini 3’s capabilities in several domains.

While AI capability benchmarks are increasingly criticized as flawed, in part because AI systems can become hyper-focused on beating a narrow set of unrealistic test challenges, the record-breaking scores of new models still indicate that they outperform competing systems in relative terms.

Although DeepSeek’s new model performs at or near the frontier of AI capabilities, Wednesday’s Safety Index report says the company falls short on many important safety considerations.

The report scored DeepSeek second-to-last of the eight companies in overall safety. The report’s independent panel found that, unlike all of the leading U.S. companies, DeepSeek does not publish a framework outlining its safety assessments and mitigations, nor does it disclose a whistleblower policy that could help surface key risks from its AI models.

Companies operating in California are now required to publish such frameworks, which outline their safety policies and testing mechanisms. These frameworks are intended to help companies avert serious risks, such as the possibility that AI products could be used in cyberattacks or the design of bioweapons.

The report places DeepSeek in its lower tier of companies. “The lower tier companies continue to fall short on basic elements such as security frameworks, governance structures and comprehensive risk assessment,” the report said.

Tegmark said: “Tier 2 companies are completely obsessed with catching up on the technical frontier, but now that they have done that, they no longer have an excuse not to also prioritize security.”

Advances in AI capabilities have made headlines recently as AI systems are increasingly applied to consumer-facing products such as OpenAI’s Sora video generation app and Google’s Nano Banana image generation model.

However, Wednesday’s report said the steady increase in capabilities is far outpacing any expansion of safety efforts. “This widening gap between capacity and safety means that the sector is structurally unprepared for the risks it actively creates,” the report says.

This reporter is a Tarbell Fellow, funded through the Tarbell Center for AI Journalism, a nonprofit organization dedicated to supporting reporting on artificial intelligence. The Tarbell Center has received funding from the Future of Life Institute, which is the subject of this article. The Tarbell Center had no input into NBC News’ reporting.
