By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: Researchers Propose a Better Way to Report Dangerous AI Flaws
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > Gadget > Researchers Propose a Better Way to Report Dangerous AI Flaws
Gadget

Researchers Propose a Better Way to Report Dangerous AI Flaws

News Room
Last updated: 2025/03/13 at 11:09 AM
News Room Published 13 March 2025
Share
SHARE

In late 2023, a team of third party researchers discovered a troubling glitch in OpenAI’s widely used artificial intelligence model GPT-3.5.

When asked to repeat certain words a thousand times, the model began repeating the word over and over, then suddenly switched to spitting out incoherent text and snippets of personal information drawn from its training data, including parts of names, phone numbers, and email addresses. The team that discovered the problem worked with OpenAI to ensure the flaw was fixed before revealing it publicly. It is just one of scores of problems found in major AI models in recent years.

In a proposal released today, more than 30 prominent AI researchers, including some who found the GPT-3.5 flaw, say that many other vulnerabilities affecting popular models are reported in problematic ways. They suggest a new scheme supported by AI companies that gives outsiders permission to probe their models and a way to disclose flaws publicly.

“Right now it’s a little bit of the Wild West,” says Shayne Longpre, a PhD candidate at MIT and the lead author of the proposal. Longpre says that some so-called jailbreakers share their methods of breaking AI safeguards the social media platform X, leaving models and users at risk. Other jailbreaks are shared with only one company even though they might affect many. And some flaws, he says, are kept secret because of fear of getting banned or facing prosecution for breaking terms of use. “It is clear that there are chilling effects and uncertainty,” he says.

The security and safety of AI models is hugely important given widely the technology is now being used, and how it may seep into countless applications and services. Powerful models need to be stress-tested, or red-teamed, because they can harbor harmful biases, and because certain inputs can cause them to break free of guardrails and produce unpleasant or dangerous responses. These include encouraging vulnerable users to engage in harmful behavior or helping a bad actor to develop cyber, chemical, or biological weapons. Some experts fear that models could assist cyber criminals or terrorists, and may even turn on humans as they advance.

The authors suggest three main measures to improve the third-party disclosure process: adopting standardized AI flaw reports to streamline the reporting process; for big AI firms to provide infrastructure to third-party researchers disclosing flaws; and for developing a system that allows flaws to be shared between different providers.

The approach is borrowed from the cybersecurity world, where there are legal protections and established norms for outside researchers to disclose bugs.

“AI researchers don’t always know how to disclose a flaw and can’t be certain that their good faith flaw disclosure won’t expose them to legal risk,” says Ilona Cohen, chief legal and policy officer at HackerOne, a company that organizes bug bounties, and a coauthor on the report.

Large AI companies currently conduct extensive safety testing on AI models prior to their release. Some also contract with outside firms to do further probing. “Are there enough people in those [companies] to address all of the issues with general-purpose AI systems, used by hundreds of millions of people in applications we’ve never dreamt?” Longpre asks. Some AI companies have started organizing AI bug bounties. However, Longpre says that independent researchers risk breaking the terms of use if they take it upon themselves to probe powerful AI models.

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article NVIDIA RTX Remix 1.0 Released With DXVK DLSS4 & Neural Radiance Cache
Next Article Duolingo goes down for millions of users worldwide
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

Vivo X100 Ultra debuts with 200MP telephoto lens and 1/1.4-inch sensor · TechNode
Computing
Popular furniture chain ‘better than Walmart’ to open 8 stores – see full list
News
Why Pay More? Grab the Best Prepaid Plans Under Rs 400 Right Now
Mobile
Alibaba sees steady revenue increase, GMV and order numbers on Taobao and Tmall return to double-digit growth track · TechNode
Computing

You Might also Like

Gadget

Optimising Human Capital: How Fintech Drives Efficient Recruitment for Non-Profits

7 Min Read
Gadget

Want A $1 Million Portfolio By June? Top Crypto Tokens To Buy On Presale

6 Min Read
Gadget

The Best High School Graduation Gifts

1 Min Read
Gadget

Shopping for a Router Sucks. Here’s What You Need to Know

10 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?