Artificial intelligence aggregator OmniGPT Inc. has allegedly been breached, with a hacker releasing more than 34 million lines of user conversations, along with 30,000 user emails and phone numbers, on a popular hacking forum.
As an aggregator, OmniGPT acts as a middleman that allows users to access AI and large language models from various companies, such as OpenAI’s ChatGPT, Google LLC’s Gemini, Anthropic PBC’s Claude and others. The aggregation model has become fairly popular among those seeking to try out different models without having to maintain a subscription to each one.
The hacker who claims to have stolen the data goes by the name of “Gloomer” on the infamous hacking site Breach Forums, which the U.S. Federal Bureau of Investigation has tried to take down before, most recently in May 2024, only for the site to return in one form or another.
Gloomer wrote on the site, “This leak contains all messages between the users and the chatbot of this site, as well as all links to the files uploaded by users and also 30k user emails. You can find a lot of useful information in the messages, such as API keys and credentials. Many of the files uploaded to this site are very interesting because sometimes they contain credentials/billing information.”
Exactly how the breach took place has not been revealed, but according to researchers at Hackread.com, the leaked data included messages exchanged between users and the chatbots, as well as links to uploaded files, some of which contained credentials, billing information and application programming interface keys. Also discovered were more than 8,000 email addresses that users shared with chatbots during conversations.
The leaked data also included file upload links to documents stored on OmniGPT’s servers, which may contain sensitive information in PDF and document formats. More importantly, the links would indicate that the data was indeed stolen from OmniGPT, though the company itself has yet to comment.
“If confirmed, this OmniGPT hack demonstrates that even practitioners experimenting with bleeding edge technology like generative AI can still get penetrated and that industry best practices around application security assessment, attestation and verification should be followed,” Andrew Bolster, senior research and development manager at application security solutions company Black Duck Software Inc., told News via email. “But what’s potentially most harrowing to these users is the nature of the deeply private and personal ‘conversations’ they have with these chatbots; chatbots are regularly being used as ‘artificial-agony-aunts’ for intimate personal, psychological, or financial questions that people are working through.”
Eric Schwake, director of cybersecurity strategy at API security company Salt Security Inc., warned of the risks involved, noting that “though the reported data leak involving OmniGPT awaits official confirmation, the possible exposure of user information and conversation logs, including sensitive items like API keys and credentials, highlights the urgent need for strong security measures in AI-powered platforms.”
“Should this be verified, the incident would bring to light the risks tied to the storage and processing of user data in AI interactions,” Schwake added. “Organizations creating and deploying AI chatbots must prioritize data protection throughout the entire lifecycle, ensuring secure storage, implementing access controls, utilizing strong encryption and conducting regular security evaluations.”
Image: News/Ideogram