OpenAI today released a new report highlighting the company’s growing efforts to identify, expose and disable misuse of its models for cyberattacks, scams and state-linked influence operations.
The “Disrupting Malicious Uses of AI” report, part of an ongoing series from OpenAI, details a growing trend of threat actors integrating artificial intelligence into their existing toolchains rather than building AI-driven workflows from scratch, and provides several examples.
Russian-language threat groups were found to be refining malware components such as remote-access trojans and credential stealers; Korean-language operators were developing command-and-control systems; and alleged China-linked groups were crafting phishing content and debugging malware targeting Taiwan’s semiconductor sector, U.S. academia and political groups.
In all cases, OpenAI’s safeguards blocked direct malicious requests and findings were shared with industry partners to strengthen collective defenses.
A major section of the report focuses on organized crime scams originating from Cambodia, Myanmar and Nigeria. Cambodian and Myanmar scam operators have made headlines globally over the last year because of the scale of their operations, and some argue that Thailand’s current conflict with Cambodia is linked to those operations, which by some estimates account for as much as 60% of Cambodia’s gross domestic product.
The groups in the three countries were found to be using ChatGPT to translate messages, generate social media content and craft fraudulent investment personas. Some operations even asked the model to remove em dashes, a known indicator of AI-generated text, in an attempt to disguise their use of AI.
OpenAI found that despite the attempts at misuse, its models were used to detect scams three times more often than to create them, as millions of users sought help identifying fraudulent activity.
The report also details alleged authoritarian abuses tied to Chinese actors. The users sought assistance designing social media monitoring tools, profiling dissidents and generating propaganda proposals, activities that violated OpenAI’s national security policies.
OpenAI banned these accounts and reiterated its commitment to building “democratic AI,” emphasizing transparency, safety and protection against surveillance-state misuse.
“As the threatscape evolves, we expect to see further adversarial adaptations and innovations, but we will also continue to build tools and models that can be used to benefit the defenders – not just within AI labs, but across society as a whole,” the report notes.
Discussing the report, Cory Kennedy, chief threat intelligence officer at security ratings firm SecurityScorecard Inc., told News via email that “the report highlights how threat actors are increasingly combining multiple AI models to scale their operations.”
“While OpenAI banned the accounts involved, it noted that some attempts, such as proposals for large-scale monitoring of social media and movement, offer insight into how generative AI is being explored for population tracking and narrative control,” added Kennedy. “These findings underscore the urgency of proactive disruption, vendor transparency and cross-platform threat intelligence where AI tools intersect with sensitive data and global influence efforts.”