Some of the most widely used AI agents and assistants in the world, including ChatGPT, Microsoft Copilot, Gemini, and Salesforce's Einstein, are vulnerable to being hijacked with little to no user interaction, new research from Zenity Labs claims.
According to the research, attackers can gain access to and exfiltrate critical data, manipulate workflows, and even impersonate users with relative ease. Attackers could reportedly also achieve memory persistence, giving them long-term access to and control over compromised data.
The findings will concern technology chiefs everywhere, many of whom have already indicated that cybersecurity is their top concern in 2025. And with many employees using AI tools without their employer's knowledge, the resulting security gaps may be more numerous than senior leaders realize.