Security researchers have revealed an exploit that hackers could have used to gain access to Google Drive data through a ChatGPT integration.
The hack could have happened without any user interaction aside from connecting to the external service. Had it been used, the victim may have been unaware that an attack took place.
Security researchers Michael Bargury and Tamir Ishay Sharbat revealed the exploit during the Black Hat hacker conference in Las Vegas. Bargury confirmed to Wired that OpenAI mitigated the problem after it showed the company the flaw. However, it isn’t without continued risk.
The exploit, dubbed AgentFlayer, works through ChatGPT’s Connectors tool. Connectors first debuted in June and allows you to add external services, like documents and spreadsheets, to your ChatGPT account. ChatGPT includes integration with Box, Dropbox, and GitHub as well as various Google and Microsoft services for calendars, file storage, and more.
The exploit allowed a hacker to share a file directly to the unsuspecting victim’s Google Drive, and the hack got to work right away. ChatGPT would read the information included in the file, but it would include a hidden prompt. The hackers could include it in a size one font, with white text, allowing it to mostly go unnoticed.
The prompt is around 300 words long, and it allowed the hackers access to specific files from other areas of Google Drive. In the researchers’ example, it allowed them to pull API keys stored within a Drive file.
The hidden prompt would also have allowed hackers to continue controlling the AI. In this example, the AI was looking for confidential information to share directly with the hacker, and it may have continued to do so until the victim disconnected the integration with Drive.
Recommended by Our Editors
This shows how AI is a new frontier in the world of security. It means hackers now have even more intensive tools to attack people, with the AI itself, in this scenario, working against the victim.
“This isn’t exclusively applicable to Google Drive; any resource connected to ChatGPT can be targeted for data exfiltration,” Sharbat wrote in a blog post.
If you have used connections with third-party tools with any AI chatbot, be careful with what data you’ve shared so you don’t fall victim to an attack. Keep sensitive information locked away, and avoid storing passwords and personal information in cloud services wherever possible.
5 Ways to Get More Out of Your ChatGPT Conversations
Get Our Best Stories!
Stay Safe With the Latest Security News and Updates
By clicking Sign Me Up, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy.
Thanks for signing up!
Your subscription has been confirmed. Keep an eye on your inbox!