A new report out today from cybersecurity company Miggo Security Ltd. details a now-mitigated vulnerability in Google LLC’s artificial intelligence ecosystem that allowed for a natural-language prompt injection potentially to bypass calendar privacy controls and exfiltrate sensitive meeting data via Google Gemini.
The issue arose from Gemini’s deep integration with Google Calendar, which allows the AI to parse event titles, descriptions, attendees and timing to answer routine user queries such as schedule summaries.
Miggo’s researchers found that by embedding a carefully worded prompt into the description field of a calendar invite, an attacker could plant a dormant instruction that Gemini would later execute when triggered by a normal user request. The attack relied entirely on natural language and no malicious code was required.
The exploit involved three stages. The first stage involves an attacker sending a calendar invite containing a harmful but syntactically benign instruction that directed Gemini to summarize a user’s meetings, create a new calendar event and store that summary in the event description.
In the second stage, the payload remains inactive until the user asks Gemini a routine question about their schedule, causing the assistant to ingest and interpret all relevant calendar entries. The third stage then sees Gemini carrying out the embedded instructions and creating a new event that contained summaries of private meetings.
In some enterprise configurations, that newly created event was visible to the attacker and provided unauthorized access to sensitive data without any direct user interaction.
Miggo’s researchers describe the methodology as a form of indirect prompt injection leading to an authorization bypass. The exploit evaded various defenses to detect malicious prompts because the instructions appeared plausible in isolation and only became dangerous when executed with Gemini’s tool-level permissions.
Google confirmed the findings and has since mitigated the vulnerability.
The researchers argue that while the specific flaw may have been fixed, the incident highlights a broader shift in application security. To protect against such future events, the researchers say that “defenders must evolve beyond keyword blocking.”
“Effective protection will require runtime systems that reason about semantics, attribute intent and track data provenance,” the report concludes. “In other words, it must employ security controls that treat large language models as full application layers with privileges that must be carefully governed.”
Image: News/Ideogram
Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.
- 15M+ viewers of theCUBE videos, powering conversations across AI, cloud, cybersecurity and more
- 11.4k+ theCUBE alumni — Connect with more than 11,400 tech and business leaders shaping the future through a unique trusted-based network.
About News Media
Founded by tech visionaries John Furrier and Dave Vellante, News Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.
