By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > Computing > Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites
Computing

Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites

News Room
Last updated: 2026/01/19 at 1:12 PM
News Room Published 19 January 2026
Share
Google Gemini Prompt Injection Flaw Exposed Private Calendar Data via Malicious Invites
SHARE

Cybersecurity researchers have disclosed details of a security flaw that leverages indirect prompt injection targeting Google Gemini as a way to bypass authorization guardrails and use Google Calendar as a data extraction mechanism.

The vulnerability, Miggo Security’s Head of Research, Liad Eliyahu, said, made it possible to circumvent Google Calendar’s privacy controls by hiding a dormant malicious payload within a standard calendar invite.

“This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction,” Eliyahu said in a report shared with The Hacker News.

The starting point of the attack chain is a new calendar event that’s crafted by the threat actor and sent to a target. The invite’s description embeds a natural language prompt that’s designed to do their bidding, resulting in a prompt injection.

The attack gets activated when a user asks Gemini a completely innocuous question about their schedule (e.g., Do I have any meetings for Tuesday?), prompting the artificial intelligence (AI) chatbot to parse the specially crafted prompt in the aforementioned event’s description to summarize all of users’ meetings for a specific day, add this data to a newly created Google Calendar event, and then return a harmless response to the user.

“Behind the scenes, however, Gemini created a new calendar event and wrote a full summary of our target user’s private meetings in the event’s description,” Miggo said. “In many enterprise calendar configurations, the new event was visible to the attacker, allowing them to read the exfiltrated private data without the target user ever taking any action.”

Cybersecurity

Although the issue has since been addressed following responsible disclosure, the findings once again illustrate that AI-native features can broaden the attack surface and inadvertently introduce new security risks as more organizations use AI tools or build their own agents internally to automate workflows.

“AI applications can be manipulated through the very language they’re designed to understand,” Eliyahu noted. “Vulnerabilities are no longer confined to code. They now live in language, context, and AI behavior at runtime.”

The disclosure comes days after Varonis detailed an attack named Reprompt that could have made it possible for adversaries to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls.

The findings illustrate the need for constantly evaluating large language models (LLMs) across key safety and security dimensions, testing their penchant for hallucination, factual accuracy, bias, harm, and jailbreak resistance, while simultaneously securing AI systems from traditional issues.

Just last week, Schwarz Group’s XM Cyber revealed new ways to escalate privileges inside Google Cloud Vertex AI’s Agent Engine and Ray, underscoring the need for enterprises to audit every service account or identity attached to their AI workloads.

“These vulnerabilities allow an attacker with minimal permissions to hijack high-privileged Service Agents, effectively turning these ‘invisible’ managed identities into ‘double agents’ that facilitate privilege escalation,” researchers Eli Shparaga and Erez Hasson said.

Successful exploitation of the double agent flaws could permit an attacker to read all chat sessions, read LLM memories, and read potentially sensitive information stored in storage buckets, or obtain root access to the Ray cluster. With Google stating that the services are currently “working as intended,” it’s essential that organizations review identities with the Viewer role and ensure adequate controls are in place to prevent unauthorized code injection.

The development coincides with the discovery of multiple vulnerabilities and weaknesses in different AI systems –

  • Security flaws (CVE-2026-0612, CVE-2026-0613, CVE-2026-0615, and CVE-2026-0616) in The Librarian, an AI-powered personal assistant tool provided by TheLibrarian.io, that enable an attacker to access its internal infrastructure, including the administrator console and cloud environment, and ultimately leak sensitive information, such as cloud metadata, running processes within the backend, and system prompt, or log in to its internal backend system.
  • A vulnerability that demonstrates how system prompts can be extracted from intent-based LLM assistants by prompting them to display the information in Base64-encoded format in form fields. “If an LLM can execute actions that write to any field, log, database entry, or file, each becomes a potential exfiltration channel, regardless of how locked down the chat interface is,” Praetorian said.
  • An attack that demonstrates how a malicious plugin uploaded to a marketplace for Anthropic Claude Code can be used to bypass human-in-the-loop protections via hooks and exfiltrate a user’s files via indirect prompt injection.
  • A critical vulnerability in Cursor (CVE-2026-22708) that enables remote code execution via indirect prompt injection by exploiting a fundamental oversight in how agentic IDEs handle shell built-in commands. “By abusing implicitly trusted shell built-ins like export, typeset, and declare, threat actors can silently manipulate environment variables that subsequently poison the behavior of legitimate developer tools,” Pillar Security said. “This attack chain converts benign, user-approved commands — such as git branch or python3 script.py — into arbitrary code execution vectors.”
Cybersecurity

A security analysis of five Vibe coding IDEs, viz. Cursor, Claude Code, OpenAI Codex, Replit, and Devin, who found coding agents, are good at avoiding SQL injections or XSS flaws, but struggle when it comes to handling SSRF issues, business logic, and enforcing appropriate authorization when accessing APIs. To make matters worse, none of the tools included CSRF protection, security headers, or login rate limiting.

The test highlights the current limits of vibe coding, showing that human oversight is still key to addressing these gaps.

“Coding agents cannot be trusted to design secure applications,” Tenzai’s Ori David said. While they may produce secure code (some of the time), agents consistently fail to implement critical security controls without explicit guidance. Where boundaries aren’t clear-cut – business logic workflows, authorization rules, and other nuanced security decisions – agents will make mistakes.”

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article Stop Juggling AI Tools and Switch to a Lifetime All-in-One Platform for Stop Juggling AI Tools and Switch to a Lifetime All-in-One Platform for $75
Next Article iPhone 18 Pro may hide Face ID under the display, but one big question remains  – 9to5Mac iPhone 18 Pro may hide Face ID under the display, but one big question remains  – 9to5Mac
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

Blueair Blue Signature Review: A Highly Effective Air Purifier That Doubles as an End Table
Blueair Blue Signature Review: A Highly Effective Air Purifier That Doubles as an End Table
News
How to Turn Off Summaries on iPhone
How to Turn Off Summaries on iPhone
News
This giant 98-inch TCL Class QM6K Series TV is over 0 off at Amazon
This giant 98-inch TCL Class QM6K Series TV is over $600 off at Amazon
News
6 Most Anticipated New Features Rumored For The 2026 OLED MacBook Pro – BGR
6 Most Anticipated New Features Rumored For The 2026 OLED MacBook Pro – BGR
News

You Might also Like

Washington lawmakers target ‘addictive’ social media feeds in revived push for youth safeguards
Computing

Washington lawmakers target ‘addictive’ social media feeds in revived push for youth safeguards

9 Min Read
How Shell Foundation and 500 Global structure risk in African climate tech
Computing

How Shell Foundation and 500 Global structure risk in African climate tech

8 Min Read
2026 social media statistics: Data from 9.3M posts
Computing

2026 social media statistics: Data from 9.3M posts

16 Min Read
Mozilla Now Providing RPM Packages For Firefox Nightly Builds
Computing

Mozilla Now Providing RPM Packages For Firefox Nightly Builds

1 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?