AI is an impressive tool, and companies like Google and OpenAI continue to improve and expand what their models can do. At the same time, generative AI chatbots are becoming bigger targets for bad actors, and now security researchers have found a way to hack someone's Google Calendar using text hidden inside high-resolution images.
Security researchers at Trail of Bits claim they were able to exploit the image-scaling systems that models like Gemini use to process images added to their prompts. This allowed the group to send a set of hidden instructions to the AI, which then retrieved information from a Google Calendar account and emailed it to the researchers, all without alerting the user.
Image scaling attacks like this used to be more common; the researchers note that they "were used for model backdoors, evasion, and poisoning primarily against older computer vision systems that enforced a fixed image size." Now it seems a similar approach can be used to slip hidden instructions to a large language model like Google's Gemini, which raises concerns over AI safety as Gemini and other AI systems move into our homes and grow ever more capable.
How the AI-powered attack works
An exploit like this works because systems built on LLMs such as GPT-5 and Gemini automatically downscale high-resolution images to process them more quickly and efficiently. That downscaling step is what the researchers were able to turn against the AI to smuggle hidden instructions to the chatbot. While the exact process varies from system to system, since each uses a different image resampling algorithm, they all introduce what the researchers describe as "aliasing artifacts" that allow patterns to be hidden within an image. Those patterns are invisible, or nearly so, at full resolution and only emerge once the image is downscaled and the artifacts appear.
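To make the mechanism concrete, here is a toy sketch, not the researchers' actual technique, showing how the choice of resampling filter changes what survives a downscale. It builds a fine checkerboard with a slightly dimmed band hidden inside it; an averaging (box) filter exposes the band as solid contrast, while nearest-neighbor sampling erases the checkerboard entirely. It assumes Pillow 9.1 or newer and NumPy, and the output file names are made up.

```python
# Toy illustration of resampling-dependent hidden content.
# Assumes Pillow >= 9.1 and NumPy; not the researchers' actual method.
import numpy as np
from PIL import Image

# Fine 1-pixel checkerboard: reads as uniform gray from a distance.
hi = np.zeros((512, 512), dtype=np.uint8)
hi[::2, ::2] = 255
hi[1::2, 1::2] = 255

# "Hide" a band by slightly dimming only the white squares inside it.
band = hi[200:312, 128:384]  # NumPy slice is a view, so this edits in place
band[band == 255] = 175

img = Image.fromarray(hi)    # grayscale image, mode "L"

# Box (area-average) filter: each output pixel averages a 4x4 block,
# so the dimmed band emerges as clearly darker solid gray (~88 vs ~128).
box = img.resize((128, 128), Image.Resampling.BOX)

# Nearest-neighbor: samples one source pixel per output pixel. Here it
# happens to hit only "white" squares, so the checkerboard vanishes and
# the result is nearly flat, a classic aliasing artifact. Which filter
# the pipeline uses decides what the model actually "sees".
nn = img.resize((128, 128), Image.Resampling.NEAREST)

box.save("downscaled_box.png")
nn.save("downscaled_nearest.png")
```

The point of the two outputs is that the same upload can look like two different images depending on the resampling algorithm, which is exactly the gap the hidden instructions live in.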
In the example the researchers provided, the image uploaded to Gemini has sections of black background that turn red when the image is downscaled, revealing hidden text that the chatbot reads and follows as instructions. In this case, those instructions told the chatbot to check the user's calendar and email any upcoming events to the researchers' address.
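One practical takeaway, offered here as a hedged sketch rather than an official mitigation: preview an image at roughly the size a pipeline would feed the model, so content that only appears after downscaling is visible to a human first. The 768-pixel target and bicubic filter below are assumptions for illustration, not documented Gemini behavior.

```python
# Sketch: preview what an image might look like after pipeline downscaling.
# Target size and filter are assumptions, not any vendor's documented values.
from PIL import Image

def preview_downscaled(path: str, target: int = 768) -> Image.Image:
    """Return the image roughly as a downscaling pipeline might see it."""
    img = Image.open(path).convert("RGB")
    if max(img.size) <= target:
        return img  # small images are assumed to pass through unchanged
    ratio = target / max(img.size)
    new_size = (round(img.size[0] * ratio), round(img.size[1] * ratio))
    return img.resize(new_size, Image.Resampling.BICUBIC)

# Example usage (hypothetical file name):
# preview_downscaled("upload.png").show()  # eyeball it for hidden text
```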
This might not become a mainstream attack vector, but considering hackers have already found ways to use infected calendar invites to take control of a smart home, every plausible threat needs to be analyzed to find protections before bad actors exploit it. That's especially true as hackers continue to use AI to break AI in terrifying new ways.