Google Gemini is the target of large-scale extraction and distillation attacks on AI models, with researchers and organizations leveraging authorized access to official APIs to methodically query the system and reproduce its decision-making processes, with the goal of replicating its functionality.
Google classifies these attacks as a threat because they constitute intellectual property theft, are scalable, and seriously undermine the AI-as-a-service business model. According to the internet giant, the activity is a major commercial and competitive problem with the potential to affect end users in the future.
The “cloning” of Google Gemini
Google says its flagship AI chatbot, Gemini, has been inundated by “commercially motivated actors” attempting to clone it by querying it repeatedly, sometimes with thousands of different queries, including one campaign that queried Gemini more than 100,000 times.
Google says it is being targeted by “distillation attacks”: repeated queries designed to make a chatbot reveal its inner workings. Google described the activity as “model mining,” in which would-be imitators probe the system for the patterns and logic that make it work. The attackers appear to want to use that information to develop or strengthen their own AI models, Google says.
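To make the mechanism concrete, here is a minimal, purely illustrative sketch of how a distillation pipeline harvests training data from a commercial model. The `query_teacher` function, the prompts, and the output file are hypothetical stand-ins for illustration, not anything Google has described about the actual campaigns.

```python
# Illustrative sketch only: the general shape of a distillation pipeline.
# query_teacher() is a hypothetical stand-in for an authorized API call
# to a commercial LLM whose responses are logged for training.

import json

def query_teacher(prompt: str) -> str:
    """Stand-in for a call to a commercial LLM API."""
    return f"<teacher response to: {prompt}>"

def harvest_pairs(prompts: list[str], out_path: str) -> None:
    """Collect (prompt, response) pairs -- the raw material a would-be
    imitator would later use to fine-tune a smaller 'student' model."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "response": query_teacher(prompt)}
            f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    # A real campaign, per the reporting, can involve 100,000+ queries;
    # three toy prompts keep this sketch self-contained and runnable.
    harvest_pairs(
        ["Explain step by step why the sky is blue.",
         "Summarize the plot of Hamlet.",
         "Reason through: is 1019 prime?"],
        "distilled_pairs.jsonl",
    )
```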
The company believes the culprits are smaller private companies or researchers looking to gain a competitive advantage. A spokesperson told NBC News that Google believes the attacks are coming from around the world, but declined to share further details about the suspects. The scale of the attacks on Gemini suggests that similar attacks are likely already common, or will soon be, against custom AI tools from smaller companies.
LLMs are vulnerable
Tech companies have spent billions of dollars racing to develop their AI chatbots, or large language models, and consider the inner workings of their top models to be extremely valuable confidential information.
Although providers have mechanisms to try to detect distillation attacks and block those behind them, the major LLMs are inherently vulnerable to distillation because they are open to anyone on the internet. OpenAI, the company behind ChatGPT, last year accused its Chinese rival DeepSeek of using distillation to improve its own models.
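Provider-side defenses are not public, but the kind of detection alluded to here can be sketched with a toy heuristic: flag accounts that combine very high query volume with highly templated prompts. Everything below, including the `QueryLog` shape and the thresholds, is an assumption for illustration, not a description of Google's actual systems.

```python
# Illustrative sketch only: a toy heuristic for flagging distillation-like
# usage. Real provider defenses are not public; the thresholds and the
# QueryLog structure are assumptions for illustration.

from collections import Counter
from dataclasses import dataclass

@dataclass
class QueryLog:
    account_id: str
    prompts: list[str]

def looks_like_distillation(log: QueryLog,
                            volume_threshold: int = 10_000,
                            min_template_share: float = 0.5) -> bool:
    """Flag accounts that pair very high query volume with highly
    templated prompts, a pattern consistent with systematic model mining."""
    if len(log.prompts) < volume_threshold:
        return False
    # Crude templating signal: share of prompts sharing the most common
    # 5-word prefix.
    prefixes = Counter(" ".join(p.split()[:5]) for p in log.prompts)
    _, top_count = prefixes.most_common(1)[0]
    return top_count / len(log.prompts) >= min_template_share

if __name__ == "__main__":
    # Toy demo: 12,000 near-identical probe prompts trip the heuristic.
    probes = [f"Explain your reasoning for input case {i}" for i in range(12_000)]
    print(looks_like_distillation(QueryLog("acct-42", probes)))  # True
```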
Many of the attacks were designed to discover the algorithms that help Gemini “reason,” or decide how to process information, Google says, warning that as more companies build their own custom LLMs trained on potentially sensitive data, they too will become vulnerable to similar attacks.
