The way people work, communicate and innovate has changed with the emergence of large language models (LLM). This advanced form of artificial intelligence, whether known as ChatGPT, Gemini, or any other name, has enormous transformative power, enabling faster workflows, deeper insights, and smarter tools.
Trained with enormous volumes of data, LLMs not only surprise by generating human-like texts, but can also be applied to multiple use cases. “Although this growing adoption also comes with responsibility”asserts Fernando Anaya, country manager of Proofpoint for Spain and Portugal. “If something goes wrong, the consequences can be serious, from the exposure of sensitive data, the spread of harmful or misleading content, breaches of regulations and even the loss of trust in AI systems”.
Sometimes we forget that LLMs can be wrong being so convincing. Furthermore, the more you depend on them, the harder it becomes to question what they say. That is why it is essential that, in parallel with the construction of more intelligent models, a critical vision is maintained about what may be overlooked or omitted because of them.
Blind spots in AI and LLM security
“Traditional cybersecurity was not designed with LLMs in mindwhich opens up a new category of vulnerabilities to deal with”indicates Fernando Anaya, from Proofpoint. Far from producing predictable results, LLMs generate a dynamic and risky language that cannot be patched or audited in the same way as other systems. It is difficult to understand how they generate certain results, since LLMs operate as black boxes, making it difficult to detect potential problems such as instruction or prompt injection and data poisoning.
In addition to being clever with commands to manipulate LLMs, cybercriminals can guess what data a model has been trained with or cause it to connect to insecure APIs or third-party plugins. Another malicious tactic would be to overload models with long, repetitive prompts, which can slow down or crash AI services. However, large-scale social engineering phishing is currently the most frequently followed method by attackers, as LLMs facilitate the creation and distribution of credible messages that imitate legitimate communication for credential theft and data breaches.
“When it comes to a technology that evolves so quickly and powerfully, the challenge is also unique; and security measures must be solid to guarantee data protection and current regulations”explains the Proofpoint manager. The AI trend does not seem to be slowing down as LLMs become integrated into everyday tools for users such as Google Workspace and Microsoft 365, so defense needs to keep up and adapt to the pace in order to reveal any security blind spots.
Risks related to LLM are not a future concern
A couple of years ago, Samsung engineers pushed the company’s source code and internal information into ChatGPT to help them debug code and summarize notes. There was no malicious intent behind it, it was simply part of a routine task. However, since ChatGPT stores data entered by users to improve its performance, there were fears that it would be trade secrets will be leakedso after the incident, Samsung restricted the use of ChatGPT and created its own AI tools for internal use.
There is also the case of DeepSeek AI, the Chinese startup with a powerful and more accessible language model than others, but which stores user data on servers that can be accessed by the Chinese government, raising concerns about the privacy and security of that data.
When it comes to security with LLM, the first thing is limit shared data to what is strictly necessary and always review responses to avoid exposing confidential information. From a technical point of view, it is advisable to apply role-based access controls, customize security restrictions, and perform periodic audits and penetration tests that specifically consider the risks associated with LLMs.
“Traditional data security strategies must evolve to incorporate adaptive capabilities and intelligent response mechanisms adapted to the AI environment, authenticating users, preventing unauthorized access and continually evaluating each interaction. By doing this, LLMs will gain trust and the path to new ideas and innovations can be kept open.”points out Fernando Anaya.
