Artificial Intelligence has ceased to be a technology for the future and has become an everyday tool for the almost three million small and medium-sized companies (SMEs) in Spain. Automating tasks, analyzing data or improving customer service are already common uses in the day-to-day life of these companies. However, “an accelerated adoption of AI, without a clear security strategy, data governance or social awareness, is putting many of these companies at risk”explains Javier Tejada, co-president and head of Technology at the consulting firm h&k, specialized in comprehensive solutions for companies.
The data confirms this concern. According to the Bank of Spain, one in five Spanish companies already uses some AI system; The main one is the use of generative AI (18.1%), especially in smaller companies. But 60% of these companies still use AI in an experimental phase, without consolidated processes or advanced controls, which represents a danger for them.
In 2024, the National Cybersecurity Institute (INCIBE) managed more than 97,000 cybersecurity incidents, of which 31,000 directly affected companies, many of them SMEs, showing that digital risk is no longer theoretical, but real and growing. Given the growing use of AI in SMEs, the technology consulting firm h&k points out that many organizations are incorporating these technologies without a solid security foundation. And, based on his experience, he has identified five recurring dangers:
1. More credible and cheaper attacks
AI is making phishing attacks, in which cybercriminals impersonate legitimate entities such as banks to deceive users for financial gain, more credible. Thanks to AI, it is possible to generate emails, messages or even automated calls with perfect language, adapted to the context of the company, its sector or its clients, making their detection difficult. For SMEs, where verification processes tend to be more informal, these types of attacks increase the risk of fraud, credential theft or identity theft.
2. Messy data and poorly defined access that amplify AI risk
In many SMEs, critical information (documentation, emails, customer histories or financial data) is dispersed, duplicated and with permissions inherited for years. «By connecting Artificial Intelligence tools to these environments, the technology does not create the problem, but amplifies it. “AI accesses, processes and summarizes information that should not be available to certain users or uses.”concludes Javier Tejada.
3. Data leak due to ‘AI out of control’
This threat arises when AI tools are used without internal rules or oversight, typically through non-corporate applications or personal accounts. In this way, employees can inadvertently introduce internal documents, customer data, financial information or sensitive communications, losing the company control over where that data is stored or how that data is reused. «For SMEs, the absence of policies and technical controls turns this disorderly use of AI into a direct route for the leakage of critical information»emphasizes the co-president and head of Technology at h&k.
4. ‘Prompt injection’ and wizards that reveal internal information
«This scenario occurs when SMEs connect chatbots or agents to email, CRM or document repositories without an adequate security and permissions design. Through techniques such as prompt injection, an attacker can force the assistant to generate responses that include sensitive information, bypassing the intended restrictions.says Tejada. In addition, these systems can suggest or execute unwanted actions, such as modifying records or forwarding information, turning the assistant into a new point of internal data exposure.
5. Non-compliance and penalties for personal data (GDPR) when using AI
The use of AI tools without proper guarantees can lead SMEs to breach the GDPR without being aware of it. Entering customer or employee data into systems that use this technology, reusing content generated without control or training models with internal information can involve unauthorized processing, loss of data control and economic sanctions, in addition to a significant reputational risk for organizations without advanced reporting structures. compliance.
