In recent years, Entelgy has positioned itself as one of the technology consultancies that have best understood that AI is not just about models and APIs, but about people, culture and governance. Under the motto “Human driven technology”, its value proposition rests on a very conscious balance between advanced capabilities in AI, cloud and data, and change management that puts real adoption at the center.
In this context, Alfredo Zurdo, Head of Digital Change at Entelgy, has become one of the most interesting voices for understanding what it really means to deploy AI in complex organizations.
In this conversation with MCPRO, Alfredo Zurdo addresses without euphemisms the impact of the European AI Regulation (RIA) on Spanish companies, making it clear that it marks the end of “AI without governance” and that treating it as a simple legal issue is a sure recipe for failure.
From his perspective, AI governance is, above all, a profound cultural change: it is not enough to create committees or appoint a Chief AI Officer if the organization continues to operate in “let’s see if it works” mode. Culture, he reminds us, “eats compliance for dessert,” and companies that do not understand this in time will end up managing crises, not competitive advantages.
The interview also tackles some of the most uncomfortable debates around generative AI and autonomous agents: who signs off when an algorithmic decision causes harm, how to incorporate sustainability (the carbon footprint of the models) into corporate governance, and what “AI literacy” really means when the RIA requires it for profiles as different as a plant operator and a CFO.
(MCPRO) How many Spanish companies really understand that the European AI Regulation (RIA) is not just a legal issue, but marks the end of “AI without governance”? From Digital Change, how do you see companies addressing “AI governance” as a cultural and organizational change rather than just a matter of legal compliance? Are they creating Chief AI Officers, establishing internal policies, or simply waiting for a fine to arrive?
(Alfredo Zurdo) Let’s be honest: most Spanish companies continue to treat the RIA as a PDF to be read “whenever there’s time.” What I see from Entelgy is a worrying polarization: companies creating Chief AI Officers and ethics committees, versus a silent majority operating in “let’s see if it works” mode, waiting for the first exemplary fine.
AI governance is not a department or a document, it is a profound cultural change. Culture eats strategy for breakfast… and compliance for dessert. Companies that understand this now will have a competitive advantage; those that don’t will have very busy lawyers.
(MCPRO) How do we convince an organization that implementing AI is a people and culture issue first, not a technology issue? What is the biggest mistake you see in companies: skipping the literacy and adoption phase? And how do you convince a CEO that investing in “change” (workshops, mentoring, communication, change leadership) is as critical as investing in the tool itself?
(Alfredo Zurdo) According to McKinsey, 70% of digital transformations fail. And it is not because of the technology, but because no one prepared people for the change. The most common mistake is what I call “new toy syndrome”: buying the shiniest tool, deploying it with great fanfare, and discovering three months later that no one uses it well.
How do I convince a CEO? I ask them: “What good is a Ferrari if your team doesn’t know how to drive?” Investing in change management is not a “soft” expense; it is the insurance policy that ensures your technology investment generates real value.
(MCPRO) How does a company manage ethical and legal responsibility when it has autonomous AI agents making decisions, if it has not yet established who is “responsible for deployment”? From your position as a change management expert, how do you guide an organization on who is responsible (legally, operationally and ethically) when an AI agent makes a decision about approving credit, suspending access to data, or generating a flawed report that then causes harm? Is this a gap that the RIA has not yet fully closed?
(Alfredo Zurdo) This is the million-dollar question, literally. Many companies deploy AI agents without first answering: “If this fails, who signs?” The RIA does define clear roles (provider and deployer), and Article 26 details the deployer’s obligations. The legal framework exists, but its practical application across complex chains remains shaky ground.
That’s why we insist on going beyond compliance: ethics committees with real power, human oversight of critical decisions, and documenting everything. Traceability is not bureaucracy; it is what allows you to sleep peacefully when a client asks, “Why did you deny me credit?”
(MCPRO) If some models generate 50x more emissions than others depending on their complexity, why are hardly any Spanish companies measuring the “AI carbon footprint” of their operations, and how does that fit into corporate governance? From Entelgy, how do you see “AI sustainability” entering (or not entering) the digital transformation agenda? Should it be a criterion for tool selection?
(Alfredo Zurdo) There is a fascinating contradiction here: companies with impeccable sustainability reports deploying AI models without knowing their carbon footprint. According to the University of Munich, some models generate up to 50 times more emissions than others. How many Spanish companies include energy consumption as a selection criterion? Practically none. The good news: techniques such as quantization already allow us to reduce consumption significantly.
Sustainability must enter AI governance: choose appropriate models, optimize prompts, avoid unnecessary queries. It’s not about stopping using AI; it’s about using it wisely.
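By way of illustration of that “choose appropriate models” point, here is a minimal Python sketch (not part of the interview, and not Entelgy tooling) that loads an open language model with 4-bit quantization via Hugging Face transformers and bitsandbytes. The model name and settings are illustrative assumptions; this kind of configuration typically cuts the memory and energy cost per query compared with full-precision inference.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative model choice; swap in whichever open model your governance policy approves.
model_id = "mistralai/Mistral-7B-Instruct-v0.2"

# 4-bit NF4 quantization: weights stored in 4 bits, computation done in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)

prompt = "Summarize our internal AI usage policy in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```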
(MCPRO) The RIA mandates “AI literacy” for everyone, but what exactly does “literacy” mean? For a factory operator, is it the same as for a CFO?
(Alfredo Zurdo) Here’s the elephant in the room: the RIA requires literacy from February 2025, but doesn’t define what exactly that means. What is clear is what it should NOT be: a generic two-hour PowerPoint that is the same for everyone. An operator needs to know when to trust the automated system and when to yell “stop.” A CFO needs to understand why “what the AI says” is not a valid legal defense.
At Entelgy we call it stratified literacy: each person knowing what they can and cannot do with these tools in their own context. Because we are already seeing the alternative, “shadow AI,” and it is not pretty.
