The Government has approved the draft artificial intelligence governance law, with which it seeks to guarantee a use of AI that is ethical, inclusive and beneficial for people. This is the normative instrument that will adapt Spanish legislation to the European AI Regulation, which is already in force and takes a regulatory approach that drives innovation.
The draft will be processed urgently: it will now follow the necessary procedures before returning to the Council of Ministers for definitive approval as a bill. After that, it will be sent to the Cortes (the Spanish parliament) for approval.
This regulation prohibits certain malicious uses of AI and introduces more rigorous obligations for high-risk systems; for the rest, it establishes minimal transparency requirements. It also incorporates a new power allowing the competent surveillance authority to provisionally withdraw AI systems from the Spanish market when they have caused a serious incident.
Prohibited practices entered into force on February 2, and from August 2 they can be sanctioned with fines and other additional measures: for example, a system may be required to be adapted to comply, or it may be prevented from being marketed. The sanctioning regime incorporated in the draft law will apply in each case, within the ranges established by the European regulation.
Prohibited practices include the use of subliminal techniques, such as imperceptible images or sounds, to manipulate decisions without consent, causing serious harm to the person, such as addictions, gender-based violence or the undermining of their autonomy. Also prohibited is exploiting vulnerabilities related to age, disability or socioeconomic situation to significantly alter behavior in a way that causes, or can cause, serious harm.
Likewise, the biometric classification of people by race or by political, religious or sexual orientation is prohibited, as is scoring individuals or groups based on social behavior or personal traits as a selection criterion. An example is the use of AI classification and scoring systems to deny loans or subsidies.
Nor may AI be used to assess the risk that a person will commit a crime based on personal data or on their family history, educational level or place of residence, with legal exceptions. Nor may it be used to infer emotions in workplaces or educational centers as an evaluation method for promotion or dismissal, unless medical or safety reasons apply.
The sanctions for these types of systems range between 7.5 and 35 million euros, or between 2% and 7% of the company's worldwide turnover in the previous year, if the latter figure is higher. The exception is SMEs, for which the sanction may be the lesser of the two amounts.
High-risk AI systems are the following: all those that can serve as safety components of industrial products (machinery or lifts, among others), toys, radio equipment, medical devices, and transport products and vehicles. Also systems in the following areas: biometrics, critical infrastructure, education and vocational training, employment, access to essential private services (such as credit or insurance systems) and to essential public services and benefits, as well as the enjoyment of those services and benefits.
Likewise, AI systems developed for law enforcement, for migration, asylum and border control management, and those designed for use in the administration of justice and in democratic processes will be considered high-risk.
These systems must, among other obligations, have a risk management system, human oversight, technical documentation, data governance, record keeping, transparency, communication of information to deployers, and a quality management system. Failure to comply with one or more of these obligations can be sanctioned with fines that depend on the severity of the infraction.
Very serious infractions include, among others, failing to report a serious incident and failing to comply with the orders of a market surveillance authority. In such cases the sanctions range between 7.5 and 15 million euros, or between 2% and 3% of the company's worldwide turnover in the previous year.
Serious infractions, such as not having human oversight of an AI system that uses biometrics in the workplace to control attendance, or not having a quality management system in AI-equipped robots, will be sanctioned with between 0.5 and 7.5 million euros, or between 1% and 2% of worldwide turnover.
It will also be a serious infraction to fail to correctly label an image, audio or video generated or manipulated with AI that shows real or fictitious people saying or doing things they never did, or in places they have never been: that is, the use of unlabeled deepfakes. Such content must be clearly and distinguishably identified as AI-generated, no later than at the first exposure or interaction.
Minor infractions include failing to affix the CE marking to a high-risk AI system or, where that is not possible, to its packaging or accompanying documentation, to indicate conformity with the AI Regulation.
In addition, it should be borne in mind that as of August 2, 2026, the European Regulation requires member states to establish at least one controlled AI testing environment (regulatory sandbox) that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before they are marketed or put into service, under an agreement between the providers and the competent authority.
The authorities responsible for supervising prohibited systems are the Spanish Data Protection Agency for biometric systems and border management, the General Council of the Judiciary for AI systems in the field of justice, the Central Electoral Board for systems used in electoral processes, and AESIA (the Spanish Agency for the Supervision of Artificial Intelligence) in all other cases.