In early August, the sanctioning regime of the European Artificial Intelligence Regulation (RIA) entered into force, which means that companies failing to comply with the rules on the development, deployment and use of AI systems may face significant financial penalties, in addition to various restrictive measures. Fines can reach 35 million euros, or 7% of global annual turnover.
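For the most serious infringements, the AI Act's cap is generally understood as the higher of the two amounts (the fixed sum or the turnover percentage). A minimal sketch of that rule; the function name and figures used in the examples are illustrative:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    35 million euros or 7% of global annual turnover, whichever is higher
    (assumed reading of the AI Act's penalty ceiling)."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Turnover of 1 billion euros: 7% is 70M, which exceeds the 35M floor
print(max_fine_eur(1_000_000_000))  # 70000000.0

# Turnover of 100 million euros: 7% is only 7M, so the 35M floor applies
print(max_fine_eur(100_000_000))    # 35000000.0
```

For large multinationals the percentage branch dominates, which is why the turnover-based figure is the one usually highlighted.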
However, skepticism about the regulation is widespread in Spanish society, and it reveals a high level of concern about data privacy. According to the latest survey on citizens' perception of AI by the technology consultancy Entelgy, only 8.8% of respondents believe that current AI regulation is strict enough.
This distrust is not limited to the regulations themselves. Citizens are not convinced by the authorities responsible for the security and adequate development of AI: 88.6% of survey participants believe that institutions do not convey the necessary confidence regarding the control and supervision of AI. This reflects a gap between society's expectations and institutional action.
Added to this is a low level of awareness of the regulations in force. Only 11.4% of the citizens surveyed say they are familiar with the current rules on AI. The percentage is somewhat higher among people aged 18 to 29, reaching 19.3%.
This lack of trust is compounded by widespread concern about privacy: 80% of citizens worry that an AI system could collect their personal information without sufficient safeguards. Concern is especially high among people aged 20 to 49 (81.4%) and those over 50 (81%).
Since August 2, the practices subject to sanction under the European Artificial Intelligence Regulation in Spain are those considered to pose an unacceptable risk to people's fundamental rights and freedoms: for example, subliminal or deceptive manipulation, the exploitation of vulnerabilities, "social scoring," mass facial recognition, emotion analysis in workplace and educational settings, biometric categorization, and criminal prediction.
In addition, to avoid sanctions related to transparency and the data used to train AI models, companies must ensure that their AI systems comply with European regulations, guaranteeing transparency, adequate technical documentation, and human oversight where necessary.
They will have to clearly inform users when they are interacting with an AI and actively cooperate with the Spanish Agency for the Supervision of Artificial Intelligence (AESIA). It will also be key to rigorously review the general-purpose models they integrate into their services, incorporating safeguards to avoid legal risks.
Entelgy notes that «the entry into force of the sanctioning regime of the European Artificial Intelligence Regulation represents a fundamental step toward ensuring that AI is developed and used safely, ethically and responsibly. Citizens' low level of awareness of current regulations makes it difficult for institutions to build the trust needed to protect their rights against these technologies, and demands an additional effort in transparency and training».