Artificial intelligence systems, whose adoption at both the business and individual level is growing rapidly, are not neutral. They carry biases that reflect both the data their models are trained on and the decisions made by the people who design these systems. That is why it is necessary to understand and address the algorithmic biases that affect information, decision making and digital rights.
This is stressed by UDIT, whose leadership also warns of the risks of hyperconnectivity and the increasingly relevant role of AI in everyday life. They point out that when the data used to train AI systems are marked by historical inequalities or cultural prejudices, the resulting models can amplify these distortions, producing discriminatory results that can perpetuate stereotypes and even render certain groups invisible.
That is why it is necessary to identify the most common biases to which users of AI systems are exposed. Only then will they have the information and tools needed to avoid their consequences. There are six main ones: lack of diversity in the results, repetitive or extreme results, inequality of treatment or access, lack of transparency, algorithmic confirmation bias and automation bias.
Main biases that reveal the prejudices of AI
When algorithms give priority to profiles, products or ideas that always fit the same pattern (white people, young people, men or a specific cultural context), it is a sign that the training data of the system's model lack diversity of representation.
This lack of diversity in the training data can exclude various groups, as well as limiting access to resources and opportunities for those who do not fit the predominant profile used to train the system.
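One simple way to make this risk visible is to audit how the different groups are represented in a training set before the model is built. The following Python sketch is purely illustrative (the dataset and the `representation_report` helper are invented for this example); it only counts the share of each demographic value, which is the kind of skew a model trained on such data is likely to reproduce.

```python
from collections import Counter

def representation_report(records, attribute):
    """Share of each value of a demographic attribute in a training set."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy training sample: most records share the same profile.
training_data = [
    {"age_group": "18-30", "gender": "male"},
    {"age_group": "18-30", "gender": "male"},
    {"age_group": "18-30", "gender": "male"},
    {"age_group": "45-60", "gender": "female"},
]

print(representation_report(training_data, "gender"))
# {'male': 0.75, 'female': 0.25} -> a skew the trained system may amplify
```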
As for inequality of treatment, on some digital services or e-commerce platforms the algorithms can produce different results depending on the user's profile. This leads to price variations, personalized recommendations or longer waiting times for some users than for others. These segmentations were originally designed for commercial purposes, but they can discriminate against certain groups depending on their geographical location, socioeconomic level or activity on the platform.
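A minimal way to check for this kind of unequal treatment is to compare the outcomes a platform produces for different user segments. The sketch below is a hypothetical Python example (the log format, prices and segment labels are assumptions made for illustration); it simply compares the average price shown to each segment, where a persistent gap can signal discriminatory behaviour.

```python
from statistics import mean

def average_price_by_segment(offers):
    """Compare the average price shown to each user segment."""
    segments = {}
    for offer in offers:
        segments.setdefault(offer["segment"], []).append(offer["price"])
    return {segment: mean(prices) for segment, prices in segments.items()}

# Toy log of prices shown to users, grouped by an inferred segment.
offers_shown = [
    {"segment": "urban_high_income", "price": 49.0},
    {"segment": "urban_high_income", "price": 51.0},
    {"segment": "rural_low_income", "price": 58.0},
    {"segment": "rural_low_income", "price": 60.0},
]

print(average_price_by_segment(offers_shown))
# {'urban_high_income': 50.0, 'rural_low_income': 59.0}
```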
We must also take into account, as we have seen, that many AI-based systems work as black boxes, without giving users a clear and detailed explanation of how they reach the decisions that can affect them. This is problematic when they are used in sensitive economic or employment processes (loan approval, personnel selection, etc.). In these cases, the lack of information complicates both accountability and the review of the processes and procedures followed to reach a decision.
In relation to algorithmic confirmation bias, we must bear in mind that algorithms learn from our past behaviour: what we buy, what we read and what we watch. They therefore know what interests us, so when we use an AI-powered service it tends to show us elements and content that we will like.
In this way they reinforce what we already believe and make us feel that a service or system offers just what we like or need, while access to new information and content is limited. Moreover, in some circumstances the diversity of perspectives and points of view on a topic we are researching can be reduced. This also creates barriers that prevent Internet users from forming a critical opinion.
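To make the mechanism concrete, the following Python sketch (entirely hypothetical, with invented item and category names) shows how a recommender that draws only on past behaviour ends up narrowing what a user is exposed to.

```python
def recommend(user_history, catalogue, top_n=3):
    """Naive recommender: only suggests items in categories the user already consumed."""
    liked_categories = {item["category"] for item in user_history}
    matches = [item for item in catalogue if item["category"] in liked_categories]
    return matches[:top_n]

catalogue = [
    {"title": "Tech opinion piece", "category": "technology"},
    {"title": "Another tech opinion piece", "category": "technology"},
    {"title": "Science explainer", "category": "science"},
    {"title": "International politics report", "category": "politics"},
]

history = [{"title": "Gadget review", "category": "technology"}]

print([item["title"] for item in recommend(history, catalogue)])
# Only technology items are ever shown, so the user's view keeps narrowing.
```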
Finally, automation bias shapes the perception we may have of the infallibility of AI-based systems. Both ordinary users and professionals who turn to these systems for support in their work can mistakenly believe that they are infallible, and accept their answers and results without questioning or checking their veracity. As a consequence, those who do not verify the responses of these systems in any way, and simply take them as valid, can make erroneous decisions based on information that is not true.
Sandra Garrido, coordinator of the UDIT Technology Area, recalls that we are surrounded by all kinds of devices, platforms and services that constantly collect data from our clicks, playbacks or location, often without us realizing it. She points out that these data are then fed to personalization algorithms used to decide «what we read, what we see and even what we think».
For Garrido, this extreme personalization to which we are exposed conditions our decisions, and it can also «strengthen digital bubbles, feed polarization and spread disinformation. This happens thanks to an invisible infrastructure that also supports another great phenomenon of our time: artificial intelligence». She recalls that, in this scenario, there are tools users can turn to for a more conscious and critical use of technology, such as digital education.