NOS News
How do you get people with different beliefs to agree? It is an age-old problem, one that seems all the more topical in today's polarized world. Perhaps artificial intelligence can help: a new AI tool helps people with differing opinions find common ground.
The tool, developed by Google DeepMind, produces a summary of a discussion that reflects the group's common position without overlooking minority views. Participants found the AI's summaries more informative, clearer and less biased than those of human mediators. The research has been published in the scientific journal Science.
“We wanted to investigate whether AI can be used to improve collective decision-making,” says Michiel Bakker, who studied quantum computing at TU Delft and is one of the leaders of the research. He emphasizes that such processes between people often proceed slowly. There is also a danger that not all points of view are adequately represented, especially as groups grow larger.
“This is a wonderful application of large language models,” says Eric Postma, professor of artificial intelligence at Tilburg University, who was not involved in the study. “This is the future: overcoming human shortcomings.”
Postma is also positive about how the research was conducted. “For example, in a previous study, AI wrote policy proposals that both Democrats and Republicans could agree with. And there have been a number of other studies. But this may be the best yet, partly thanks to the deep pockets and computing power of DeepMind.”
The study was conducted with small groups of five people. Bakker is optimistic about future possibilities: “This study shows the potential of AI to help groups of people find common ground,” he says.
He sees possible applications in politics, where the tool could make it easier to reach agreement between different parties. Google DeepMind itself is keeping quiet about possible political applications, perhaps in part because the American elections are just around the corner.
Brainstorming
Another possibility is application within organizations. “The model reaches the human level; in fact, it is even better,” says Postma. “That can be a big advantage in brainstorming sessions. It sometimes comes up with more original solutions than humans do. And yes, sometimes also with nonsense, as we know.”
Postma does not think the new tool is suitable for taking the sting out of discussions on social media, because finding agreement is not the goal there. “People don’t want to agree with each other there at all.”
Biased information
In addition, the new AI model still has a number of limitations that matter for real-world use. For example, it does not yet do fact-checking, so the compromise it produces may not correspond to the facts. Nor does it check whether participants stay on topic or wander off.
There is also the danger that the AI will provide biased information or exhibit certain political preferences. But Postma is less worried about that. “I’m more concerned about human bias.”