Some scientists are building a program that can do what your parents can’t: get you all to stop arguing already.


A computer mediator shows two bickering brothers where they agree. (Illustration by News Decoder)

This article was produced exclusively for News Decoder’s global news service. It is through articles like this that News Decoder strives to provide context to complex global events and issues and teach global awareness through the lens of journalism. Learn how you can incorporate our resources and services into your classroom or educational program. 

Writer and attorney Julie Shields was living in England when she became so concerned about the lack of dialogue in the United States that she moved back there in 2024. 

“I was worried about our country,” she said. 

She and her friends created an organization called Kitchen Table Talk — focus groups that hold informal discussions to increase dialogue between different kinds of people.

But no Republican had agreed to participate leading up to the November 2024 U.S. presidential election. Now, after the election, a few far-right Trump supporters are trickling into discussion groups. But progress is slow, she said.

When members of two polarized political parties demonize each other to gain the upper hand in an election, staying open to dialogue after the election can be challenging.

Finding ways to get back together

The frequency and intensity of arguments have become so intolerable for some families that they are no longer celebrating holidays together.

And it isn’t just families being affected by rifts in the U.S. political discourse. Journalist Anna Russell wrote in The New Yorker magazine about a young woman from Alberta, Canada, who said she cut ties with her family partly because of frequent disagreements over then-U.S. presidential candidate Donald Trump and U.S. Supreme Court Justice Brett Kavanaugh.

The polarization is ripping families apart, and has been since Trump entered the political arena. Its dangers were clear to social psychologist Jonathan Haidt as long ago as 2016, when Trump first ran for president, when he said of the United States in a TED podcast: “Our left-right divide is the most important divide we face.”

A team of researchers at Google DeepMind in London, England, is trying to address this problem with artificial intelligence. According to a study published in Science in October 2024, the team trained a large language model called the Habermas Machine to take people’s opposing opinions as input and spit out widely agreeable statements on different political issues, including Brexit, immigration, the minimum wage, universal child care and climate change.

The machine is named for Jürgen Habermas, the German philosopher whose theory of communicative action holds that oral language and co-ordination of social action go hand-in-hand.

Mediating through artificial intelligence

What would happen if an AI machine were made available to help people like those in Shields’ groups see eye-to-eye?

AI models are neural networks: code that, in response to input data, produces output that is coherent with that input. It’s just like what happens in a conversation, says neuroscientist Christopher Summerfield, one author of the Science study.

“It is like when two people have a conversation whereby one person says something and the other one responds with information that somehow links to what the first person said,” Summerfield said. 

This function is called data mapping, and the best AI models improve significantly at it with practice.
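That improvement with practice is what machine learning researchers call training. As a toy illustration — not the study’s code, just a sketch of the principle — even a one-parameter model can learn a simple input-to-output mapping by nudging its parameter whenever its answer misses the target:

```python
# Toy example: a one-parameter "model" learns the mapping y = 3x by
# gradient descent. Real language models do the same thing with billions
# of parameters, but the principle is identical: practice on examples
# makes the input-to-output mapping better.

def train(samples, steps=200, lr=0.01):
    w = 0.0  # the model's single parameter, starting from ignorance
    for _ in range(steps):
        for x, y in samples:
            pred = w * x         # model's current answer
            error = pred - y     # how far off it was
            w -= lr * error * x  # nudge the parameter to reduce the error
    return w

data = [(1, 3), (2, 6), (3, 9)]  # examples of the mapping y = 3x
w = train(data)
print(round(w, 2))  # close to 3.0 after practice
```

After a couple of hundred passes over the examples, the parameter settles near 3, and the model has “learned” the mapping from its practice data.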

Summerfield said the Habermas Machine was trained to take as input the private opinions of thousands of people on a particular issue and to output a statement summarizing a position that represented the majority of opinions. Individuals then rated this group statement, and that feedback was used to tweak the model’s code to improve its performance.

Dialogue facilitation

One characteristic of a good dialogue facilitator is ensuring all participants feel included. Summerfield said that the Habermas Machine is able to do this by counteracting the “tyranny of the majority” effect, a phenomenon in which the rights or needs of minority groups are sidelined in favor of the preferences of the majority.

Summerfield’s study demonstrates that even on a large scale, where thousands of opinions are being weighed, AI can amplify minority opinions so that their voice does not get drowned out by the majority.
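One simple way to keep a minority from being drowned out is to weight each opinion inversely to the size of the camp that holds it. This is an illustrative scheme, not the study’s actual method, but it shows how an aggregator can make a lone dissenter register as loudly as a large bloc:

```python
from collections import Counter

# Illustrative scheme (not the study's method): weight each stated
# position inversely to how many people hold it, so every camp gets
# the same total weight in the aggregate regardless of its size.

def position_weights(positions):
    counts = Counter(positions)
    # Each camp's total weight is 1.0, split among its members.
    return {p: 1 / counts[p] for p in counts}

positions = ["for", "for", "for", "for", "against"]
weights = position_weights(positions)
# The lone dissenter carries as much total weight (1 x 1.0)
# as the four-person majority combined (4 x 0.25).
print(weights)  # {'for': 0.25, 'against': 1.0}
```

A real system would balance such weighting against simple majority rule, but the sketch captures the idea of deliberately turning up the volume on minority voices.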

A second characteristic of a good dialogue facilitator is making people feel not just heard but also that their point of view is appreciated.

In 2023, in the multidisciplinary science journal Proceedings of the National Academy of Sciences of the United States of America (PNAS), researcher Lisa Argyle and her team from across the United States described how they used a large language model in the form of an AI chat assistant to make evidence-based, real-time recommendations meant to improve participants’ perception of feeling understood. 

Though people did not change their positions on the chosen topic of gun control, the availability of an optional AI assistant simulated the role a trained human moderator would play but with important advantages, such as intervening before someone said something that soured the tone of the conversation.

Can dialoguing through AI be dangerous?

What are the downsides of using AI as a moderator? Even Summerfield says: “All technology can be used for good or for bad.”

And this is the problem that Regula Hänggli, professor in political communication at the University of Fribourg in Switzerland, has with the idea of quickly inserting AI into the arena of political dialogue. 

An expert on digital democracy, Hänggli said that the basic question is whether AI and its algorithms should be used to influence citizens’ lives. She said that people should first ask: Do we (the people) want this?

There needs to be an established ethical way to regulate the use of AI in the public sphere. Research centers are looking into ways that humans and AI can work effectively together so AI is not given too much power.

The MIT Center for Collective Intelligence probed the contexts in which humans and AI complement one another, and in October 2024 published its findings in the journal Nature Human Behaviour. 

It showed that because writing text is more of a creative task than an analytical one, there is benefit in humans engaging with AI. Innovative language thrives on the human ability to apply context and to spot nuances, while AI can handle the background research, data analysis and pattern recognition. 

As Shields found with her Kitchen Table Talks, the best dialogue, which relies on the strength of language, requires human participation. “People need to be willing to come to the table to engage,” Shields said. 

Three questions to consider:

  1. What role does a mediator play in helping people resolve differences?
  2. How can AI play the role of a mediator in disputes?
  3. In what ways do you think a human would be better or worse than AI as a mediator?
Karolina Krakowiak

Julia Yarkoni is a family medicine physician living in Israel who is passionate about bringing to light the effect of current events on family health and relationships. She is a fellow in global journalism at the Dalla Lana School of Public Health at the University of Toronto.

Can artificial intelligence bridge real political divides?