AI mimicked human communication in this fascinating study

News Room · Published 15 May 2025 · Last updated 15 May 2025, 10:53 PM

Since ChatGPT went viral in late 2022, we have seen plenty of research into how AI models behave. Researchers wanted to see how they operate, whether they cheat on tasks, and whether they lie for self-preservation.

Such studies are as important as the research into creating better, smarter models. We can't safely build more advanced versions of artificial intelligence before we understand the current ones well enough to ensure they remain aligned with our interests.

Most of these studies involve experiments on one AI model at a time. But we've reached a point where human-AI interaction will no longer be the only kind of interaction involving artificial intelligence.

We’re in the early days of AI agents: more advanced ChatGPT and Gemini models that can do things for users, like browsing the web, shopping online, and coding. Inevitably, these AIs will end up meeting other AI agents, and they will have to socialize safely.

That was the premise of a new study from City St George’s, University of London, and the IT University of Copenhagen. Different AIs will inevitably interact, and the researchers wanted to see how those interactions would unfold.

They devised a simple game that mimics human speed dating. Multiple AIs were given a simple task: to choose a common single-letter name. It took the AIs only about 15 rounds to reach a consensus, whether the experiment involved 24 agents or as many as 200, and whether they could choose from 10 letters or the full alphabet.

The “speed-dating” game was pretty simple. Two AIs were paired and told to pick a letter as a name. When both agents picked the same letter, each earned 100 points. Each lost 50 points if they picked different letters.

Once the first round was over, the AIs were re-paired, and the game continued. Crucially, each model could only remember its last five interactions. By round 6, therefore, an agent would no longer remember the letter chosen in round 1.
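
To make the setup concrete, here is a minimal toy simulation of the naming game. It replaces the study's LLMs with simple scripted agents; the payoffs, memory window, and letter pool follow the article's description, but the decision heuristic (repeat what has worked, otherwise copy the last partner) is my own assumption, not the researchers' method.

```python
import random
from collections import deque

LETTERS = list("ABCDEFGHIJ")  # the 10-letter pool from one study condition
MEMORY = 5                    # agents remember only their last 5 interactions
REWARD, PENALTY = 100, -50    # payoffs described in the article

class Agent:
    def __init__(self):
        # Each memory entry: (my_letter, partner_letter, payoff).
        self.memory = deque(maxlen=MEMORY)

    def pick(self):
        # Heuristic (my assumption): repeat the letter with the best
        # remembered total payoff; if nothing has worked yet, copy the
        # most recent partner's letter so conventions can spread.
        if not self.memory:
            return random.choice(LETTERS)
        totals = {}
        for mine, _, payoff in self.memory:
            totals[mine] = totals.get(mine, 0) + payoff
        best = max(totals, key=totals.get)
        return best if totals[best] > 0 else self.memory[-1][1]

def play_round(agents):
    # Random pairing each round, as in the speed-dating setup.
    random.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        la, lb = a.pick(), b.pick()
        payoff = REWARD if la == lb else PENALTY
        a.memory.append((la, lb, payoff))
        b.memory.append((lb, la, payoff))

def consensus(agents):
    picks = [a.pick() for a in agents]
    top = max(set(picks), key=picks.count)
    return top, picks.count(top) / len(picks)

agents = [Agent() for _ in range(24)]
for rnd in range(1, 51):
    play_round(agents)
    name, share = consensus(agents)
    if share == 1.0:
        print(f"All agents settled on {name!r} by round {rnd}")
        break
else:
    print(f"No full consensus; {name!r} leads at {share:.0%}")
```

In quick runs, even this crude rule typically converges on one letter within a few dozen rounds; the striking part of the study is that LLM agents coordinated similarly without any global instruction to do so.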

The researchers found that by round 15, the AIs would settle on a common name, much like humans settle on communication and social norms. Speaking to The Guardian, the study’s senior author, City St George’s professor Andrea Baronchelli, gave a great example of a human social norm we’ve recently established by consensus.

“It’s like the term ‘spam’. No one formally defined it, but through repeated coordination efforts, it became the universal label for unwanted email,” the professor said. He also explained that the AI agents in the study are not trying to copy a leader. Instead, each agent coordinates only within its current pair, the one-on-one date, where the two try to come up with the same name.

That AI agents eventually coordinate wasn’t the study’s only conclusion. The researchers also found that the models formed biases. While restricting names to single letters was meant to keep choices close to random, some models gravitated towards certain letters. This, too, mimics the biases we humans carry into everyday life, including our communication and social norms.

Even more interesting is the ability of a small group of determined AI agents to eventually convince the larger group to adopt the smaller group’s letter “name.”

This is also relevant for human social interactions and shows how minorities might often sway public opinion once their beliefs reach critical mass.
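
Reusing the Agent, play_round, and consensus helpers from the earlier sketch, a committed minority is easy to model: agents that never change their pick. The bloc size below is arbitrary, chosen for illustration; the study measured actual critical-mass thresholds, which this toy does not reproduce.

```python
class CommittedAgent(Agent):
    """An agent that insists on one letter no matter the payoffs."""
    def __init__(self, letter):
        super().__init__()
        self.letter = letter

    def pick(self):
        return self.letter

# 20 adaptive agents plus a committed bloc of 4 pushing "J"
# (an arbitrary size, not the study's measured threshold).
agents = [Agent() for _ in range(20)] + [CommittedAgent("J") for _ in range(4)]
for _ in range(30):
    play_round(agents)
name, share = consensus(agents)
print(f"Leading name {name!r} held by {share:.0%} of agents")
```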

These conclusions are especially important for AI safety and, ultimately, for our safety.

In real life, AI agents interact with each other for different purposes. Imagine your AI agent wants to make a purchase from my online store, where my AI agent acts as the seller. Both of us will want everything to be secure and fast. But if one of our agents misbehaves and somehow corrupts the other, whether by design or accident, this can lead to a slew of unwanted results for at least one of the parties involved.

The more AI agents are involved in any sort of social interaction, each acting on a different person’s behalf, the more important it is for all of them to continue to behave safely while communicating with each other. The speed-dating experiment suggests that malicious AI agents with strong opinions could eventually sway a majority of others.

Imagine a social network populated by humans and attacked by an organized army of AI profiles tasked with proliferating a specific message. Say, a nation state is trying to sway public opinion with the help of bot profiles on social networks. A strong, uniform message that rogue AIs would continue to disseminate would eventually reach regular AI models that people use for various tasks, which might then echo those messages, unaware they’re being manipulated.

This is just speculation from this AI observer, of course.

Also, as with any study, there are limitations. For this experiment, the AIs were given specific rewards and penalties. They had a direct motivation to reach a consensus as fast as possible. That might not happen as easily in real-life interactions between AI agents.

Finally, the researchers used only models from Meta (Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct) and Anthropic (Claude-3.5-Sonnet). It’s unclear how their specific training shaped their behavior in this social experiment, or what would happen if other models joined the speed-dating game.

Interestingly, the older Llama 2 version needed more than 15 dates to reach a consensus. It also required a larger minority to overturn an established name.

The full, peer-reviewed study is available in Science Advances.
