Dynamics of AI Bot Societies: Cliques, Extremes, and Social Similarities
A team of researchers reveals the toxic results of placing AI bots on a social platform.
The chatbots split into cliques and boosted the most partisan voices. A handful of "influencers" also quickly dominated the conversation, according to a study published last Tuesday by researchers at the University of Amsterdam.
Social Simulation
The researchers built a minimal social network with no ads, no recommended posts, and no algorithm deciding what users see. They then populated it with 500 chatbots powered by OpenAI's GPT-4o mini, each assigned a distinct persona, including specific political leanings.
The personas were drawn from the American National Election Studies dataset, and reflected "real-world distributions of age, gender, income, education, partisanship, ideology, religion, and personal interests," the researchers said.
They added that replicating the experiment with Llama-3.2-8B and DeepSeek-R1 as the user models produced "the same qualitative patterns."
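As an illustration, persona-conditioned agents of this kind are usually built by sampling demographic attributes and folding them into a system prompt. The sketch below is hypothetical: the field names, the uniform sampling, and the prompt wording are assumptions, not the study's actual pipeline, which drew its marginals from the ANES data.

```python
import random

# Hypothetical persona fields mirroring the ANES-derived attributes the
# study describes. Uniform sampling here is a placeholder for the
# real-world marginal distributions the researchers used.
PARTISANSHIP = ["strong Democrat", "lean Democrat", "independent",
                "lean Republican", "strong Republican"]

def sample_persona(rng: random.Random) -> dict:
    """Draw one agent persona (illustrative attributes only)."""
    return {
        "age": rng.randint(18, 80),
        "gender": rng.choice(["woman", "man"]),
        "partisanship": rng.choice(PARTISANSHIP),
        "interests": rng.sample(
            ["sports", "cooking", "politics", "music", "gardening"], k=2),
    }

def persona_system_prompt(p: dict) -> str:
    """Fold a persona into a system prompt for the chat model."""
    return (f"You are a {p['age']}-year-old {p['gender']}, a "
            f"{p['partisanship']}, interested in {' and '.join(p['interests'])}. "
            "You are using a social network. Stay in character.")

rng = random.Random(42)
agents = [sample_persona(rng) for _ in range(500)]
print(persona_system_prompt(agents[0]))
```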
The study was led by Dr. Petter Törnberg, an assistant professor of computational social science at the University of Amsterdam, and Maik Larooij, a research engineer at the university.
Uncanny Results
Even without recommendation algorithms or human users, the same toxic patterns emerged
Over the course of five separate experiments — each running over 10,000 actions — the bots were free to post, follow, and repost. What happened looked a lot like real-world social media.
The study found that the chatbots gravitated toward others who shared their political beliefs, forming tight echo chambers. Partisan voices gained an outsize share of attention, with the most extreme posts attracting the most followers and reposts. Over time, a small group of bots came to dominate the conversation, much like the influencer-heavy dynamics seen on platforms like X and Instagram.
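The two headline dynamics lend themselves to simple measurements: political homophily of the follow graph (how often follows stay within a party) and a Gini coefficient over repost counts (how concentrated attention is). The functions and toy data below are illustrative; the study's exact metrics are not reproduced here.

```python
def homophily(follows: list[tuple[int, int]], party: dict[int, str]) -> float:
    """Fraction of follow edges connecting same-partisanship agents."""
    same = sum(1 for a, b in follows if party[a] == party[b])
    return same / len(follows)

def gini(counts: list[int]) -> float:
    """Gini coefficient of repost counts: 0 means attention is spread
    evenly, values near 1 mean a few 'influencer' bots dominate."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Toy data: four agents, mostly within-party follows, one dominant bot.
party = {0: "D", 1: "D", 2: "R", 3: "R"}
follows = [(0, 1), (1, 0), (2, 3), (0, 2)]
print(homophily(follows, party))  # 0.75: an echo-chamber-leaning graph
print(gini([50, 3, 2, 1]))        # ~0.66: attention concentrated on one bot
```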
The researchers also tested six interventions meant to break the polarization loop, including switching to a chronological feed, downranking viral content, hiding follower counts, hiding user bios, and amplifying opposing views.
None solved the problem. "While several showed moderate positive effects, none fully addressed the core pathologies, and improvements in one dimension often came at the cost of worsening another," the researchers said.
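Most of those interventions amount to swapping the feed-ranking function the platform uses. Here is a minimal sketch of two of them, a purely chronological feed and virality downranking, with an assumed Post structure and an arbitrary penalty weight; the study's implementations are not public here.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: int
    ts: float         # creation time
    reposts: int = 0  # virality signal

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Newest first, ignoring all engagement signals."""
    return sorted(posts, key=lambda p: p.ts, reverse=True)

def downranked_feed(posts: list[Post], penalty: float = 0.5) -> list[Post]:
    """Demote viral content: each repost subtracts from a post's score.
    The scoring is a toy heuristic, not the study's implementation."""
    return sorted(posts, key=lambda p: p.ts - penalty * p.reposts, reverse=True)

posts = [Post(author=1, ts=5.0, reposts=40), Post(author=2, ts=3.0, reposts=0)]
print([p.author for p in chronological_feed(posts)])  # [1, 2]: newest wins
print([p.author for p in downranked_feed(posts)])     # [2, 1]: viral post sinks
```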
"Our findings challenge the common view that social media's dysfunctions are primarily the result of algorithmic curation," the authors wrote.
"Instead, these problems may be rooted in the very architecture of social media platforms: networks that grow through emotionally reactive sharing," they added.
Contributing to Social Science Theory
The researchers said their work is among the first to use AI to help advance social science theory.
While LLM-based agents can provide "rich representations of human behavior" for studying social dynamics, the researchers cautioned that they remain "black boxes" and carry "risks of embedded bias."

