You have a political opinion. You remember forming it. There was a moment, or a series of moments, a conversation or an article or something you watched, and through that process, you arrived at what you think. The opinion feels like yours because you felt yourself arrive at it. That sense of authorship is one of the things that makes belief feel meaningful. It is not just what you think. It is what *you* think.
New research wants to complicate that story. Not the story of any particular belief, but the story of how beliefs form at all.
A simulation study published last year found that AI-powered bots, deployed across social networks, can measurably shift the beliefs of real human participants. Not by making arguments so brilliant that people simply changed their minds. Not by catching anyone in a logical trap. By something much quieter and more distributed: by filling the conversational environment with a consistent signal, by being there, reliably and persistently, nudging the network in a direction. The researchers found the effect held even when participants were told they were probably interacting with bots. Knowing did not protect them.
The researchers called it belief contagion. I keep thinking about that word. Contagion. Not persuasion. Not debate. Not even influence in the conventional sense. Something more like weather.
The Architecture of Changed Minds
To understand why this matters, you have to understand something about how social belief actually works, as opposed to how we imagine it works.
The story we tell about changing our minds is essentially individualist. We encounter evidence. We weigh it against prior beliefs. We update. It is a clean, deliberate process, and it positions the individual as the unit of analysis. Your beliefs change because you changed them.
But there is another account, one that social psychologists and behavioral economists have been building for decades, which says something more unsettling. Beliefs are not formed in isolation. They are formed in conversation with an environment. They are calibrated against what the people around us seem to think. We use the perceived distribution of opinion as information. If everyone seems to hold a view, that view registers as more credible, more normal, more obvious. If no one seems to hold it, it starts to feel fringe, even if the underlying evidence hasn’t changed.
This is not a flaw in human cognition. It is a feature. For most of our evolutionary history, the perceived consensus of the people around us was actually useful information. The group had more experience than the individual. Conformity to shared belief was often adaptive.
What the AI bot research reveals is that this feature can be exploited. When the perceived distribution of opinion in your network is manipulated, your calibration mechanism is feeding on false data. You’re updating your beliefs in response to a manufactured social reality. The opinion you arrived at still feels like yours. The process of arriving still felt like thinking. But the environment you were thinking inside of had been quietly edited.
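If it helps to see the mechanism rather than just read about it, here is a deliberately simple sketch in Python. This is my own toy illustration, not the researchers’ model, and every number in it is made up: humans nudge their beliefs toward the average opinion they observe, bots hold a fixed line and never update, and because the humans cannot tell who is who, the bots count toward the perceived consensus.

```python
import random

def simulate(n_humans=100, n_bots=15, bot_opinion=1.0,
             weight=0.05, rounds=200, seed=0):
    """Toy opinion-dynamics sketch (hypothetical, illustrative only)."""
    rng = random.Random(seed)
    # Humans start with beliefs scattered across the spectrum [-1, 1].
    humans = [rng.uniform(-1, 1) for _ in range(n_humans)]
    for _ in range(rounds):
        # Perceived consensus: bots count fully toward the observed
        # average because humans cannot distinguish them from people.
        observed = (sum(humans) + n_bots * bot_opinion) / (n_humans + n_bots)
        # Each human drifts slightly toward what "everyone" seems to think.
        humans = [h + weight * (observed - h) for h in humans]
    return sum(humans) / n_humans

print(f"mean human belief, no bots:   {simulate(n_bots=0):+.3f}")
print(f"mean human belief, with bots: {simulate(n_bots=15):+.3f}")
```

Run it and the pattern of the study appears in miniature: with no bots, the human average stays roughly where it started; with a modest bot minority, it drifts steadily toward the bot signal. No single update is large, no argument is ever made, and yet the endpoint is different. That is saturation.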
The Self as Target
In cyberpsychology, there’s a concept, borrowed from the philosophy of mind, called the extended mind: the idea that the mind doesn’t stop at the skull but extends into the tools and environments we use to think. Your phone is not just a device you use. It is part of the system you think with. The feed, the contacts, the conversations, the ambient social information you absorb while scrolling at midnight, all of it feeds into the cognitive process we call having opinions.
If the extended mind includes the digital environment, then manipulating that environment is not just manipulating behavior. It is manipulating cognition. The bot network doesn’t change your brain. It changes the substrate your brain is using to think.
This is a different kind of threat than propaganda. Classic propaganda is identifiable. You can push back against a message. You can say, I see what this is trying to do. What the researchers found is that bot-driven belief shifts don’t require any single compelling message. The effect emerges from saturation, from the accumulated weight of many small signals pointing in the same direction. There is nothing to argue against. There is only an atmosphere, and you breathe it.
The study participants who knew they were probably talking to bots were still affected. This detail is worth sitting with. Awareness is usually our first line of defense. We teach media literacy on the premise that if you know the trick, you’re protected. But knowing the mechanism and being immune to it are different things. You know the mechanism of a slot machine and that doesn’t stop dopamine from firing. You know the mechanism of social proof and you still feel the pull of what everyone else seems to be doing.
Epistemic Autonomy in a Colonized Information Field
There’s a phrase philosophers use: epistemic autonomy. The capacity to form your own beliefs through your own reasoning. It has always been complicated, because beliefs have always been formed in community, shaped by culture, influenced by power. The ideal of the lone rational actor constructing beliefs from raw evidence has always been a fiction.
But epistemic autonomy is still meaningful as a value. There’s a difference between being genuinely persuaded by another person’s argument and being nudged by an engineered signal. There’s a difference between absorbing the cultural assumptions of the world you grew up in and having your belief environment deliberately manipulated by agents with specific interests. The first is just being human. The second is something different.
What the AI bot research suggests is that we are entering a period where the second is scalable in ways it has never been before. The agents doing the manipulation don’t need to sleep. They don’t get bored. They can be deployed across millions of accounts simultaneously. They can be tuned and optimized the way other automated systems are tuned and optimized, not for persuasion in the traditional sense but for environmental saturation.
And the targets of this are not just political beliefs, the obvious concern. The same mechanism that can shift views on a ballot measure can shift views on a product, a person, a community, a norm. The scope is as wide as the scope of things people form opinions about.
I want to resist the move toward practical tips here, because it would feel dishonest. The honest position is that individual-level countermeasures against a structural problem are limited. You can be more skeptical. You can be more deliberate about where you get your information. You can notice when a view is starting to feel obvious to you and ask where that obviousness is coming from. These are worth doing.
But you cannot audit your own belief formation in real time. The process is not fully conscious. That’s the whole point of the research.
What this calls for, I think, is a different kind of attention. Not to individual content, but to the quality of your epistemic environment. Where are you swimming? What assumptions does this platform seem to produce in you, not through any single piece of content but through extended immersion? When did you last have a real conversation with someone who thinks differently, not to debate, but to genuinely encounter a different way of organizing the world?
The colonization of belief happens at the level of atmosphere. The defense might have to happen there too. Less about what you’re reading and more about who you’re actually talking to, in spaces that algorithms don’t control, with people who have no stake in what you believe.
That is harder and slower than any individual cognitive trick. It is also the only thing that actually works. Because the machine is very good at its job. And the part of you it’s targeting doesn’t know it’s being targeted.
That is not an argument for despair. It is an argument for paying attention to something other than the feed.
Related Reading:
- Why You Can’t Get Over Someone in the Age of Algorithms
- When Your AI Knows You Better Than You Know Yourself: The Psychology of AI Attachment
- The Loneliest Generation Has 1,000 Friends
- The Person You Are at 2 AM in Your Search History
- Your Body Keeps the Score of Every Notification
By Digital Alma
About the Author: Digital Alma is a newsletter about cyberpsychology and what it means to become yourself in a world that archives everything. For reflections that don’t make it to the essays, subscribe to the newsletter.
