You were not always this angry. You used to be able to disagree with someone without feeling your pulse quicken, without the immediate surge of contempt that now accompanies almost every political post you see. You used to have more tolerance for ambiguity, more patience with people who saw things differently. Something has changed, and the change is not just in the world. It is in you. The algorithm has been teaching you who to be angry at, and you have been learning.
Outrage is highly engaging content. It activates the nervous system. It demands a response. It spreads faster than calm, measured analysis. And platforms optimize for engagement, which means they optimize for the content that makes you feel something intensely, and outrage is one of the most reliably intense feelings. The algorithm is not neutral about what it shows you. It is actively selecting content that is likely to provoke outrage, because outrage keeps you on the platform longer, and your time on the platform is what makes the platform money.
The Training You Did Not Notice
You did not sign up for outrage training. You signed up to stay connected, to see what your friends are posting, to keep up with the news. But the algorithm is not showing you what your friends are posting or what the news is. It is showing you a curated selection of content designed to maximize engagement, and the content that maximizes engagement is the content that provokes the strongest emotional response. Outrage is one of those responses, and the more you engage with outrage-inducing content, the more the algorithm learns that this is what works for you, and the more it shows you.
The training happens through repetition. You see a post that makes you angry. You respond, you share, you comment. The algorithm registers the engagement as success and shows you more content like that. The next post makes you angrier. You engage again. The feedback loop tightens. Over time, your feed becomes a stream of content optimized to provoke outrage, and your responses to that content become more automatic, more intense, more reflexive. You are not choosing to become more outraged. You are being trained into it, one post at a time.
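The feedback loop described above can be sketched as a toy model. This is a deliberately simplified illustration, not a description of any real platform's ranking system: every number below (the initial share of outrage content, the engagement probabilities, the boost size) is an assumption chosen only to show the dynamic, which is that a ranker rewarded for engagement will keep raising the share of whatever you react to.

```python
import random

def simulate_feed(steps=1000, boost=0.05, seed=0):
    """Toy model of an engagement-driven feedback loop.

    The 'algorithm' starts with some chance of showing an
    outrage-inducing post. Each time the user engages (which is
    more likely for outrage posts), that chance is nudged upward.
    All parameters are illustrative, not measured.
    """
    rng = random.Random(seed)
    p_outrage = 0.2          # initial share of outrage content in the feed
    engage_outrage = 0.6     # assumed chance of engaging with an outrage post
    engage_neutral = 0.1     # assumed chance of engaging with a neutral post
    for _ in range(steps):
        is_outrage = rng.random() < p_outrage
        engaged = rng.random() < (engage_outrage if is_outrage else engage_neutral)
        if engaged and is_outrage:
            # engagement registered as success: show more of the same
            p_outrage = min(1.0, p_outrage + boost * (1 - p_outrage))
    return p_outrage

print(simulate_feed())
```

Run it and the feed's share of outrage content climbs from its starting value toward saturation, with no change in the user's underlying preferences. That is the whole point of the toy: nothing about you had to change for your feed to change.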
The Emotional Hijacking
Outrage is not a deliberate choice. It is an emotional response that bypasses conscious thought. You see something that violates your sense of fairness or justice, and the reaction is immediate. Your body responds before your mind can evaluate whether the response is justified. Heart rate increases. Muscles tense. The prefrontal cortex, the part of the brain responsible for rational thought and impulse control, goes offline. The amygdala, the part of the brain that processes threat, takes over. You are in fight mode, and the fight feels necessary, urgent, justified.
The algorithm exploits this. It knows that content designed to trigger moral outrage will bypass your critical thinking and generate an immediate, visceral response. And it knows that once you are in that state, you are more likely to engage. To comment. To share. To argue. The engagement is not rational. It is reflexive. And the reflex is being trained through thousands of repetitions until outrage becomes your default response to difference.
The Enemy You Are Learning to See
The algorithm teaches you who to be angry at. Not explicitly. Not through direct instruction. But through the pattern of what it shows you. You see post after post about a particular group behaving badly. The group could be political, demographic, ideological. The specific target does not matter. What matters is the pattern. You see the group framed as the enemy, over and over, in different contexts, with different examples, all reinforcing the same message: these people are the problem.
The pattern becomes a belief. You start to see the group as a monolith. You stop noticing the complexity within it. You stop considering that individual members of the group might have different motivations, different values, different levels of culpability. The group is the enemy, and the enemy is uniform, and your outrage at them is justified. The algorithm has successfully taught you to see a category of people as threat, and now every encounter with someone in that category activates the threat response. You are primed for outrage before the interaction even begins.
This is how polarization happens. Not through explicit persuasion, but through the gradual narrowing of who you see as reasonable, trustworthy, worth engaging with. The algorithm does not care whether the polarization is good for you or for society. It cares whether the polarization generates engagement, and it does. Outrage at the other side is one of the most reliable engagement drivers, and the more polarized you become, the more predictable your engagement becomes, and the more valuable you are to the platform.
Outrage is exhausting. The constant activation of the threat response takes a physiological toll. Elevated cortisol. Disrupted sleep. Chronic stress. The feeling of being perpetually on edge, waiting for the next thing to be angry about. This is not sustainable, and eventually, you notice. You feel worse. You are tired. You want to disengage. But disengaging feels like surrender. Like you are abandoning the fight, letting the enemy win. The algorithm has trapped you in a double bind. Stay engaged and be exhausted. Disengage and feel guilty.
The trap is effective because it leverages your values. You care about justice, about fairness, about making the world better. The outrage feels like it is in service of those values. You are angry because something is wrong, and the anger feels like the appropriate response to wrongness. But the anger is not changing anything. It is generating engagement, which is generating profit for the platform, and the profit is not going toward making the world better. It is going toward building more sophisticated tools for keeping you engaged, which means more sophisticated tools for generating outrage.
The exhaustion is not weakness. It is your nervous system telling you that the state you are in is not sustainable. Outrage was supposed to be an acute response to an acute threat. It was not supposed to be a chronic condition. But the algorithm has made it chronic by ensuring that the threats never stop coming. There is always another post, another injustice, another reason to be angry. And as long as you keep engaging, the algorithm will keep feeding you reasons, because your engagement is what it optimizes for, not your wellbeing.
The algorithm cannot teach you outrage if you notice what it is doing. Awareness breaks the loop. Not completely. Not permanently. But enough to create space for choice. When you see a post that makes you angry, and you notice the anger, and you ask yourself whether the anger is serving you or serving the platform, you have a moment. A pause where you can choose not to engage, where you can recognize that the outrage is being generated by design, not by necessity.
The choice is hard because the outrage feels righteous. It feels like it matters, like it is your responsibility to respond, to push back, to not let the wrongness go unchallenged. But the response is not changing anything. It is feeding the system that is making you angrier. The algorithm does not care about justice. It cares about engagement. And every time you engage with outrage content, you are telling the algorithm to show you more of it. You are training it to make you angrier, and you are training yourself to see the world through the lens of threat.
The way out is not to stop caring about justice. It is to stop letting the platform decide what justice looks like, what threats are real, and who the enemy is. The algorithm is not your ally in making the world better. It is a tool designed to keep you activated, and activation is profitable, and the profit is not aligned with your values. The algorithm that teaches outrage does not teach discernment. It teaches reaction. And the more you react, the less capable you become of the careful, sustained, difficult work that actual change requires.
By Digital Alma