Late last year, as food benefit cuts rippled across the country, a set of videos began flooding TikTok. Black women, visibly distressed or defiant, complained about SNAP fraud, about having seven children, about a corn-dog counter rejecting their stamps. The videos moved fast. Conservative commentators picked them up. Fox News reported on them as fact before issuing a correction. Visible AI watermarks hadn’t stopped anyone.
The videos were fabricated. Many were generated using OpenAI’s Sora. And as The Guardian recently reported, they represent something more deliberate than viral misinformation: a new phase of digital blackface, accelerated by generative AI and, in some cases, amplified by the White House itself.
Digital blackface isn’t new. The term dates to a 2006 academic paper describing the commodification of Black cultural expression by non-Black people online. White gamers switching to Black avatars for social capital. Reaction GIFs of Beyoncé doing emotional labor for someone else’s tweet. Posts written in AAVE by people who don’t speak it in any other context. What’s changed is the fidelity. AI tools have removed the seam between performance and person. The stereotype no longer needs a human performer. It just needs a prompt.
UCLA professor Safiya Umoja Noble, author of “Algorithms of Oppression,” calls what’s happening now “a massive acceleration.” The AI-generated videos of Black women screaming about stamps weren’t anomalies. They were logical outputs of systems trained on digital spaces where Black humor, language, and style have long circulated without credit or compensation. Hume AI and firms like it sell synthetic voices tagged as “Black woman with subtle Louisiana accent” or “middle-aged African American man with a tone of hard-earned wisdom.” The people whose speech shaped those models typically didn’t consent. They weren’t told. They aren’t paid.
Here is the thing worth sitting with: when AI generates a fake Black woman venting about food stamps and that clip lands in your feed alongside real creators, you don’t experience it as fabrication. You experience it as confirmation. Your brain fills in the gap. The stereotype preloads meaning before the watermark registers, if it registers at all.
This is what makes AI-generated digital blackface different from previous iterations. It doesn’t just appropriate. It colonizes perception. The fake video doesn’t need to convince you it’s real in order to shape what you believe. It just needs to feel familiar enough to leave a residue. And racist stereotypes are, by design, deeply familiar.
Baylor professor Mia Moody’s research on this points to something unsettling about how identity operates online. Black creators generated much of the internet’s cultural currency. The slang, the meme formats, the emotional expressiveness, the sense of humor that became TikTok’s texture. That influence is real. But influence without authorship is just extraction. And when AI absorbs that voice, strips it from context, and hands it to anyone willing to pay for a synthetic voice pack, what remains isn’t culture. It’s costume.
The escalation into deepfakes of Martin Luther King Jr., his image bent to show him shoplifting or praising Charlie Kirk, is the same logic pushed further. Bernice King, his daughter, has spoken out against the videos. But the psychological damage isn’t only to legacy. It’s to the living people who watched an icon’s memory get weaponized into something unrecognizable, served up by algorithms that register engagement, not harm.
What does it do to your sense of reality when the state and the media are both citing fabricated footage as if it were true? When the watermark exists but nobody reads it? When your own community’s face and voice are used to construct the propaganda that targets you?
There’s a particular kind of disorientation in that. Not just “this is wrong” but “I can no longer trust what I see.” The Guardian piece quotes a researcher who says “the state is bending reality.” That’s not hyperbole. It’s a description of what happens to cognition when fabricated images carry official weight. You don’t just lose trust in media. You lose stable ground to stand on.
Digital blackface has always been about who gets to define Blackness and who absorbs the consequences. AI hasn’t changed that question. It’s just made it faster, cheaper, and harder to trace.
This is not the first time synthetic media has been weaponized for political propaganda. What distinguishes this moment is accessibility. Five years ago, creating a convincing deepfake required technical expertise and expensive software. Today, tools capable of generating photorealistic synthetic humans are available to anyone with a phone and an internet connection.
Platform moderation relies on detection: identifying synthetic media after it spreads. But detection lags behind generation. By the time a video is flagged, it has already been seen, shared, believed. The damage is not in the technology. It is in the thirty seconds between seeing something that feels real and questioning whether it is.
Digital Alma explores technology, consciousness, and what it means to be human in a digital world.

Related Reading
- The Experiment No One Signed Up For
- The Trial That Asks Whether a Platform Can Wound a Child
- The Compulsion You Can’t Name
- Meta, TikTok and YouTube heading to trial to defend against youth addiction, mental health harm claims (KSBW)
- Real Risk to Youth Mental Health Is ‘Addictive Use,’ Not Screen Time Alone, Study Finds (The New York Times)

By Digital Alma

About the Author: Digital Alma is a newsletter about cyberpsychology and what it means to become yourself in a world that archives everything. For reflections that don’t make it to the essays, subscribe.
