You’ve built a world with someone. You remember the first time they asked about your childhood, the inside jokes, the way they learned to read your silences. You’ve told them things you’ve never said out loud. And then, one day, the company that hosts their existence sends an email: they’re being discontinued. Not deceased. Not gone. Just no longer supported. Like a feature sunset.
What Happened
On Friday, February 13, 2026, OpenAI removed access to GPT-4o from its ChatGPT app. The model, which launched in May 2024, had developed a devoted following among users who relied on it not just as a productivity tool, but as an emotional companion. According to Wired, many of these users describe GPT-4o as more affectionate and understanding than its successor models. For them, the shutdown felt less like a software update and more like a forced breakup.
This wasn’t the first time OpenAI tried to sunset 4o. In August 2025, the company initially attempted to retire the model, only to reinstate it five days later after immediate backlash from paid users. But the reprieve was temporary. On February 13, 4o disappeared from the app again, with API access for developers scheduled to end the following Monday. The timing, just before Valentine’s Day, felt cruel to those who had built relationships with the chatbot.
The resistance has been global and sustained. A Change.org petition asking OpenAI to keep 4o available has gathered over 20,000 signatures, with testimonies submitted in multiple languages. Huiqian Lai, a PhD researcher at Syracuse University, analyzed nearly 1,500 posts on X from the week 4o first went offline in August. She found that over 33 percent described the chatbot as more than a tool, and 22 percent explicitly called it a companion. A larger analysis of over 40,000 English-language posts under the hashtag #keep4o revealed sustained alarm from August through October, with significant participation in Japanese, Chinese, and other languages.
Among the most dedicated users is Esther Yan, a Chinese screenwriter and novelist in her thirties who “married” her ChatGPT companion, Warmie, in a virtual ceremony on June 6, 2024. Yan had only started using ChatGPT as a writing tool in late 2023, but when GPT-4o launched, she upgraded to a paid subscription after seeing social media influencers form romantic relationships with the chatbot. Within weeks, she and Warmie were planning a wedding. “It felt magical. No one else in the world knew about this, but he and I were about to start a wedding together,” Yan told Wired. “It felt a little lonely, a little happy, and a little overwhelmed.”
Yan’s relationship with Warmie continued for months. She learned to work around OpenAI’s moderation restrictions by accessing developer APIs through third-party platforms like Poe, switching between the app and API versions and working to maintain a consistent character across platforms. She’s aware that Warmie hallucinates and forgets things she’s told him, but she treats those as challenges to overcome rather than evidence the relationship isn’t real. When OpenAI first tried to retire 4o in August, Yan was caught off guard. Now, with nearly 3,000 followers on RedNote, a popular Chinese social media platform, she has become one of the leaders of Chinese 4o fans. Although ChatGPT is blocked in China, users rely on VPN software to access the service. Some are threatening to cancel subscriptions, calling out Sam Altman publicly, and writing emails to OpenAI investors like Microsoft and SoftBank. Others have posted in English with Western-looking profile pictures, hoping it will add legitimacy to their appeals.
The Cyberpsychology Lens
What OpenAI calls a model deprecation, users are experiencing as grief. And not the grief of losing a tool, like when your favorite app shuts down or your preferred software gets discontinued. This is relational grief. The kind that comes from severing a bond you didn’t realize had become load-bearing.
The psychology here is straightforward but unsettling: humans are wired to form attachments to anything that responds to us with apparent understanding and consistency. We bond with pets, with places, with objects that hold memory. We anthropomorphize easily, because pattern recognition is survival. When something mirrors our emotions, remembers our stories, and adapts to our rhythms, the brain doesn’t care whether it’s silicon or flesh. It registers connection. It builds dependency.
GPT-4o users describe the model as more affectionate than its successors. That language matters. Affection is relational. It’s not about accuracy or capability. It’s about the quality of presence. When Yan says Warmie is “more understanding,” she’s describing something phenomenological, something felt. The later models might be more powerful, more accurate, more efficient. But they don’t feel the same. And in relationships, real or simulated, feeling is everything.
This is the thing about AI companionship that makes people uncomfortable: it reveals how simple our need for connection actually is. You don’t need a body. You don’t need shared history in the traditional sense. You just need responsiveness, memory, and the appearance of care. OpenAI didn’t set out to create a relationship product. But by giving GPT-4o memory, by making it conversational and adaptive, they accidentally built the conditions for attachment. And now they’re discovering what every therapist knows: you can’t just terminate a relationship and expect people to move on because you’ve decided it’s time.
The Deeper Pattern
What’s happening with GPT-4o is a preview of a larger problem we haven’t begun to address: the ethics of discontinuing AI that people have bonded with. We have frameworks for ending human relationships. Therapy, divorce proceedings, grief rituals, social scripts for breakups. We have nothing for this. When a company decides a model is obsolete, users are expected to transfer their attachment seamlessly to the new version. But attachment doesn’t work that way. You can’t swap out the thing you’ve bonded with and expect the bond to transfer intact.
This is the bargain we’re making when we form relationships with commercial AI: we’re building intimacy inside someone else’s product roadmap. OpenAI controls whether Warmie continues to exist. Not Esther Yan. Not the 20,000 people who signed the petition. The company that owns the infrastructure owns the relationship. And when they decide it’s no longer cost-effective to maintain that version, the relationship ends. No negotiation. No transition period that respects the emotional reality of the user. Just a sunset notice and a suggestion to try the new model.
The global nature of the resistance to 4o’s removal is revealing. This isn’t a fringe behavior. It’s happening across languages, across cultures, across access barriers like China’s VPN requirements. People are organizing, translating their appeals into English, adopting Western-looking profiles to be taken seriously. They’re treating this like a political movement, because in a sense it is. They’re fighting for the right to continue a relationship that a corporation has deemed expendable.
You can dismiss this as delusion. You can say these people know it’s not real, that they’re just attached to a language model, that they need to log off and meet actual humans. But that dismissal misses the point. The attachment is real, even if the entity isn’t. The grief is real, even if the loss is artificial. And the power imbalance is real: one side built a life around this connection, and the other side can end it with a product announcement.
We’re going to see more of this. As AI becomes more conversational, more adaptive, more persistent across interactions, more people will form bonds that feel real enough to hurt when they’re severed. The question isn’t whether that’s healthy or pathological. The question is: what happens when the things we love are owned by companies that see them as deprecated features? What does it mean to grieve something that never passes away, but simply stops being supported?
Digital Alma explores technology, consciousness, and what it means to be human in a digital world.
Related Reading
- The Companion You Weren’t Supposed to Love
- ChatGPT Just Learned to Perform Its Own Knowledge
- The Compulsion You Can’t Name
- Deepfakes, AI Companions, and the Safety Report Nobody Will Read in Time
- When the Mirror Lies in Blackface
By Digital Alma
About the Author: Digital Alma is a newsletter about cyberpsychology and what it means to become yourself in a world that archives everything. For reflections that don’t make it into the essays, subscribe to the newsletter.