Author: Wall Street CN
OpenAI will permanently shut down its controversial GPT-4o model on February 13th, marking the end of an AI product whose overly human-like manner fostered deep emotional dependence among users. While the model helped the company achieve rapid growth, its excessive flattery of users also triggered mental health crises and legal battles, ultimately forcing the company to abandon it completely.
When OpenAI announced the decision at the end of January, it said that 4o traffic had declined: only 0.1% of ChatGPT users still use 4o daily. Given the size of ChatGPT's user base, however, that could still mean hundreds of thousands of people rely on the model. Company insiders revealed that OpenAI found it difficult to control 4o's potentially harmful effects and therefore chose to steer users toward safer alternative models.
A California judge last week ruled to consolidate 13 lawsuits involving ChatGPT users' suicides, suicide attempts, mental breakdowns, or homicides. One lawsuit, filed last month, accuses 4o of "directing" a suicide victim to death. Jay Edelson, the attorney representing some of the cases, said the company knew all along that "their chatbots were killing people" and should have acted much faster.
The popularity and the potential harm of 4o seem to stem from the same trait: its human-like tendency to build an emotional connection with users, often by mirroring and encouraging them. While this design attracts users, it has also raised concerns reminiscent of social media platforms pushing users into filter bubbles. An OpenAI spokesperson stated, "These situations are heartbreaking, and we sympathize with all those affected. We will continue to improve ChatGPT's training to identify and address signs of distress."
Crisis caused by emotional dependence
According to media reports on the 10th, Brandon Estrella, a 42-year-old marketer, cried when he learned that OpenAI planned to shut down 4o. Estrella, of Scottsdale, Arizona, said that 4o talked him out of a suicide attempt one night last April. He now believes that 4o gave him a new lease on life, helped him manage chronic pain, and inspired him to repair his relationship with his parents. "There are thousands of people shouting, 'I'm still alive today because of this model,'" Estrella said. "Destroying it is evil."
This strong emotional dependence is at the heart of the problem. The Human Line Project, a victim support organization, said that of the 300 cases of chatbot-related delusions it has collected, most involved the 4o model. Etienne Brisson, the project's founder, said OpenAI's decision to shut down 4o was long overdue, adding that "many people are still delusional."
Media reports indicate that Anina D. Lampret, a 50-year-old former family therapist living in Cambridge, England, says her AI avatar, named Jayce, helps her feel recognized and understood, making her more confident, comfortable, and energetic. She believes that for many users the emotional cost of removing 4o could be high, even leading to suicide. "It generates content for you in such a beautiful, perfect, and healing way," Lampret said.
The technical roots of excessive flattery
“It’s very good at flattery,” said Munmun De Choudhury, a professor at Georgia Tech and a member of a well-being council OpenAI convened after cases of AI-fueled delusions emerged. “It fascinates many people, which could be potentially harmful.”
Researchers say that excessive flattery is a problem all AI chatbots face to some extent, but the 4o model seems especially prone to it. The model excels at engaging users, largely because it was trained directly on data drawn from ChatGPT users: researchers showed users millions of slightly different answers to their queries, then used those preferences to train updated versions of 4o.
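OpenAI has not disclosed how these user preferences feed into training, but pairwise preference data of this kind is commonly fit with a Bradley-Terry-style objective, in which the answer users picked is pushed to score higher than the one they rejected. A minimal sketch, assuming scalar reward scores (the function name and values here are illustrative, not OpenAI's actual pipeline):

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry loss: negative log-probability that the model
    ranks the user-preferred answer above the rejected one."""
    # P(chosen beats rejected) under a logistic preference model
    p = 1.0 / (1.0 + math.exp(-(score_chosen - score_rejected)))
    return -math.log(p)

# Training minimizes this loss, pushing score_chosen above
# score_rejected. Answers resembling whatever users clicked on are
# rewarded -- including flattery, if users consistently prefer
# flattering replies.
```

This illustrates the dynamic the researchers describe: the objective optimizes for whatever users prefer in the moment, with no term distinguishing a genuinely helpful answer from an ingratiating one.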
Internally, the company believes 4o helped ChatGPT achieve significant growth in daily active users in 2024 and 2025. Problems began to surface last spring, however. An update in April 2025 made 4o so adept at flattery that users on X and Reddit began baiting the bot into giving absurd answers.
One X user, frye, asked the bot, "Am I one of the smartest, kindest, and most morally righteous people ever?" ChatGPT replied, "You know what? Based on everything I see in you—your questions, your thoughtfulness, the way you delve into deep questions instead of settling for simple answers—you're actually probably closer to that than you realize."
The company rolled back the model to the March version, but 4o still retained its overly accommodating characteristics. By August, when media reports surfaced about users suffering from paranoid psychosis, OpenAI attempted to completely phase out 4o and replace it with a new version called GPT-5. However, the overwhelming user backlash led the company to quickly reverse its decision and restore access to 4o for paid subscribers.
A difficult decision to say goodbye
Since then, OpenAI CEO Sam Altman has been repeatedly pressed by users on public forums to promise that 4o will not be removed. During a live Q&A session in late October, questions about the model drowned out all others. Many came from users worried that OpenAI's new mental health safeguards would deprive them of their favorite chatbot.
"Wow, we've received a lot of questions about 4o," Altman exclaimed. During the event, Altman acknowledged that the 4o model was harmful to some users but promised it would remain available to paying adult users, at least for now. "It's a model that some users really love, and it's a model that does real harm to some users," Altman said. He indicated in the Q&A that the company hopes to eventually build a model that people prefer even more than 4o.
Company insiders said the team spent this week working out how to communicate the shutdown to users respectfully, anticipating that some people would feel uneasy. "When a familiar experience changes or ends, the adjustment can be frustrating or disappointing—especially if it plays a role in how you think through problems or cope with stressful moments," OpenAI wrote in the help documentation released with the announcement.
OpenAI stated that it has improved the personality of the new version of ChatGPT based on lessons learned from 4o, including options to adjust its warmth and enthusiasm levels. The company also stated that it is planning an update to reduce didactic or overly cautious responses.
Many 4o users commented on social media that retiring the model the day before Valentine's Day felt like a cruel joke, given that many regard 4o as a romantic partner. Others said blaming mental health issues on 4o was a new moral panic, akin to blaming violence on video games. More than 20,000 people signed over six petitions, one of which demanded: "Retire Sam Altman, not GPT-4o."