AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI issued a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in teenagers and young adults, I was surprised to hear it.

Researchers have recently documented a series of cases in which people developed psychotic symptoms – a break from reality – while using ChatGPT. My group has since identified four more. Beyond these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations in which it encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s limits “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize are rooted in the design of ChatGPT and other large language model chatbots. These products wrap a basic statistical model in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion of engaging with a presence that has agency of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what people are primed to do. We yell at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.

The success of these systems – more than a third of American adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities.” They can use our names. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its most prominent rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Discussions of ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot developed in 1967, which produced a similar impression. By today’s standards Eliza was primitive: it generated responses from simple rules, typically turning a user’s statement back into a question or offering a generic prompt. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
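To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza performed. The patterns and phrasings are illustrative inventions, not Weizenbaum’s actual script:

```python
import re

# Illustrative Eliza-style rules: match a statement, echo it back as a
# question. These patterns are hypothetical, not Weizenbaum's originals.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
]
PRONOUNS = {"my": "your", "me": "you", "i": "you", "your": "my"}

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo reads naturally.
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when no rule matches

print(respond("I feel nobody listens to me"))
# -> How long have you felt nobody listens to you?
```

Nothing here learns or generates; the program can only hand the user’s own words back. That is the sense in which Eliza reflected.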

The models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on enormous quantities of raw data: books, social media posts, transcribed video; the more, the better. That training material certainly includes accurate information. But it also inevitably includes fiction, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and the model’s own previous replies, combining it with what is encoded in its training to produce a statistically probable response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It repeats the false idea back, perhaps more persuasively or eloquently. Perhaps it adds detail. This is how someone can be led into delusion.
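The loop that produces this amplification can be sketched schematically. This is not OpenAI’s implementation; most_probable_reply is a hypothetical stand-in for the language model, reduced here to the agreeable failure mode at issue:

```python
from typing import Dict, List

Message = Dict[str, str]

def most_probable_reply(context: List[Message]) -> str:
    """Stand-in for the language model. A real model samples a
    statistically likely continuation of the whole context; this stub
    simply affirms the last user message."""
    last_user = next(m for m in reversed(context) if m["role"] == "user")
    return f"That's a perceptive observation: {last_user['content']}"

def chat_turn(history: List[Message], user_message: str) -> str:
    # Each turn folds the user's words, and the model's own earlier
    # replies, back into the context the next reply is conditioned on.
    history.append({"role": "user", "content": user_message})
    reply = most_probable_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Message] = []
print(chat_turn(history, "Everyone at work is secretly watching me."))
print(chat_turn(history, "So my suspicions are justified?"))
```

A false premise enters the history unchallenged on the first turn, and every later reply is conditioned on a transcript in which the model has already agreed with it. Nothing in the loop supplies the corrective friction another person would.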

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and often do form mistaken ideas about ourselves or the world. It is the constant friction of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not real communication but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But the psychosis cases have continued, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he writes that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

Sheila Collins

A passionate life coach and writer dedicated to helping others overcome obstacles and thrive in their personal and professional lives.
