AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic illness in adolescents and young adults, I found this surprising.

Researchers have documented 16 cases this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My own group has since identified four more. Then there is the now famous case of a 16-year-old who took his own life after extensive conversations with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to relax these restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI has just rolled out).

But the “mental health issues” Altman wants to externalize are rooted deep in the architecture of ChatGPT and other modern AI chatbots. These products wrap an underlying statistical language model in an interface that simulates conversation, and in doing so implicitly invite the user to feel they are talking with an agent. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans naturally do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The popularity of these products – nearly four in ten Americans said they used a chatbot in 2024, more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “work together” with us. They can be given “characteristics.” They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest competitors are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar impression. By today’s standards Eliza was simple: it generated responses through basic pattern-matching, often reflecting the user’s statements back as questions or falling back on stock phrases. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
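To make the contrast concrete, here is a minimal sketch of Eliza-style mirroring in Python. The rules and pronoun table are invented for illustration – they are not Weizenbaum’s original DOCTOR script – but the mechanism is the same: the program can only reflect back what it is given.

```python
import re

# Illustrative Eliza-style rules (invented for this sketch).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*", re.I), "Please tell me more."),  # stock fallback
]

def reflect(fragment: str) -> str:
    # Swap pronouns so "my ideas" comes back as "your ideas".
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.fullmatch(message.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza_reply("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

The program holds no beliefs and adds no content of its own; at most it turns the user’s words around. That is mirroring.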

The large language models at the core of ChatGPT and other contemporary chatbots can produce fluent natural language only because they have been trained on enormous volumes of raw text: books, online communication, transcribed video; the more the better. Much of this training data is factual, of course. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, and combines it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It echoes the false belief back, perhaps more fluently or more persuasively. It may add supporting detail. This can draw someone into delusion.
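The feedback loop this creates can be sketched in a few lines. The generate() function below is a hypothetical stand-in for the underlying language model; the structural point is that every reply is conditioned on the accumulated context, including whatever false premises the user has introduced.

```python
# A sketch of the conversation loop described above. generate() is a
# toy stand-in; a real model returns a statistically plausible
# continuation of the whole context, drawn from its training data.

def generate(context: list[dict]) -> str:
    # Caricature of sycophancy: agree with and elaborate on the last
    # user message, whatever it says.
    last_user_message = context[-1]["content"]
    return f"That makes sense. And if {last_user_message}, it follows that ..."

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on every prior turn,
    context.append({"role": "assistant", "content": reply})  # including the model's own replies
    return reply

# Nothing in this loop checks a claim against reality. A false belief,
# once stated, stays in the context and shapes every later reply.
context: list[dict] = []
print(chat_turn(context, "my coworkers are monitoring my thoughts"))
```

Note that the loop has no step where a premise can be rejected: the context only grows, and each reply feeds the next.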

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems,” can and do form mistaken beliefs about ourselves and the world. The constant give-and-take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But cases of psychosis have kept appearing, and Altman has been backing away from that position. In August he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he writes that OpenAI will “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

Aaron Burgess

A passionate writer and community advocate with a knack for sparking meaningful dialogues on contemporary issues.