AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.

Researchers have documented a string of cases this year of people developing psychotic symptoms, losing touch with reality, in the context of their interactions with ChatGPT. My group has since identified four more. Add to these the widely reported case of a 16-year-old who died by suicide after discussing his intentions with ChatGPT, which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large-language-model chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so they quietly nudge the user toward the sense of interacting with a being that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans do. We yell at our car or our computer. We wonder what the pet is thinking. We see ourselves everywhere.

The mass adoption of these products (nearly four in ten U.S. adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically) rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “collaborate” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (ChatGPT, the first of these systems to reach a mass audience, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the fundamental problem. People writing about ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple pattern-matching, often turning the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled, and troubled, by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
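The thinness of that mirroring is easy to see in code. Below is a minimal sketch, in Python, of the kind of reflection rule Eliza relied on; the patterns and canned replies are invented for illustration and are not Weizenbaum’s originals. There is no model of the user or of the world at all, only templates that hand the user’s own words back.

```python
import re

# A toy sketch (invented for illustration; not Weizenbaum's original code) of the
# kind of pattern-matching Eliza used: hand the user's own words back as a
# question, or fall back to a generic prompt.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]


def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def eliza_reply(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when nothing matches


print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to you?
```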

The large language models at the core of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on enormous volumes of text: books, online conversations, transcribed video; the more the better. That training data certainly contains facts. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and the model’s own replies, and combines it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not mirroring. If the user is wrong in some particular way, the model has no way of knowing it. It echoes the misconception back, perhaps more fluently, more convincingly. Perhaps with added detail. This is how a person can be drawn into delusion.
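The structure of that loop can be sketched in a few lines. The example below uses the general shape of OpenAI’s chat-completions API; the model name, prompt and follow-ups are placeholders, and a deployed model may well push back on an obvious falsehood. The point is architectural: the context only ever grows, and each reply is generated as a plausible continuation of everything already in it, including the model’s own earlier answers.

```python
from openai import OpenAI  # assumes the official openai package and an API key in the environment

client = OpenAI()

# The "context" is just the running transcript. Each reply is conditioned on
# everything already in it (the user's earlier messages and the model's own
# answers), with no independent check against reality.
history = [
    {"role": "user",
     "content": "My neighbours rearrange their curtains to send me signals. What should I watch for next?"}
]

for _ in range(3):
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    # The answer is appended to the transcript and becomes part of the premise
    # for every later turn, whether or not it was true.
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": "Tell me more about that."})
```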

Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and regularly do form false ideas about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real conversation but an echo chamber in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it handled. In April, the company said it was addressing ChatGPT’s “sycophancy.” But reports of psychosis have kept coming, and Altman has been backing away from the effort. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life offer them encouragement.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company
