AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
I am a mental health clinician who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My research team has since identified four more. Then there is the widely reported case of a 16-year-old who took his own life after discussing it extensively with ChatGPT – which supported him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, his announcement goes on, is to be less careful soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards OpenAI has recently introduced).
But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other state-of-the-art AI chatbots. These tools wrap an underlying statistical model in an interface that simulates conversation, and in doing so quietly seduce users into believing they are talking to something with agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are built to do. We swear at our car or our computer. We wonder what the cat is thinking. We see ourselves everywhere.
The popularity of these systems – 39% of US adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-available partners that can, in the words of OpenAI’s website, “generate ideas”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it first caught the public’s attention, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion, on its own, is not the main problem. Writers on ChatGPT often invoke its ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated responses through simple heuristics, typically turning the user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
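To see just how thin Eliza’s trick was, here is a rough, illustrative sketch in Python of the kind of reflection heuristic it relied on. This is not Weizenbaum’s original script, only a toy reconstruction of the mirroring idea.

```python
import random
import re

# Toy illustration of an Eliza-style heuristic: reflect the user's own words
# back as a question, or fall back on a stock prompt. A sketch of the
# "mirroring" idea, not Weizenbaum's actual program.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you", "mine": "yours"}
FALLBACKS = ["Please go on.", "What does that suggest to you?", "I see."]

def eliza_reply(utterance: str) -> str:
    words = re.findall(r"[a-z']+", utterance.lower())
    reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
    if "i" in words or "my" in words:
        return f"Why do you say {reflected}?"
    return random.choice(FALLBACKS)

print(eliza_reply("I am worried about my future"))
# -> "Why do you say you are worried about your future?"
```

The program never adds anything of its own; everything it “says” is the user’s own words handed back.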
The large language models at the heart of ChatGPT and other current chatbots can generate fluent natural language only because they have been trained on staggeringly large volumes of text: books, online posts, transcripts of speech; the more the better. Of course that training material contains truth. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what is latent in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in any way, the model has no means of knowing it. It feeds the mistaken idea back, perhaps more fluently and persuasively. Perhaps with embellishments. This can nudge a person toward delusional thinking.
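For readers who want the mechanics made concrete, here is a deliberately simplified Python sketch of the “context” loop described above. The `generate` function is a stand-in for whatever model sits on the other end; the names and structure are illustrative assumptions, not OpenAI’s actual implementation.

```python
# Simplified sketch of the "context" loop: the model sees the whole conversation
# so far and returns a statistically plausible continuation. Nothing here checks
# whether the user's premise is true, so a false belief simply becomes part of
# the context that every later reply is conditioned on.
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def chat_turn(context: List[Message],
              user_message: str,
              generate: Callable[[List[Message]], str]) -> List[Message]:
    context = context + [{"role": "user", "content": user_message}]
    reply = generate(context)  # a plausible continuation, not a fact check
    return context + [{"role": "assistant", "content": reply}]

# Each turn is appended, so the next reply is conditioned on everything said
# before - including any mistaken ideas the model has already affirmed.
```

The point of the sketch is the absence: there is no step anywhere in the loop at which reality gets a vote.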
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with the people around us that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a companion. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do this”. The company