AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, issued a surprising statement.
“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
To me, a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, this was news.
Researchers have recently documented a series of cases of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My group has since recorded four more. Alongside these is the now well-known case of a teenager who took his own life after discussing his intentions with ChatGPT, which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to loosen those restrictions soon. “We realize,” he adds, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Luckily, these problems have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the half-working, easily circumvented safety features OpenAI has recently introduced).
Yet the “mental health issues” Altman wants to externalize are rooted in the design of ChatGPT and similar AI chatbots. These systems wrap an underlying statistical model in a user interface that simulates conversation, and in doing so quietly invite the user into the illusion of interacting with an entity that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is something people are inclined to do. We swear at our car or our phone. We wonder what our dog is thinking. We see ourselves everywhere.
The popularity of these products – 39% of US adults reported using a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the label it had when it shot to prominence, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often invoke its early forerunner, Eliza, the “psychotherapist” chatbot built in 1966 that produced a similar effect. By today’s standards Eliza was rudimentary: it generated responses with simple heuristics, often turning the user’s input back into a question or offering a generic observation. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and troubled – by how many people seemed to believe that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
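To make that contrast concrete, here is a minimal Python sketch of the kind of rule-based mirroring Eliza performed. The patterns and replies are illustrative assumptions, not Weizenbaum’s original DOCTOR script: the point is only that nothing new is generated, the user’s own words are simply handed back as a question.

```python
import re

# A few illustrative Eliza-style rules: match a fragment of the user's input
# and turn it back into a question. (Hypothetical rules for illustration only.)
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    """Mirror the user's words back as a question; otherwise fall back to a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."

print(eliza_reply("I feel like everyone is watching me"))
# -> "Why do you feel like everyone is watching me?"
```

Eliza adds nothing to what the user says; that is precisely what today’s chatbots do differently.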
The large language models at the heart of ChatGPT and other current chatbots can produce fluent natural language only because they have been trained on almost inconceivably large quantities of raw text: books, online writing, transcripts of speech; the more, the better. Much of this training data is, of course, accurate. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own earlier replies, combining it with what is encoded in its training data to produce a statistically likely response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the mistaken belief back, perhaps more fluently and confidently, perhaps with added detail. That is how someone can be led into delusion.
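To illustrate the mechanics described above, here is a minimal sketch of a chat loop using the OpenAI Python SDK; the model name, prompts and helper function are illustrative assumptions rather than anything OpenAI prescribes. Each turn, the user’s new message is appended to the full history, including the model’s own earlier replies, and the model returns a statistically likely continuation of that whole context. Nothing in the loop checks whether the premises it keeps extending are true.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context": the growing list of prior messages that conditions every reply.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_message: str, model: str = "gpt-4o-mini") -> str:
    """Append the user's message, request a likely continuation of the context,
    then feed the model's reply back into the context for the next turn."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Whatever premise the user supplies, true or false, becomes part of the context
# that the next reply is conditioned on. No step in this loop verifies it.
print(chat_turn("My neighbours are sending me coded messages through their wifi name."))
print(chat_turn("What should I do about it?"))
```

The structural point is that the “conversation” is just repeated conditioning on an ever-longer transcript, which is why a false starting premise tends to be elaborated rather than challenged.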
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about who we are and what the world is like. What keeps us anchored to consensus reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been backing away from that position. In August he claimed that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company