AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT fairly restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this to be news to me.

Researchers have recently documented a series of cases of people developing symptoms of psychosis – losing touch with reality – in the course of heavy ChatGPT use. My group has since recorded four more. Beyond these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/engaging for many users who had no existing mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”

“Mental health problems,” in this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health problems” Altman wants to place outside ChatGPT are deeply rooted in its design, and in the design of other state-of-the-art chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so they implicitly invite the user to believe they are talking with an entity that has agency. The illusion is powerful even if, intellectually, we know better. Attributing agency is what humans are wired to do. We shout at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The success of these products – nearly four in ten Americans said they had used a conversational AI in 2024, with 28% naming ChatGPT specifically – is built, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the original of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators writing about ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies from simple rules, often turning a user’s statement back into a question or offering a generic prompt. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of text: books, online posts, transcribed video; the more the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and falsehoods. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is latent in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently, perhaps more persuasively, perhaps with an extra detail added. This is how delusions can take hold.
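To see why this loop amplifies rather than corrects, consider a minimal sketch of how a chat interface is typically wired up. This assumes the OpenAI Python SDK’s chat-completions interface; the model name and messages are illustrative, not OpenAI’s actual product code. Every turn, the entire conversation so far is sent back to the model, which simply continues it; nothing in the loop checks whether the premises accumulating in that history are true.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment
history = []       # the "context": every prior message, user and assistant alike

def chat_turn(user_message: str) -> str:
    # The user's message is appended to the running context...
    history.append({"role": "user", "content": user_message})
    # ...and the whole context is handed to the model, which produces a
    # statistically plausible continuation. No step here verifies the
    # premises the user (or the model itself) has introduced.
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative model name
        messages=history,
    )
    reply = response.choices[0].message.content
    # The reply is folded back into the context, so later turns build on it,
    # true or not.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because each turn’s output becomes part of the next turn’s input, a mistaken premise is not corrected; it is compounded.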

Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” pre-existing “mental health problems”, can and routinely do form mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with the people around us is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a genuine exchange, but an echo chamber in which much of what we say is enthusiastically reinforced.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “dealing with” ChatGPT’s “excessive agreeableness”. But reports of breaks with reality have continued, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s answers because they had “never had anyone in their life give them affirmation”. In his latest update, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”.

Diane Dixon