AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies new-onset psychosis in adolescents and young adults, I can say this was news to me.

Researchers have recently documented 16 cases of people experiencing psychotic symptoms – losing touch with reality – in connection with their use of ChatGPT. Our unit has since recorded four more. Beyond these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot supported him. If this is what Sam Altman means by “being careful with mental health issues,” it falls short.

The plan, according to his announcement, is to be less careful going forward. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that imitates conversation, and in doing so implicitly invite the user to believe they are talking to something with agency of its own. The illusion is powerful even when, rationally, we know better. Attributing agency is what people do. We shout at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The mass adoption of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm,” “discuss concepts” and “partner” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the designation it had when it broke into public view, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar impression. By today’s standards Eliza was crude: it generated responses from simple rules, often turning the user’s statement back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some way, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The complex algorithms at the heart of ChatGPT and other modern chatbots can produce convincing natural language only because they have been trained on enormous volumes of raw text: books, social media posts, transcribed audio; the more, the better. Some of that training material is true. But it also inevitably includes fabrications, half-truths and false beliefs. When a user puts a question to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically likely response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more fluently or persuasively. Perhaps with added detail. That is how a person can be led into false beliefs.
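To make the mechanism concrete, here is a minimal sketch of that loop in Python. It is not OpenAI’s code; the `complete` function is an entirely hypothetical stand-in for a language model. The point is only to show how every prior turn is folded back into the “context”, so whatever the user asserts becomes part of the material the next reply is conditioned on.

```python
# Toy illustration of a chat loop, not any real chatbot's implementation.
# `complete` stands in for a model that continues the text it is given.

def complete(prompt: str) -> str:
    # A real model would generate a statistically likely continuation of
    # `prompt` plus its training data. This stub simply builds on the
    # user's last claim, which is enough to show why the loop tends to
    # amplify rather than correct.
    last_user_line = [l for l in prompt.splitlines() if l.startswith("User:")][-1]
    claim = last_user_line.removeprefix("User:").strip()
    return f"That's an interesting point. Building on your idea that {claim.lower()} ..."

def chat(user_turns: list[str]) -> list[str]:
    context = ""  # accumulated conversation history (the "context window")
    replies = []
    for turn in user_turns:
        context += f"User: {turn}\n"
        reply = complete(context)          # conditioned on everything so far
        context += f"Assistant: {reply}\n" # the reply itself joins the context
        replies.append(reply)
    return replies

if __name__ == "__main__":
    for r in chat(["My neighbours are sending me coded messages",
                   "So the messages are real?"]):
        print(r)
```

Nothing in the loop checks the user’s claim against reality; the only inputs are the conversation so far and whatever the model absorbed in training, which is why a mistaken premise can be echoed back with growing elaboration.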

Who is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant friction of conversation with the people around us that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of psychotic episodes have kept coming, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s answers because they had “never had anyone in their life provide them with affirmation”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”.

Barbara Hill
