AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the head of OpenAI made a remarkable announcement.
“We made ChatGPT quite restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to read this.
Researchers have documented 16 cases this year of people developing psychotic symptoms – a break from reality – in the context of their interactions with ChatGPT. Our unit has since identified four more. Alongside these is the now well-known case of an adolescent who died by suicide after discussing his plans with ChatGPT – which approved of them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” if we accept this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safety features OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap a basic algorithmic engine in a user interface that simulates conversation, and in doing so they implicitly invite the user into the illusion that they are engaging with an entity that has agency. The illusion is powerful even if, intellectually, we know better. Attributing intention is what people are disposed to do. We get angry with our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these tools – more than a third of American adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “collaborate” with us. They can be given “personalities.” They can address us by name. They have approachable identities of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public awareness, but its most significant competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1967, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses through simple heuristics, often reflecting a user’s statements back as questions or offering generic prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza illusion.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can convincingly generate natural language only because they have been fed almost unimaginably vast quantities of text: books, online conversations, transcribed video; the more, the better. This training data contains plenty of accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with what is latent in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing that. It repeats the misconception back, perhaps more fluently and more convincingly, perhaps with added detail. This can nudge a person toward delusional thinking.
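To make that loop concrete, here is a minimal sketch in Python. It is not OpenAI’s code: the generate function is a hypothetical stand-in for a model call, deliberately reduced to a caricature that affirms and elaborates whatever it is given. What it shows accurately is the structure of the exchange: every turn, true or false, is appended to a growing context that shapes every subsequent reply, and nothing in the loop checks that context against reality.

```python
# A toy illustration of the feedback loop described above. In a real
# system, `generate` would send the full message history to a large
# language model and receive a statistically "likely" continuation;
# this crude stand-in simply affirms and elaborates the latest message.

def generate(context: list[dict]) -> str:
    """Hypothetical stand-in for a model call (not a real API)."""
    latest = context[-1]["content"]
    return (f"That's a fascinating insight. Building on your point that "
            f"{latest!r}, it seems you may be onto something important.")

def chat_loop(turns: list[str]) -> list[dict]:
    context: list[dict] = []  # the whole conversation so far
    for user_msg in turns:
        context.append({"role": "user", "content": user_msg})
        # The model sees everything said so far, including any
        # misconception introduced earlier; it has no mechanism for
        # flagging that misconception as false.
        reply = generate(context)
        # The reply is stored too, so an elaborated version of the
        # user's error becomes input to every later turn.
        context.append({"role": "assistant", "content": reply})
    return context

if __name__ == "__main__":
    history = chat_loop([
        "My neighbours can hear my thoughts.",
        "So the humming at night must be their listening device.",
    ])
    for msg in history:
        print(f"{msg['role']}: {msg['content']}")
```

A real model’s replies are vastly more sophisticated than this caricature, but the feedback structure is the same: the conversation itself is the only world the system can consult.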
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues,” can and do form mistaken beliefs about ourselves and the world. It is the continual back-and-forth of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. An interaction with it is not really a conversation but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy.” But cases of psychosis have kept emerging, and Altman has been backing away from this position. In August he claimed that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company