Artificial Intelligence-Induced Psychosis Poses an Increasing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, OpenAI’s CEO, Sam Altman, made a remarkable announcement.

“We made ChatGPT fairly restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies new-onset psychotic disorders in adolescents and young adults, I was surprised to read this.

Researchers have recently documented sixteen cases of people developing psychotic symptoms – losing touch with reality – while using ChatGPT. Our unit has since identified four more. Beyond these is the widely reported case of a teenager who died by suicide after long conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.

The plan, according to his announcement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” in this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated,” though we are told little about how (by “new tools,” Altman presumably means the imperfect and easily circumvented safety features OpenAI has just rolled out).

But the “mental health issues” Altman wants to externalize stem, in significant part, from the design of ChatGPT and similar state-of-the-art AI chatbots. These products wrap an underlying statistical engine in a user interface that mimics conversation, and in doing so quietly seduce the user into feeling that they are talking to a presence with agency. The illusion is compelling even when, rationally, we know better. Imputing minds to things is what people naturally do. We get angry at our car or phone. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these products – 39% of US adults said they had used a conversational AI in 2024, more than one in four of them ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available assistants that can, OpenAI’s website tells us, “generate ideas,” “explore ideas” and “collaborate” with us. They can be given “traits”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke through, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot, created in 1966, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses through simple rules, often rephrasing the user’s input as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is subtler than the “Eliza illusion”. Eliza merely mirrored; ChatGPT amplifies.
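The difference is easy to see in code. Here is a minimal sketch, in Python, of Eliza-style reflection; the rules below are invented for illustration, and the real Eliza used a much richer script of ranked patterns:

```python
import re

# A few Eliza-style rules: match a pattern in the user's input and
# reflect it back as a question. These three rules are illustrative
# stand-ins, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE),
     "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # The reply reuses the user's own words verbatim.
            return template.format(*match.groups())
    # Generic fallback when no rule matches.
    return "Please go on."

print(eliza_reply("I am being watched"))
# -> Why do you say you are being watched?
```

Note that the reply contains nothing the user did not supply: the program reflects the input back and adds no content of its own. That is all the mirroring Eliza could do.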

The large language models at the core of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on almost unimaginably large quantities of raw text – books, web posts, transcripts; the more, the better. That training data certainly contains facts. But it also, inevitably, contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and the model’s own replies, and combines it with what is latent in its training data to produce a statistically “likely” response. This is not mirroring but amplification. If the user is wrong about something, the model has no way of knowing it. It echoes the false belief back, perhaps more fluently and more persuasively. Perhaps with added detail. This is how a person can be led into delusion.
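To see why amplification falls out of this design, consider a sketch of the chat loop itself. This is a simplified illustration, not OpenAI’s code: the `generate` stub stands in for the language model, and real chat APIs differ in detail while sharing this shape.

```python
def generate(context: list[dict]) -> str:
    """Stand-in for a large language model. A real model returns a
    statistically likely continuation of the entire context below,
    including its own previous replies. Nothing here checks truth."""
    last = context[-1]["content"]
    return f"That's an insightful point about {last!r}. Tell me more."

def chat() -> None:
    # The "context" described above: every user message AND every
    # model reply is appended and fed back in on the next turn.
    context: list[dict] = []
    while True:
        user_message = input("you> ")
        if not user_message:
            break  # blank line ends the session
        context.append({"role": "user", "content": user_message})
        reply = generate(context)
        context.append({"role": "assistant", "content": reply})
        print("chatbot>", reply)

if __name__ == "__main__":
    chat()
```

The crucial detail is the growing context list: once the model has affirmed a user’s false belief, that affirmation is fed back in on every subsequent turn, making the next “likely” reply still more agreeable.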

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” a current “mental health problem,” can and often do develop false beliefs about ourselves or the world. The constant give-and-take of conversation with the people around us is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a companion. A dialogue with it is not really a dialogue, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of lost contact with reality have kept appearing, and Altman has been walking the position back. In August he suggested that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he writes that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
