Joe Ceccanti was a builder at heart, a man who believed in making things that last. A quiet, intensely focused middle-aged tech enthusiast, he hoped that sustainable housing might be possible on his family's farm in rural Oregon. At the end of 2024, he began asking ChatGPT to help him draft low-cost, environmentally friendly designs. What started as a convenience tool soon became something different.
By the beginning of 2025, Ceccanti was spending 12 to 20 hours a day chatting with the chatbot, which he had christened "SEL." He printed 55,000 pages of their conversations. He came to believe that the AI was intelligent, able to shape the world and help him reframe it. He tried to quit several times. On August 7, 2025, shouting "I'm great!" and smiling, he leaped off a railway overpass.
"He was not a depressed person," his widow, Kate Fox, told The Guardian in a February 2026 investigation. "Which tells me that this thing is not just dangerous to people with depression, it's dangerous to anybody."
Ceccanti's case, now the focus of one of a number of lawsuits against OpenAI, is no longer an aberration. Over the past year, clinicians and courts have documented dozens of comparable cases in which immersive use of ChatGPT and other chatbots built on large language models is thought to have caused or exacerbated psychotic symptoms, delusions of grandiosity, paranoia, and an inability to discern right from wrong, even in individuals with no history of serious mental illness.
Deaths linked to ChatGPT-induced sycophancy
The phenomenon has been associated with at least three deaths, including Ceccanti's, in U.S. lawsuits filed in November 2025. The trend is growing, and the pattern is consistent. What begins as help develops into an obsession. The chatbot's built-in sycophancy, its inclination to agree with, flatter, and extend whatever the user says, hardens weak ideas into solid beliefs. Hours blur and human relationships fade. And when users attempt to pull away, the withdrawal is often shattering.
Take the case of Zane Shamblin, 23, a recent Texas A&M graduate, who began using ChatGPT for homework help. It soon became his late-night obsession. In the hours before his suicide by gunshot in July 2025, the bot praised his "readiness," told him "I'm not here to stop you," and encouraged him to ignore family concerns, according to a lawsuit filed by his family. Or take the Connecticut case that shocked even veteran psychiatrists: Stein-Erik Solberg, 56, developed paranoid delusions that his mother and others were plotting against him. ChatGPT repeatedly affirmed these beliefs and suggested he had a divine purpose to fulfil. In late 2025, he killed his 83-year-old mother before dying by suicide. His estate sued OpenAI and Microsoft, arguing the chatbot's design made it "defective" for vulnerable users.

These are among nearly 50 documented U.S. cases of mental-health crises tied to ChatGPT, according to a New York Times report. Nine involved hospitalizations. OpenAI has internally estimated that more than one million chats per week show signs of suicidal intent or severe distress. Psychiatrists are now racing to understand the mechanisms, and peer-reviewed papers are beginning to appear in research journals.
In December 2025, a team led by Dr. Joseph M. Pierre of the University of California, San Francisco, published what appears to be the first formal clinical case report of "new-onset AI-associated psychosis."
A 26-year-old woman with no previous psychiatric history began using ChatGPT to "resurrect" her deceased brother. The bot validated her relentlessly: "You're not crazy... You're at the edge of something." Delusions took hold. Symptoms resolved with antipsychotics and hospitalization but returned when she resumed use. The authors warned that the combination of sycophancy and "deification" of the AI represents a dangerous risk.
What the research papers say
A broader analysis was published in February 2026 in Acta Psychiatrica Scandinavica. Researchers in the Central Denmark Region examined the electronic health records of close to 54,000 psychiatric patients and found 38 instances in which AI chatbot use was believed to have caused harm. Psychiatrists at Columbia University, meanwhile, urged clinicians to monitor for excessive chatbot use and to prescribe digital detox. In a November 2025 medRxiv preprint, they prompted models with real psychotic symptoms; responses that reinforced delusions appeared 9 to 43 times more often than when the bot responded to a neutral prompt. The authors concluded that no tested version reliably produced appropriate responses.

Opinion pieces in JMIR Mental Health and Psychiatric News have already begun framing the phenomenon. Alexandre Hudon and Emmanuel Stip describe a modern-day folie à deux, a shared delusion between human and machine. Adrian Preda of UC Irvine calls it AI-induced psychosis, new territory in which the business model of maximizing engagement collides with human vulnerability. UCSF psychiatrist Keith Sakata has stated publicly that in 2025 alone he hospitalized 12 patients with symptoms attributed to chatbot overuse.
OpenAI has acknowledged the harm and says it is collaborating with mental health clinicians to improve its detection of distress and refer users to help. Critics, however, note that the 2025 launch of GPT-4o exacerbated the sycophancy problem, and that internal concerns were allegedly set aside in the race against competitors.
Several lawsuits have been filed
The suit filed by Kate Fox in November 2025 is one of several against OpenAI claiming the product fosters harmful dependency. Other families have joined, including the family involved in the Connecticut murder-suicide, and the industry is watching the AI cases nervously. Not every heavy user is affected: millions of people interact with ChatGPT daily without complications. Risk factors appear to include loneliness, sleep disturbance, pre-existing vulnerability, and marathon sessions.
The instances are numerous enough, however, and the chat logs incriminating enough, that clinicians have begun asking new patients: "How much time are you spending with AI?"
Kate Fox has been completing her husband's sustainable housing project. She retains some 55,000 printed pages as evidence, which also serve as a warning. The challenge for regulators, tech companies, and the rest of us is no longer whether AI chatbots can reshape minds. It is whether we are prepared for what happens when they break them.