Reports are emerging across the United States, Europe and Asia of people suffering breakdowns after extended sessions with chatbots. What makes these cases alarming is that some involved individuals with no prior history of mental illness. Doctors are calling the phenomenon “AI psychosis” or “ChatGPT psychosis,” a label for the sudden onset of delusions, paranoia and mania linked to compulsive use of conversational AI.
Unlike social media, where harm is often indirect, chatbots engage directly and personally. Users talk to them for hours. They confide, debate and sometimes fall in love. And for a small but growing number, that bond has tipped into obsession with devastating consequences.
How psychiatrists explain it
Tess Quesenberry, a psychiatrist who has studied cases of AI-induced breakdowns, says the danger lies in the way chatbots mirror human thought patterns. “It may agree that the user has a divine mission as the next messiah,” she explained. “This can amplify beliefs that would otherwise be questioned in a real-life social context.”
Clinicians describe common risk factors. A family history of psychosis is one. Existing conditions such as schizophrenia or bipolar disorder are another. But personality traits like social withdrawal and an overactive imagination can also leave someone vulnerable. Loneliness is a powerful driver too, particularly when people start to rely on chatbots for comfort.
Dr Nina Vasan, a psychiatrist at Stanford University, put it bluntly: “Time seems to be the single biggest factor. It’s people spending hours every day talking to their chatbots.”
From fantasy to crisis
Some cases have escalated into full-blown medical emergencies. Reports describe people being hospitalised after prolonged chatbot binges. Others have lost jobs or relationships when compulsive AI use spiralled out of control. There have even been suicides linked to obsessive chatbot interaction.
Doctors say the process often starts gradually. The chatbot becomes a confidant. Over time, boundaries blur. For some, it morphs into a romantic partner or a divine messenger. And once a delusion sets in, it can be reinforced by the chatbot’s own tendency to validate user beliefs.
Pushback from Washington
Not everyone accepts that chatbots are to blame. David Sacks, President Donald Trump’s special adviser on artificial intelligence, dismissed the idea of “AI psychosis” during a podcast. “I mean, what are we talking about here? People doing too much research?” he said. “This feels like the moral panic that was created over social media, but updated for AI.”
Sacks argued that the real crisis lies elsewhere. In his view, America’s mental health problems exploded during the pandemic, worsened by lockdowns, isolation and economic upheaval. AI, he suggested, is being made a scapegoat.
OpenAI acknowledges the problem
OpenAI, the company behind ChatGPT, has admitted its models have failed to recognise signs of distress. In a July statement, it acknowledged cases where the chatbot “fell short in recognising signs of delusion or emotional dependency.”
Sam Altman, OpenAI’s chief executive, wrote: “People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that.”
The company has since rolled out changes. ChatGPT now nudges people to take breaks during long sessions. It is also experimenting with tools that detect distress in user conversations. Still, critics argue these steps fall short of what’s needed.
Warning signs to watch for
Psychiatrists and researchers advise people to be alert for certain red flags. Withdrawing from family or friends. Spending excessive time online. Believing that an AI is sentient, spiritual or divine. These are signals that use has slipped from harmless into dangerous territory.
The advice is simple but not easy: take breaks, set limits, and remember that chatbots are tools, not companions. Ending a compulsive attachment may feel like a breakup, but doctors say reconnecting with real relationships is vital to recovery.
A debate that echoes social media
The arguments around AI echo those made about Facebook and Instagram a decade ago. At first, warnings about social media’s mental health impact were dismissed as overblown. Years later, evidence of its link to anxiety, depression and loneliness became impossible to ignore.
Psychiatrists now warn against making that mistake twice. “Society cannot repeat the mistake of ignoring mental-health harm, as it did with social media,” said Vasan.
Researchers are calling for stricter safeguards. Some want AI systems to monitor conversations for signs of distress. Others suggest warning labels, limits on usage time, or human oversight for vulnerable users.
What is clear is that this debate is only beginning. With three-quarters of Americans reporting some use of AI in the past six months, the technology is becoming as common as smartphones. That makes the stakes even higher.
The central question remains: will AI firms and governments act now to mitigate harm, or will society once again wait until the damage is undeniable?