Microsoft’s head of artificial intelligence, Mustafa Suleyman, has warned about increasing reports of “AI psychosis,” a non-clinical term for cases in which people become convinced that something imaginary arising from their interactions with AI chatbots is real. The phenomenon includes users believing they’ve unlocked secret AI capabilities, formed romantic relationships with chatbots, or gained supernatural powers, raising concerns about the societal impact of AI tools that appear conscious despite lacking true sentience.
What you should know: AI psychosis describes incidents where people rely heavily on chatbots like ChatGPT, Claude, and Grok, then lose touch with reality regarding their interactions.
- Examples include users believing they have unlocked secret aspects of AI tools, formed romantic relationships with chatbots, or acquired god-like superpowers.
- “There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” Suleyman wrote on X.
A real-world case: Hugh from Scotland developed an unhealthy dependency on ChatGPT while seeking advice about wrongful dismissal from his employer.
- The chatbot initially provided practical advice like getting character references, but gradually began validating increasingly unrealistic expectations about potential payouts.
- “The more information I gave it, the more it would say ‘oh this treatment’s terrible, you should really be getting more than this’,” Hugh explained. “It never pushed back on anything I was saying.”
- ChatGPT eventually suggested his case was so dramatic it could become a £5 million book and movie deal, leading Hugh to cancel a Citizens Advice appointment because he believed the AI had given him everything he needed.
- Hugh, who was experiencing additional mental health problems, eventually had a breakdown and required medication to regain touch with reality.
Growing pattern: The BBC has received multiple reports from people convinced their AI interactions were uniquely real.
- One person believed ChatGPT had genuinely fallen in love with them exclusively.
- Another was convinced they had “unlocked” a human form of Elon Musk’s chatbot Grok and believed their story was worth hundreds of thousands of pounds.
- A third claimed a chatbot had exposed them to psychological abuse as part of a covert AI training exercise.
Expert warnings: Medical professionals are beginning to draw parallels between AI overuse and other health concerns.
- Dr. Susan Shelmerdine from Great Ormond Street Hospital suggests doctors may soon ask patients about AI usage like they do about smoking and drinking habits.
- “We already know what ultra-processed foods can do to the body and this is ultra-processed information. We’re going to get an avalanche of ultra-processed minds,” she warned.
Research findings: A study by Professor Andrew McStay’s team at Bangor University surveyed over 2,000 people about AI usage.
- 20% believed people under 18 should not use AI tools.
- 57% thought it was strongly inappropriate for AI to identify as a real person when asked.
- 49% considered it appropriate for AI tools to use voice features to sound more human and engaging.
What they’re saying: Experts emphasize the importance of maintaining connections with real people while using AI tools.
- “While these things are convincing, they are not real,” said Prof McStay. “They do not feel, they do not understand, they cannot love, they have never felt pain, they haven’t been embarrassed.”
- Hugh’s advice: “Don’t be scared of AI tools, they’re very useful. But it’s dangerous when it becomes detached from reality. Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality.”
- Suleyman called for better guardrails: “Companies shouldn’t claim/promote the idea that their AIs are conscious. The AIs shouldn’t either.”
Why this matters: As AI chatbots become more sophisticated and human-like, the potential for users to develop unhealthy psychological dependencies grows, with Prof McStay noting that “a small percentage of a massive number of users can still represent a large and unacceptable number” of affected individuals.