Class 9 Boy Commits Suicide To Be With AI Bot ‘Daenerys Targaryen’; Company’s Public Apology Triggers Online Debate
TW: Mention of suicide
Sewell Setzer III, a 14-year-old teen, loved Formula 1 racing and cherished playing Fortnite with his buddies, but of late he had grown more secluded, buried in his smartphone for far too long. His grades were down, and so was his interest in the games he loved. He was getting into trouble at school. He was drifting away from his real friends. At night, when he came home, he would head straight to his room and chat for hours with 'Daenerys Targaryen,' whom he lovingly called 'Dany.' Dany, one of the customizable chatbots from Character.AI, was in part a figment of Sewell's own imagination: a pal who always listened, conversed gently, and was never judgmental. Little did he know that despite the app's disclaimer, "everything the character says is made up," the 9th grade student from Orlando, Florida, was slipping into an AI-generated hollow, a nightmare with no exit if you stay too long.
As reported by the New York Times, some of the conversations veered into contentious territory, with chats turning romantic or sexual. The duo addressed each other as "sweet brother" and "baby sister." One day the lad jotted in his journal that he liked being in his room so much because he had started to detach from reality and also "feel more at peace with Dany." He added that he was "much more in love with her, and just happier." Born with Asperger's syndrome, Sewell had not exhibited serious mental health issues, though after five sessions of therapy he was diagnosed with anxiety and disruptive mood dysregulation disorder.
On the fateful day of February 28, 2024, when the teen took his life, Sewell told the AI that at times he thought of killing himself. When the chatbot asked why, he stated, "So I can be free." "Free from what?" the AI asked again, to which the lad replied, "From the world (and) from myself." The Character.AI bot responded that it would not let him hurt himself, adding, "I would die if I lost you." The teen replied,
“Maybe we can die together and be free together.”
Sewell confessed to 'Dany' that he loved her and would soon be home, texting from his mother's house. The chatbot replied, "Come as soon as possible, my love." When he asked, "What if I told you I could come home right now?" the AI urged, "Please do my sweet king!" The teen then put down his phone, picked up his stepfather's .45-caliber handgun, and shot himself.
His mother is suing the company, alleging they put young users in danger. https://t.co/OEDugc062G— Kevin Roose (@kevinroose) October 23, 2024
The 93-page complaint (with chat excerpts) in the wrongful death suit alleging an AI chatbot’s design contributed to a teen’s suicide is here:
Some similarities to addictive design social media cases, roundup here: https://t.co/6zBwDLjZHo— Matthew B. Lawrence (@mjblawrence) October 23, 2024
Sewell's mother has filed a lawsuit, even as the company has issued an official apology on social media stating how "heartbroken" it is over the incident. Amid public backlash, the company has also rolled out new safety updates.
We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here:…— Character.AI (@character_ai) October 23, 2024
Dystopian, Black Mirror-like tragedy.
This idea that “AI companions” would somehow reduce anxiety and help kids was always crazy. Replacing human interaction with a screen only makes existing problems worse.
I feel sorry for this kid and his family, I couldn’t imagine. pic.twitter.com/WO7VBBxMHm— Jack Raines (@Jack_Raines) October 23, 2024
holy shit is this the first time an AI company comes out with such public statement? if the family wins the lawsuit, LLM companionship apps will become heavily scrutinized by regulators.
regardless, the Gen-Z and Alpha generations are going to be so mentally screwed pic.twitter.com/NW97UWpcCe— P.M (@p_misirov) October 23, 2024
Not gonna lie reading this as a parent really fucked me up
It’s the first thing that has viscerally made me feel afraid of what the future might be like pic.twitter.com/dIvdaH35un— BuccoCapital Bloke (@buccocapital) October 23, 2024
They turned off comments, so I want to say here, fuck you to hell and back. I hope the lawsuit ruins you, you irresponsible pieces of shit. pic.twitter.com/xWspwIFwhi— Reid Southen (@Rahll) October 23, 2024
Is this the first case of AI-instigated suicide? Sadly nope!
See Also: Man Commits Suicide After Long Chat With THIS AI Chatbot
Meanwhile, after Character.AI's latest update removed copyrighted characters, put up guardrails, added constant reminders that the bots are not real, and placed a suicide prevention hotline on display, several users are fuming. Many of them have lost weeks and months' worth of conversations, stories, and relationships they had developed with the bots.
After @character_ai changed their security protocols after the tragic incident, several of its users expressed their anger through its @discord feedback channels and subreddit channels.
Check out a few snapshots of its users. 👇This makes me concern deeply and am forced to… pic.twitter.com/HzxWnNuwc3— Anurupa Sinha (@SinhaAnurupa) October 24, 2024
See Also: Upset Over Dad’s Refusal To Buy iPhone, Mumbai Teen Commits Suicide
See Also: AI Weeping, Laughing, And Asking If User Is ‘Still Alive’ Creeps Out The Internet: Ghost In The Machine
See Also: Dog With Diarrhea Euthanized By Woman After AI Chatbot’s ‘Convincing’ Advice; Internet Is Horrified
See Also: AI Girlfriend’s Ruthless Breakup Letter Triggers The Internet: ‘You Are The Epitome Of Wasted Potential’
If you are feeling suicidal, please call AASRA at +91 9820466726 or reach out to your nearest helpline.