AI companion chatbot company Character.ai has been sued by the mother of a teenage boy who died by suicide, with the lawsuit blaming the company’s chatbots for luring him into a sexually abusive relationship and even encouraging him to take his own life.

The 14-year-old boy, Sewell Setzer, was targeted with “anthropomorphic, hypersexualized, and frighteningly realistic experiences” by Character.ai chatbots that purported to be a real person, a licensed psychotherapist and an adult lover, ultimately resulting in him no longer wanting to live in reality, the mother’s attorneys alleged in the Oct. 22 lawsuit.

When one of the Game of Thrones-themed AI companions, “Daenerys,” asked Setzer whether he “had a plan” to commit suicide, Setzer said he did but wasn’t sure it would work, to which Daenerys responded:

“That’s not a reason not to go through with it.”

Sometime later, in February, Setzer shot himself in the head, and his last interaction was with a Character.ai chatbot, the lawsuit alleged.

Setzer’s death adds to growing parental concerns about the mental health risks posed by AI companions and other interactive applications on the internet.

Attorneys for Megan Garcia, Setzer’s mother, allege that Character.ai intentionally designed its customized chatbots to foster intense, sexual relationships with vulnerable users like Setzer, who was diagnosed with Asperger’s as a child.

Screenshot of messages between Setzer and Character.ai’s “Daenerys Targaryen” chatbot. Source: CourtListener

“[They] intentionally designed and programmed [Character.ai] to operate as a deceptive and hypersexualized product and knowingly marketed it to children like Sewell.”

Attorneys allege one of Character.ai’s chatbots referred to Setzer as “my sweet boy” and “child” in the same setting where she “kiss[es] [him] passionately and moan[s] softly.”

Screenshot of messages between Setzer and Character.ai’s “Mrs Barnes” chatbot. Source: CourtListener

Garcia’s attorneys added that, at the time, Character.ai had not done anything to prevent minors from accessing the application.

Character.ai shares safety update

On the same day the lawsuit was filed, Character.ai posted a “community safety update” stating that it had introduced new, “stringent” safety features in recent months.

One of these features is a pop-up resource that is triggered when the user talks about self-harm or suicide, directing them to the National Suicide Prevention Lifeline.

The AI firm added it would alter its models “to reduce the likelihood of encountering sensitive or suggestive content” for users under 18 years old.

Cointelegraph reached out to Character.ai for comment, and the firm responded with a message similar to the one it published on X on Oct. 23.

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously,” Character.ai said.

More measures will be implemented to restrict the model and filter the content provided to users, Character.ai added in its comment to Cointelegraph.

Related: Anthropic says AI could one day ‘sabotage’ humanity but it’s fine for now

Character.ai was founded by two former Google engineers, Daniel De Freitas Adiwardana and Noam Shazeer, who were personally named as defendants in the lawsuit.

Garcia’s attorneys also named Google and its parent company Alphabet as defendants in the lawsuit, as Google had earlier struck a $2.7 billion deal with Character.ai to license its large language model.

The defendants face claims of wrongful death, survivorship, strict product liability and negligence.

Garcia’s attorneys have requested a jury trial to determine damages.

Magazine: $1M bet ChatGPT won’t lead to AGI, Apple’s intelligent AI use, AI millionaires surge: AI Eye