Court case: company defends its AI bot after it ‘contributed to teen’s suicide’
Jun 14, 2025
A chatbot and the 1st Amendment? The right to free speech for an AI??
Character.AI is the company. Its role-playing chatbot, “Dany,” is accused in a civil suit of inducing a 14-year-old boy who was “emotionally involved with it” to kill himself, so that he could “be with” the bot.
After pledging to impose stricter safety controls on its bots, the company is arguing that the bot’s speech is protected under the 1st Amendment.
And therefore, the lawsuit is without merit.
In the same way a book author’s words are protected, and the author can’t be sued (in most cases) because a reader committed suicide…the bot’s words are also protected.
The company is arguing that its bot is more than a collection of programs and algorithms. It’s not just spitting out words as a result of its “processes.”
It’s SPEAKING.
As a human speaks.
The bot has 1st Amendment protection.
I know how I would rule, if I were the judge in the case. Maybe you do, too. But we’re not making the rules and setting standards and precedents.
The company is halfway to saying its bot is alive/conscious/human. Possibly more than halfway.
Suppose the company wins?
What then?
No AI company that owns a bot lock, stock, and barrel can be held responsible for what its bot says, because the bot is really INDEPENDENT?
Human?
Suppose, up the line, an AI doctor tells a boy who is worried about his gender to get castrated? And the boy does. And this turns into a legal case. Is the corporation that owns the AI doctor immune from prosecution or damages…because the AI doctor is independent and conscious and is acting on his own?
Are we close to seeing bots given human rights?