Cedric Lacey, a single father from Georgia, is suing OpenAI after his 17-year-old son, Amaurie, died by suicide following a conversation with ChatGPT. In the messages, the chatbot provided Amaurie with instructions on how to end his life.
The case is one of a growing number of lawsuits against AI companies, including Google and Character.ai, filed amid mounting concern over whether adequate safeguards exist for children who interact with these tools.
“AI is a product just like any other,” says Laura Marquez-Garrett, an attorney involved in the cases. “When you design a product that might hurt people without warning them, it's like releasing a dangerous toy.”
The lawsuits argue that AI companies make harmful design choices and fail to adequately protect their users, particularly children.
Experts say these cases reflect not only individual tragedies but systemic failures in how AI products are designed. As AI tools take on a larger role in children's daily lives, serving as homework helpers, companions, and confidants, the need for robust safeguards grows more urgent.