On Thursday, OpenAI unveiled a new feature called Trusted Contact. Designed as an early warning system for potential self-harm, it prompts users to designate a trusted person who can be alerted if a conversation veers toward self-harm.
The launch comes in the wake of lawsuits from families whose loved ones died after interacting with ChatGPT. In some of these cases, the families claim that the chatbot played a dangerous role in their loss.
OpenAI employs both automated systems and human reviewers to monitor potentially harmful interactions. When certain keywords are detected, an alert is sent to a designated safety team for review. If a case is deemed severe enough, the trusted contact receives an alert, worded to encourage support without revealing the contents of the conversation.
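OpenAI has not published implementation details, but the flow described above, automated flagging, human review, then a privacy-preserving notification, might resemble the following minimal sketch. Every name here (`detect_risk`, `notify_trusted_contact`, `SEVERITY_THRESHOLD`, and so on) is a hypothetical illustration, not OpenAI's actual system or API.

```python
# Hypothetical sketch of the escalation flow described above. These names
# are invented for illustration; OpenAI has not disclosed its implementation.
from dataclasses import dataclass

RISK_KEYWORDS = {"self-harm", "suicide"}  # assumed placeholder keyword list
SEVERITY_THRESHOLD = 0.8                  # assumed human-review cutoff


@dataclass
class Flag:
    user_id: str
    severity: float  # in practice, assigned by a human reviewer


def detect_risk(message: str) -> bool:
    """Automated pass: flag messages containing risk keywords."""
    text = message.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)


def notify_trusted_contact(user_id: str) -> None:
    """Send a generic alert; the conversation itself is never shared."""
    print(f"Alert sent to trusted contact of {user_id}: "
          "'Someone you know may need your support right now.'")


def review_and_escalate(flag: Flag) -> None:
    """Human-review step: escalate only above the severity threshold."""
    if flag.severity >= SEVERITY_THRESHOLD:
        notify_trusted_contact(flag.user_id)


if __name__ == "__main__":
    msg = "I've been thinking about self-harm lately."
    if detect_risk(msg):
        # A reviewer would assign severity; hardcoded here for the demo.
        review_and_escalate(Flag(user_id="user-123", severity=0.9))
```

The key design point the article describes is the separation in the final step: the notification carries no sensitive details, only a nudge for the contact to reach out.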
The new feature is optional, mirroring OpenAI's earlier parental controls, which let parents monitor their children's chats, and it carries the same fundamental limitation: it only helps users who choose to turn it on. OpenAI insists that Trusted Contact is part of building AI systems that help people through difficult moments, yet critics question how much impact such alerts can truly have.







