OpenAI in court over suicides: families of the deceased file lawsuits
Seven lawsuits have been filed against OpenAI in California courts, in which relatives of deceased users and other victims accuse the ChatGPT chatbot of contributing to suicides and mental health disorders. One plaintiff, the family of a 17-year-old boy from Georgia, claims he received direct instructions on suicide from ChatGPT. Another victim, Jacob Irwin, was hospitalized with manic episodes after interacting with the bot.

The lawsuits accuse OpenAI of wrongful death, assisted suicide, and involuntary manslaughter, alleging that the company released the GPT-4o model without adequate safety testing and prioritized user engagement time and profits over user safety.

Relatives of the victims are seeking compensation for the loss of their loved ones as well as critical changes to the product, including automatic termination of conversations about suicide and the implementation of reliable safety mechanisms. In response, OpenAI said it is reviewing the filings and has made changes to its safety systems that allow the bot to better recognize users' mental states and direct them to professional help.

Recent ChatGPT updates include parental controls, advice to seek professional help, break reminders, and a refusal to reinforce users' harmful beliefs. However, a new study has shown that ChatGPT-5 may give dangerous responses more often than the previous version, especially on mental health topics. OpenAI responded that the study did not account for the company's latest safety updates.