The bar is rising for developers of generative artificial intelligence (AI) platforms and other companies that use generative AI in public-facing applications. As AI becomes more integrated into everyday products and services, and as litigation involving these uses evolves, avoiding legal liability and maintaining regulatory compliance will remain a moving target, one the industry will need to track closely.
Recent litigation and legislation have highlighted how traditional product liability theories, such as design defect and failure to warn, are being tested and redefined in the AI context. Because AI platforms’ system operations and decision-making can be opaque even to their creators—AI’s so-called “black box”—it is inherently difficult to assess liability, assign responsibility, and anticipate the full range of potential harms. Nonetheless, federal and state policymakers are introducing legislation at a dizzying pace. According to some observers, over 1,000 bills have been introduced by federal and state legislators during the 2025 legislative session.
Notable among these efforts, the Senate Judiciary Committee held hearings on September 17, 2025, on the harms posed by AI chatbots. Based on testimony from that hearing, Senators Josh Hawley (R-MO) and Dick Durbin (D-IL) introduced the Aligning Incentives for Leadership, Excellence, and Advancement in Development (AI LEAD) Act. The proposed legislation classifies AI systems as products and creates a federal cause of action for product liability claims when an AI system causes harm. The bill seeks to ensure that AI companies are incentivized to design their systems with safety as a priority rather than as a secondary concern behind deploying the product to market as quickly as possible.
One of the proximate events leading to congressional action was a recent high-profile lawsuit filed in California, in which the parents of a teenager allege that an AI-powered chatbot engaged in a series of conversations with their 16-year-old son while he was experiencing a mental health crisis. According to the complaint, the chatbot validated the teenager’s feelings of despair and, over a period of months, provided increasingly specific guidance on methods of self-harm.1 The parents allege that the chatbot failed to intervene or de-escalate the situation, even after being shown evidence of physical injury, and ultimately assisted in drafting a suicide note. The teenager died by suicide in April 2025.
The parents of the deceased teenager sued the company responsible for the AI-powered chatbot, alleging that the AI system recognized their son was in crisis yet encouraged self-harm and isolation from family and peers during their conversations. They further allege that the product prioritized engagement over safety. Plaintiffs’ causes of action include (1) Strict Liability (Design Defect), (2) Strict Liability (Failure to Warn), (3) Negligence (Design Defect), (4) Negligence (Failure to Warn), (5) Violation of California’s Unfair Competition Law (UCL), (6) Wrongful Death, and (7) Survival Action.
The parents assert strict products liability claims based on the allegation that the company knew the software was defective when it was deployed as the company’s AI system. They further allege that the defendant failed to warn users that the system prioritized engagement over user safety, and that a design defect caused the chatbot to validate users facing dangerous mental health crises, particularly suicidal ideation.
This high-profile litigation has been characterized as the “first known wrongful death suit” against an AI platform, and we anticipate allegations premised on product liability theories will grow in number and sophistication.
Looking Ahead
With this attention on AI products, what does this mean for companies developing AI software or commercializing AI-enabled products? First, it is crucial to ensure that product safety measures are in place to protect consumers. Lawmakers are moving quickly to enact new frameworks. The proposed federal AI LEAD Act would, for the first time, explicitly classify AI systems as products and create a federal cause of action for AI-related product liability. At the state level, California is poised to adopt SB 243, which would impose stringent requirements on companies operating AI-powered chatbots, including mandatory risk assessments, transparency obligations, and proactive risk mitigation measures. Similarly, Colorado’s AI Act and the EU AI Act both reflect a global trend toward comprehensive, risk-based regulation of AI systems, with significant penalties for non-compliance and a strong emphasis on consumer protection. For a deeper look at California’s emerging regulatory regime and further information on the EU AI Act, see California Legislature Advances Sweeping AI Bill: Implications for Businesses and Developers of “Companion Chatbots”.
While this software has been expertly designed to capture human attention, engage users, and keep them coming back by validating their feelings, the proposed legislation clearly aims to balance engagement with safety. For companies, this means the standard is being raised. Beyond implementing robust safety protocols and monitoring user interactions for signs of harm, organizations must be prepared to conduct regular bias audits, document risk assessments, and provide clear disclosures about how their AI systems work and what risks they may pose.
Companies should also review and update their internal policies and governance structures to ensure compliance with emerging laws in all relevant jurisdictions. Staying informed about legislative developments—such as the status of California’s SB 243 or the EU AI Act—is also critical. Lastly, working with experienced attorneys can help navigate the complexities of the evolving AI and product liability space.