On March 25, 2026, a Los Angeles County jury in the California Superior Court returned a $6 million verdict in favor of a minor Plaintiff, K.G.M., against two major social media platforms, awarding both compensatory and punitive damages.1 The negligence claims centered on the allegation that the Defendants’ “addictive” product designs harmed the Plaintiff; in other words, that the platforms themselves caused the decline in the Plaintiff’s mental health.
This case is among the first of its kind to reach a jury verdict, placing social media platforms and their algorithms in uncharted territory. The platforms involved, however, have stated that they will appeal the verdict.
The Parties’ Arguments
The Plaintiff in this lawsuit was a minor who reportedly began using social media as young as ten years old. The Plaintiff asserted that the Defendants targeted minors like them as a “core market,” alleging that the platforms’ AI-driven recommendation tools appealed to minors and encouraged their continued use of the platforms without adequate warnings of the associated risks. Plaintiff further alleged that the platforms facilitated connections with predatory adults and promoted harmful content related to body image and self-harm. The claimed personal injuries included emotional and mental harm, addiction and compulsive use, anxiety, depression, body dysmorphia, and other related damages.
The social media platforms argued that Plaintiff’s claims were barred by Section 230 of the Communications Decency Act of 19962 and the First Amendment to the United States Constitution. Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”3 The Defendants argued that Plaintiff impermissibly sought to hold them liable as publishers of third-party content in violation of Section 230. Additionally, the Defendants contended that their algorithmic choices constituted protected editorial decisions about how to organize and present third-party speech. Both Defendants also argued that Plaintiff’s failure-to-warn claim should fail because no warning could have changed Plaintiff’s social media use: Plaintiff continued to consume the content because of the content and the platforms themselves, rather than the “scroll mechanism.”
Potential Implications of the Jury’s Verdict and Punitive Damage Exposure
The California jury awarded $3 million in compensatory damages and $3 million in punitive damages (split proportionally between the two social media defendants), for a total of $6 million. The future implications of the verdict remain unknown, and it raises more questions than answers.
The jury’s punitive damage award suggests that the jury found evidence of “knowing and deliberate conduct” on the part of both Defendants. Plaintiff relied on company documents and witness testimony to demonstrate this supposed knowledge.
Social Media Platforms as Products
Plaintiff’s allegations centered on the framing that the platform mechanisms themselves, including their algorithms and recommendation features, were what allegedly caused the harm. Plaintiff thereby sidestepped Section 230 immunity, which has historically served as a shield against similar content-based claims.
For example, when addressing the Section 230 argument at summary judgment, the California Superior Court reasoned that “the fact that a design feature like ‘infinite scroll’ impelled a user to continue to consume content that proved harmful does not mean that there can be no liability for harm arising from the design feature itself.”4 The Court used similar reasoning when disposing of the First Amendment defense at summary judgment, stating that “‘the allegedly addictive features of Defendants’ platforms (such as endless scroll) cannot be analogized to how a publisher chooses to make a compilation of information, but rather are based on harm allegedly caused by design features that affect how [the Plaintiff interacts] with the platforms regardless of the nature of the third-party content viewed.’”5 Whether Plaintiff’s allegations regarding the design of the platform itself and the inapplicability of the Section 230 and First Amendment defenses will gain widespread acceptance remains a question.
The verdict sits at the intersection of products liability doctrine and social media platform design, raising questions about how product defect and design claims can apply to content consumption and other online platforms. Whether this theory could extend to other software products, or even gaming platforms, also remains to be seen.
How the California appellate courts will ultimately resolve the tension between these types of claims and the defenses provided by Section 230 and the First Amendment remains something to watch as Defendants appeal this verdict.
1. K.G.M. v. Meta Platforms Inc. et al., 2026 WL 922262.
2. 47 U.S.C.A. § 230 (West).
3. 47 U.S.C.A. § 230(c)(1) (West).
4. Social Media Cases, 2025 WL 3207662 (Cal. Super.), at *1.
5. Social Media Cases, 2025 WL 3207662 (Cal. Super.), at *2 (citing Ruling on Defendants’ Demurrer to Identified Amended Short-Form Complaints, Jan. 8, 2025, at p. 15).