In January 2024, Plaintiff Graciela Dela Torre settled her long-term disability claim with Nippon Life Insurance Company (Nippon) and dismissed her case with prejudice. Later, Plaintiff
Dela Torre questioned her settlement. But rather than return to her attorney with her questions, she turned to ChatGPT, a widely used AI chatbot.
What resulted was striking. ChatGPT is alleged to have validated Dela Torre’s distrust of her own lawyer’s advice, helped her draft motions to reopen the settled case, and generated over 60 filings after a judge denied her initial attempt.1 The result? A recent lawsuit filed by Nippon against OpenAI in the Northern District of Illinois.
Nippon alleges that OpenAI, through its ChatGPT product, engaged in the unauthorized practice of law when ChatGPT provided legal advice, drafted court filings, and encouraged a pro se litigant to breach a valid settlement agreement. Nippon paid nearly $300,000 in defense costs and fees responding to meritless, AI-generated litigation. This lawsuit raises critical questions for defendants and their counsel: Why are clients paying such high defense costs to oppose an untrained pro se litigant? And, most importantly, who really pays the price? In this case, it was Nippon.
The Pro Se Leniency Problem: Lowering the Bar
Attorneys often recount stories of courts construing pro se filings liberally, excusing procedural errors by pro se litigants and forgiving missed filing deadlines. In some instances, courts have even provided legal guidance, or suggestions, to pro se litigants that might not have been offered to represented parties under similar circumstances. Here, the price paid is the court’s neutrality.
Adding AI chatbots to the pro se litigant’s toolbox complicates things further. A pro se plaintiff armed with a chatbot like ChatGPT can generate unlimited motions, briefs, and filings at no cost beyond filing fees. ChatGPT does not get tired, and it can draft hundreds of motions in the time it takes an attorney to draft one, whether or not it hallucinates2 caselaw. And because the plaintiff is pro se, courts may construe pleadings liberally, excuse filing deficiencies and non-compliance with Standing Orders or Local Rules, overlook procedural irregularities, and give that litigant every benefit of the doubt. The only safeguard against these abuses today is that some—but not all—courts require litigants, pro se or not, to disclose whether AI was used in generating a brief. Here, the price paid by the pro se litigant is simply the fee to file the ChatGPT-generated document. The price paid by the opposing party is much higher.
The New Reality
AI chatbots operate in a regulatory blind spot. Unlike licensed attorneys who face disbarment or sanctions, ChatGPT carries no malpractice insurance and cannot be disciplined by a bar association. Pro se litigants using AI face minimal accountability—even when filings contain hallucinated cases or serve no legitimate legal purpose—while corporate defendants must respond to every filing or risk default, paying market-rate legal fees without the ability to recover costs.
The asymmetry is stark: a plaintiff can use AI to generate unlimited filings at minimal cost while defendants spend hundreds of thousands of dollars responding. Courts are inconsistent—some require AI disclosure, others don’t. Some sanction abuse, others don’t. Until the regulatory framework catches up, defendants have limited options: move early for dismissal, document suspected AI use, seek sanctions where appropriate, and, like Nippon, perhaps sue the AI company itself.
This unauthorized practice of law by proxy causes both the legal system and defendants to pay the price.
The Bottom Line
Nippon is a wake-up call. Combining pro se leniency with AI assistance creates a perfect storm: reduced standards of scrutiny, minimal to no accountability, and massive costs imposed on represented opposing parties.
The solution is equal treatment and equal access to knowledge. Pro se status should not mean different rules, standards, or consequences. If a filing is deficient, the court should dismiss it. If a litigant abuses process, the court should sanction them. If an AI system practices law without a license, courts should hold the pro se litigant—or the provider itself—accountable. Anything less is a subsidy for AI-enabled abuse, paid by the defendants who must defend themselves from often meritless litigation.
- ChatGPT generated at least 58 to 66 separate filings across both lawsuits, including: a subpoena duces tecum — served in a case already dismissed with prejudice — demanding production of surveillance personnel identities, financial records, and privileged legal communications within five days of service (Dela Torre v. Nippon Life Insurance Company of America, No. 1:22-cv-07059 (N.D. Ill.), Dkt. No. 57); a motion to add Nippon’s Chief Strategy and Operations Officer as a “named representative” for deposition purposes after the close of discovery (id. at Dkt. No. 67); a motion to introduce evidence of alleged false advertising and deceptive business practices wholly unrelated to the dismissed ERISA claim (id. at Dkt. No. 53); requests for judicial notice of purported regulatory “sanctions” containing demonstrably false allegations (id. at Dkt. No. 37); a motion demanding that opposing counsel provide “verified proof” of medical incapacity after the Court had already granted a brief extension (id. at Dkt. No. 95); and requests for judicial notice accusing defense counsel of forgery (id. at Dkt. No. 134). Numerous filings also contained fabricated case citations — including Carr v. Gateway, Inc., 944 F. Supp. 2d 602 (D.S.C. 2013), a case that does not exist. In her second lawsuit, Dela Torre v. Davies Life & Health, et al., No. 1:25-cv-01483 (N.D. Ill.), Dela Torre filed 44 motions, memoranda, demands, petitions, and requests (Dkt. Nos. 22, 26, 30, 32, 36-38, 47, 50, 54, 57, 58, 63, 68, 69, 72, 73, 75, 76, 78, 81, 83, 92-96, 104, 110, 115-118, 121, 123, 124, 133, 137, 143, 152, 165, 170, 193, and 201) and 14 separate requests for judicial notice (Dkt. Nos. 47, 107, 108, 111-113, 125-127, 131, 132, 134, 138, and 148) in the approximately three weeks since March 10, 2025, all drafted with the assistance of ChatGPT. ↩︎
- A hallucinated case is a nonexistent legal case citation that an AI system generates and presents as if it were real. ↩︎