On July 10, 2025, Judge J. Paul Oetken of the United States District Court for the Southern District of New York delivered a pivotal opinion in Paul Lehrman and Linnea Sage v. Lovo, Inc. (Case No. 24-CV-3770 (JPO)), addressing for the first time at the federal district court level the unauthorized use of human voices by artificial intelligence systems and the legal remedies available to those affected.
This decision, rendered in the context of a motion to dismiss, engages with complex issues under copyright law, trademark law (Lanham Act), state privacy and consumer protection statutes, and common law, and it raises novel questions about the rights of individuals vis-à-vis voice cloning technologies.
The plaintiffs, Paul Lehrman and Linnea Sage, are professional voice-over actors residing in New York. In 2019 and 2020, they were solicited via the freelance platform Fiverr by anonymous users—later revealed to be employees of Lovo, Inc.—to record scripts for purported “research purposes.” Lehrman and Sage provided these recordings for compensation of $1,200 and $400 respectively, after receiving assurances that their voices would be used only for internal, academic purposes and not disseminated publicly.
Unbeknownst to them, Lovo incorporated their recordings into its AI text-to-speech generator, “Genny,” creating voice clones marketed under the names “Kyle Snow” and “Sally Coleman.” These synthetic voices were made available to paying subscribers for commercial use, including audiobooks, advertisements, and podcasts. The plaintiffs first learned of these uses in 2023 when Lehrman’s voice appeared in an episode of the Deadline Strike Talk podcast produced in partnership with MIT, prompting listeners and industry colleagues to remark on the striking similarity to Lehrman’s actual voice.
The plaintiffs filed their original complaint on May 16, 2024, and amended it on September 25, 2024, asserting claims for copyright infringement, false endorsement under Section 43(a) of the Lanham Act, misappropriation of voice under New York's right of publicity statutes, breach of contract, and related claims under state consumer protection law and common law.
The defendant Lovo moved to dismiss all claims on November 25, 2024, arguing, inter alia, that voices are not protectable under federal intellectual property laws and that no enforceable contracts existed.
The Court noted that the plaintiffs’ voice recordings could, in principle, fall within the scope of the Copyright Act, but observed that the plaintiffs had registered their works only after filing suit. Under prevailing precedent, which requires registration before an infringement action may be commenced, this procedural defect precluded the copyright claims at this stage.
Regarding claims of false endorsement, the Court concluded that the plaintiffs’ voices, as used here, did not qualify as distinctive marks or source identifiers. Relying on Pirone v. MacMillan, Inc., 894 F.2d 579 (2d Cir. 1990), the Court reasoned that voices in this context function primarily as the “product” being sold, not as a trademark. While the Court recognized the long-standing protection of celebrity images and likenesses under Section 43(a)(1)(A), it declined to extend this doctrine to voices of non-celebrities acting in their professional capacity.
The Court allowed the plaintiffs’ claims for misappropriation of voice to proceed under New York’s right of publicity statutes, Civil Rights Law §§ 50 and 51, emphasizing that these provisions protect against the unauthorized commercial exploitation of a person’s name, portrait, picture, or voice. The Court found that the plaintiffs plausibly alleged that Lovo’s cloning of their voices for commercial purposes violated these state-law protections.
The plaintiffs’ allegations of breach of contract also survived. The Court held that the Fiverr exchanges—comprising detailed negotiations and explicit restrictions on the use of the recordings—plausibly constituted binding agreements. The Court rejected Lovo’s Statute of Frauds defense at the motion to dismiss stage, noting that electronic communications may satisfy New York’s writing requirements under General Obligations Law § 5-701.
This opinion represents a landmark development in addressing the implications of voice cloning technologies for individual rights. By distinguishing between voices as mere “products” and voices as potential indicators of commercial source, the Court reinforced the limits of trademark law while highlighting the utility of state-law publicity rights in filling this gap.
Judge Oetken observed that “this case involves a number of difficult questions, some of first impression,” underscoring its potential consequences for the voice acting profession, the AI industry, and the public at large in an age where synthetic replication of identity elements is increasingly accessible.
Conclusion
By granting in part and denying in part Lovo’s motion to dismiss, the Court paved the way for further litigation on the plaintiffs’ state-law claims and contractual rights. This decision may inspire additional collective actions by voice actors against AI companies and raises broader policy questions about safeguarding identity in the era of generative artificial intelligence.
This article is a legal analysis prepared by a French attorney specializing in intellectual property and technology law. It is not intended as legal advice on U.S. law. For any legal opinion concerning U.S. law, readers should consult a qualified attorney admitted in the relevant jurisdiction.