Radio Host Sues Google, Alleges AI Voice Cloned His Likeness

At Digital Tech Explorer, we closely monitor the intersection of human creativity and AI innovation. A significant legal battle has recently emerged that could redefine the boundaries of digital identity. A prominent radio host has initiated legal action against Google, alleging that the tech giant’s note-taking tool, NotebookLM, features an AI-generated voice that is an unauthorized imitation of his own distinctive vocal persona.

The Allegations: Voice Replication and Digital Ethics

On January 23, former NPR host David Greene filed a lawsuit in California, accusing Google of “deliberate acts of theft.” The core of the complaint suggests that Google utilized Greene’s extensive vocal catalog to develop and refine the machine learning models powering the Audio Overviews in NotebookLM. Greene described the experience of hearing the AI output as “eerie,” noting that the synthetic voice captured his specific delivery and cadence with unsettling accuracy.

Greene, known for his work on major political programs like Left, Right & Center, argues that his voice is a professional asset cultivated over decades. The lawsuit contends that by replicating this “distinctive voice,” Google has created a synthetic persona that mimics his vocal identity without consent or compensation.

Impact on the Modern Media Landscape

[Image: an Asus ROG Carnyx microphone. High-quality hardware like this is essential for professional broadcasters whose voices are now being mimicked by AI.]

As we explore at Digital Tech Explorer, the rise of AI acceleration in media production raises complex questions for creators. Professional podcasters and broadcasters rely on their vocal identity for legitimacy and revenue. Greene’s legal team asserts that Google’s move to bypass traditional talent licensing constitutes a “violation of multiple statutes” designed to protect the podcast industry from unfair competition.

Beyond the legalities, there is a functional concern regarding the authority of these AI voices. NotebookLM can synthesize information from any source provided to it; if the AI sounds like a trusted journalist, it may inadvertently lend unearned credibility to inaccurate or biased data.

Technical Analysis and Google’s Rebuttal

The lawsuit is supported by forensic evidence. An independent voice recognition company analyzed Greene’s actual broadcasts alongside the NotebookLM audio, and the software returned a match confidence of 53–60%, which the complaint presents as evidence that Greene’s voice may have served as training data for the model. A score in that range is suggestive rather than conclusive, but in the world of hardware and software testing, such metrics are treated as a reasonable basis for further investigation.
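The complaint does not describe the vendor’s methodology, but speaker-verification systems typically reduce each voice sample to a fixed-length embedding vector and report how close two embeddings are as a percentage. As a rough illustration of where a figure like “53–60%” might come from, here is a minimal sketch using cosine similarity; the embedding values are random placeholders, not real voice data, and the 0–100% rescaling is an assumption for illustration only.

```python
import numpy as np

def similarity_confidence(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings, rescaled to 0-100%."""
    cos = float(np.dot(emb_a, emb_b) /
                (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    # Cosine similarity lies in [-1, 1]; map it to a [0, 100] "confidence" scale.
    return (cos + 1.0) / 2.0 * 100.0

# Hypothetical embeddings standing in for real speaker vectors.
rng = np.random.default_rng(0)
voice_a = rng.normal(size=256)                      # "known" speaker
voice_b = voice_a + 0.8 * rng.normal(size=256)      # a noisy, similar voice
voice_c = rng.normal(size=256)                      # an unrelated voice

print(f"similar pair:   {similarity_confidence(voice_a, voice_b):.1f}%")
print(f"unrelated pair: {similarity_confidence(voice_a, voice_c):.1f}%")
```

Real systems (d-vectors, x-vectors, and similar neural embeddings) use learned representations rather than raw vectors, but the final comparison step is conceptually this simple: a similarity score, thresholded to decide whether two recordings plausibly share a speaker.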

Google, however, maintains that the allegations are “baseless.” Spokesperson José Castañeda clarified that the male voice used in NotebookLM belongs to a paid professional actor hired specifically for this project. Google asserts that the similarity is coincidental rather than a result of unauthorized data scraping.

A Precedent for the Future of AI Branding

This case follows a familiar pattern in the industry, most notably the 2024 dispute between Scarlett Johansson and OpenAI. In that instance, OpenAI paused the AI voice “Sky” after the actress noted its striking resemblance to her own, particularly because she had previously declined to license her vocal likeness to the company.

As TechTalesLeo, I believe this case will serve as a landmark for how we define “voice theft” in the digital age. Whether the court finds in favor of Greene or Google, the ruling will set the standard for how AI companies must differentiate their outputs and respect the intellectual property of human creators. At Digital Tech Explorer, we will continue to monitor these developments as they shape the future of digital innovation and creative rights.