Artificial intelligence is rapidly reshaping the world of podcasting, particularly through the replication of voices—both of the living and the dead.
As voice cloning technology grows more advanced and accessible, podcast creators are increasingly exploring how AI-generated voices can enhance storytelling, resurrect iconic figures, and even headline entire shows.
While some efforts signal a thoughtful fusion of tech and tribute, others reveal the ethical and legal pitfalls of voice-based digital resurrection.
Two recent cases—one with enthusiastic estate participation and another mired in controversy—illustrate the spectrum of approaches podcasting is taking to AI-generated voices.
Orson Welles Returns—With Permission
In a groundbreaking development for AI-assisted audio storytelling, the voice of Orson Welles has been revived for a new podcast feature within the StoryRabbit app.
Developed by Treefort Media and Younite.ai in collaboration with the Welles estate, “Orson Welles Presents” offers listeners curated stories narrated in the late director’s unmistakable voice, generated through licensed recordings and AI synthesis.
The project emphasizes editorial control and ethical use, setting a clear example of how consent-driven AI voice applications can work effectively in the podcasting world.
The estate-backed initiative is being hailed as a respectful and innovative way to connect a historical legacy with modern digital tools.
By maintaining oversight of the content and clearly disclosing the AI nature of the voice, the producers have signaled a potential gold standard for podcasting’s adoption of artificial voices.
George Carlin’s AI Special Sparks Outrage
A very different scenario played out last year, when the comedy podcast “Dudesy” released an AI-generated stand-up special titled “George Carlin: I’m Glad I’m Dead.”
The episode featured a voice trained to imitate Carlin’s delivery and humor, but crucially, it was produced without any involvement from the late comedian’s estate. The backlash was swift: Carlin’s daughter, Kelly Carlin, decried the project as a “violation,” and her father’s estate filed a lawsuit alleging unauthorized use of his likeness and work.
The case was settled out of court, with the podcast’s creators agreeing to remove the content and not use Carlin’s voice again.
The incident raised serious questions about how podcasting platforms and creators should treat the digital likenesses of public figures—especially when those individuals can no longer offer consent or rebuttal.
Podcasting’s New Frontier: Fame Without Permission?
The contrast between these two cases underscores a larger reality: podcasting is increasingly serving as the testing ground for AI voice technologies.
Unlike film or advertising, podcasting relies almost entirely on the human voice to communicate emotion, character, and narrative. As a result, voice cloning—especially of well-known individuals—has a uniquely disruptive power in this space.
From fictionalized celebrity-hosted shows to AI-synthesized documentaries and memoirs, the appetite for leveraging recognizable voices is growing. Yet the podcasting world still lacks a unified standard or code of conduct for AI-generated vocal performances.
When voices are used without approval—whether for satire, storytelling, or commercial gain—legal conflicts and public backlash are almost inevitable.
The Broader Picture: Cloned Voices Beyond Podcasts
Outside the podcasting realm, AI-generated voice replication is spreading rapidly across industries—from film and advertising to news and political messaging. These applications reveal a broader set of challenges that echo those seen in the podcasting sphere.
In one high-profile incident, actress Scarlett Johansson took action after OpenAI released a voice assistant that bore a striking resemblance to her own voice, despite her having declined to participate in the project.
Similarly, Sir David Attenborough expressed outrage after learning his voice had been cloned and used in partisan news bulletins without his knowledge.
Even posthumous applications, often the most fraught, can be handled responsibly.
AI startup Respeecher partnered with Wilt Chamberlain’s estate to recreate his voice for a documentary series, a project carried out with the estate’s full approval.
Likewise, the family of Michael Parkinson has worked with producers to clone the late broadcaster’s voice for an unscripted podcast series.
These examples suggest that collaboration and transparency can lead to ethical uses of voice AI—but only when consent is built into the process.
Law, Consent, and the Limits of Technology
As AI voice replication accelerates, legal frameworks are struggling to keep pace. Some states have started to respond.
Tennessee passed the ELVIS Act in 2024 to protect musicians’ voices and likenesses from unauthorized AI cloning, while California enacted two laws requiring explicit consent for AI-generated digital replicas of a person, both during life and after death.
At the federal level, the proposed NO FAKES Act aims to codify protections against unauthorized AI-generated replicas of a person’s voice or likeness. It has drawn support from a wide range of artists, unions, and digital rights advocates, who argue that without such legislation, voice cloning could become a tool of exploitation, misinformation, or fraud.
Still, enforcement remains inconsistent, and the podcasting industry in particular continues to operate in a gray zone. For every well-intentioned, collaborative AI voice project, there is a counterexample of unauthorized usage drawing legal or moral scrutiny.
A Turning Point for Audio Storytelling
AI voice replication is no longer a speculative novelty—it is actively reshaping podcasting and other audio-driven industries. The technology’s potential to transform how stories are told and who gets to tell them is enormous. But the tension between innovation and exploitation remains unresolved.
The revival of Orson Welles’ voice shows that AI can be used to honor and extend a legacy. The unauthorized Carlin imitation, by contrast, reveals how the same technology can quickly veer into disrespect or harm.
Whether AI becomes a creative tool or a legal hazard in podcasting will depend on how rigorously creators, technologists, and lawmakers define—and enforce—the boundaries of voice-based identity in the digital age.