Podcast Hosts Address AI Concerns

We take our own deep dive into AI’s role in content creation, putting AI itself to the test along the way.


Update 3.26

Several major players in the podcast game have updated their policies surrounding artificial intelligence and how it can be applied. Apple and YouTube are both requiring disclaimers and setting boundaries around how AI can be utilized in content creation and presentation. 

We caught wind of the developments at Apple and YouTube via Podnews, an industry newsletter. Podnews added its own intros, which I’ve stripped out, and pulled the rest directly from the Apple and YouTube announcements.

While photo and video come with their own unique AI challenges, this seemed like an opportunity to peek behind the scenes of content creation, both done manually and with an AI engine.

So, I’ll walk you through how this article was developed, then run the whole thing through ChatGPT and add its ‘written’ version below.

My words are bolded and italicized below. Here’s what I first pulled in from the newsletter:

Apple Podcasts content guidelines 

Podcasting is an extraordinary medium that allows people to share information, perspectives, stories, and ideas with listeners around the world. With Apple Podcasts our guiding principle is simple: we want to provide a delightful, trusted experience for listeners, and rewarding opportunities for creators to distribute and monetize their shows.

To help creators and listeners know what to expect from each other, and from Apple Podcasts, we maintain the following content guidelines. These guidelines will evolve over time and we will keep creators informed of significant changes as they are updated.

In the event Apple reasonably believes that, based on human and/or systematic review, a creator’s content does not meet these guidelines, Apple may take action to label or remove the content from Apple Podcasts, suspend the sale of subscriptions, and/or suspend or terminate your account. We value the work creators offer on Apple Podcasts and will work to help resolve any issues that may arise.

For creators established in, and who offer subscriptions via Apple Podcasts to customers located in, the European Union, more information about redress options available to you in connection with an action Apple has taken against you, for example removal of your podcast from Apple Podcasts, is available here.

1. Inaccurate, Misleading, or Unauthorized Content

2. Illegal, Harmful, or Objectionable Content

3. Advertising Guidelines

4. Paid Content Guidelines

In addition to the preceding content guidelines, the following guidelines apply to paid content available on Apple Podcasts through Apple Podcasts Subscriptions.

5. Transcripts Guidelines

Now, here’s a separate post in the same Podnews newsletter. Again, I’ve stripped off the Podnews intro, and brought over the copy they got from YouTube’s website:

Generative AI is transforming the ways creators express themselves – from storyboarding ideas to experimenting with tools that enhance the creative process. But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic.

That’s why today we’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, or event – is made with altered or synthetic media, including generative AI.

As we announced in November, these disclosures will appear as labels in the expanded description or on the front of the video player. We’re not requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance.

The new label is meant to strengthen transparency with viewers and build trust between creators and their audience. Some examples of content that require disclosure include:

Of course, we recognize that creators use generative AI in a variety of ways throughout the creation process. We won’t require creators to disclose if generative AI was used for productivity, like generating scripts, content ideas, or automatic captions. We also won’t require creators to disclose when synthetic media is unrealistic and/or the changes are inconsequential.

These cases include:

You can see a longer list of examples in our Help Center. For most videos, a label will appear in the expanded description, but for videos that touch on more sensitive topics — like health, news, elections, or finance — we’ll also show a more prominent label on the video itself.

You’ll start to see the labels roll out across all YouTube surfaces and formats in the weeks ahead, beginning with the YouTube app on your phone, and soon on your desktop and TV. And while we want to give our community time to adjust to the new process and features, in the future we’ll look at enforcement measures for creators who consistently choose not to disclose this information. In some cases, YouTube may add a label even when a creator hasn't disclosed it, especially if the altered or synthetic content has the potential to confuse or mislead people.

Importantly, we continue to collaborate across the industry to help increase transparency around digital content. This includes our work as a steering member of the Coalition for Content Provenance and Authenticity (C2PA).

In parallel, as we previously announced, we’re continuing to work towards an updated privacy process for people to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice. We’ll have more to share soon on how we’ll be introducing the process globally.

Creators are the heart of YouTube, and they’ll continue to play an incredibly important role in helping their audience understand, embrace, and adapt to the world of generative AI. This will be an ever-evolving process, and we at YouTube will continue to improve as we learn. We hope that this increased transparency will help all of us better appreciate the ways AI continues to empower human creativity.

Then, as an add-on, Podnews felt compelled to include a link to their own AI policy.

Our editorial policy on AI

Nothing you see in Podnews is the direct output of any AI program, unless it clearly says so. We’re currently not using AI tools in any part of our editorial process.

We have, in the past, used Google’s Bard to summarise long podcast press releases. We might use summary tools again one day - maybe to help initially triage email. We don’t foresee a time when we use AI to actually write for us, though.

We don’t use AI photography tools, with the exception of when we’re writing some stories specifically about AI. We’ll credit the AI tool used if that’s the case.

Press releases and photographs submitted to us may have used AI in their generation. If they’ve not told us they’re using them, we can’t tell you. However, we’re pretty good at spotting when press releases or stories elsewhere are using AI, and we try not to link to them if we suspect that they’re AI generated.

Next, I jumped into a Google News search, using ‘podcast AI news’ as my search term. Several recent developments popped up, and a few had references to older stories.

From BGR.com, a site run under the Penske Media umbrella, I grabbed an update on one of the hot new AI platforms, which is generating a newscast completely with AI:

Perplexity, a still relatively new AI-based Google Search rival, is on a tear. For a start, the company raised over $70 million in January from top-tier investors, including Jeff Bezos. Following Perplexity’s launch last year, more than 10 million monthly active users are now flocking to the company’s clean, fast, and ad-free search experience. Perplexity has also just launched Discover Daily — a 100% AI-generated daily news podcast that managed to break into Apple’s top 200 news podcast in its first week.

The podcast, featuring episodes of no more than four minutes in length, feels like the perfect flex for a company eager to show off its increasingly robust AI prowess. The news summaries, for example, are read by a synthetic yet pleasant voice reminiscent of a BBC host (made possible by ElevenLabs’ customizable AI voice cloning technology). 

The summaries are drawn from Perplexity’s curated “Discover” feed that presents a running list of the day’s key headlines — some of the latest such news items including Meta and LG collaborating on a new high-end VR headset, and Redditors expressing dissatisfaction with the company’s IPO. “At Perplexity,” the company explains in its announcement of the new podcast, “we pride ourselves on being the fastest and most accurate way to search the web.”

The announcement continues: “Discover Daily is a testament to our commitment to making knowledge more accessible and engaging. By leveraging ElevenLabs’ lifelike voice technology, we’re able to transform the way people consume information, making it possible to absorb curated knowledge in audio form — perfect for those on the go or simply looking for a more dynamic way to learn something new.”

Perplexity CEO Aravind Srinivas told me last month that the company doesn’t have to make a direct, frontal assault on Google or challenge its market share in order to succeed. “We are operating in a new segment of AI assistants, a segment where new businesses and products will continue to be created and expanded. In this arena, Google doesn’t have a monopoly.”

The company’s buzz, meanwhile, only continues to grow. Among Perplexity’s investors are two with ties to Google: Susan Wojcicki, the former CEO of YouTube, and Jeff Dean, Google’s Chief Scientist, focusing on AI advances for Google DeepMind and Google Research. Moreover, when Perplexity announced its Series B just days ago, it added that the company’s search engine had served a billion queries in 2023. An impressive start for a company that’s also done next to zero marketing.

I also came across this one, out of CBC/Radio-Canada, which relates AI and content creation/ownership battles to something many folks are familiar with:

The estate of the late comedian George Carlin is suing the team behind a podcast, claiming the hosts used artificial intelligence to create what his family described as a "ghoulish" impersonation of Carlin for a comedy episode.

The lawsuit filed against hosts Chad Kultgen and Will Sasso, the latter of whom is from B.C., said the team infringed on the estate's copyright by using Carlin's life's work to train an AI program in order to impersonate him for the Dudesy podcast's hour-long episode titled "George Carlin: I'm Glad I'm Dead."

"The defendants' AI-generated 'George Carlin Special' is not a creative work. It is a piece of computer-generated clickbait which detracts from the value of Carlin's comedic works and harms his reputation," reads the lawsuit filed in California last week.

"It is a casual theft of a great American artist's work."

The case is another instance of artificial intelligence testing copyright laws.

Writers from comedian Sarah Silverman to Game of Thrones author George R.R. Martin, as well as publications like The New York Times, have filed suit against tech companies accused of using their work without permission to train AI programs.

The Dudesy special, published Jan. 9, begins with a Carlin-like voice saying, "I'm sorry it took me so long to come out with new material, but I do have a pretty good excuse. I was dead."

Through the rest of the episode, the AI character reflects on topics that have been prevalent in American culture since Carlin's death in 2008 — including Taylor Swift, gun culture and the role of artificial intelligence in society.

The special has since been hidden from the public on YouTube.

Kultgen and Sasso have not responded to the estate's lawsuit in court.


In an interview with CBC's As It Happens earlier this month, Carlin's daughter said the podcasters never contacted her family or asked permission to use her father's likeness. She said the recording left her feeling like she needed to protect her late father and the pride he took in creating his own comedic material.

"This is not my father. It's so ghoulish. It's so creepy," Kelly Carlin-McCall said of the AI-generated voice.

"I'm not OK with this. I would like them to apologize and say, 'Well, it was just a wild experiment and it didn't work and we apologize' and pull it down."

The show is hosted by Sasso, who was born in Delta, B.C., and Kultgen, an American writer and producer. An artificial-intelligence personality named Dudesy writes and controls the experimental program and acts as a third host, chatting with the two humans throughout the show.

In the lawsuit, Carlin's estate claimed the show made unauthorized copies of the comedian's copyrighted work to train Dudesy to create the hour-long special. It also claimed the podcast used Carlin's name and likeness without permission, including for Instagram posts promoting the episode. 

Courts have seen a wave of lawsuits as rapidly developing, easily accessible AI makes it easy to recreate a person's likeness.

"It's historically been common for people to do impersonations or mimic someone's style, and that has historically been allowed under copyright law," said Ryan Abbott, a partner at Los Angeles-based law firm Brown Neri Smith & Khan who specializes in intellectual property.

"But now you have AI systems that can do it in such a convincing way — someone might not be able to tell a synthetic person from a real person. It's also something people are increasingly doing without permission."

As usual, he added, the law hasn't kept pace with developing tech.

"Because this is so new, courts haven't weighed in yet on the degree to which these things are permissible," Abbott said.

"It is going to be a long time before these cases make their way through courts and, in the meantime, there is a lot of uncertainty around what people are allowed to do."

Sasso and Kultgen have said they can't disclose which company created Dudesy because there is a non-disclosure agreement in place. 

Carlin, 71, was widely recognized for his provocative counter-culture standup routines over his 50-year career. He was honoured with a star on the Hollywood Walk of Fame, appeared on The Tonight Show more than 100 times and received four Grammy Awards for his work in comedy.

Carlin died of heart failure at a hospital in Santa Monica, Calif. on June 22, 2008.

Normally, I’d combine elements from each of these stories, citing sources and hopefully adding at least one original interview or comment from my own research, and that would become a coherent, readable article with some news value.  (Instead, today it’s going out as a peek into the writer’s mind, and if it makes some sense, great!)

Here’s how I’d put together a short version. There’s more information in what I’ve pulled together as well if I wanted a longer form article. (I’ll start with the same intro I used at the beginning.)

Several major players in the podcast game have updated their policies surrounding artificial intelligence and how it can be applied. Apple and YouTube are both requiring disclaimers and setting boundaries around how AI can be utilized in content creation and presentation. 

The updated guidelines come as AI generation continues to improve, and as news organizations and other content creators face ethical and legal challenges surrounding their use of AI. In a high-profile lawsuit, the family of comedian George Carlin is suing podcasters who used AI to create a new comedy routine featuring Carlin, who died in 2008. AI has also been cited as a driver in the recent proliferation of fabricated images purporting to show pop star Taylor Swift, produced through a process known as deepfaking.

Moving the needle in the other direction, Perplexity has released a daily newscast that’s generated entirely by AI, with no human intervention beyond setting up the aggregator feeds that funnel information into the AI program.

Many media outlets have not only adopted AI usage guidelines, but also posted them to their own sites. That includes news organizations creating their own content, as well as hosting platforms, such as Apple and YouTube, that host content created by others.

That’s the end of a short piece I created entirely through my own research and curation of information. Now, I’ll take the copy from each source, run it through ChatGPT, and prompt it to ‘summarize these articles in journalistic style.’ The result:

Apple has implemented stringent guidelines for creators on its Podcasts platform to ensure a trusted and rewarding experience for both listeners and creators. The guidelines cover various aspects including metadata accuracy, explicit content tagging, impersonation, rights infringement, and the use of artificial intelligence (AI). Creators failing to adhere may face content removal or account suspension. Specifically, Apple requires transparent disclosure of AI-generated content, emphasizing the importance of authenticity.

Moreover, Apple sets forth regulations against illegal, harmful, or objectionable content, emphasizing compliance with laws and sensitivity towards cultural norms. The guidelines extend to advertising and paid content, mandating clarity, honesty, and ongoing value in subscriptions.

In a related development, YouTube has introduced a new tool requiring creators to label realistically altered or synthetic media to enhance viewer transparency. This move aims to build trust between creators and their audience by ensuring that content involving realistic depictions of people, places, or events made with generative AI is appropriately disclosed.

Elsewhere, the estate of late comedian George Carlin has filed a lawsuit against the hosts of the Dudesy podcast for using AI to create an impersonation of Carlin without permission, marking another instance where AI's capabilities challenge copyright laws. The legal action underscores the growing concerns over AI's impact on intellectual property rights and the authenticity of creative content.

As AI continues to evolve, these developments highlight the need for clear guidelines and ethical standards to navigate the complex interplay between technology and creativity.

In my experience with ChatGPT, that’s a fairly concise summary, and it flows better than most. It was pushed out in about 2 seconds, which is significantly less time than I spent writing after gathering the information.
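For anyone curious about the mechanics, a step like that can also be scripted rather than pasted into a chat window. The sketch below is purely illustrative and not part of my actual workflow: the function name and the commented-out API call are my own assumptions. It simply stitches the source articles into a single prompt like the one I gave ChatGPT.

```python
# Illustrative sketch only: one way to assemble a "summarize these
# articles in journalistic style" prompt before sending it to a
# chat-completion API. Names here are hypothetical placeholders.

def build_summary_prompt(articles):
    """Join source articles with separators and prepend the instruction."""
    joined = "\n\n---\n\n".join(articles)
    return "Summarize these articles in journalistic style:\n\n" + joined

articles = [
    "Apple Podcasts now requires disclosure of AI-generated content.",
    "YouTube will label realistic altered or synthetic media.",
]
prompt = build_summary_prompt(articles)

# The finished prompt could then go to any chat-completion endpoint,
# for example (untested, requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
```

The prompt-building step is the only part a human really shapes; everything after it is the model's two seconds of work.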

So, weigh in: What’s the future of AI for podcasters and media creators? 
