Podcast Hosts Address AI Concerns

We take our own dive into AI's role in content creation, putting AI itself to the test.

Update 3.26

Several major players in the podcast game have updated their policies surrounding artificial intelligence and how it can be applied. Apple and YouTube are both requiring disclaimers and setting boundaries around how AI can be utilized in content creation and presentation. 

We caught wind of the developments at Apple and YouTube via Podnews, an industry newsletter. Podnews added its own intros, which I've stripped out here; the rest was pulled directly from the Apple and YouTube announcements.

While photo and video have their own unique AI challenges, this seemed like an opportunity to peek behind the scenes of content creation, both manually and utilizing an AI engine.

So, I'll walk you through how this article was developed, and then run the whole thing through ChatGPT and add its 'written' version below.

My words are bolded and italicized below. Here’s what I first pulled in from the newsletter:

Apple Podcasts content guidelines 

Podcasting is an extraordinary medium that allows people to share information, perspectives, stories, and ideas with listeners around the world. With Apple Podcasts our guiding principle is simple: we want to provide a delightful, trusted experience for listeners, and rewarding opportunities for creators to distribute and monetize their shows.

To help creators and listeners know what to expect from each other, and from Apple Podcasts, we maintain the following content guidelines. These guidelines will evolve over time and we will keep creators informed of significant changes as they are updated.

In the event Apple reasonably believes that, based on human and/or systematic review, a creator’s content does not meet these guidelines, Apple may take action to label or remove the content from Apple Podcasts, suspend the sale of subscriptions, and/or suspend or terminate your account. We value the work creators offer on Apple Podcasts and will work to help resolve any issues that may arise.

For creators established in, and who offer subscriptions via Apple Podcasts to customers located in, the European Union, more information about redress options available to you in connection with an action Apple has taken against you, for example removal of your podcast from Apple Podcasts, is available here.

1. Inaccurate, Misleading, or Unauthorized Content

  • 1.1. Metadata Accuracy. Podcast metadata (for example, titles, descriptions, language, etc.) must always accurately represent the corresponding content that is distributed.
  • 1.2. Explicit Content. Creators must flag individual episodes with the “Explicit” tag if they contain explicit content (for example, profanity or content that may not be suitable for children).
  • 1.3. Self-Censored Content. Apple Podcasts automatically censors certain explicit words in titles, so creators should not self-censor language in metadata, such as titles or descriptions. These metadata fields must be written as intended, and all words should be completely spelled out. For example, creators do not need to insert asterisks within explicit words. However, the use of censoring symbols as a stylistic name or brand choice is permitted, if you use that name or brand with such censoring symbols consistently in your name elsewhere, beyond Apple Podcasts.
  • 1.4. Impersonation. Podcasts designed to mislead listeners by mimicking, copying, or duplicating other content or search terms are not permitted. Creators must not pretend to be someone else or claim they are affiliated with someone else without their permission and cannot use the artwork, description, or metadata of another creator’s show without permission. 
  • 1.5. Manipulation. Creators must not encourage or cause any misleading, fraudulent, improper, unlawful, or dishonest activities that interfere with the integrity of Apple Podcasts. For example, creators cannot artificially increase, falsify, or otherwise manipulate a podcast’s follows, listens, ratings, or reviews, or attempt to influence search using inaccurate or inappropriate terms.
  • 1.6. Incomplete Content. Don’t provide incomplete content, including sample or trailer episodes for podcasts which are not or will not be shortly available in Apple Podcasts. For episodic shows, you may choose to make a subset of your podcast episodes available on Apple Podcasts so long as the episodes are published to Apple Podcasts on a regular cadence. 
    • The amount of editorial content in an episode should greatly exceed any advertising content, so episodes must not consist primarily of advertising or marketing for products, content, or services outside Apple Podcasts without providing an editorial benefit for listeners.
  • 1.7. Rights Infringement. Podcasts must not contain materials that infringe on the rights of others (for example, copyright, right of publicity, trademark, etc.). Creators must ensure their podcasts only include content that they have the rights to use.
  • 1.8. Password-Protected Content. Content that requires a password is prohibited from the Apple Podcasts directory.
  • 1.9. Duplicate Content and Repeated Submissions. Content may be removed if multiple copies of a show or episode are submitted. If creators seek to publish an updated version of a show or episode, they should upload a replacement to the original.
  • 1.10. Use of Apple Inc. Copyrighted Terms. Podcasts not produced by Apple Inc. or its affiliates must not use Apple trademarks (such as the Apple logo, Apple Inc., or Apple Podcasts) in a way that makes the content appear to be Apple-branded or created by Apple. To learn more, see Guidelines for Using Apple Trademarks and Copyrights.
  • 1.11. Artificial Intelligence (AI) Transparency. Creators using AI to generate a material portion of the podcast’s audio must prominently disclose this in the audio and metadata for each episode and/or show.
  • 1.12. Misleading Use of AI. Creators must not use AI in violation of these guidelines, including to mislead or deceptively portray real-life events in their content (e.g., using AI to fabricate news stories or manipulate audio clips to present false narratives).

2. Illegal, Harmful, or Objectionable Content

  • 2.1. Illegal Content. Creators’ content must comply with applicable law and regulations for the countries or regions where the content is distributed. 
  • 2.2. Harmful or Objectionable Content. Some content may not be suitable for distribution on Apple Podcasts, such as content that may lead to harmful or dangerous outcomes, or content that is obscene or gratuitous. For example, Apple Podcasts does not allow content that includes:
    • Encouragement of suicide or self-harm.
    • Encouragement of criminal activity or illegal acts.
    • Defamatory, discriminatory, or mean-spirited content, including references or commentary about religion, race, sexual orientation, gender, national or ethnic origin, or other targeted groups, particularly if the content is likely to humiliate, intimidate, harass, or harm individuals or groups or includes specific threats of violence against certain groups or individuals. Professional political satirists and humorists are generally exempt from this requirement.
  • 2.3. Cultural Sensitivities. Creators are responsible for being knowledgeable about and, if appropriate, addressing any cultural sensitivities before distributing a podcast in a given country or region.

3. Advertising Guidelines

  • 3.1. Advertising Laws. All advertising must be in compliance with applicable law (for example, laws around disclosure of content as paid or sponsored advertising, the content of advertising, and what type of advertising may be included without an age verification).

4. Paid Content Guidelines

In addition to the preceding content guidelines, the following guidelines apply to paid content available on Apple Podcasts through Apple Podcasts Subscriptions.

  • 4.1. Subscription Accuracy. Creators must clearly and accurately describe what benefits listeners will receive in exchange for the price, and the description must match the benefits provided, including the frequency or cadence on which subscribers can expect new subscriber content. Creators must update this information if and when there is a change to the benefits or content that a subscriber might reasonably be expecting.
  • 4.2. Duplicate Shows. A show must be associated with a channel and duplicate shows are not permitted. If creators offer free and paid versions of the same show on Apple Podcasts, they must link them using Apple Podcasts Connect.
  • 4.3. Premium Experience. In order to ensure that customers receive a premium experience when buying subscriptions on Apple Podcasts:
    • Creators cannot charge for content in Apple Podcasts if that content is available elsewhere for free at the same time in the same country or region.
    • Creators cannot offer benefits, such as ad-free listening or early access to new content, if those benefits are available elsewhere for free at the same time for listeners in the same country or region.
    • Creators cannot offer ad-free listening as a benefit if they do not have ads in their free shows.
  • 4.4. Ongoing Value. Subscriptions are not intended to be one-time transactions. The value may differ for each subscription depending on the content and benefits offered by creators, but the subscription must supply ongoing value to subscribers.
  • 4.5. Deceptive or Unfair Pricing. While subscription pricing is up to you, Apple may reject subscriptions that are excessively high or clearly unfair to customers. Creators must not offer subscriptions that try to cheat users with irrationally high prices.

5. Transcripts Guidelines

  • 5.1. Transcript Accuracy. Transcripts that you provide to Apple for display with your content on Apple Podcasts must accurately reflect and correspond to such content.

Now, here’s a separate post in the same Podnews newsletter. Again, I’ve stripped off the Podnews intro, and brought over the copy they got from YouTube’s website:

Generative AI is transforming the ways creators express themselves – from storyboarding ideas to experimenting with tools that enhance the creative process. But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic.

That’s why today we’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, or event – is made with altered or synthetic media, including generative AI.

As we announced in November, these disclosures will appear as labels in the expanded description or on the front of the video player. We’re not requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance.

The new label is meant to strengthen transparency with viewers and build trust between creators and their audience. Some examples of content that require disclosure include:

  • Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another's or synthetically generating a person’s voice to narrate a video.
  • Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than in reality.
  • Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.

Of course, we recognize that creators use generative AI in a variety of ways throughout the creation process. We won’t require creators to disclose if generative AI was used for productivity, like generating scripts, content ideas, or automatic captions. We also won’t require creators to disclose when synthetic media is unrealistic and/or the changes are inconsequential.

These cases include:

  • Clearly unrealistic content, such as animation or someone riding a unicorn through a fantastical world
  • Color adjustment or lighting filters
  • Special effects like background blur or vintage effects
  • Beauty filters or other visual enhancements

You can see a longer list of examples in our Help Center. For most videos, a label will appear in the expanded description, but for videos that touch on more sensitive topics — like health, news, elections, or finance — we’ll also show a more prominent label on the video itself.

You’ll start to see the labels roll out across all YouTube surfaces and formats in the weeks ahead, beginning with the YouTube app on your phone, and soon on your desktop and TV. And while we want to give our community time to adjust to the new process and features, in the future we’ll look at enforcement measures for creators who consistently choose not to disclose this information. In some cases, YouTube may add a label even when a creator hasn't disclosed it, especially if the altered or synthetic content has the potential to confuse or mislead people.

Importantly, we continue to collaborate across the industry to help increase transparency around digital content. This includes our work as a steering member of the Coalition for Content Provenance and Authenticity (C2PA).

In parallel, as we previously announced, we’re continuing to work towards an updated privacy process for people to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice. We’ll have more to share soon on how we’ll be introducing the process globally.

Creators are the heart of YouTube, and they’ll continue to play an incredibly important role in helping their audience understand, embrace, and adapt to the world of generative AI. This will be an ever-evolving process, and we at YouTube will continue to improve as we learn. We hope that this increased transparency will help all of us better appreciate the ways AI continues to empower human creativity.

Then, as an add-on, Podnews felt compelled to include a link to their own AI policy.

Our editorial policy on AI

Nothing you see in Podnews is the direct output of any AI program, unless it clearly says so. We’re currently not using AI tools in any part of our editorial process.

We have, in the past, used Google’s Bard to summarise long podcast press releases. We might use summary tools again one day - maybe to help initially triage email. We don’t foresee a time when we use AI to actually write for us, though.

We don’t use AI photography tools, with the exception of when we’re writing some stories specifically about AI. We’ll credit the AI tool used if that’s the case.

Press releases and photographs submitted to us may have used AI in their generation. If they’ve not told us they’re using them, we can’t tell you. However, we’re pretty good at spotting when press releases or stories elsewhere are using AI, and we try not to link to them if we suspect that they’re AI generated.

Next, I jumped into a Google News search, using ‘podcast AI news’ as my search term. Several recent developments popped up, and a few had references to older stories.

From a site run under the Penske Media umbrella, I grabbed an update on one of the hot new AI platforms, which is generating a newscast entirely with AI:

Perplexity, a still relatively new AI-based Google Search rival, is on a tear. For a start, the company raised over $70 million in January from top-tier investors, including Jeff Bezos. Following Perplexity’s launch last year, more than 10 million monthly active users are now flocking to the company’s clean, fast, and ad-free search experience. Perplexity has also just launched Discover Daily — a 100% AI-generated daily news podcast that managed to break into Apple’s top 200 news podcasts in its first week.

The podcast, featuring episodes of no more than four minutes in length, feels like the perfect flex for a company eager to show off its increasingly robust AI prowess. The news summaries, for example, are read by a synthetic yet pleasant voice reminiscent of a BBC host (made possible by ElevenLabs’ customizable AI voice cloning technology). 

The summaries are drawn from Perplexity’s curated “Discover” feed that presents a running list of the day’s key headlines — some of the latest such news items including Meta and LG collaborating on a new high-end VR headset, and Redditors expressing dissatisfaction with the company’s IPO. “At Perplexity,” the company explains in its announcement of the new podcast, “we pride ourselves on being the fastest and most accurate way to search the web.”

The announcement continues: “Discover Daily is a testament to our commitment to making knowledge more accessible and engaging. By leveraging ElevenLabs’ lifelike voice technology, we’re able to transform the way people consume information, making it possible to absorb curated knowledge in audio form — perfect for those on the go or simply looking for a more dynamic way to learn something new.”

Perplexity CEO Aravind Srinivas told me last month that the company doesn’t have to make a direct, frontal assault on Google or challenge its market share in order to succeed. “We are operating in a new segment of AI assistants, a segment where new businesses and products will continue to be created and expanded. In this arena, Google doesn’t have a monopoly.”

The company’s buzz, meanwhile, only continues to grow. Among Perplexity’s investors are two with ties to Google: Susan Wojcicki, the former CEO of YouTube, and Jeff Dean, Google’s Chief Scientist, focusing on AI advances for Google DeepMind and Google Research. Moreover, when Perplexity announced its Series B just days ago, it added that the company’s search engine had served a billion queries in 2023. An impressive start for a company that’s also done next to zero marketing.

I also came across this one, from CBC/Radio-Canada, which relates AI and content creation/ownership battles to something many folks are familiar with:

The estate of the late comedian George Carlin is suing the team behind a podcast, claiming the hosts used artificial intelligence to create what his family described as a "ghoulish" impersonation of Carlin for a comedy episode.

The lawsuit filed against hosts Chad Kultgen and Will Sasso, the latter of whom is from B.C., said the team infringed on the estate's copyright by using Carlin's life's work to train an AI program in order to impersonate him for the Dudesy podcast's hour-long episode titled "George Carlin: I'm Glad I'm Dead."

"The defendants' AI-generated 'George Carlin Special' is not a creative work. It is a piece of computer-generated clickbait which detracts from the value of Carlin's comedic works and harms his reputation," reads the lawsuit filed in California last week.

"It is a casual theft of a great American artist's work."

The case is another instance of artificial intelligence testing copyright laws.

Writers from comedian Sarah Silverman to Game of Thrones author George R.R. Martin, as well as publications like The New York Times, have filed suit against tech companies accused of using their work without permission to train AI programs.

The Dudesy special, published Jan. 9, begins with a Carlin-like voice saying, "I'm sorry it took me so long to come out with new material, but I do have a pretty good excuse. I was dead."

Through the rest of the episode, the AI character reflects on topics that have been prevalent in American culture since Carlin's death in 2008 — including Taylor Swift, gun culture and the role of artificial intelligence in society.

The special has since been hidden from the public on YouTube.

Kultgen and Sasso have not responded to the estate's lawsuit in court.


In an interview with CBC's As It Happens earlier this month, Carlin's daughter said the podcasters never contacted her family or asked permission to use her father's likeness. She said the recording left her feeling like she needed to protect her late father and the pride he took in creating his own comedic material.

"This is not my father. It's so ghoulish. It's so creepy," Kelly Carlin-McCall said of the AI-generated voice.

"I'm not OK with this. I would like them to apologize and say, 'Well, it was just a wild experiment and it didn't work and we apologize' and pull it down."

The show is hosted by Sasso, who was born in Delta, B.C., and Kultgen, an American writer and producer. An artificial-intelligence personality named Dudesy writes and controls the experimental program and acts as a third host, chatting with the two humans throughout the show.

In the lawsuit, Carlin's estate claimed the show made unauthorized copies of the comedian's copyrighted work to train Dudesy to create the hour-long special. It also claimed the podcast used Carlin's name and likeness without permission, including for Instagram posts promoting the episode. 

Courts have seen a wave of lawsuits as rapidly developing, easily accessible AI makes it easy to recreate a person's likeness.

"It's historically been common for people to do impersonations or mimic someone's style, and that has historically been allowed under copyright law," said Ryan Abbott, a partner at Los Angeles-based law firm Brown Neri Smith & Khan who specializes in intellectual property.

"But now you have AI systems that can do it in such a convincing way — someone might not be able to tell a synthetic person from a real person. It's also something people are increasingly doing without permission."

As usual, he added, the law hasn't kept pace with developing tech.

"Because this is so new, courts haven't weighed in yet on the degree to which these things are permissible," Abbott said.

"It is going to be a long time before these cases make their way through courts and, in the meantime, there is a lot of uncertainty around what people are allowed to do."

Sasso and Kultgen have said they can't disclose which company created Dudesy because there is a non-disclosure agreement in place. 

Carlin, 71, was widely recognized for his provocative counter-culture standup routines over his 50-year career. He was honoured with a star on the Hollywood Walk of Fame, appeared on The Tonight Show more than 100 times and received four Grammy Awards for his work in comedy.

Carlin died of heart failure at a hospital in Santa Monica, Calif., on June 22, 2008.

Normally, I’d combine elements from each of these stories, citing sources and hopefully adding at least one original interview or comment from my own research, and that would become a coherent, readable article with some news value. (Instead, today it’s going out as a peek into the writer’s mind, and if it makes some sense, great!)

Here’s how I’d put together a short version. There’s more information in what I’ve pulled together as well if I wanted a longer form article. (I’ll start with the same intro I used at the beginning.)

Several major players in the podcast game have updated their policies surrounding artificial intelligence and how it can be applied. Apple and YouTube are both requiring disclaimers and setting boundaries around how AI can be utilized in content creation and presentation. 

The updated guidelines come as AI generation continues to improve, and news organizations and other content creators face ethical and legal challenges surrounding their use of AI. In a high-profile lawsuit, the family of comedian George Carlin is suing podcasters who used AI to create a new comedy routine featuring Carlin, who died in 2008. AI has also been cited as a driver in the recent proliferation of fabricated photos of pop star Taylor Swift, a practice known as deepfaking.

Moving the needle in the other direction, Perplexity has released a daily newscast that’s generated entirely by AI, with no human intervention beyond setting up the aggregator feeds that funnel information into the AI program.

Many media outlets have not only adopted AI usage guidelines, but also posted them to their own sites. That includes news organizations creating their own content, as well as hosting platforms, such as Apple and YouTube, that host content created by others.

That’s the end of a short piece I created entirely through my own research and curation of information. Now, I’ll take the copy from each source, run it through ChatGPT, and prompt it to ‘summarize these articles in journalistic style.’ The result:

Apple has implemented stringent guidelines for creators on its Podcasts platform to ensure a trusted and rewarding experience for both listeners and creators. The guidelines cover various aspects including metadata accuracy, explicit content tagging, impersonation, rights infringement, and the use of artificial intelligence (AI). Creators failing to adhere may face content removal or account suspension. Specifically, Apple requires transparent disclosure of AI-generated content, emphasizing the importance of authenticity.

Moreover, Apple sets forth regulations against illegal, harmful, or objectionable content, emphasizing compliance with laws and sensitivity towards cultural norms. The guidelines extend to advertising and paid content, mandating clarity, honesty, and ongoing value in subscriptions.

In a related development, YouTube has introduced a new tool requiring creators to label realistically altered or synthetic media to enhance viewer transparency. This move aims to build trust between creators and their audience by ensuring that content involving realistic depictions of people, places, or events made with generative AI is appropriately disclosed.

Elsewhere, the estate of late comedian George Carlin has filed a lawsuit against the hosts of the Dudesy podcast for using AI to create an impersonation of Carlin without permission, marking another instance where AI's capabilities challenge copyright laws. The legal action underscores the growing concerns over AI's impact on intellectual property rights and the authenticity of creative content.

As AI continues to evolve, these developments highlight the need for clear guidelines and ethical standards to navigate the complex interplay between technology and creativity.

In my experience with ChatGPT, that’s a fairly concise summary, and it flows better than most. It was produced in about two seconds, significantly less time than I spent writing after gathering the information.
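For readers curious about reproducing the experiment programmatically rather than pasting into the chat window, here is a minimal sketch. It assembles the source articles into a single summarization prompt, with the actual API call shown but commented out; the client usage and the `gpt-4o-mini` model name reflect the modern `openai` Python library and are assumptions, not part of how this article was actually made.

```python
# Hypothetical sketch: combining several source articles into one
# "summarize these articles in journalistic style" prompt.

sources = {
    "Apple Podcasts guidelines": "Creators using AI to generate a material portion...",
    "YouTube announcement": "We're introducing a new tool in Creator Studio...",
    "Carlin lawsuit coverage": "The estate of the late comedian George Carlin...",
}

def build_prompt(articles: dict) -> str:
    """Join labeled source texts under a single summarization instruction."""
    parts = ["Summarize these articles in journalistic style.\n"]
    for label, text in articles.items():
        parts.append(f"--- {label} ---\n{text}")
    return "\n".join(parts)

prompt = build_prompt(sources)

# The actual call requires an OPENAI_API_KEY and network access, so it is
# left commented out here:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",  # assumed model name
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)

print(len(prompt))
```

The point of the sketch is simply that the "run it through ChatGPT" step is one prompt over concatenated sources; everything editorially interesting still happens in what you feed it and how you verify the output.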

So, weigh in: What’s the future of AI for podcasters and media creators?