TikTok will soon require users to disclose whether they've included AI content in their posts. The new feature, first revealed in a post from social media consultant Matt Navarra, provides a toggle next to the words "AI-generated content" in the app's content disclosure section. The text "Label AI-generated content to follow our guidelines and help prevent content removal" appears next to the toggle.
That language suggests TikTok may remove posts that contain AI-generated content without disclosing it. The platform has also banned users altogether for repeated violations of this kind.
Why TikTok Wants You To Identify AI Content
TikTok's move toward AI content identification comes at a time when the U.S. is grappling with how to address both AI content and social media's impact on misinformation as a whole. The upcoming requirement likely stems from the recent "guardrails" companies have agreed to put in place to curtail the spread of misinformation.
There's already plenty of misinformation on social media platforms — and a lot of posts receive a signal boost from various algorithms that prioritize things like watch time and engagement, regardless of ethics. It doesn't take long for a piece of verifiably false information to spread after users engage with it.
Now, technology that enables deepfakes and voice replication makes it even easier for people to believe "unbelievable" things, and to share them. This can have serious consequences, both social and legal, for content creators and audiences alike. The new guideline also likely gives TikTok a measure of plausible deniability against future legal claims.
The Difference Between AI Content And Content That Uses AI
There's an important distinction for users to make, though, and one we hope TikTok will clarify. Videos that use features like filters and generated captions are not the same as videos built around AI-generated content intended to be indistinguishable from organic content.
Take the fake pictures of the pope in a puffer jacket, for example: those images went viral incredibly quickly and were intended to be indistinguishable from reality (the reality being that, no, the pope did not wear a puffer jacket). That's AI content. The same goes for videos that parody presidents playing video games against each other, songs imagining Frank Sinatra singing gangster rap, and videos showing Tom Cruise acting weird (who needs to fake that?).
However, platforms like Snapchat and TikTok have built many engaging features on the same technologies others use to deceive: artificial intelligence, virtual reality, mixed reality, machine learning, and more. Using filters, smart captions, and similar features doesn't produce AI content in that sense; it's just content that uses some form of AI to enhance its presentation and engagement.
What It Means For The Future
So what's the ultimate effect of requiring users to identify whether their content uses AI or not? Well, hopefully it will give TikTok easy ways to take down content that is intentionally misleading. The reality is, most users probably aren't really messing around with creating "fake" content.
However, those who are clout chasing or building audiences off misleading content (often tied to outrage culture) may not want to identify their content as fake. After all, plenty of pages intentionally share misleading or inaccurate information to grow, skirting the line just barely, and acknowledging the content is fake would completely tank their reach. Because, unlike a satire site such as The Onion, there's nothing really creative about tricking people.
So — will this new requirement weed out bad apples and curb misinformation? We can all hope. But ultimately, platforms will likely need to do a better job of identifying fake content, too. Allowing users to note that their post is AI content (and presumably have that disclaimer appear to users) is probably a good first step. But if we're truly going to get to a better, more trustworthy social media landscape, these platforms will probably need to be even more proactive to stop misinformation and intentionally misleading AI content before it spreads.