April 2

Scammers Use AI To Spoof Popular Creators For Ads

AI, Social Media

Scammers are using AI to steal content creators' identities and use them to post fake ads. It's another unfortunate example of bad actors adopting the fast-developing technology for nefarious purposes faster than companies and authorities can stop them.

And even if victims identify their scammers, there may be surprisingly little they can do to get justice or prevent further attacks.

How The Latest AI Scam Works

According to The Washington Post, scammers scour social media for well-performing videos from different content creators. The video itself doesn't have to be about anything specific — it just needs to feature the creator in a space the scammer finds suitable for their ad. 

Then, they use platforms like HeyGen and ElevenLabs to create a synthetic version of that creator's voice, swap out the audio with their own words, and animate the lips to match. The results vary, but with a little fine-tuning and additional manipulation, these spoofs can be shockingly believable.

In many cases, the Post found, these videos were also used to promote dubious products in places like adult websites. One victim, Michel Janse, first heard from her followers that her likeness was being used to sell erectile dysfunction pills. Scammers took a video showing Janse in her real clothes and bedroom and made her appear to "talk" about a fictional partner with fictional problems.

On top of being a complete violation of Janse's privacy and an unauthorized use of her likeness, the videos can also damage her brand. Janse is a Christian influencer who talks about things like travel, wedding planning, and home decor. Having a spoofed version of her show up on adult websites "selling" deceitful products can seriously hurt her reputation and her ability to land legitimate partnerships.

The Evolution Of AI Deceit

These types of scams have roots in technology from several years ago. Most people have probably heard the term "deepfake" by now. It was coined in 2017 by a Reddit user who began posting synthetic pornographic videos that put celebrity faces on adult actors' bodies. The word itself refers to "deep learning" (a form of machine learning) and, well, the fact that the results are fake.

For years, the concern around this type of content largely revolved around popular figures and adult content. As AI has evolved, deepfakes have been used to create videos of celebrities appearing to sell or endorse things or ideas. Some of the biggest names in entertainment, from Taylor Swift and Tom Hanks to MrBeast, have been victims of these types of scams.

But those creators are highly visible and have more resources to identify misuse of their likeness. Now, scammers are targeting much smaller creators who may never even know they've been spoofed. Even a few seconds of content is enough to train a fairly convincing AI replica of a person.

What Creators Can Do To Combat It

Perhaps the most frustrating part of the whole ordeal is how little control creators have over removing this content. Because much of it appears in advertising scattered across the Internet, it's difficult to know whether your likeness has been misused at all. On top of that, some sites aren't beholden to any regulations or laws that would force them to remove the ads or content. If a site is hosted in a country with no legal incentive to act, complaints may fall on deaf ears.

YouTube said in November 2023 that it would work on ways to more quickly identify fake content created by AI. In March 2024, the company introduced a new policy requiring users to disclose whether AI was used to create content that could be perceived as real by viewers. YouTube also reiterated that it is building a process that allows users to request the removal of synthetic content.

But that's still a major issue: it puts the burden on victims to identify misuse of their likeness. As with other forms of identity theft, it could be months or even years before you realize your likeness has been stolen. While platforms like TikTok and YouTube are slowly catching up with basic identification measures, it's going to take serious technology and industry regulation to create safeguards against future criminal use of AI.
