August 7

Deepfakes And Protecting Your Likeness


Deepfakes are once again in the headlines, this time thanks to a recent court ruling and a generative AI bot with no guardrails. But it's not just about artificially generated media, either: some creators are finding their original content used by huge companies without credit or compensation.

What Are Deepfakes?

Deepfakes are artificially generated images or videos of actual people. The word itself is a portmanteau of "deep learning," a form of machine learning, and "fake" — because, you know, the output depicts a fake scenario. The advent of artificial voice models has also brought audio into the world of deepfakes, since it's becoming easier and easier to convincingly mimic another human's voice.

Using computer generated imagery (CGI) to recreate actual humans dates back to the 1990s, and the technique has been used quite a bit in traditional media and entertainment. Though modern deepfakes don't have to be pornographic in nature (and many aren't), the technology has been synonymous with artificially generated pornography pretty much from the outset.

That's because the term itself was coined by a Reddit user back in 2017 who anonymously created and posted deepfake pornography of various celebrities using face-swapping technology. That subreddit was eventually banned, but there was no undoing what had already been done.

Now, with the plethora of consumer generative AI models, deepfakes are more prevalent than ever. 

So Is It Legal To Create Deepfakes?

This is the core concern right now as more and more people (famous or not) find themselves on the receiving end of deepfaked media. The answer is, frustratingly, yes and no. Creating pornographic deepfakes without the subject's consent is now illegal both federally and under many state laws.

But how effective has this law been? Just ask X users, who recently discovered that "Grok," the platform's generative AI bot, now has a "Spicy Mode" that will happily generate pornographic deepfake videos of celebrities like Taylor Swift, sometimes without users even explicitly prompting it.

And yes, this is illegal. Many states have passed laws in the past several years, and the newly passed federal Take It Down Act criminalizes the act as well, with even harsher penalties if the victim is a minor. Penalties can include up to three years of jail time, fines, and forfeiture of property.

And here's the thing. Right now, these laws do cover the vast majority of deepfake content on the Internet. In fact, according to the New York State Bar Association, 98 percent of deepfake videos online are pornographic in nature, and 99 percent of them depict women. 

But what if the deepfake in question isn't pornographic? Then it's a different story. 

The Issue With Non-Pornographic Deepfakes

While the vast majority of deepfake content on the Internet is pornographic right now, that is quickly changing as generative AI models freely allow users to generate whatever they want, from something as seemingly innocuous as a celebrity picking flowers or dressed as Santa Claus to intentionally misleading political propaganda. 

The latter was the primary concern behind a recent California law, which was just struck down in court. Judge John Mendez ruled against the law, which banned AI-generated deepfakes of politicians in the lead-up to elections. His ruling was ultimately quite narrow: he found that the California law runs afoul of a federal law (Section 230 of the Communications Decency Act), which states that platforms like X and Facebook are not responsible for the content their users post.

Basically, a user created a deepfake of Vice President Kamala Harris in which she appeared to call herself "the ultimate diversity hire." Because no laws or regulations currently require platforms or users to disclose whether content is a deepfake, the post could easily be misconstrued as genuine. (It was also amplified by X owner Elon Musk, who shared the video on his profile.)

California then passed a law making it illegal for platforms to host this type of deepfake content leading up to elections; that is the law Mendez struck down. California also passed a second law that would require platforms to label and take down deepfake content of politicians around election time. According to Politico, Mendez intends to strike down that law as well.

Why Did The Laws Get Struck Down?

Mendez declined to address any First Amendment concerns with the first law. He said that wasn't necessary, because the law so clearly conflicts with the federal law shielding platforms from responsibility for the content their users post.

With the second law, Mendez seemed skeptical that it would achieve its goals. "I think the statute just fails miserably in accomplishing what it would like to do," he said. Because the law could be perceived as censorship or an infringement on free speech, it may well be struck down in favor of other approaches.

So What About Protecting Your Likeness?

Does this mean that people have absolutely no recourse against individuals who attempt to disseminate fake information using deepfakes of their likeness? Unfortunately, it's a very grey area right now. 

While state and federal lawmakers moved quickly to criminalize pornographic deepfakes, there's been much less movement on other kinds of deepfakes. And we've already seen how that could play out, from the aforementioned Kamala Harris video to deepfaked videos of Donald Trump sucking on Elon Musk's toes.

Political satire has always been an American tradition, but we've never been faced with a technology whose output could so easily be passed off as real. Or with so many bad actors.

Political cartoons and celebrity impersonators have always been protected because it's very clear these "depictions" are not meant to be perceived as real. You could arguably say the same of a deepfake video of a politician riding on the back of a giant eagle.

But what if it's a completely believable deepfake of somebody saying something that undermines their actual beliefs or words? What's stopping somebody from creating an indiscernible deepfake of you saying terrible things in a "compromised" environment — and then sharing it with your boss? 

And even if you could try to protect yourself through civil claims like libel or slander, the nature of the Internet makes it nearly impossible to get that content fully removed, or to identify the perpetrators in the first place.

Simply put, the only remedies are reactive and woefully inadequate to meet the modern digital moment.

Some People Aren't Even Respecting Organic Content

Actor and influencer Annie Korzen says Krispy Kreme and Maxibon are using her content to sell their products without crediting her, paying her, or asking permission. The 86-year-old creator went viral in 2021 with a simple video asking, "Was it crispy?" while talking about the donut brand.

That video alone has generated nearly 17 million views, Korzen now has close to 500,000 followers, and in 2023 she released a book titled The Book of Annie: Humor, Heart, and Chutzpah from an Accidental Influencer. Maxibon recently posted a video that included Korzen's original content to promote a new product with Krispy Kreme.

Korzen commented on the video and made her own post to make others aware of it. Now, the majority of the comments on the video are from users supporting Korzen's quest for credit and compensation. According to People, neither Maxibon nor Krispy Kreme has responded to the uproar.

But this again raises the question: if a creator's likeness isn't protected, what is stopping a company from making a deepfake recreation of that video and using it? Would duplicate deepfaked content be free from copyright restrictions?

As you can see, we have way more questions than answers right now. And while the current U.S. presidential administration seems to show no interest in solving these problems, the European Union appears ready to regulate and fight AI misinformation. 

