May 10

OpenAI Media Manager Will Allow Creators To Exclude Their Content From AI Training

AI, Web3

OpenAI, the company at the forefront of controversial technology like ChatGPT, DALL-E, and Sora, says it’s working on a tool to allow creators to exclude their content from future products. Dubbed OpenAI Media Manager, the tool appears to be the company’s first serious attempt to alleviate concerns about copyright infringement. 

OpenAI announced the new tool in a May 7th blog post on its website. The company says it plans for Media Manager to be available beginning in 2025. 

What The OpenAI Media Manager Tool Is Supposed To Do

While specifics of how the OpenAI Media Manager tool may work are sparse, the company says it’s the result of years’ worth of conversations with creators. “We are not professional writers, artists, or journalists, nor are we in those lines of business,” OpenAI says in its post. “We focus on building tools to help these professions create and achieve more. To accomplish this, we listen to and work closely with members of these communities, and look forward to our continued dialogues.”

In basic terms, OpenAI Media Manager should allow content creators to identify their content. From there, they can opt to exclude it from projects that OpenAI may be developing. Again, how it will do this — or how much of the onus falls on the creator — is still unclear.

“OpenAI is developing Media Manager, a tool that will enable creators and content owners to tell us what they own and specify how they want their works to be included or excluded from machine learning research and training,” the company says. “Over time, we plan to introduce additional choices and features. This will require cutting-edge machine learning research to build a first-ever tool of its kind to help us identify copyrighted text, images, audio, and video across multiple sources and reflect creator preferences.”

How We Got Here

While OpenAI’s blog post certainly paints a rosy picture both of the future of AI and its work with creators, the reality is that the company has created an incredibly tumultuous relationship between content creators and the companies training products on their content. OpenAI is facing many lawsuits from all kinds of creators and copyright owners, including major media outlets, authors, comedians, and even billionaires who previously worked with the company. 

In addition to lawsuits, multiple governments are investigating whether the company broke consumer protection laws, privacy laws, securities laws, and more. In other words, for all the big talk about the future of artificial intelligence as we know it, the company responsible for the AI rush could be in big trouble. 

And while the company's in-house team of a dozen lawyers hammers away at these issues, engineers are working overtime to create systems that wash their hands clean of culpability and press teams are looking to spin it as creator-friendly however they can. 

Or, maybe OpenAI really does care about creators. The heart of OpenAI's mission is conceptually altruistic: "to ensure that artificial general intelligence benefits all of humanity." (Of course, the jury is still out on whether artificial general intelligence is even possible or just the stuff of Blade Runner fan fiction, but that's a different can of worms.) 

Still, it's hard to take the company at its word when the majority of products it has released into the world are basically generative tools trained on stolen content and most likely to be used to replace original artists. 

Who Is The OpenAI Media Manager Really For?

Outside of knowing that the OpenAI Media Manager is a tool that the company is working on and wants to release in 2025, we have far more questions than answers. And we're not the only ones. 

In a Wired article, Ed Newton-Rex poses several important questions. Newton-Rex is the CEO of Fairly Trained, a non-profit organization that certifies generative AI companies that protect human creators in their use of machine learning — or, more simply, "any generative AI model that doesn’t use any copyrighted work without a license." The organization has certified 13 different models so far, according to its website — and to no one's surprise, none of OpenAI's products make the cut. 

Newton-Rex's concern is that the OpenAI Media Manager could simply be an opt-out scheme, which puts the onus on creators to find their content and opt it out of being used in machine learning. In this scenario, not a whole lot changes when it comes to OpenAI training products on content it doesn't have permission or a license to use — unless there were a mass awareness campaign to get creators to claim their content. 

Furthermore, will OpenAI make the Media Manager tool available to other AI companies, so that creators can opt out of multiple products at once? If not, it could just be one small piece of a very large copyright infringement puzzle. 

Ultimately, the notion of asking somebody to "opt out" of having their content used to train for-profit tools without consent or compensation is pretty bonkers. But right now, that's the state we find ourselves in. Until regulators establish firm guidelines barring the practice, creators and rights holders will have to take matters into their own hands.





