June 11

Employees And Experts Warn Of The Dangers Of AI


An open letter signed by thirteen current and former employees of leading AI companies warns of the dangers of AI and implores governments and companies to do something about it. Titled "A Right To Warn About Advanced Artificial Intelligence," the letter carries signatures from current and former employees of OpenAI, Google DeepMind, and Anthropic.

Six of the signers chose to remain anonymous, and four of those still work at OpenAI. According to The Washington Post, the letter was also endorsed by Yoshua Bengio and Geoffrey Hinton, often considered the "godfathers" of AI, as well as Stuart Russell, a renowned computer scientist.

The Dangers Of AI At The Forefront Of Open Letter

The letter, hosted on a new site dubbed "Right To Warn," isn't particularly long, but it's well-documented and comes with a list of four primary demands. The authors begin by saying, "We believe in the potential of AI technology to deliver unprecedented benefits to humanity."

But then it gets serious.

The letter details three major risks of artificial intelligence: "further entrenchment of existing inequalities, manipulation and misinformation, [and] the loss of control of autonomous AI systems potentially resulting in human extinction."

We've already seen some of the dangers outlined here playing out across social media and online retail. However, the most drastic fear on the list, human extinction, likely refers not to where artificial intelligence is today, but to where several of these companies, including OpenAI, ultimately want to take the technology.

Beyond what we've already seen play out, from questionable ethics to employee exoduses to whirlwind board decisions, perhaps the most concerning thing about artificial intelligence is what companies may do in pursuit of advancing it. The authors of the letter argue that, because these companies are motivated by profit and lack government oversight, AI will only become more unsafe without stringent protocols.

They propose four main agreements that will help company employees, well, protect humanity. 

The Four Agreements

The letter's authors argue that company employees are currently the only safeguards against egregious missteps in AI advancement. As such, they ask that all companies developing advanced AI adopt these four agreements to help employees curb the dangers of AI.

  1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
  2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
  3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
  4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.

Basically, these employees want to make sure that other employees have an opportunity to bring forward serious concerns without threat of retaliation. It sounds a little dystopian, but it's a very real concern, especially at OpenAI.

What It Means For The Future Of AI

The letter, dated June 4th, 2024, came less than one week before Apple announced its own form of AI, dubbed "Apple Intelligence." In its announcement, the company notably emphasized that its version of the technology would be safe and not trained on other people's data, only your own. In a way, Apple basically just wants to make personal assistants better.

But the company also announced that OpenAI's ChatGPT would be integrated into some Siri responses, following similar agreements with companies like Microsoft. If anything, now that the biggest player in the tech hardware space is seemingly all-in on AI, the dangers of AI certainly aren't going anywhere.

But the more noise experts make, the more likely we are to see serious concern and interest from parties that can create regulations and guardrails to make sure AI doesn't become the stuff of dystopian action movies. A Pew Research Center poll revealed that 52 percent of Americans are more concerned than excited about AI, with only 10 percent of respondents saying they're more excited than concerned.

A combination of both insider distress and public concern should, in theory, fast-track protections against overly ambitious tech companies with a "shoot first and ask questions later" attitude towards developing the technology. But slow developments so far, at least in the U.S., have made it clear that there's no silver bullet to protect against the potential dangers of AI. 

