The American tech sector spun into a frenzy this week after a new Chinese AI model, DeepSeek, dropped. More specifically, American tech stocks plummeted, and headlines across the globe were plastered with mentions of the new model.
Why are American tech companies and investors worried about DeepSeek, and what might it mean for content creators?
Why Tech Stocks Plummeted
Tech stocks in the NASDAQ and S&P 500 saw big drops across the board, but few felt it as badly as Nvidia, a chipmaker that until this point had been a Cinderella growth story. The company now holds the dubious honor of the biggest single-day market-cap drop in U.S. history, losing roughly $600 billion, or about 17 percent of its value, in a single day.
Companies that rely on Nvidia also saw big dips, including Dell and Oracle, which each fell nearly 10 percent. But what exactly does that mean?
It just means a bunch of people who held stock in those companies decided to sell after DeepSeek came out. When lots of shareholders sell off at once, it has a compounding effect that can do real damage to a company's value and to the market overall. (A stock market crash is as much about humans reacting badly to news as it is about the news itself.)
But here's the thing: the reason all these stocks plummeted is that DeepSeek released an AI model that reportedly matches or exceeds the capabilities of U.S. companies' models at a tiny fraction of the cost. DeepSeek reportedly built its model with less than $6 million in funding. Compare that to U.S. companies spending $50 million or more on their models.
Not only is DeepSeek's model reportedly as good as or better than other models, and cheaper, it's also open-source, meaning anybody can get in and see how it was built. DeepSeek officially released the open-source model, dubbed R1, on January 20th.
Suddenly, all of the U.S. companies that raised billions of dollars in investments now seem at great risk. Chipmaker Nvidia is one of the core suppliers of all these companies, which is why that company in particular became the symbol of the stock price plummet.
In a twist of irony, DeepSeek actually likely used Nvidia products in developing its model. The company acquired 10,000 A100 graphics processor chips from Nvidia in 2022, prior to the Biden administration clamping down on American companies providing China with such tech.
So What Is DeepSeek?
DeepSeek is a Chinese company founded in 2023. The language model it released, dubbed R1, is pretty much the same as any other large language model out there, like OpenAI's ChatGPT or Microsoft's Copilot.
The company was founded by hedge fund manager Liang Wenfeng, who in 2023 said the majority of DeepSeek's core technical roles are filled by new graduates or people with only a year or two of experience. The company said the model took only two months and $6 million to build, further destabilizing faith in America's dominance of the AI sector.
At the end of the day, though, the product is pretty much the same as the rest of these language models, at least in terms of how a typical user interacts with it. You can download it from an app store or use it online. It generates various responses to prompts. There's no real killer feature we haven't seen before, nor anything unique from a user perspective.
However, as a harbinger of what's to come from China and an ominous signal that the investment appetite for AI is bloated, DeepSeek is very powerful. And it might be enough to kill many ongoing projects at companies, or future investments in others.
In a twist of delicious irony, OpenAI claims it has evidence that DeepSeek used OpenAI's models to train its own product. Considering OpenAI and other companies training large language models may be responsible for copyright infringement and IP theft on an unimaginable scale, they probably won't get much sympathy when claiming another company used their models for training.
If it's true, though, it may undermine the idea that DeepSeek was created entirely from scratch in two months by a bunch of junior engineers for $6 million. Then again, all of these other models opened Pandora's box: why would rivals worry about training something from scratch if they can get away with the same level of alleged theft as OpenAI and others did in the first place?
Is This Something That I, A Content Creator, Should Worry About?
Well, it's not great if you held a lot of Nvidia stock. This isn't quite the big AI bubble burst that many have predicted, but in the first month of 2025, it may well signal the beginning of the end. Most value in technology is predicated on the belief that the technology will be widely adopted and that there's something uniquely special or "sticky" about it.
AI is certainly everywhere, but the evidence of consumer demand is mixed at best. At the very least, most consumers logically want technology that will make their days easier and help them eliminate burdensome tasks more efficiently. Until that becomes available and easily adoptable, most of the Silicon Valley excitement around AI rests on the presumption that AI is hard to build and that owning it can generate a lot of value. The DeepSeek revelation threatens to upend that faith in the AI ecosystem. (The fact that it's a Chinese company that did it and not an American one likely adds to the anxiety, but the picture is likely much bigger than that.)
So, if you've been banking on AI as a cornerstone of your content, well, you may see some ramifications. We may see companies get a lot tighter with budgets, and vanity projects designed to show off the advancements of generative AI may get shelved in favor of things that could actually make investors money. But that's likely a very small segment of people. Heck, if anything, maybe you can just download DeepSeek and play around with it.
However, this could lead to more relaxed regulations on AI in the U.S. as a sort of "space race" style competition emerges. Unfortunately, that may make it more difficult for creators to opt out of having models trained on their content. It could also have some ramifications on the litany of lawsuits facing the companies who allegedly trained their products on stolen content. It's too early to tell.
As is typically our advice when things like this crop up, content creators simply need to keep on creating in the meantime. It's good to be aware of it, but don't let it distract you from the most important thing: making stuff and connecting with people.