DeepSeek launched a free, open-source large language model in late December, claiming it was developed in just two months at a cost of under $6 million.
People who don’t understand tech (i.e., investors) were led to believe the US had a decade-long head start on AI and that the GPU embargo would slow foreign competition.
Well, now here’s DeepSeek, delivering competitive results on a fraction of the compute. Gap closed. And now those same people believe tech companies will no longer demand entire data centers full of Nvidia B200s to compete.
Won’t faster hardware be even more valuable when paired with more efficient models? I don’t see as much value destruction as the market does, especially since the capacity is already built.