I think we’re about to get a crash in 5 hours, folks

The companies known as the Magnificent Seven make up over 20% of the global stock market. And a lot of this is based on their perceived advantage when it comes to artificial intelligence (AI).

The big US tech firms hold all the aces when it comes to cash and computing power. But DeepSeek – a Chinese AI lab – seems to be showing this isn’t the advantage investors once thought it was.

DeepSeek doesn’t have access to the most advanced chips from Nvidia (NASDAQ:NVDA). Despite this, it has built a reasoning model that is outperforming its US counterparts – at a fraction of the cost.

Investors might be wondering how seriously to take this. But Microsoft (NASDAQ:MSFT) CEO Satya Nadella treated DeepSeek as the real deal, telling the World Economic Forum in Davos:

“It’s super impressive how effectively they’ve built a compute-efficient, open-source model. Developments like DeepSeek’s should be taken very seriously.”

Whatever happens with share prices, I think investors should take one thing away from the emergence of DeepSeek. When it comes to AI, competitive advantages just aren’t as robust as they might initially look.

  • RedWizard [he/him, comrade/them]@hexbear.net · 3 days ago

    and is that the reason for the huge bubble in asset values of companies like Nvidia and Microsoft?

    No, not actually. The bubble is in the idea that AI requires large amounts of power, cooling, and processing throughput to achieve things like OpenAI’s current o1 reasoning and logic models. The cycle goes like this:

    The New AI Model Is Bigger --> Needs Bigger Hardware --> Bigger Hardware Needs Better Cooling --> More Cooling and Bigger Hardware Needs More Power --> More Cooling and Bigger Hardware means we can train the next Bigger Model --> Back to Start

    So long as the newest AI model is “bigger” than the last AI model, everyone in this chain keeps making more money and seeing higher demand.

    However, what DeepSeek has done is put out an equivalent to the newest AI model that:

    A) Required less up-front money to train,
    B) Uses considerably fewer resources than the previous model,
    C) Is released under an open-source MIT License, so anyone can host the model for their own use case.

    Now the whole snake is unraveling, because all this investment that was being dumped into power, cooling, and hardware initiatives is fucked: less power and cooling are required, and older hardware can run the model.

    • Grapho@lemmy.ml · 3 days ago

      The fact that the US model can eat shit as soon as somebody figures out a way to make it work better and faster, instead of forcing a bunch of bloat and ad infinitum upgrading, is hilarious to me.

      If that’s not a perfect distillation of the infinitely wasteful US economy I don’t know what is.