• adoxographer@lemmy.world · 3 days ago

    While this is great, training is where the compute is spent. The news is also about R1 being trainable, still on an Nvidia cluster, but for $6M USD instead of $500M.

    • alvvayson@lemmy.dbzer0.com · 2 days ago

      True, but training is a one-off cost. And as you say, this new model cuts that cost by a factor of about 100. Nvidia therefore just saw 99% of its expected future demand for AI chips evaporate.
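      The arithmetic behind that 99% figure, as a sketch. The key (and contested, see downthread) assumption is that chip demand scales linearly with training cost:

```python
# Back-of-the-envelope for the "99% evaporates" claim. The assumption
# that chip demand tracks training cost linearly is illustrative, not given.
cost_reduction_factor = 100            # claimed ~100x cheaper training
remaining_demand = 1 / cost_reduction_factor
print(f"{1 - remaining_demand:.0%} of expected demand gone")  # 99% of expected demand gone
```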

      Even if they're lying and used more compute, it's obvious they managed to train it without access to large numbers of the highest-end chips, due to export controls.

      Conservatively, I think NVidia is definitely going to have to scale down by 50% and they will have to reduce prices by a lot, too, since VC and government billions will no longer be available to their customers.

      • bestboyfriendintheworld@sh.itjust.works · 2 days ago

        True, but training is a one-off cost. And as you say, this new model cuts that cost by a factor of about 100. Nvidia therefore just saw 99% of its expected future demand for AI chips evaporate.

        It might also lead to the same compute being used to train 100x more models.

        • ArchRecord@lemm.ee · 2 days ago

          I doubt that will be the case, and I’ll explain why.

          As mentioned in this article,

          SFT (supervised fine-tuning), a standard step in AI development, involves training models on curated datasets to teach step-by-step reasoning, often referred to as chain-of-thought (CoT). It is considered essential for improving reasoning capabilities. DeepSeek challenged this assumption by skipping SFT entirely, opting instead to rely on reinforcement learning (RL) to train the model. This bold move forced DeepSeek-R1 to develop independent reasoning abilities, avoiding the brittleness often introduced by prescriptive datasets.
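          An RL-only recipe like the one quoted above typically scores outputs with simple, verifiable rewards instead of curated reasoning traces. A toy sketch in Python; the `<think>` tag format, exact-match check, and weights are illustrative assumptions, not DeepSeek's actual reward function:

```python
import re

def reward(output: str, gold_answer: str) -> float:
    """Toy rule-based reward: a format bonus plus an accuracy bonus.

    No curated step-by-step data is needed -- the model is scored only on
    whether it emits a reasoning block and gets the final answer right.
    """
    r = 0.0
    # Format reward: did the model wrap its reasoning in tags?
    if re.search(r"<think>.*?</think>", output, re.DOTALL):
        r += 0.5
    # Accuracy reward: exact match on whatever follows the reasoning block.
    final = output.split("</think>")[-1].strip()
    if final == gold_answer.strip():
        r += 1.0
    return r

print(reward("<think>2 + 2 = 4</think>4", "4"))  # 1.5: right format, right answer
print(reward("4", "4"))                          # 1.0: right answer, no reasoning block
```

The point is that this signal is cheap to compute at scale, which is part of why skipping SFT lowers the cost of the whole pipeline.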

          This totally changes how we think about AI training, which is why, while OpenAI spent $100M training GPT-4 on a reported 500,000 GPUs, DeepSeek used about 50,000 GPUs and likely spent roughly 10% of the cost.

          So not only are operation and even training cheaper in dollars now; training models is also substantially less compute-intensive.

          And not only is there less usable data than ever to train models on (without making them worse by regurgitating lower-quality AI-generated content), but even if additional datasets were scrapped entirely in favor of this new RL method, there's a point at which an LLM is simply good enough.

          If you need to auto generate a corpo-speak email, you can already do that without many issues. Reformat notes or user input? Already possible. Classify tickets by type? Done. Write a silly poem? That’s been possible since pre-ChatGPT. Summarize a webpage? The newest version of ChatGPT will probably do just as well as the last at that.

          At a certain point, spending millions of dollars for a 1% performance improvement doesn’t make sense when the existing model just already does what you need it to do.

          I’m sure we’ll see development, but I doubt we’ll see a massive increase in training just because the cost to run and train the model has gone down.

      • adoxographer@lemmy.world · 2 days ago

        I’m not sure. That’s a very static view of the context.

        While China has an AI advantage due to wider adoption, fewer constraints, and an overall bigger market, the US has more advanced tech and more funds.

        OpenAI, Anthropic, MS, and especially X will all get massive amounts of backing and will reverse-engineer and adopt whatever advantages R1 has. And while there are some, it's still not a full-spectrum competitor.

        I see this as a small correction that the big players will take advantage of to buy stock, then pump it with state funds, widening the gap and ignoring the Chinese advances.

        Regardless, Nvidia always wins. They sell the best shovels. In any scenario, most of the world at large still doesn't have its Nvidia cluster: think Africa, Oceania, South America, Europe, the parts of Southeast Asia that don't necessarily align with Chinese interests, India. Plenty to go around.

        • alvvayson@lemmy.dbzer0.com · 2 days ago

          Extra funds are only useful if they can provide a competitive advantage.

          Otherwise those investments will not have a positive ROI.

          The case until now was built on the premise that US tech was years ahead and that AI had a strong moat due to high compute requirements.

          We now know that that isn’t true.

          If high compute enables a significant improvement in AI, then that old case could become true again. But the prospects of such a reality materializing, and persisting, just took a big hit.

          I think we are in for a dot-com type bubble burst, but it will take a few weeks to see if that’s gonna happen or not.

          • adoxographer@lemmy.world · 2 days ago

            Maybe, but there is incentive to not let that happen, and I wouldn’t be surprised if “they” have unpublished tech that will be rushed out.

            The ROI doesn't matter; it was never there yet, only the potential for it. The Chinese AIs aren't there yet either. The proposition is to reduce FTEs, regardless of cost, as long as the cost is less.

            While I see OpenAI, and mostly startups and VC-reliant companies, taking a hit, Nvidia itself, as the shovel maker, will remain strong.

    • vrighter@discuss.tchncs.de · 2 days ago

      If, on a modern gaming PC, you get "breakneck" speeds of 5 tokens per second, then inference is actually quite energy-intensive too. 5 per second of anything is very slow.
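      A quick sketch of what 5 tokens/second means in practice (the response length is an illustrative assumption):

```python
# Latency at a given decode speed; numbers illustrative.
tokens_per_second = 5
response_tokens = 500                  # a medium-length answer
seconds = response_tokens / tokens_per_second
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min) per answer")  # 100 s (~1.7 min) per answer
```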

    • orange@communick.news · 2 days ago

      That’s becoming less true. The cost of inference has been rising with bigger models, and even more so with “reasoning models”.

      Regardless, at the scale of 100M users, big one-off costs start looking small.
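      Amortization at that scale is easy to sketch, using the $6M training figure from upthread (the user count is illustrative):

```python
# Spreading a one-off training cost across a large user base.
training_cost_usd = 6_000_000          # figure claimed upthread for R1
users = 100_000_000                    # illustrative scale
per_user = training_cost_usd / users
print(f"${per_user:.2f} of training cost per user")  # $0.06 of training cost per user
```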

        • orange@communick.news · 1 day ago

          Maybe? Depends on what costs dominate operations. I imagine Chinese electricity is cheap, and building new data centres there is likely much cheaper, percentage-wise, than in countries like the US.