I think we’re about to get a crash in 5 hours folks
The companies known as the Magnificent Seven make up over 20% of the global stock market. And a lot of this is based on their perceived advantage when it comes to artificial intelligence (AI).
The big US tech firms hold all the aces when it comes to cash and computing power. But DeepSeek – a Chinese AI lab – seems to be showing this isn’t the advantage investors once thought it was.
DeepSeek doesn’t have access to the most advanced chips from Nvidia (NASDAQ:NVDA). Despite this, it has built a reasoning model that is outperforming its US counterparts – at a fraction of the cost.
Investors might be wondering how seriously to take this. But Microsoft (NASDAQ:MSFT) CEO Satya Nadella treated DeepSeek as the real deal at the World Economic Forum in Davos:
“It’s super impressive how effectively they’ve built a compute-efficient, open-source model. Developments like DeepSeek’s should be taken very seriously.”
Whatever happens with share prices, I think investors should take one thing away from the emergence of DeepSeek. When it comes to AI, competitive advantages just aren’t as robust as they might initially look.
So LLMs – the “AI” everyone is typically talking about – are really good at one statistical thing:
“CLASSIFYING”
What is “CLASSIFYING”, you ask? It’s basically attempting to take data and put it into specific boxes. If you wanted to classify all the dogs in the world, you could classify them by breed, for example. LLMs are better at classifying than anything we’ve ever made; they adapt very well to new scenarios and create emergent classifications of the data fed to them.
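To make “putting things in boxes” concrete, here is a deliberately crude, hypothetical sketch – a hand-written rule, nothing like how an LLM actually classifies internally, but the same basic operation of mapping an input onto a label:

```python
# Toy illustration of "classifying": mapping an input into a box (label).
# The thresholds and breed categories here are made up for illustration.

def classify_dog(weight_kg: float, height_cm: float) -> str:
    """Put a dog into a breed-size 'box' based on two crude features."""
    if height_cm < 30:
        return "toy breed"
    elif weight_kg > 40:
        return "giant breed"
    else:
        return "medium breed"

print(classify_dog(5, 25))   # a small dog lands in the "toy breed" box
print(classify_dog(60, 80))  # a big dog lands in the "giant breed" box
```

The difference with an LLM is scale and flexibility: instead of two hand-picked features and three hard-coded boxes, it learns the features and the boxes from the data itself.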
However, they are not good at basically anything else. The “generation” these LLMs do is built on top of the classifier and the model, which generates responses based on what, statistically, the next word is. So for example, if you fed an LLM the entirety of Shakespeare and only Shakespeare, and you gave it “Two households both alike” as a prompt, it may practically spit out the rest of Romeo and Juliet.
However, this means AIs are not good at the following:
Don’t get me wrong: yes, this is a solution in search of a problem. But the real reason there is a bubble in the US for these things is that companies are making that bubble on purpose. The reason isn’t even rooted in any economic reality; it’s rooted in protectionism. If it takes a small lake of water and 10 data centers to run ChatGPT, you are unlikely to lose your competitive edge, because the sheer cost walls off your competition. If every year you need more and more compute to run the models, that concentrates who can run them and who ultimately controls them. This is what the market has been doing for about 3 years now. This is what DeepSeek has undone.
The similarities to Bitcoin and the crypto bubbles are very obvious, in the sense that the mining network is controlled by whoever has the most compute. Ethereum specifically decided to cut out the “middle man” of who owns compute: whoever pays the most into the network’s “central bank” controls the network.
This is what ‘tech as assets’ means in practice: inflate your asset as much as possible, regardless of its technical usefulness.
As a sidenote “putting things in boxes” is the very thing that itself upholds bourgeois democracies and national boundaries as well.
I think this distinction is interesting, because if the product of AI is classification, then it is just a more abstract way of continuing what we have long done with other statistical methods: fragmenting data into sets along whatever boundaries we wish to draw. Essentially, it is intensified and far more depersonalized categorizing.
I mean, at this raw a level of argument you might as well argue for Lysenkoism, since unlike Darwinian/Mendelian selection it doesn’t “put things in boxes”. In practice, things are put in boxes all the time; it’s how most systems work. The reality is that, as communists, we need to mitigate the negative effects of the fact that things are in boxes, not ignore the reality that things are in boxes.
The failure of capitalism is that its systems of meaning-making converge on the arbitrage of things in boxes. At the end of the day, this is actually the most difficult part of building communism: the Soviet Union, throughout its history, still fell ill with the “things in boxes” disease. It’s how you get addicted to slave labor; it’s how you make political missteps, because it’s so easy to put people in a “kulak” box that doesn’t even mean anything anymore; it’s how you start disagreements with other communist nations, because you really insist they should put certain things into a certain box.
I’m not arguing for anything, and I did not suggest that putting things in boxes is good or bad – just that it is what we do.
And that these models just do the same, but with more abstraction.