I think we’re about to get a crash in 5 hours folks
The companies known as the Magnificent Seven make up over 20% of the global stock market. And a lot of this is based on their perceived advantage when it comes to artificial intelligence (AI).
The big US tech firms hold all the aces when it comes to cash and computing power. But DeepSeek – a Chinese AI lab – seems to be showing this isn’t the advantage investors once thought it was.
DeepSeek doesn’t have access to the most advanced chips from Nvidia (NASDAQ:NVDA). Despite this, it has built a reasoning model that is outperforming its US counterparts – at a fraction of the cost.
Investors might be wondering how seriously to take this. But Microsoft (NASDAQ:MSFT) CEO Satya Nadella treated DeepSeek as the real deal at the World Economic Forum in Davos:
“It’s super impressive how effectively they’ve built a compute-efficient, open-source model. Developments like DeepSeek’s should be taken very seriously.”
Whatever happens with share prices, I think investors should take one thing away from the emergence of DeepSeek. When it comes to AI, competitive advantages just aren’t as robust as they might initially look.
I’d like some LLM features centered around “here is my organized folder of documents for context” but without paying $20/month, buying a $2,000 GPU, or giving all of my data to Google/Microsoft. I couldn’t find anything that actually worked even if I did pay money or give my data to Google/Microsoft. Ignoring even the folder thing, I remember asking Claude to summarize a novel I’m writing and it kept mixing up characters, attributing details that unambiguously applied only to other characters.
It doesn’t work in the average case. I’ve seen this tactic at the company I work for and at multiple companies where I have contacts. Bosses think they can simply use “AI” to fix their hollowed-out documentation, onboarding, and employee education systems by pushing a bunch of half-correct, barely legible “documentation” through an LLM.
It just spits out garbage for 90% of the people doing this. It’s a garbage-in, garbage-out process. For it to even be useful, you need a specific setup around the LLM (retrieval-augmented generation, or RAG) and your documentation has to be high quality.
Here’s an example project: https://github.com/snexus/llm-search
The demo works well because it uses a well-documented open-source library. Even then, there’s no guarantee it won’t hallucinate or get mixed up. RAG works simply by priming the generator with “context” retrieved for your query; if the model’s weights pull strongly enough the other way, your context won’t outweigh the allure of statistical hallucination.
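To make the “priming the generator with context” mechanic concrete, here is a minimal toy sketch of the RAG pattern. The retrieval step uses a bag-of-words cosine similarity purely for illustration; real systems like the linked project use embedding models and a vector store, and the prompt format here is an assumption, not that project’s actual API.

```python
"""Toy RAG sketch: retrieve relevant doc chunks, then prime the prompt.

Bag-of-words cosine similarity stands in for real embeddings here;
the prompt template is illustrative, not any particular tool's format.
"""
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documentation chunks most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Prime the generator: retrieved context goes in front of the question."""
    context = "\n".join(f"- {c}" for c in retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"


docs = [
    "To reset your password, open Settings and choose Security.",
    "Invoices are emailed on the first business day of each month.",
    "The VPN client requires version 4.2 or later on macOS.",
]
print(build_prompt("how do I reset my password", docs))
```

The prompt produced this way is then handed to whatever LLM you use; the point is that nothing forces the model to honor the context, which is exactly why garbage documentation (or weak retrieval) still yields garbage answers.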
I’ve seen one or two companies doing exactly this, but specific to certain varieties of bureaucracy (mostly insurance companies, unfortunately). It struck me as one of the few potentially real uses for LLMs, so long as the system provides a bibliography for its responses that can be verified deterministically.