AI Models

A Chinese quant fund just matched GPT-4 for $6 million.

OpenAI spent billions. DeepSeek spent $6M. The compute moat Silicon Valley counted on just got a lot thinner.

High-Flyer Capital Management is a quantitative hedge fund in Hangzhou. They trade derivatives using algorithms. They are not an AI lab.

In January 2025, their AI research arm, DeepSeek, released DeepSeek-R1. It matched OpenAI's o1 on reasoning benchmarks. The reported training compute cost: approximately $5.6 million.

The reaction from Silicon Valley was panic. Nvidia lost nearly $600 billion in market cap in a single day, the largest single-day loss in value for any company in history.


How did they do it?

Two architectural choices that most US labs had deprioritized:

  1. Mixture of Experts (MoE). Instead of activating the full model for every token, MoE routes each input through a small subset of specialized subnetworks. You get the capability of a 671B parameter model at the compute cost of a 37B model.
  2. Reinforcement learning for reasoning. DeepSeek-R1's reasoning was bootstrapped through trial and error: its precursor, R1-Zero, was trained with reinforcement learning alone, no supervised chain-of-thought examples. It learned to verify its own logic. The model discovered strategies like self-reflection and multi-step verification on its own.
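The routing idea in (1) fits in a few lines. This is a toy sketch, not DeepSeek's implementation: the dimensions, the single-matrix "experts," and the gating scheme are all simplified assumptions, but the core trick is visible, because only the top-k experts do any work per token.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 16, 8, 2  # hidden dim, expert count, experts per token

# Each "expert" is stand-in for a feed-forward block; here, one weight matrix.
experts = [rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router = rng.normal(size=(D, N_EXPERTS)) / np.sqrt(D)  # gating network

def moe_forward(x):
    """Route one token vector through only TOP_K of N_EXPERTS experts."""
    logits = x @ router                  # score every expert
    top = np.argsort(logits)[-TOP_K:]    # indices of the top-k experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                 # softmax over the chosen experts only
    # Weighted sum of the selected experts' outputs. The other
    # N_EXPERTS - TOP_K experts do zero compute for this token.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=D)
y = moe_forward(token)
print(y.shape)  # (16,)
```

The parameter count scales with the number of experts, but per-token compute scales only with TOP_K, which is how a 671B-parameter model can run at roughly 37B-model cost.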

Neither technique is secret. Both were in published research. DeepSeek just executed on them with unusual discipline.


The theory everyone in AI believed was this: compute is the moat. Whoever controls the most H100s wins. Training runs at scale create capabilities that can’t be replicated cheaply.

DeepSeek didn’t disprove that entirely. But they moved the goalposts, dramatically.

DeepSeek-R1 is open weight. You can download the model, and distilled versions run on consumer hardware. The capability that OpenAI charges $20/month for is now free.

Every company with a “we have exclusive access to frontier models” pitch deck had a bad January.


The deeper implication: AI capabilities are becoming a commodity faster than the industry expected.

The race isn’t over. Frontier models are still ahead. But the gap between “frontier” and “good enough for most applications” is closing rapidly — and the cost to reach “good enough” is falling faster than anyone planned for.

This changes the business of AI more than it changes the technology.

Research powered by Olostep

This research was compiled using the Olostep Answers and Scraping API.