Trending on Reddit: DeepSeek-V4 Released — 1.6T Params, 1M Context
What Happened
DeepSeek released V4 with 1.6T total parameters (49B active), a 1M-token context window, and hybrid attention plus compression techniques. Early benchmarks place it as the second-strongest open reasoning model, behind Kimi K2.6. The weights are openly downloadable, though hallucination rates and serving costs remain notable concerns.
My Take
US export controls were supposed to slow Chinese frontier work; instead they've forced an architecture-efficiency arms race that's now producing models the open community can actually run. The strategic question for executives isn't "should we use a Chinese model?" but "what's our plan when an open-weights model is two months behind GPT-5.5 at a tenth the cost?" That gap is the real threat to OpenAI's $25B revenue run-rate.