Chinese AI startup DeepSeek has launched two new large language models (LLMs), DeepSeek V4 Flash and DeepSeek V4 Pro, continuing its strategy of competing on high performance at low cost.
The launch comes more than a year after earlier releases such as V3.2 and R1 drew global attention and challenged the position of established giants in the artificial intelligence industry.
Both new V4 models are released as open source and feature a context window exceeding 1 million tokens, allowing them to process large inputs such as entire documents or codebases in a single request.
The Pro version scales up to 1.6 trillion parameters (49 billion active per token), making it one of the largest open-source models available today and surpassing competitors such as Moonshot AI's Kimi K2.6 and MiniMax's M1.
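The gap between total and active parameters is characteristic of mixture-of-experts designs, where only a fraction of the network fires per token (labeling that architectural reading as an inference from the reported figures, not something the announcement spells out). A quick back-of-the-envelope calculation:

```python
# Reported figures for DeepSeek V4 Pro (from the announcement)
total_params = 1.6e12   # 1.6 trillion total parameters
active_params = 49e9    # ~49 billion activated per token

active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # → Active per token: 3.1%
```

In other words, only about 3% of the model's weights participate in any single forward pass, which is what keeps inference costs manageable at this scale.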
The smaller Flash version, with about 284 billion parameters, is designed to optimize processing cost and speed.
Both models produce text only; unlike some current closed AI systems, they do not generate multimedia content such as images or video.
Technically, DeepSeek subdivides tasks and routes them to specialized modules for processing. The company also combines techniques such as model distillation and multi-head attention to maintain performance even on less advanced hardware.
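As a rough illustration of the multi-head attention mechanism mentioned above, here is a minimal NumPy sketch. It omits the learned query/key/value projections and the further optimizations a production model would use, so it shows the idea rather than DeepSeek's actual implementation:

```python
import numpy as np

def multi_head_attention(x, num_heads):
    """Minimal scaled dot-product multi-head self-attention.
    Purely illustrative: no learned projections, no masking."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Split the model dimension into independent heads
    heads = x.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    outputs = []
    for h in heads:
        scores = h @ h.T / np.sqrt(d_head)               # attention logits
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        outputs.append(weights @ h)                      # weighted values
    # Concatenate head outputs back to the model dimension
    return np.concatenate(outputs, axis=-1)

x = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, d_model = 8
out = multi_head_attention(x, num_heads=2)
print(out.shape)  # → (4, 8)
```

Each head attends over the sequence independently on its own slice of the embedding, which is what lets the heads specialize in different relationships between tokens.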
Previously, DeepSeek used Nvidia's H20 GPUs; for the new generation, it switched to chips developed by Huawei.
According to the announcement, DeepSeek V4 Pro achieves strong results on reasoning benchmarks and can compete with leading models from OpenAI and Google on certain tasks.
However, the company acknowledges that its models still trail the most advanced systems by roughly 3–6 months in general knowledge.
The most notable point is the pricing strategy. DeepSeek maintains its low-cost advantage: V4 Flash starts at just $0.14 per million input tokens and $0.28 per million output tokens, far below comparable products on the market.
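At those list prices, per-request costs are easy to estimate. A minimal sketch, with the V4 Flash rates from the figures above hardcoded as defaults and a hypothetical workload for illustration:

```python
def api_cost_usd(input_tokens, output_tokens,
                 in_rate=0.14, out_rate=0.28):
    """Cost in USD given per-million-token rates (V4 Flash list prices)."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1e6

# Hypothetical example: a 50,000-token document summarized into 2,000 tokens
print(f"${api_cost_usd(50_000, 2_000):.4f}")  # → $0.0076
```

Even a request consuming most of a typical context window costs well under a cent at these rates, which is the crux of the low-cost positioning.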
Meanwhile, V4 Pro is also priced more competitively than high-end models such as Gemini or GPT.
The V4 line shows that DeepSeek is pursuing its own path of optimizing performance per unit of cost, rather than simply racing on scale and computing power. This shifts perceptions of what AI development must cost and raises competitive pressure across the industry.
With large technology companies continuing to invest heavily in AI, DeepSeek's open-source, low-cost yet highly efficient models could spur a broader wave of applications.
If this advantage holds, the Chinese company is likely to keep reshaping the global AI race in the coming years.