TL;DR
- DeepSeek has introduced two new open-source AI models: V4-Pro (1.6 trillion parameters) and V4-Flash (284 billion parameters).
- Each model features a 1 million token context window, matching the capacity of Google’s Gemini.
- The V4-Pro model matches OpenAI’s GPT-5.4 on coding benchmarks and is second only to Gemini on reasoning tests.
- DeepSeek asserts the models offer “drastically reduced compute and memory costs” versus competitors.
- This launch follows reports that Tencent and Alibaba are negotiating an investment in DeepSeek, valuing the company at over $20 billion.
(SeaPRwire) – On Friday, the Chinese AI startup DeepSeek unveiled preview versions of its new flagship open-source model, V4. The company states the latest model delivers enhanced reasoning, reduced operational expenses, and an extensive context window.
DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.
DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world’s top closed-source models.
DeepSeek-V4-Flash: 284B total / 13B active params.… pic.twitter.com/n1AgwMIymu
— DeepSeek (@deepseek_ai) April 24, 2026
DeepSeek launched two variants: V4-Pro and V4-Flash. The Pro model contains 1.6 trillion parameters. The Flash variant is a more streamlined model with 284 billion parameters, engineered for greater efficiency and cost-effectiveness.
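The “total / active” parameter split quoted in the announcement is the usual shorthand for a sparse mixture-of-experts design, in which only a fraction of the network fires per token. A quick back-of-the-envelope check of how sparse these models would be, using only the figures from the announcement (the MoE interpretation itself is an assumption; the article does not name the architecture):

```python
# Active-parameter fraction implied by DeepSeek's published figures.
# Parameter counts are from the announcement; treating "active" as
# per-token MoE activation is an assumption, not stated in the article.
models = {
    "V4-Pro":   {"total_b": 1600, "active_b": 49},
    "V4-Flash": {"total_b": 284,  "active_b": 13},
}

for name, p in models.items():
    frac = p["active_b"] / p["total_b"]
    print(f"{name}: {p['active_b']}B of {p['total_b']}B params active "
          f"(~{frac:.1%} per token)")
```

On these numbers, V4-Pro would activate only about 3% of its weights per token and V4-Flash about 5%, which is consistent with the company’s claim of reduced compute costs relative to dense models of similar total size.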
Both models support a context window of one million tokens, allowing them to ingest vast quantities of text in a single prompt and matching Google’s Gemini in this respect.
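To put a one-million-token window in perspective, a rough conversion to words and pages (the words-per-token and words-per-page ratios below are common rules of thumb for English text, not figures from the article):

```python
# Back-of-the-envelope: what fits in a 1M-token context window?
CONTEXT_TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75   # rough heuristic for English text (assumption)
WORDS_PER_PAGE = 500     # dense single-spaced page (assumption)

words = CONTEXT_TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"~{words:,.0f} words, or roughly {pages:,.0f} pages per prompt")
```

Under these assumptions, a single prompt could hold on the order of 750,000 words, roughly the length of several novels.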
DeepSeek noted the models are presently text-only. The firm added that it is developing multimodal features, which will eventually allow the models to interpret images and video.
How It Compares to Rivals
On the MMLU-Pro benchmark, a common AI performance test, the V4-Pro model equaled the score of OpenAI’s GPT-5.4. It ranked just below Google’s Gemini and Anthropic’s Claude Opus 4.6. In assessments of reasoning ability, V4-Pro is surpassed only by the newest Gemini model.
DeepSeek also stated that V4 has been fine-tuned for AI agent tools such as Claude Code, OpenCode, and CodeBuddy.
The company characterized V4’s context length as “world leading with drastically reduced compute and memory costs.” Analyst Zhang Yi labeled it an “inflection point,” suggesting that support for ultra-long context could transition from research environments to widespread commercial application.
AI analyst Max Liu described the release as a “milestone” for China’s AI sector, likening its potential influence to the initial launch of DeepSeek’s R1 model.
Market and Investment Context
This marks DeepSeek’s first comprehensive new model release since the R1 debuted in early 2025. That earlier model disrupted global technology stocks, including Nvidia and Meta, by demonstrating that a more economical and efficient model could rival costly closed-source alternatives.
DeepSeek did not disclose the hardware used to train V4. Earlier this year, U.S. authorities alleged the company utilized prohibited Nvidia Blackwell chips. A report from The Information claimed the models were instead trained on Huawei chips.
Huawei verified that its Ascend supernode, powered by Ascend 950 AI chips, will provide complete support for DeepSeek’s V4 models.
The model launch occurs shortly after news emerged that Tencent and Alibaba are discussing an investment in DeepSeek at a valuation exceeding $20 billion. DeepSeek is recognized as one of China’s six premier AI unicorn companies.
A preview of V4 is currently accessible on Hugging Face. DeepSeek has not set a date for the official full release.
This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.
Category: Top News, Daily News
SeaPRwire provides global press release distribution services for companies and organizations, covering more than 6,500 media outlets, 86,000 editors and journalists, and over 3.5 million end-user desktop and mobile apps. SeaPRwire supports multilingual press release distribution in English, Japanese, German, Korean, French, Russian, Indonesian, Malay, Vietnamese, Chinese, and more.