DeepSeek-V4 preview officially launched, bringing a brand-new experience with 1M ultra-long context memory.
DeepSeek-V4 offers a million-token ultra-long context and leads domestic and open-source models in agent capability, world knowledge, and reasoning performance.
On April 24th, DeepSeek announced that the preview version of its new model series, DeepSeek-V4, is officially live and open source. DeepSeek-V4 offers a million-token ultra-long context and leads domestic and open-source models in agent capability, world knowledge, and reasoning performance. The model comes in two sizes, deepseek-v4-flash and deepseek-v4-pro. Starting today, you can chat with the latest DeepSeek-V4 by logging into the official website or app and explore the new 1M ultra-long context memory. The API service has been updated in step; to call the new models, set the model parameter to deepseek-v4-pro or deepseek-v4-flash.
Compared to its predecessors, DeepSeek-V4-Pro's agent capability is significantly enhanced. In the Agentic Coding evaluation, V4-Pro reaches the best level among current open-source models, and it also performs strongly in other agent-related evaluations. DeepSeek-V4 has already become the Agentic Coding model used internally by the company's employees, whose feedback indicates a better user experience than Sonnet 4.5 and delivery quality approaching Opus 4.6 in non-reasoning mode, though still with a gap to Opus 4.6 in reasoning mode.
It is reported that DeepSeek-V4 pioneers a new attention mechanism that compresses along the token dimension and combines it with DSA sparse attention (DeepSeek Sparse Attention), achieving world-leading long-context capability while significantly reducing compute and memory requirements compared with traditional attention. From now on, 1M (one million token) context is standard across all official DeepSeek services.
Both V4-Pro and V4-Flash have a maximum context length of 1M, and both support a non-reasoning mode and a reasoning mode; the reasoning mode supports the reasoning_effort parameter for setting thinking intensity (high/max). For complex agent scenarios, reasoning mode with the intensity set to max is recommended.
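As a minimal sketch, the recommended configuration for complex agent scenarios might look like the request body below. The parameter name reasoning_effort and its values (high/max) come from the announcement; its exact placement in the request body is an assumption, and the prompt text is purely illustrative.

```python
import json

# Hypothetical request body for DeepSeek-V4 in reasoning mode.
# "reasoning_effort" is documented by the announcement; where it sits
# in the JSON body is an assumption modeled on common API conventions.
payload = {
    "model": "deepseek-v4-pro",
    "messages": [
        {"role": "user", "content": "Plan a multi-step refactor of this repository."}
    ],
    "reasoning_effort": "max",  # "high" or "max"; max recommended for complex agents
}

print(json.dumps(payload, indent=2))
```

Sending this body to the chat endpoint (rather than printing it) would then exercise the reasoning mode at maximum intensity.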
The DeepSeek API now offers V4-Pro and V4-Flash, supporting both the OpenAI ChatCompletions interface and the Anthropic interface. When accessing the new models, the base_url stays the same; only the model parameter needs to change to deepseek-v4-pro or deepseek-v4-flash.
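The two supported interfaces can be sketched side by side as request bodies for the same prompt. The endpoint paths in the comments are assumptions modeled on each vendor's public API shape, not confirmed DeepSeek routes; per the announcement, migrating only requires changing the model name.

```python
import json

prompt = "Explain DSA sparse attention in two sentences."

# OpenAI ChatCompletions-style body (typically POST .../chat/completions).
openai_body = {
    "model": "deepseek-v4-pro",
    "messages": [{"role": "user", "content": prompt}],
}

# Anthropic Messages-style body (typically POST .../v1/messages).
# max_tokens is a required field in the Anthropic Messages API.
anthropic_body = {
    "model": "deepseek-v4-pro",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": prompt}],
}

print(json.dumps({"openai": openai_body, "anthropic": anthropic_body}, indent=2))
```

Note that the message list has the same shape in both styles here; the main structural difference is the mandatory max_tokens field on the Anthropic side.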
