Guotai Haitong: DeepSeek V4 leads domestic models to shine as the large-model race escalates across the board; maintains "hold" rating on the computer sector.

06:39 28/04/2026 | GMT Eight
DeepSeek V4 has been released, matching the performance of overseas giants, deeply adapted to domestic computing power, and priced sharply lower. Chinese open-source large models are now mounting a comprehensive challenge to American closed-source giants.
Guotai Haitong released a research report noting that OpenAI recently launched GPT-5.5 and ChatGPT Images 2.0, with API prices tripling, while Anthropic's valuation has surpassed one trillion dollars, overtaking OpenAI. In China, Kimi, Tencent, and DeepSeek have all open-sourced models that match the performance of overseas giants and are deeply adapted to domestic computing power. DeepSeek has become the default model of OpenClaw, the top-ranked global open-source Agent framework. Google has released its eighth-generation TPU, separating training and inference chips for the first time. HaiGuang and Moore Threads completed Day-0 adaptation of DeepSeek V4, as China and the U.S. accelerate large-scale model deployment in parallel. The firm maintains a "hold" rating on the computer sector.

The main points highlighted by Guotai Haitong are as follows:

DeepSeek V4 matches the performance of overseas giants, is deeply adapted to domestic computing power, and is priced sharply lower; China's open-source large models are officially mounting a comprehensive challenge to American closed-source giants. DeepSeek-V4 launched on April 24: the Pro version has 1.6T parameters and a million-token context window, and a single card adapted to the Ascend 950 delivers 2.87 times the performance of an H20. The next day, DeepSeek announced a limited-time 60% discount on V4-Pro, which is expected to become the regular price once the Ascend reaches mass production. OpenClaw has set DeepSeek V4 Flash as its default model, with V4 Pro also available, fixing multi-round tool-call issues and supporting complex long-chain tasks. On April 20, Moonshot AI (The Dark Side of the Moon) released Kimi K2.6, which supports 300 sub-Agents and 4,000-step long tasks, surpassing three overseas flagship models, including GPT-5.4, on two major benchmarks.
Tencent launched the Hunyuan Hy3 Preview, with 295B total parameters and 21B activated, focused on Agents and Coding and already integrated across its application scenarios, making Chinese open source a core element of the global AI technology stack.

OpenAI released two heavyweight products in quick succession, and Anthropic's valuation surpassed one trillion dollars, overtaking OpenAI, marking a comprehensive escalation of the global large-model race. On April 22, OpenAI released ChatGPT Images 2.0, outperforming Google's Nano Banana in instruction compliance and multilingual text rendering and introducing a visual thinking mode for the first time. On April 24, GPT-5.5 launched with the highest intelligence score at the same token volume, while API prices tripled to 5/30 dollars per million tokens. On the same day, Anthropic's secondary-market valuation reached one trillion dollars, surpassing OpenAI's 880 billion dollars. Leading companies are accelerating both technological iteration and commercial deployment, pushing the large-model industry into a dual explosion of technology and valuation.

HaiGuang and Moore Threads adapted DeepSeek V4, Google released its eighth-generation TPU, and AI chips in China and the U.S. are accelerating large-model deployment. On April 22, Google launched the eighth-generation TPU, separating training and inference chips for the first time and pairing them with its self-developed Axion CPU: the TPU 8t triples performance and scales up to 9,600 chips, while the TPU 8i delivers 80% better performance per dollar. Both are set for release later this year. On April 24, HaiGuang's DCU completed deep optimization for DeepSeek V4, and Moore Threads completed full operator optimization for V4-Flash on the MTT S5000. Companies in China and the U.S. are upgrading underlying compute and advancing domestic adaptation in parallel, accelerating the commercial deployment of large models.