LENOVO GROUP (00992) releases Wanquan Heterogeneous Intelligent Computing Platform 3.0 to help enterprises boost efficiency with AI

11/05/2025
GMT Eight
Lenovo Group (00992) unveils the "Super Intelligent Body" tailored for individuals and enterprises at the 2025 Lenovo Innovation Technology Conference (Tech World), aiming to accelerate the comprehensive implementation of hybrid AI through innovative breakthroughs.
With three core capabilities, perception and interaction, cognition and decision-making, and autonomy and evolution, the super intelligent body is positioned as a super gateway for personalized AI, helping individuals unleash their creativity and helping enterprises find new growth momentum.

At the conference, LENOVO GROUP also launched the Wanquan Heterogeneous Intelligent Computing Platform 3.0. As the core carrier of Lenovo's hybrid infrastructure, version 3.0 builds on the platform's existing technical strengths with a comprehensive upgrade, integrating more than ten innovations such as power-cube matching, an AI inference acceleration algorithm suite, and a fault prediction and self-healing system. Covering national-scale clusters, industry intelligent computing centers, and enterprise on-premises deployments, it serves as an "all-around engine" for enterprise AI adoption.

On computing efficiency, the platform's AI inference acceleration algorithm suite, combined with techniques such as MLA (Multi-head Latent Attention) and speculative decoding, improves AI inference performance by 5 to 10 times, maintaining roughly a 20% lead over the best open-source community solutions. To meet industry needs for large-model post-training and inference, Lenovo developed an AI compiler optimizer that not only improves training and inference efficiency but also cuts computing costs by more than 15%.
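Speculative decoding, one of the acceleration techniques named above, lets a cheap draft model propose several tokens that the expensive target model then verifies, accepting as many as agree before falling back. The sketch below uses toy stand-in "models" purely to illustrate the propose-and-verify loop; it is not Lenovo's implementation.

```python
def draft_model(prefix, k):
    """Cheap draft model: proposes k candidate next tokens (toy rule)."""
    return [(prefix[-1] + 1 + i) % 100 for i in range(k)]

def target_model(prefix):
    """Expensive target model: returns the single correct next token (toy rule)."""
    return (prefix[-1] + 1) % 100

def speculative_step(prefix, k=4):
    """Propose k tokens with the draft model, verify them against the
    target model, and keep the longest agreeing run; on the first
    disagreement, take the target model's token instead."""
    proposal = draft_model(prefix, k)
    accepted = []
    ctx = list(prefix)
    for tok in proposal:
        if target_model(ctx) == tok:      # target agrees: token accepted cheaply
            accepted.append(tok)
            ctx.append(tok)
        else:                             # disagreement: fall back to the target
            accepted.append(target_model(ctx))
            break
    return accepted

print(speculative_step([10], k=4))  # → [11, 12, 13, 14]
```

When the draft model agrees often, several tokens are emitted per expensive verification pass, which is where the speedup comes from.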
To tackle the high communication complexity of MoE (mixture-of-experts) model architectures, the platform's expert-parallel communication algorithm cuts inference latency to less than a third and raises network bandwidth utilization from 50% to 90%, significantly improving distributed training efficiency. The slow-node fault prediction and self-healing system for AI training and inference recovers from faults in milliseconds at the hundred-GPU scale, minutes at the thousand-GPU scale, and tens of minutes at the ten-thousand-GPU scale, eliminating the compute wasted by the weakest-link "bucket effect."

On cost optimization, the platform dynamically balances resource utilization against cost through an intelligent scheduling engine and a FinOps engine. The scheduling engine uses gang scheduling to guarantee atomic resource allocation for distributed training tasks and, combined with load-aware algorithms, multi-parameter preemption strategies, and elastic quotas, raises cluster resource utilization by 13%.
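The "atomic allocation" that gang scheduling guarantees can be shown with a minimal sketch (the job names and GPU counts below are made up for illustration): a distributed training job is admitted only if every one of its workers can be placed at once, so no job ever sits on a useless partial allocation.

```python
def gang_schedule(jobs, free_gpus):
    """All-or-nothing (gang) placement: each (name, gpus_needed) job is
    admitted only if its full GPU demand fits; otherwise it waits intact,
    avoiding partial allocations that deadlock distributed training."""
    admitted, pending = [], []
    for name, need in jobs:
        if need <= free_gpus:
            free_gpus -= need          # grant the whole job atomically
            admitted.append(name)
        else:
            pending.append(name)       # wait rather than start half a job
    return admitted, pending, free_gpus

print(gang_schedule([("trainA", 8), ("trainB", 16), ("trainC", 4)], 16))
# → (['trainA', 'trainC'], ['trainB'], 4)
```

Note that trainB is held back entirely even though 8 GPUs remain after trainA: granting it a partial allocation would waste those GPUs while the job blocks on its missing workers.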
The Wanquan Heterogeneous Intelligent Computing Platform has demonstrated its capabilities across multiple scenarios. In national high-quality AI clusters, Lenovo works closely with the national "Eastern Data, Western Computing" initiative, raising MFU from 30% to 60% in thousand-GPU training scenarios. In industry and research-grade intelligent computing, Lenovo partnered with Peking University to build a computing platform for major scientific infrastructure, cutting operating costs by 50% and raising GPU utilization from 70% to 90%; with Geely, Lenovo built a benchmark intelligent computing cluster for the manufacturing industry, optimizing costs in an enterprise hybrid computing scenario. For enterprise AI infrastructure with on-premises model deployment, the continuously upgraded DeepSeek large-model all-in-one machine, optimized through the Wanquan platform, supports the mainstream GPU chip ecosystems both in China and abroad and continues to set industry performance records.

In the AI era, computing power has become a cornerstone of artificial intelligence development. Lenovo is building a more powerful, efficient, stable, and environmentally friendly hybrid infrastructure to accelerate the adoption of hybrid AI and unlock its full value. Chen Zhenkuan, Vice President of LENOVO GROUP and General Manager of the China Infrastructure Business Group, said that Lenovo will continue to innovate and lead AI computing technology upgrades through the construction of hybrid infrastructure, facilitating the rapid implementation of hybrid artificial intelligence.
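For context on the MFU figure cited above: MFU (model FLOPs utilization) measures achieved training FLOPs against hardware peak. A rough sketch, using the common 6 * parameters FLOPs-per-token estimate for dense transformer training and illustrative hardware numbers (none of these figures come from Lenovo):

```python
def mfu(tokens_per_sec, params, cluster_peak_flops):
    """Model FLOPs Utilization: achieved training FLOPs over hardware peak.
    Uses the ~6 * params FLOPs-per-token rule of thumb for dense
    transformers (an assumption for illustration, not a Lenovo figure)."""
    achieved = 6 * params * tokens_per_sec
    return achieved / cluster_peak_flops

# Illustrative numbers: a 7B-parameter model on 1,000 accelerators,
# assuming 312 TFLOP/s peak each (3.12e17 FLOP/s for the cluster).
print(round(mfu(4.46e6, 7e9, 1000 * 312e12), 2))  # → 0.6
```

Doubling MFU from 30% to 60%, as reported for the thousand-GPU scenario, therefore means the same cluster trains on roughly twice the tokens per second.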