Dell and NVIDIA Launch New Enterprise AI Solutions and Next-Generation PowerEdge Servers

21/05/2025
GMT Eight
Dell Technologies Inc. announced a major AI-focused partnership with NVIDIA on May 19, introducing next-generation PowerEdge servers that support up to 192 NVIDIA Blackwell Ultra GPUs. The new AI infrastructure delivers up to 4× faster LLM training and 62% lower inference costs than public cloud. Dell shares have surged 60% from their April low to $114, putting its market value near $80 billion.

As enterprise-level interest in artificial intelligence continues to accelerate, Dell has announced a new collaboration with NVIDIA, introducing a series of advanced AI-focused solutions and infrastructure at a recent tech conference in Las Vegas.

On May 19, Dell showcased the latest developments in its partnership with NVIDIA. These include a comprehensive upgrade to its AI Factory's infrastructure, solutions, and services, with the goal of simplifying the implementation of AI across enterprise environments. The offerings cover areas such as computing performance, data storage and management, and network systems.

A key highlight is Dell’s rollout of a new generation of PowerEdge servers. Available in both air-cooled and liquid-cooled models, they support up to 192 NVIDIA Blackwell Ultra GPUs; according to Dell, this configuration enables training of large language models at speeds up to four times faster than before.

The air-cooled PowerEdge XE9780 and XE9785 servers are tailored for smooth integration into existing enterprise data centers. For organizations seeking faster deployment at the rack level, Dell introduced the liquid-cooled XE9780L and XE9785L models, which use direct-to-chip liquid cooling and likewise support up to 192 NVIDIA Blackwell Ultra GPUs; a Dell IR7000 rack can be customized to hold as many as 256. Serving as successors to the PowerEdge XE9680, Dell's fastest-growing product in this category, these platforms are equipped with eight NVIDIA HGX B300 modules and deliver significantly greater efficiency for LLM training.

Internal research from Dell indicates that 75% of companies now treat AI as a central strategy, with 65% already advancing their projects into active production. Still, many businesses face hurdles such as concerns around data integrity, cybersecurity, and high operational expenses.

Dell also noted that its AI Factory offers a 62% cost advantage for running LLM inference locally compared with public cloud infrastructure, a figure likely to resonate with cost-conscious investors. Dell's share price has climbed 60% from its April low to $114, putting its market value near $80 billion.

Looking ahead, Dell is expanding its AI product lineup to address demands from edge computing environments to full-scale data centers. The PowerEdge XE7745 server, expected to launch in July 2025, will support the NVIDIA RTX Pro™ 6000 Blackwell Server Edition GPU. The system is intended for physical and agentic AI applications such as robotics, digital twins, and multimodal AI tasks, consolidating them under a single platform.

Dell also plans to integrate NVIDIA's Vera CPU into its systems and to support the Vera Rubin platform in new PowerEdge XE servers, technologies aimed at scalable, tightly integrated AI infrastructure.

Currently, over 3,000 customers around the world are using Dell’s AI Factory to speed up their AI deployments. Dell’s AI offerings span the full range of enterprise environments—from personal computing to large data centers—forming a complete ecosystem for business AI integration.

As artificial intelligence moves from experimentation to broader application, Dell’s collaboration with NVIDIA could mark a significant step forward in the development of efficient, cost-effective enterprise AI infrastructure—especially where local deployment proves more economical than cloud-based options.