AMD has unveiled its new Helios AI server system, which packs 72 MI400 processors and is positioned as a direct competitor to Nvidia's offerings. The company also announced a strategic partnership with OpenAI, whose CEO Sam Altman expressed enthusiasm about adopting AMD's latest chips for future AI workloads.
Scheduled for release in 2026, the Helios rack-scale system will deliver technical specifications that surpass Nvidia's upcoming Vera Rubin NVL144 platform. The system boasts 31 TB of HBM4 memory capacity and 1.4 PBps of memory bandwidth, roughly a 50 percent increase over Nvidia's claimed 20 TB and 936 TBps. Computational performance reaches 2.9 exaFLOPS at FP4 precision and 1.4 exaFLOPS at FP8 precision, comparable to Nvidia's claimed 3.6 exaFLOPS at FP4 and 1.2 exaFLOPS at FP8.
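The headline comparison can be sanity-checked with simple arithmetic. The figures below are the ones quoted in this article; treating them as directly comparable rack-level totals is an assumption:

```python
# Compare Helios's published rack figures against the Vera Rubin NVL144
# numbers quoted in the article (1.4 PBps expressed as 1400 TBps).
helios = {"memory_tb": 31, "bandwidth_tbps": 1400}
vera_rubin = {"memory_tb": 20, "bandwidth_tbps": 936}

for key in helios:
    ratio = helios[key] / vera_rubin[key]
    print(f"{key}: {ratio:.2f}x Nvidia ({(ratio - 1) * 100:.0f}% more)")
```

Run as written, this works out to about 55 percent more memory capacity and 50 percent more bandwidth, so the widely quoted 50 percent figure tracks the bandwidth comparison most closely.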
The Helios architecture, unveiled at AMD's "Advancing AI" event on June 12, 2025, represents the company's comprehensive approach to AI infrastructure. The system will integrate AMD's full spectrum of technologies, including GPUs, CPUs, networking, and open software, to deliver what the company describes as "unmatched flexibility and performance." AMD CEO Lisa Su emphasized that this next-generation rack-scale solution will extend the company's leadership in rack-scale AI performance beyond 2027.
Sam Altman's appearance on stage alongside Lisa Su marked a significant shift in the AI hardware landscape, with OpenAI not just adopting AMD's MI400 chips but serving as an "early design partner" for the MI450 series. Altman said he was "extremely excited" about the partnership, praising the memory architecture as "great for inference" and potentially "an incredible option for training as well."
This strategic alliance extends beyond simple adoption, with OpenAI providing feedback on requirements for next-generation training and inference capabilities. The collaboration represents a notable diversification for OpenAI, which has historically been a major Nvidia customer. Beyond OpenAI, AMD secured support from other technology giants including Meta, xAI, Oracle, Microsoft, and Cohere, all of whom appeared at the event to discuss their integration of AMD processors into their AI infrastructure.
The MI350 series, launched with immediate availability at the June 2025 event, features the MI350X and MI355X models, which deliver up to four times the AI computational performance and a 35-fold boost in inferencing capability compared to the previous generation. These chips come equipped with 288 GB of HBM3E memory, surpassing the 192 GB capacity of Nvidia's individual GPUs.
Looking ahead to 2026, the MI400 series will push further with 432 GB of HBM4 memory per chip and 19.6 TBps of memory bandwidth, more than double that of the MI350 series. These next-generation chips will be capable of 40 petaFLOPS of FP4 and 20 petaFLOPS of FP8 operations, positioning them as direct rivals to Nvidia's future high-end GPUs. The MI400 will serve as the foundation for AMD's Helios server system, with OpenAI already committed as an early adopter of the technology.
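These per-chip MI400 numbers line up with the rack-level Helios figures quoted earlier once scaled across the 72 GPUs in a rack. A minimal arithmetic check (the 72-GPU configuration is stated in the article; the "rack total = 72 x per-chip" model is an assumption):

```python
# Scale per-GPU MI400 specs (as quoted in the article) to a 72-GPU
# Helios rack and compare with the published rack-level figures.
GPUS_PER_RACK = 72

HBM4_PER_GPU_TB = 0.432        # 432 GB of HBM4 per GPU
BANDWIDTH_PER_GPU_TBPS = 19.6  # memory bandwidth per GPU
FP4_PER_GPU_PFLOPS = 40
FP8_PER_GPU_PFLOPS = 20

rack_memory_tb = GPUS_PER_RACK * HBM4_PER_GPU_TB
rack_bw_pbps = GPUS_PER_RACK * BANDWIDTH_PER_GPU_TBPS / 1000
rack_fp4_ef = GPUS_PER_RACK * FP4_PER_GPU_PFLOPS / 1000
rack_fp8_ef = GPUS_PER_RACK * FP8_PER_GPU_PFLOPS / 1000

print(f"Memory capacity:  {rack_memory_tb:.1f} TB  (article: 31 TB)")
print(f"Memory bandwidth: {rack_bw_pbps:.2f} PBps (article: 1.4 PBps)")
print(f"FP4 compute:      {rack_fp4_ef:.2f} exaFLOPS (article: 2.9)")
print(f"FP8 compute:      {rack_fp8_ef:.2f} exaFLOPS (article: 1.4)")
```

The totals come out to about 31.1 TB, 1.41 PBps, 2.88 exaFLOPS FP4, and 1.44 exaFLOPS FP8, matching the rounded rack-scale figures AMD published.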
Championing an open ecosystem approach, Lisa Su emphasized that "the future of AI is not going to be built by any one company or in a closed ecosystem," directly contrasting with Nvidia's proprietary strategy. This philosophy extends to the Helios servers, with AMD making networking standards and other specifications openly available to competitors such as Intel. The company continues to develop its ROCm software platform, recently releasing ROCm 7 with significant performance improvements and day-0 support for new features.
To strengthen this open approach, AMD has forged partnerships with AI startups to improve software compatibility and performance, acquiring several small software companies to bolster its talent pool. Senior VP of AI Vamsi Boppana described this as "a very thoughtful, deliberate, multi-generational journey," with the company committing to rapid improvements that benefit customers like enterprise AI startup Cohere. This strategy, combined with aggressive pricing that AMD says offers "significant double-digit percentage savings" compared to Nvidia, represents a comprehensive challenge to the current AI chip market leader.