What To Know
- Meta and AMD's Helios project signals a decisive shift toward openness, performance, and sustainability in the AI data center ecosystem.
- Delivering up to 1.4 exaFLOPS of FP8 compute and hosting 31 terabytes of HBM4 memory in a single rack, Helios represents a quantum leap in performance density and scalability.
Thailand AI News: Breakthrough Collaboration in AI Infrastructure
At the Open Compute Project (OCP) Global Summit in San Jose, Meta and AMD unveiled a transformative development for large-scale artificial intelligence systems: Helios, the first rack-scale AI system built entirely on Meta’s new Open Rack Wide (ORW) standard. This open-source architecture aims to revolutionize how hyperscalers and data centers design, deploy, and scale AI infrastructure. Built on AMD’s Instinct™ MI450 Series GPUs, Helios is engineered for the next generation of exascale AI and high-performance computing workloads. As this Thailand AI News report details, the project signals a decisive shift toward openness, performance, and sustainability in the AI data center ecosystem.

AMD and Meta join forces to unveil Helios, a revolutionary open rack AI system redefining data center infrastructure for the AI era.
Image Credit: AMD
Redefining Open Standards for AI
Meta’s ORW specification marks a leap toward fully standardized and interoperable AI infrastructure. The double-wide rack design provides increased space for power, cooling, and maintenance, solving many bottlenecks faced by modern AI workloads. AMD’s Helios implementation transforms this concept into reality, extending its philosophy of open design from silicon to full rack-level systems. Capable of delivering up to 1.4 exaFLOPS of FP8 compute performance and hosting 31 terabytes of HBM4 memory in a single rack, Helios represents a quantum leap in performance density and scalability.
Harnessing the Power of AMD Instinct MI450 GPUs
At the heart of Helios lies AMD’s cutting-edge CDNA™ architecture, which powers each MI450 GPU with 432 GB of HBM4 memory and 19.6 TB/s of memory bandwidth. Across the rack’s 72 GPUs, Helios delivers up to 2.9 exaFLOPS of FP4 compute and more than 1.4 petabytes per second of aggregate memory bandwidth, enabling training of trillion-parameter AI models and large-scale inferencing while maintaining remarkable energy efficiency. Compared with previous generations, Helios offers up to 36 times greater performance, while packing 50 percent more memory capacity than NVIDIA’s comparable Vera Rubin system.
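To see how the rack-level figures follow from the per-GPU specifications quoted above, here is a minimal back-of-the-envelope sketch in Python. The constants are simply the numbers cited in this article (72 GPUs per rack, 432 GB of HBM4 and 19.6 TB/s per GPU), and the calculation only reproduces the headline totals; it is an illustrative consistency check, not an official AMD specification sheet.

```python
# Illustrative arithmetic check of the Helios rack-level figures,
# using only the per-GPU numbers cited in this article (assumed values).

GPUS_PER_RACK = 72             # MI450 GPUs in one Helios rack
HBM4_PER_GPU_GB = 432          # GB of HBM4 memory per GPU
BANDWIDTH_PER_GPU_TBS = 19.6   # TB/s of memory bandwidth per GPU
RACK_FP8_EXAFLOPS = 1.4        # quoted rack-level FP8 compute

# Aggregate memory capacity: 72 x 432 GB ≈ 31 TB of HBM4 per rack.
total_hbm4_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000

# Aggregate memory bandwidth: 72 x 19.6 TB/s ≈ 1.4 PB/s per rack.
total_bandwidth_pbs = GPUS_PER_RACK * BANDWIDTH_PER_GPU_TBS / 1000

# Implied per-GPU FP8 compute: 1.4 exaFLOPS / 72 ≈ 19.4 petaFLOPS.
fp8_per_gpu_pflops = RACK_FP8_EXAFLOPS * 1000 / GPUS_PER_RACK

print(f"Total HBM4 capacity:     {total_hbm4_tb:.1f} TB")        # ~31.1 TB
print(f"Total memory bandwidth:  {total_bandwidth_pbs:.2f} PB/s")  # ~1.41 PB/s
print(f"Implied FP8 per GPU:     {fp8_per_gpu_pflops:.1f} PFLOPS")
```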
A Blueprint for Industry Collaboration
Helios is more than just a showcase of AMD’s engineering—it is a collaborative blueprint for the entire AI ecosystem. Built on Meta’s ORW standards, it invites OEM and ODM partners to adopt and expand the design, helping accelerate the rollout of open, scalable AI infrastructure across the globe. The project aligns with AMD’s leadership in initiatives like the Ultra Accelerator Link (UALink™) and the Ultra Ethernet Consortium (UEC), both of which focus on enabling open, high-performance communication fabrics for large-scale AI clusters.
Built for Real-World AI Data Centers
AI data centers face mounting challenges related to heat, density, and efficiency. Helios addresses these head-on through its open, double-wide layout, quick-disconnect liquid cooling, and Ethernet-based scale-out design that ensures multipath resiliency. These innovations make it not only powerful but practical—an essential factor for hyperscalers managing vast and complex AI workloads.
The Road Ahead
AMD has begun releasing Helios as a reference design to manufacturing partners, with mass deployment targeted for 2026. The system’s open-source ethos promises to accelerate the pace of AI hardware evolution by giving enterprises and governments alike a ready-made, production-scale foundation for future AI development. By uniting performance, openness, and collaboration, AMD and Meta are setting a new standard for the AI infrastructure of tomorrow.
Helios stands as proof that when technology leaders work together on shared, open principles, innovation scales faster and benefits everyone in the AI ecosystem. The collaboration between AMD and Meta may well define the next decade of AI infrastructure design.
For more details, visit: https://www.amd.com/en/blogs/2025/amd-helios-ai-rack-built-on-metas-2025-ocp-design.html
For the latest on new AI hardware, keep logging on to Thailand AI News.