2025-05-21

At Computex 2025, ASUS introduced its NVIDIA Enterprise AI Factory validated design, which includes updated ASUS AI POD designs built on optimized reference architectures. These solutions are available as NVIDIA-Certified Systems across NVIDIA Grace Blackwell, HGX, and MGX platforms and support both air- and liquid-cooled data centers. The systems are designed to support scalable AI deployments and improve performance and energy efficiency for a range of enterprise applications.

NVIDIA Enterprise AI Factory with ASUS AI POD

The validated NVIDIA Enterprise AI Factory with ASUS AI POD design offers a framework for developing, deploying, and managing agentic AI, physical AI, and HPC workloads on the NVIDIA Blackwell platform in on-premises environments. Aimed at enterprise IT teams, it includes computing, networking, storage, and software components to support more efficient AI factory deployments and reduce potential implementation challenges.

The reference architecture designs below help clients adopt validated practices, serving as both a knowledge repository and a standardized framework for diverse applications.

For massive-scale computing, the advanced ASUS AI POD, accelerated by NVIDIA GB200/GB300 NVL72 racks and incorporating NVIDIA Quantum InfiniBand or NVIDIA Spectrum-X Ethernet networking platforms, features liquid cooling to enable a non-blocking 576-GPU cluster across eight racks, or an air-cooled solution to support one rack with 72 GPUs. This ultra-dense, ultra-efficient architecture sets a new standard for AI reasoning performance and efficiency.

AI-ready racks: Scalable power for LLMs and immersive workloads

ASUS presents NVIDIA MGX-compliant rack designs built on the ESC8000 series, featuring dual Intel Xeon 6 processors and the RTX PRO 6000 Blackwell Server Edition with the latest NVIDIA ConnectX-8 SuperNIC, supporting speeds of up to 800Gb/s, alongside other scalable configurations, delivering exceptional expandability and performance for state-of-the-art AI workloads. Integration with the NVIDIA AI Enterprise software platform provides highly scalable, full-stack server solutions that meet the demanding requirements of modern computing.

In addition, the NVIDIA HGX reference architecture optimized by ASUS delivers exceptional efficiency, thermal management, and GPU density for accelerated AI fine-tuning, LLM inference, and training. Built on the ASUS XA NB3I-E12 with NVIDIA HGX B300 or the ESC NB8-E11 with NVIDIA HGX B200, this centralized rack solution offers extensive manufacturing capacity for liquid-cooled or air-cooled rack systems, ensuring timely delivery, reduced total cost of ownership (TCO), and consistent performance.

Engineered for the AI Factory, enabling next-gen agentic AI

Integrated with NVIDIA’s agentic AI showcase, ASUS infrastructure supports AI systems capable of real-time learning and scalable agent-based operations for a range of business applications across industries.

ASUS offers AI infrastructure solutions with both air- and liquid-cooled options to support efficient and reliable data center operations. Its portfolio includes high-speed networking, cabling, and storage rack architecture, featuring NVIDIA-certified storage such as the RS501A-E12-RS12U and VS320D series to support scalable AI and HPC workloads. SLURM-based workload scheduling and NVIDIA UFM fabric management for NVIDIA Quantum InfiniBand networks help optimize resource use, while the WEKA Parallel File System and ASUS ProGuard SAN Storage enable high-speed, scalable data management.
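To illustrate the SLURM-based workload scheduling mentioned above, the following is a minimal batch-script sketch for requesting GPUs on a SLURM-managed cluster. The partition name, resource counts, and `train.py` entry point are hypothetical placeholders, not details from the ASUS or NVIDIA documentation.

```shell
#!/bin/bash
# Hypothetical SLURM batch script: requests multi-node GPU resources
# and launches a training job. Adjust names to your site's configuration.
#SBATCH --job-name=llm-finetune     # arbitrary job label
#SBATCH --partition=gpu             # hypothetical GPU partition name
#SBATCH --nodes=2                   # number of compute nodes
#SBATCH --gpus-per-node=8           # e.g. one HGX baseboard per node
#SBATCH --time=04:00:00             # wall-clock limit (hh:mm:ss)

# srun dispatches one task per node across the allocation
srun python train.py
```

A script like this would typically be submitted with `sbatch script.sh`, after which SLURM queues the job and allocates nodes as they become available.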

ASUS offers a software platform and services that include the ASUS Control Center (Data Center Edition) and ASUS Infrastructure Deployment Center (AIDC) to support the development, orchestration, and deployment of AI models. Its L11/L12-validated solutions are designed to help enterprises implement AI systems at scale, backed by deployment and support services.

For more information, visit servers.asus.com.

The post ASUS announces advanced AI POD design built with NVIDIA at Computex appeared first on Engineering.com.
