The G894-ZD3-AAX7 is a heavy-duty 8U GPU server built for demanding workloads such as AI training and inference, high-performance computing (HPC), scientific simulations, large-scale data analysis and enterprise compute clusters. It combines dual AMD EPYC 9005/9004 processors, support for eight GPUs through the NVIDIA HGX B300 platform, high-bandwidth networking and advanced storage options, making it ideal for organisations with intensive compute and I/O demands.
Core Architecture: CPU and Memory
- The server supports two AMD EPYC 9005 or 9004 series processors via Socket SP5 (LGA 6096). Each socket supports up to 500 W cTDP, offering the flexibility to fit high-core-count, high-performance processors as needed.
- Memory capacity is generous: there are 24 DDR5 DIMM slots (12 per CPU), supporting 12-channel memory per processor. Depending on the CPU generation, memory speeds can reach up to 6400 MT/s (a quick way to verify socket and memory population on a live system is sketched after this list).
- This configuration ensures robust host-side compute power and sufficient memory bandwidth for data-intensive workloads, real-time processing, large dataset handling and parallel compute tasks.
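On a deployed system, it is often worth confirming that both sockets and their memory channels are populated as intended. Below is a minimal sketch, assuming a Linux host with the standard sysfs/procfs layout; nothing in it is specific to this server:

```python
# Minimal sketch: summarize NUMA nodes, their CPUs and installed memory on Linux.
# Assumption: standard sysfs/procfs layout; not specific to the G894-ZD3-AAX7.
from pathlib import Path

def meminfo_kib(path: Path, key: str) -> int:
    # Handles both /proc/meminfo ("MemTotal: N kB") and per-node meminfo
    # ("Node 0 MemTotal: N kB") by searching for the key anywhere in the line.
    for line in path.read_text().splitlines():
        if key + ":" in line:
            return int(line.split(key + ":")[1].split()[0])
    return 0

def main() -> None:
    total_gib = meminfo_kib(Path("/proc/meminfo"), "MemTotal") / 2**20
    print(f"Total system memory: {total_gib:.1f} GiB")
    # With the default NPS1 setting, each populated socket typically appears as one NUMA node.
    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpus = (node / "cpulist").read_text().strip()
        node_gib = meminfo_kib(node / "meminfo", "MemTotal") / 2**20
        print(f"{node.name}: CPUs {cpus}, memory {node_gib:.1f} GiB")

if __name__ == "__main__":
    main()
```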
GPU Density & Interconnect: HGX B300 Support
- The server is built around the NVIDIA HGX B300 platform, enabling support for 8 SXM-type GPUs in a single 8U chassis.
- Inter-GPU communication is designed for maximum throughput: the server supports 1.8 TB/s GPU-to-GPU bandwidth using a combination of NVLink and NVSwitch. This high-speed fabric is essential for large-model training, distributed GPU workloads, deep learning, multi-GPU parallelism and HPC tasks (see the topology-check sketch after this list).
- For networking, it includes high-performance ports dedicated to GPU traffic: 8 × 800 Gb/s OSFP InfiniBand XDR or dual 400 Gb/s Ethernet via NVIDIA ConnectX-8 SuperNICs, ensuring that GPU clusters and multi-node setups can communicate with speed and efficiency.
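To sanity-check the GPU topology after installation, a short PyTorch script can enumerate the GPUs and confirm that each pair reports direct peer access, i.e. GPU-to-GPU traffic can bypass host memory over NVLink/NVSwitch. This is only a minimal sketch and assumes PyTorch with CUDA support is installed:

```python
# Minimal sketch: list visible GPUs and check pairwise peer access with PyTorch.
# Assumption: PyTorch built with CUDA support is installed on the target system.
import torch

def main() -> None:
    n = torch.cuda.device_count()
    print(f"Visible GPUs: {n}")
    for i in range(n):
        print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")
    # Peer access means one GPU can read/write another GPU's memory directly,
    # which is what NVLink/NVSwitch-connected GPUs should report.
    for i in range(n):
        peers = [j for j in range(n)
                 if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"  GPU {i} direct peer access to: {peers}")

if __name__ == "__main__":
    main()
```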
Storage, I/O and Expansion
- The front panel includes 8 hot-swap 2.5" Gen5 NVMe bays, offering fast, low-latency storage ideal for large datasets, scratch space, checkpointing, or model data for AI workloads (a link-speed check is sketched after this list).
- Internally, there are 2 M.2 slots (supporting 2280/22110 form factors) for system or OS drives, keeping system storage separate from workload storage.
- The server also offers 4 FHHL PCIe Gen5 x16 expansion slots (via bridge board), useful for additional networking cards, DPUs, storage controllers or other accelerator hardware.
- Front I/O includes USB ports, VGA, a management LAN port, system status indicators and control buttons. Rear I/O carries the high-bandwidth OSFP ports and the management LAN, ensuring comprehensive connectivity and manageability.
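For the NVMe bays, one quick post-deployment check is whether each drive actually negotiated a Gen5 link rather than training down. The sketch below assumes a Linux host with the standard sysfs layout and is not specific to this chassis:

```python
# Minimal sketch: report PCIe link speed/width for each NVMe controller on Linux.
# Assumption: standard sysfs layout; "32.0 GT/s PCIe" corresponds to a Gen5 link.
from pathlib import Path

def read(p: Path) -> str:
    return p.read_text().strip() if p.exists() else "n/a"

def main() -> None:
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme[0-9]*")):
        dev = ctrl / "device"  # the PCIe device backing this controller
        model = read(ctrl / "model")
        speed = read(dev / "current_link_speed")   # e.g. "32.0 GT/s PCIe"
        width = read(dev / "current_link_width")   # e.g. "4"
        print(f"{ctrl.name}: {model} | link {speed} x{width}")

if __name__ == "__main__":
    main()
```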
Power, Cooling and Reliability
- To support high-power GPUs and ensure stability under heavy load, the server uses twelve 3000 W 80 PLUS Titanium redundant power supplies. This redundancy helps maintain uptime even if a PSU fails.
- The cooling system uses multiple fan modules distributed across zones (motherboard, GPU tray, PCIe expansion, networking), ensuring adequate airflow for each component, which is critical when GPUs are running at full load.
- This combination of redundant power and robust cooling supports continuous operation, making the server well-suited for production environments, HPC clusters or AI labs that run 24/7 workloads (a simple sensor-polling sketch for ongoing monitoring follows this list).
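In production, fan and PSU health is typically watched through the BMC. As a rough illustration, assuming ipmitool is installed, run with sufficient privileges to reach the local BMC, and that sensor names contain the usual keywords (they vary by firmware), a script like this can pull power- and cooling-related sensor records:

```python
# Minimal sketch: filter fan/PSU-related sensor records from the BMC via ipmitool.
# Assumptions: ipmitool is installed, the script runs with enough privileges to
# query the local BMC, and sensor names contain these keywords (firmware-dependent).
import subprocess

KEYWORDS = ("fan", "psu", "pwr", "pmbus")

def main() -> None:
    out = subprocess.run(
        ["ipmitool", "sdr", "elist"],  # one line per sensor data record
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        name = line.split("|")[0].strip().lower()
        if any(k in name for k in KEYWORDS):
            print(line.strip())

if __name__ == "__main__":
    main()
```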
Ideal Workloads and Use Cases
The G894-ZD3-AAX7 is particularly well suited for:
- Large-scale AI model training and inference (deep learning, large language models, generative AI), as in the minimal multi-GPU training sketch after this list
- High-performance computing tasks and scientific simulations that need both high GPU density and CPU/memory stability
- Data-intensive workloads combining GPU compute, high memory bandwidth and fast storage (e.g. analytics, data preprocessing, big data pipelines)
- Multi-node GPU clusters or distributed training environments requiring high-bandwidth NVLink plus InfiniBand or Ethernet networking
- Enterprises or research labs that need a scalable, future-ready server platform
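To illustrate the first item above, here is a minimal single-node, multi-GPU training sketch using PyTorch DistributedDataParallel. The model and data are toy placeholders, PyTorch with CUDA and NCCL is assumed to be installed, and the script would be launched with something like `torchrun --nproc_per_node=8 train.py`:

```python
# Minimal sketch: single-node, 8-GPU data-parallel training with PyTorch DDP.
# Assumptions: toy model/data, PyTorch with CUDA + NCCL installed,
# launched via `torchrun --nproc_per_node=8 train.py`.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # One model replica per GPU; gradient all-reduce runs over NVLink/NVSwitch.
    model = nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()  # gradients are synchronized across GPUs here
        opt.step()
        if dist.get_rank() == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```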
Why G894-ZD3-AAX7 Matters
For organisations needing high compute density, this server offers a balanced mix of CPU power, memory capacity, GPU density, storage speed and network throughput. Its design accommodates both immediate high-performance workloads and future growth. Despite housing eight GPUs via HGX B300, the 8U form factor keeps the system compact relative to the compute it delivers.
The inclusion of redundant power supplies, enterprise-grade cooling, and high-speed interconnects ensures reliability and stability even under heavy workloads. This makes it suitable for mission-critical environments such as AI labs, HPC clusters, research institutions and enterprise compute centers.
Conclusion
The GIGABYTE G894-ZD3-AAX7 stands out as a powerful, flexible platform for GPU-heavy compute tasks. With dual AMD EPYC processors, support for eight GPUs via NVIDIA HGX B300, high memory capacity, fast storage, strong networking and robust power/cooling design, it is well-suited for AI and HPC workloads at scale. It is available now at Saitech Inc.
For teams planning to deploy GPU clusters, build AI infrastructure or run heavy compute workloads, this server provides a strong foundation capable of scaling with evolving needs.

