GIGABYTE Expands Its AMD EPYC Server Lineup

GIGABYTE continues its active development of new AMD EPYC platforms with the release of the 2U 4-node H261-Z60, the first AMD EPYC variant of our Density Optimized Server Series.

The H261-Z60 combines four individual hot-pluggable sliding node trays in a 2U chassis. The node trays slide easily in and out from the rear of the unit.

Each node supports dual AMD EPYC 7000 series processors, with up to 32 cores, 64 threads and 8 channels of memory per CPU, so each node can deliver up to 64 cores and 128 threads of compute power. On the memory side, each socket uses EPYC's 8 memory channels with 1 DIMM per channel (8 DIMMs per socket), for a total of 16 DIMMs per node and over 2TB of supported memory per node.
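The 16-DIMM and 2TB-per-node figures follow from simple arithmetic. The sketch below assumes 128GB modules (the module size is not stated in the source; it is the size needed to reach the 2TB figure):

```python
# Per-node memory capacity of the H261-Z60 (sketch; DIMM size is an assumption).
channels_per_cpu = 8    # EPYC 7000 series: 8 memory channels per socket
dimms_per_channel = 1   # 1 DIMM per channel on this platform
cpus_per_node = 2       # dual-socket nodes
dimm_size_gb = 128      # assumed 128GB modules

dimms_per_node = channels_per_cpu * dimms_per_channel * cpus_per_node
capacity_gb = dimms_per_node * dimm_size_gb

print(dimms_per_node)   # 16 DIMMs per node
print(capacity_gb)      # 2048 GB, i.e. 2TB per node
```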

This compute density can reduce data center footprints by up to 50% compared with a standard 1U dual-socket server. GIGABYTE has also recently demonstrated that our server design is optimized for AMD EPYC by achieving some of the top SPEC CPU 2017 benchmark scores for AMD EPYC single-socket* and dual-socket** systems.

*The R151-Z30 achieved the highest SPEC CPU 2017 benchmark result for a single-socket AMD Naples platform versus other vendors, as of May 2018.

**The R181-Z91 achieved the second-highest SPEC CPU 2017 benchmark result for a dual-socket AMD Naples platform versus other vendors, as of May 2018.

The front of the unit houses 24 x 2.5” hot-swappable drive bays, providing capacity for 6 SATA/SAS HDD or SSD storage drives per node. In addition, each node features dual M.2 ports (PCIe Gen3 x4) to support ultra-fast, ultra-dense NVMe flash storage devices, double the M.2 capacity of competing products on the market.

Dual 1GbE LAN ports are integrated into each node as a standard networking option. In addition, each node features 2 x half-length, low-profile PCIe Gen3 x16 slots and 1 x OCP Gen3 x16 mezzanine slot for adding expansion options such as high-speed networking or RAID storage cards. GIGABYTE delivers best-in-class expansion slot options for this form factor.

Easy & Efficient Multi Node Management

The H261-Z60 features a system-wide Aspeed CMC (Central Management Controller) and a LAN module switch that connects internally to the Aspeed BMCs integrated on each node. As a result, only one MLAN connection is required to manage all four nodes, reducing ToR (Top of Rack) cabling and the number of ports needed on your top-of-rack switch (one port instead of four for remote management of all nodes).

Ring Topology Management for Even Greater Efficiency

Going a step further, the H261-Z60 can also create a “ring” connection for management of all servers in a rack (the optional Ring Topology Kit is required). Only two switch connections are needed, with the servers chained to one another, and the ring is not broken even if one server in the chain is shut down. This further reduces cabling and switch port usage for greater cost savings and management efficiency.
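The resilience claim is a general property of ring topologies: removing any single node from a ring leaves the survivors connected as a chain, so management traffic can still reach every remaining node. The following is a generic connectivity check illustrating that property, not GIGABYTE's implementation:

```python
from collections import deque

def ring_edges(n):
    """Edges of a ring topology over nodes 0..n-1 (node i links to node i+1, wrapping)."""
    return [(i, (i + 1) % n) for i in range(n)]

def still_connected(n, failed):
    """True if all surviving nodes remain mutually reachable after `failed` goes down."""
    alive = [i for i in range(n) if i != failed]
    adj = {i: set() for i in alive}
    for a, b in ring_edges(n):
        if a != failed and b != failed:
            adj[a].add(b)
            adj[b].add(a)
    # Breadth-first search from one surviving node.
    seen, queue = {alive[0]}, deque([alive[0]])
    while queue:
        cur = queue.popleft()
        for nxt in adj[cur]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen == set(alive)

# A rack of 10 chained servers survives any single node failure:
print(all(still_connected(10, f) for f in range(10)))  # True
```

Note that a ring tolerates exactly one failure this way; a second simultaneous failure can split the chain, which is why the two switch uplinks remain important.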

Efficient Power & Cooling

GIGABYTE’s H261-Z60 is designed not only for greater compute density but also for better power and cost efficiency. The system architecture features shared cooling and power across the nodes, with a dual fan wall of 8 (4 x 2) easy-swap fans and 2 x 2200W redundant PSUs. In addition, the nodes connect directly to the system backplane with GIGABYTE’s Direct Board Connection Technology, resulting in less cabling and improved airflow for better cooling efficiency.

GIGABYTE’s unrivalled expertise and experience in system design leverage and optimize AMD EPYC’s benefits, offering our customers a product precisely targeted at their need for maximized compute resources in a limited footprint, with excellent expansion choices, management functionality, and power and cooling efficiency.

