
UK’s supercomputer upgrade to produce eleven times the scientific throughput

The latest UK supercomputer, ARCHER2, will produce up to eleven times the scientific output of the previous system thanks to an upgrade featuring AMD EPYC CPUs. UK Research and Innovation (UKRI) has announced that Cray has been awarded the contract to supply the hardware for the next national supercomputer, with large improvements to benchmarks using five of the most heavily used codes on the current service.

This new system will make use of technologies from AMD and Cray to deliver a newly upgraded national supercomputing service in the UK.

As with all new systems, the relative speedups over ARCHER vary by benchmark. The ARCHER2 science throughput codes used for the benchmarking evaluation are estimated to reach 8.7x for CP2K, 9.5x for OpenSBLI, 11.3x for CASTEP, 12.9x for GROMACS, and 18.0x for HadGEM3. 
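For readers who want to see how the per-code figures relate to the headline claim, the short Python sketch below (not part of the announcement; treating the headline figure as an average of the five quoted speedups is an assumption) computes the arithmetic and geometric means of the benchmark numbers, both of which land in the region of eleven to twelve times.

```python
# Rough check (not from the announcement): the headline "eleven times"
# figure is broadly consistent with an average of the five quoted speedups.
speedups = {
    "CP2K": 8.7,
    "OpenSBLI": 9.5,
    "CASTEP": 11.3,
    "GROMACS": 12.9,
    "HadGEM3": 18.0,
}

arithmetic_mean = sum(speedups.values()) / len(speedups)

geometric_mean = 1.0
for s in speedups.values():
    geometric_mean *= s
geometric_mean **= 1.0 / len(speedups)

print(f"arithmetic mean: {arithmetic_mean:.1f}x")  # ~12.1x
print(f"geometric mean:  {geometric_mean:.1f}x")   # ~11.7x
```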

As these benchmarks show, ARCHER2 represents a significant step forward in capability for the UK science community, with the system expected to sit among the fastest fully general-purpose (CPU-only) systems when it comes into service in May 2020.

As previously announced, because ARCHER2 is being installed in the same room as the current ARCHER system, there will be a period of downtime during which no service will be available.

ARCHER is due to end operation on 18th February 2020, and ARCHER2 will be operational from 6th May 2020. From 6th May, there will be a 30-day period of continuous running to stress-test the new machine and resolve any issues before full service; during this period access will be possible but may be limited. UKRI will be providing information about allocations and access routes for ARCHER2 in the coming weeks.

ARCHER2 will deliver a peak performance estimated at approximately 28 PFLOP/s. This computational power is provided by 5,848 compute nodes, each with dual 64-core AMD Rome CPUs running at 2.2 GHz, giving 748,544 cores in total and 1.57 PBytes of total system memory.
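As a quick sanity check on those quoted figures, the sketch below (an illustration only, assuming straightforward dual-socket arithmetic and decimal petabytes) confirms the total core count and derives the implied memory per node.

```python
# Sanity check of the quoted ARCHER2 hardware figures (a sketch, not from UKRI).
nodes = 5_848
cpus_per_node = 2      # dual-socket AMD Rome
cores_per_cpu = 64

total_cores = nodes * cpus_per_node * cores_per_cpu
print(total_cores)     # 748544, matching the announced total

total_memory_pbytes = 1.57
memory_per_node_gbytes = total_memory_pbytes * 1e6 / nodes
print(round(memory_per_node_gbytes))  # roughly 268 GB of memory per node
```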

The system will use 23 of Cray's Shasta Mountain direct liquid-cooled cabinets and come with 14.5 PBytes of Lustre work storage across four file systems, plus an additional 1.1 PByte all-flash Lustre BurstBuffer file system.

The system will use Cray’s Slingshot 100 Gbps network in a diameter-three dragonfly topology, consisting of 46 compute groups, one I/O group and one service group.

More information on the system configuration and software stack can be found on the ARCHER service website.
