
100GbE network successfully demonstrated


The Department of Energy's Oak Ridge National Laboratory (ORNL), Myricom, and Juniper Networks demonstrated a wide-area 100 Gigabit Ethernet (100GbE) link between the ORNL and Juniper Networks booths at SC11.

ORNL operates the DOE Office of Science-supported Jaguar supercomputer, America's fastest open-research system. As scientific data sets grow and the need to share them between organisations across the US and around the world accelerates, this demonstration highlighted data movement between distinct organisational domains bridged into a single Ethernet fabric, using a high-performance, hardware-optimised communication software stack.

'The Oak Ridge Leadership Computing Facility provides world-class computational resources to teams of researchers from around the world,' said Galen Shipman, leader of ORNL’s Technology Integration Group. 'Supporting access to and collaboration among these researchers requires the use of leading-edge technologies for data movement and high-performance remote access. Our work with Juniper and Myricom demonstrates advances in wide-area networking, end-system network architecture, and an optimised software stack for data movement.'

To create the unified Ethernet broadcast domain for the SC11 demonstration, each booth had a Juniper Networks MX 3D Universal Edge Router connected by a 100GbE link, separate from SCinet's backbone. These routers provided Virtual Private LAN Service (VPLS), allowing a machine in one domain to access a machine in the other as if both were on the same LAN.

Connected to the router in each booth was a single host with four Myricom network interface cards, each with dual 10GbE ports. Each host was thus connected to its Juniper Networks router via eight 10GbE links, for a combined capacity of 80 gigabits per second (Gbps). Using a hardware-accelerated implementation of the Common Communication Interface (CCI), a single thread was able to push 80 Gbps of data to the other host.
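The aggregate capacity quoted above follows directly from the port count. A minimal sketch of the arithmetic, assuming every port runs at line rate (the variable names are illustrative, not from the demonstration):

```python
# Illustrative sketch of the per-host capacity in the SC11 demo setup:
# four Myricom NICs, each with dual 10GbE ports, assumed at full line rate.
nics_per_host = 4
ports_per_nic = 2
port_rate_gbps = 10

links = nics_per_host * ports_per_nic          # eight 10GbE links
aggregate_gbps = links * port_rate_gbps        # combined host capacity

print(links, aggregate_gbps)  # 8 80
```

This is why the single CCI thread tops out at 80 Gbps rather than the full 100 Gbps of the inter-booth link: the hosts' NICs, not the wide-area connection, are the bottleneck.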