
InfiniBand bolsters research at Brookhaven National Laboratory


The US Department of Energy’s Brookhaven National Laboratory has deployed Mellanox FDR 56Gb/s InfiniBand with RDMA to build a cost-effective and scalable 100Gb/s network for compute and storage connectivity. Key research currently conducted at Brookhaven includes systems biology to advance the fundamental knowledge underlying biological approaches to producing biofuels and to sequestering carbon in terrestrial ecosystems, advanced energy systems research, and nuclear and high-energy physics experiments that explore the most fundamental questions about the nature of the universe.

Brookhaven National Laboratory constructed a storage area network (SAN) test bed using the iSCSI Extensions for RDMA (iSER) protocol over Mellanox InfiniBand-based storage interconnects. The storage solution scales to give a large number of cluster/cloud hosts unrestricted access to virtualised storage, and enables gateway hosts, such as FTP and web servers, to move data between clients and storage at extremely high speed. Combined with its front-end network interface, the upgraded SAN will eliminate bottlenecks and deliver 100Gb/s end-to-end data transfer throughput to support applications that constantly need to move large amounts of data within and across Brookhaven’s data centres.
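To illustrate the kind of setup involved, the following is a minimal sketch of enabling the iSER transport on a Linux host using the standard open-iscsi tools. The portal address and target IQN are placeholders for illustration, not Brookhaven's actual configuration, and the article does not specify which initiator software the laboratory used.

```shell
# Hedged sketch: binding an iSCSI session to the iSER (RDMA) transport
# with open-iscsi. Addresses and IQNs below are hypothetical examples.

# Discover available targets on the storage portal
iscsiadm -m discovery -t sendtargets -p 192.168.0.10

# Create an iSCSI interface and bind it to the iSER transport
iscsiadm -m iface -I iser0 --op new
iscsiadm -m iface -I iser0 --op update -n iface.transport_name -v iser

# Log in to the target over RDMA rather than plain TCP
iscsiadm -m node -T iqn.2003-01.org.example:storage.lun1 \
         -p 192.168.0.10 -I iser0 --login
```

With the session bound to iSER, SCSI data transfers bypass the host TCP/IP stack and move via RDMA, which is what lets such a SAN sustain the high end-to-end throughput described above.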