
A different approach to data

In technical computing, two converging trends are driving an increase in data intensity: first, the sheer volume of data being collected through the mass deployment of sensor networks and analytical equipment; and second, the growth of ever larger and more complex simulations across a range of scientific disciplines.

The modern scientist measures everything. Sensors can be placed just about anywhere – under the sea or out in space – to measure any number of quantities, not just at one point in time but continuously. This has led to a new scientific approach in which researchers make discoveries by sifting through huge volumes of data, where previously they would formulate a theory and then carry out an experiment to try to prove it.

Similarly, the shift to larger and more complex simulations can be found in every discipline. In the life sciences, researchers who once looked at theoretical, small-scale sub-systems now find it possible to model entire biological systems or even ecosystems. The same is true in chemistry, with ever more intricate molecular and material simulations, and in climate science, where complex world-scale simulation has become essential to our understanding of global warming. The volume of data associated with these efforts is growing exponentially.

Together, these trends point to a research environment that demands fast access to terabytes, and in some cases petabytes, of data.

Certainly the capability of modern compute systems helps in handling these datasets. However, a recent shift in how Moore's Law gains are delivered – from increasing processor clock speeds to increasing the number of compute cores in a system node – has complicated matters. To make the best of the available computing power, computational scientists have to distribute their application load across more and more nodes, each carrying more and more compute cores. This means that simulation algorithms need to be rewritten to distribute, and then recombine, massive sets of decomposed work elements in order to exploit the modern computing architecture.
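As a purely illustrative sketch, the decompose-and-recombine pattern described above might look like the following in C with standard MPI; the problem size, the per-element work and the assumption that the work divides evenly across ranks are all invented for the example.

    /*
     * Hypothetical sketch of decomposing work across cluster nodes and
     * recombining the partial results, using standard MPI.
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N_TOTAL 1048576   /* assumed global problem size (divides evenly) */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int chunk = N_TOTAL / nprocs;             /* decompose the work */

        double *full = NULL;
        if (rank == 0) {                          /* root holds the full dataset */
            full = malloc((size_t)N_TOTAL * sizeof(double));
            for (int i = 0; i < N_TOTAL; i++) full[i] = (double)i;
        }

        /* distribute one chunk of the decomposed data to each rank */
        double *local = malloc((size_t)chunk * sizeof(double));
        MPI_Scatter(full, chunk, MPI_DOUBLE,
                    local, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* each rank works on its own piece */
        double partial = 0.0;
        for (int i = 0; i < chunk; i++) partial += local[i] * local[i];

        /* recombine the partial results on the root rank */
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("sum of squares = %e\n", total);

        free(local);
        free(full);
        MPI_Finalize();
        return 0;
    }

Even in this toy case the programmer has to manage the distribution and recombination explicitly; real simulations multiply that bookkeeping many times over.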

The most common approach to tackling the data problem – and the demand for memory that it creates – is to use clusters. Typically, this starts with a 2-CPU compute server, linked to many other 2-CPU compute servers over a network such as Ethernet or InfiniBand, and the workload is then scaled out across this set-up. It is certainly one of the cheapest ways of accessing multiple nodes, but it has its restrictions: in many cases each node is limited to around 64GB of memory, which in the scientific world is not a lot. One can get around this by spreading the data across multiple nodes and passing it between them, but that becomes difficult to program and can be quite slow for large simulations.
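To give a flavour of why passing data between nodes is hard to program, the following sketch (again plain C with MPI, with invented sizes and a made-up smoothing step) shows the kind of explicit boundary exchange a cluster code has to perform before any rank can compute on data owned by its neighbours.

    /*
     * Illustrative halo exchange: each rank owns one slice of a larger array
     * and must explicitly swap boundary values with its neighbours before
     * computing. Sizes and the update step are invented for this sketch.
     */
    #include <mpi.h>
    #include <stdlib.h>

    #define LOCAL_N 1024   /* assumed slice size owned by each rank */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* local slice plus two halo cells at index 0 and LOCAL_N + 1 */
        double *u = calloc(LOCAL_N + 2, sizeof(double));
        for (int i = 1; i <= LOCAL_N; i++) u[i] = (double)rank;  /* placeholder data */

        int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

        /* exchange boundary values with both neighbours */
        MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                     &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[LOCAL_N],     1, MPI_DOUBLE, right, 1,
                     &u[0],           1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* only now can each rank apply an in-place smoothing pass that
           touches its neighbours' boundary data */
        for (int i = 1; i <= LOCAL_N; i++)
            u[i] = 0.5 * (u[i - 1] + u[i + 1]);

        free(u);
        MPI_Finalize();
        return 0;
    }

Every exchange like this has to be written, tuned and debugged by hand, and each one costs a trip across the network.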

Such leaps in the volume and complexity of data, coupled with more processors and larger memory requirements, mean that commodity systems cannot satisfy all the demands of scientists and engineers. Companies like SGI are bringing their expertise to bear to develop new approaches that help researchers use computing efficiently.

For us, this means designing a highly scalable system with better connectivity between nodes and to I/O. Effectively, this enables us to create a very large shared memory space, accessible to one or more users, by knitting together the capabilities of several nodes under a single instance of the operating system – and it is done with standard x86 processors and off-the-shelf Linux distributions. With this platform, it is simpler for the user to access huge resources through a familiar OS, without the need for complex communication algorithms. The approach also provides significant performance advantages, as it aggregates up to 16 terabytes of memory and makes it instantly accessible.
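As a rough contrast with the cluster sketches above – and not a depiction of SGI's specific platform or API – the shared-memory model can be illustrated in C with generic OpenMP: one large array lives in a single address space and every core reads and writes it directly, with no message passing. The 16GB allocation is an illustrative size chosen for the example.

    /*
     * Minimal sketch of the shared-memory programming model: a single large
     * array visible to every thread, with no explicit communication.
     * This is a generic OpenMP example, not a vendor-specific API.
     */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* one large array in a single shared address space (16GB here) */
        long long n = 2LL * 1024 * 1024 * 1024;        /* 2G doubles */
        double *data = malloc((size_t)n * sizeof(double));
        if (!data) { fprintf(stderr, "allocation failed\n"); return 1; }

        /* every thread writes directly into the same array */
        #pragma omp parallel for
        for (long long i = 0; i < n; i++)
            data[i] = (double)i;

        /* and every thread reads it back, again with no message passing */
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (long long i = 0; i < n; i++)
            sum += data[i];

        printf("threads available: %d, sum: %e\n", omp_get_max_threads(), sum);
        free(data);
        return 0;
    }

The attraction for the scientist is that this is the same familiar programming model used on a workstation, simply scaled up to far more cores and far more memory.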

This new platform means that scientists can now afford enterprise-class computing capability at the price point, and to the industry standards, that they need. They no longer have to make do with commodity products when their methods outstrip the capability of those computing platforms. Scientific computing is no longer a niche discipline, and HPC suppliers are beginning to recognise it as a market that demands dedicated solutions as needs evolve. Shared memory computing is just one example of how we are meeting those needs.


