Storage ambitions

As the price of storage hardware drops and new technologies such as 3D NAND become commoditised, HPC users are readying themselves for the next generation of HPC storage technology. 

While it is not completely clear just how these technologies will be used in large scale file systems the consensus among vendors is that SSD and flash technology will be key to developing large scale storage architectures – particularly as we approach initial targets set out for exascale computing.

The classical techniques for increasing computing performance typically revolve around turning up clock frequencies and increasing use of parallelisation, but this is creating a disparity between storage, memory and compute as huge amounts of data must be fed into increasing numbers of processing elements. 

If left unaddressed, today’s storage systems could become a bottleneck for HPC, with the processing elements of an HPC system left starved of data.

To combat this, storage vendors are moving towards much faster, lower latency storage architectures with many opting to move data as close as possible to processing elements. This helps to reduce the penalty of moving data into and out of processors or accelerators for computation. 

Panasas’ senior software architect, Curtis Anderson, noted that the biggest impact on today’s HPC storage technology was the falling price of SSD technology.

‘We’re close to the point where the vastly increased utility of SSDs will offset their higher price, even in the relative “bulk” storage needed by HPC systems. SSDs have no “locality” the way that HDDs do, so they hit their peak bandwidth at very small I/O sizes’, said Anderson. 

Another point that Anderson noted was that the lower latency of SSDs is an advantage to some storage workloads, but the biggest advantage is reducing time and resources spent ‘managing locality during writes in order to create contiguity for later read operations.’

Anderson stressed that much of the complexity can be removed by replacing storage hardware with SSDs ‘resulting in much simpler and higher performance software stacks.’ 
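
As a rough illustration of Anderson’s point about I/O size, the short Python sketch below times random reads at a range of request sizes; on an SSD the achieved bandwidth flattens out at far smaller requests than it does on a spinning disk. The file path and request sizes are placeholders, and a rigorous measurement would also bypass the operating system’s page cache (for example with O_DIRECT), which this sketch does not attempt.

```python
import os
import random
import time

def read_bandwidth(path, request_size, total_bytes=64 * 1024 * 1024):
    """Read roughly `total_bytes` from `path` in random `request_size` chunks
    and return the achieved bandwidth in MB/s."""
    file_size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        done = 0
        while done < total_bytes:
            offset = random.randrange(0, max(1, file_size - request_size))
            done += len(os.pread(fd, request_size, offset))
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return done / elapsed / (1024 * 1024)

if __name__ == "__main__":
    # 'testfile.bin' is a placeholder: any large file on the device under test will do.
    for size in (4 * 1024, 64 * 1024, 1024 * 1024, 8 * 1024 * 1024):
        mb_s = read_bandwidth("testfile.bin", size)
        print(f"{size // 1024:>6} KiB requests: {mb_s:8.1f} MB/s")
```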

Flash drives future storage performance

‘Generally, things are changing incrementally in storage technology, but it’s certainly interesting how large and cheap flash and SSDs are becoming’, stated OCF’s storage business development manager, Richard Corrigan. While Corrigan did not go as far as saying that SSD and flash would replace current technologies, he did suggest that vendors were increasingly seeing ‘small pools of flash and SSDs within customer environments.’  

‘Technology still isn’t really the limiting factor in HPC storage, rather it’s about how you can meet users’ requirements. It is about whether you have the technical skills to deliver the storage capacity or service/SLA that users require, and so faster disk drives, controllers or software isn’t really going to change that’, stressed Corrigan.

However, Corrigan also noted that development in the storage market ‘is further complicated by object storage being counted as one of the potential ways forward and cloud storage being considered as a tier. I don’t see that change happening yet because of the real cost of moving data between data tiers.’

These costs start with identifying which data set is the right candidate to be moved to a new tier; then there is the performance impact on the primary tier while the data is being moved. ‘The data to be moved has to be read from the storage media and processed by the controllers – which will impact performance for other users’, explained Corrigan.

‘The data also has to be sent over the network to the object storage layer which again may cause congestion for other users. Then there is the cost of maintaining “spare space” on the primary storage to allow data to be promoted from the object layer back to the primary layer; again, there is the cost of waiting for this to happen and the performance cost while it is happening.’

‘The nirvana is having the tools to predict automatically what data to move down and what to promote in advance of the requirement to use it’, concluded Corrigan.
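
As a sketch of the kind of policy Corrigan describes, rather than any vendor’s actual implementation, the Python below selects demotion candidates by last-access age and frees only enough space to restore a ‘spare space’ target on the primary tier. The thresholds and function names are hypothetical, and a real data mover would also throttle the copy itself to limit the impact on other users.

```python
import os
import time

# Hypothetical thresholds: illustrative numbers, not recommendations.
AGE_THRESHOLD_DAYS = 90        # files untouched this long become demotion candidates
SPARE_SPACE_FRACTION = 0.20    # headroom kept free on the primary tier for promotions

def demotion_candidates(primary_root, capacity_bytes, used_bytes):
    """Walk the primary tier and pick cold files to demote to the object tier,
    stopping once enough space would be reclaimed to restore the spare-space target."""
    target_used = capacity_bytes * (1 - SPARE_SPACE_FRACTION)
    bytes_to_free = max(0, used_bytes - target_used)
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400

    cold = []
    for dirpath, _, filenames in os.walk(primary_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_atime < cutoff:
                cold.append((st.st_atime, st.st_size, path))

    # Demote the coldest data first until the spare-space target is met.
    cold.sort()
    selected, freed = [], 0
    for _, size, path in cold:
        if freed >= bytes_to_free:
            break
        selected.append(path)
        freed += size
    return selected
```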

Robert Triendl, senior vice president of global sales, marketing, and field services at DataDirect Networks (DDN), shared this view on the increasing use of SSDs, but also commented on the rising interest in flash storage.

‘We have talked about this for some time but now we really start to see a lot more requirements for flash-based systems’, stated Triendl. ‘This includes an interest in systems that use more of an edge approach.’

Triendl noted that, while this vision had been around for some time, the falling cost of flash storage was allowing users to look towards edge storage. ‘I remember sitting in meetings seven or eight years ago where end users articulated the idea that all you have is a large collective of compute nodes, and your storage environment is just a software layer that takes the devices sitting in specific nodes and makes them available to the overall cluster.’

‘I think we are getting a bit closer to that vision and the availability of high-performance, large capacity flash devices is going to make such approaches more viable’, concluded Triendl.

Sensing change 

Changes in computing technology require long cycles of research and development, and this, combined with the natural purchasing cycle of supercomputers – which can be 3-5 years or more – means that considerable foresight is needed to predict disruptive technologies.

This becomes increasingly difficult as the HPC industry tries to solve numerous challenges on the road to exascale supercomputing, with many projects attempting to overcome significant technological hurdles using new, disruptive technologies for the next era of HPC.

Examples of this can be found in the US DOE Fast Forward-2 project, similar ‘European Exascale Projects’ funded by the European Commission, and China’s efforts to create its own processing technologies and accelerators.

DDN’s Triendl explained that DDN has been involved with proposals and predictions of pricing and technology, as systems scheduled for 2024 are many years into the future in terms of HPC technology development.

DDN's WOS object storage technology

‘If you do a prediction of pricing and costs you actually see that performance storage will go to flash, while capacity storage will remain on drives for quite a long time. There will still be a differential in a couple of years, so this hybrid storage approach, where your fast storage is on flash and your large capacity data sits on a large capacity disk drive, will be around for some time’, stated Triendl.

‘Now you can do both low latency I/O and high bandwidth I/O in the same storage environment. This is not possible with drives. In the case of the low latency small random I/O you want some sort of caching in between the client and the device but in the case of streaming I/O you want the cache to go away because it is going to be too expensive if you stream to cache. In a flash world all of this changes fundamentally,’ concluded Triendl.
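
A minimal sketch of the routing decision Triendl describes might look like the Python below: small random reads are served through a cache, while large streaming reads bypass it entirely. The one-megabyte threshold, the class names and the simple FIFO eviction are illustrative assumptions, not a description of any shipping product.

```python
import os

STREAMING_THRESHOLD = 1024 * 1024  # hypothetical cut-off: 1 MiB and larger is 'streaming'

class FlashBackend:
    """Minimal stand-in for the flash device: reads straight from a file."""
    def __init__(self, path):
        self.fd = os.open(path, os.O_RDONLY)

    def read(self, offset, length):
        return os.pread(self.fd, length, offset)

class ReadCache:
    """Tiny read cache kept in front of the device for small random I/O."""
    def __init__(self, backend, max_entries=1024):
        self.backend = backend
        self.max_entries = max_entries
        self.blocks = {}

    def read(self, offset, length):
        key = (offset, length)
        if key not in self.blocks:
            if len(self.blocks) >= self.max_entries:
                # FIFO eviction keeps the sketch short; a real cache would use LRU or better.
                self.blocks.pop(next(iter(self.blocks)))
            self.blocks[key] = self.backend.read(offset, length)
        return self.blocks[key]

def serve_read(cache, backend, offset, length):
    # Small random reads benefit from the cache; large streaming reads bypass it,
    # because caching data that is read once only costs memory and extra copies.
    if length < STREAMING_THRESHOLD:
        return cache.read(offset, length)
    return backend.read(offset, length)
```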

In the opinion of Panasas’ Anderson, the main focus of storage development lies in specific application areas such as biomedical research, to a lesser extent deep learning (DL), and the continued rise of GPUs.

‘GPUs offer a tremendous increase in FLOPs over CPUs, but as specialised-function SIMD vector co-processors, they bring both limitations and different opportunities as well. Changes in the storage workload are likely to come in order to get the most performance out of GPUs’, Anderson noted.

GPUs have been instrumental in certain application areas of HPC and it has been GPUs that have also been at the centre of a huge rise in the use of machine learning (ML) and artificial intelligence (AI) technologies. Some market researchers are predicting that the AI market will soon dwarf the entire HPC sector and so it makes sense for HPC hardware vendors to explore their potential role in this AI revolution – including those vendors focused on storage.

DDN’s Triendl commented: ‘Our largest capacity single installation is a company that is focused on deep learning for self-driving cars and that is something that has happened in the last twelve months.’

‘I think everyone, even those in more traditional HPC application areas, is looking to tie those application areas into a more machine learning-based approach. We have certainly seen that. I think we will see more of that in the future’, stated Triendl.

‘Machine learning people are running different approaches, in different application areas with different application paradigms that are coming up and changing fast, so it is not always entirely clear what the storage approach for those application areas really needs to look like’, Triendl commented.

While AI and precursor technologies such as DL and ML are interesting prospects there is still some mystery around how this market will develop. It is expected that HPC and AI will share much of the same hardware but there are differences that storage vendors must be aware of in order to maximise performance in AI or ML environments.

‘One thing we have seen is that the larger end users, which are quite new to us, often come a bit more from a cloud background than an HPC background. So, they have a different way of looking at the system – how to manage the system’, said Triendl.

‘That does not really impact the storage architecture all that much, but it does impact the way that you look at managing large systems. It is going to be interesting to see what that will mean for the HPC world; we will find out’, noted Triendl.

Triendl explained that most of the organisations DDN has worked with are using things such as video data for satellite imaging or other large multiple petabyte data sets. ‘We have seen fairly similar requirements from an I/O perspective, but the way that these machine learning customers look at systems management is quite different – a bit closer to a cloud paradigm’.

‘Sometimes you want your metrics to be somewhat virtualised, whereas in HPC everything is bare metal, so there is a bit of that. But this does not really impact us; it impacts the way that the system is deployed and managed. It impacts the interfaces with the cluster’, concluded Triendl.

Removing complexity

One thing that is becoming clear in many aspects of HPC is that throwing more hardware at a problem is not necessarily the correct solution. 

Triendl explained that ‘SSDs are not nearly as standardised’ as one might think. ‘SSDs seem like simple devices, but in reality they have become quite complicated from a performance perspective, for example.’

However, much of this ‘sophistication’ is unwanted from a HPC perspective so companies like DDN will try to remove it, as this reduces unnecessary software layers that can get in the way of raw performance.

DDN's SFA14KX hybrid storage server

‘Now, from a storage systems perspective, these are, in the end, just software layers that get in the way. Even with drives, a storage vendor would never use a cache on a drive. All the drive vendors try to do is make that cache more sophisticated and more powerful, and we just turn it off, because all it does is create a huge nightmare as you need to manage a thousand different caches’, concluded Triendl.

Meeting the exascale challenge

Many of the changes that we see in today’s storage technology are leading us on the inexorable march towards exascale. One example of this is the apex systems to be installed in the US Department of Energy (DOE) laboratories beginning next year. Although they are not exascale systems, they represent years of development and the next step towards exascale computing.

Triendl commented that all of these systems will contain ‘fairly large flash components’ as part of the storage architecture.

However, as Corrigan points out, much of the uptake of new HPC technologies will be decided by the users and their understanding of technology available. ‘Understanding what they [users] want out of storage is the major driving factor to any innovation or change.’ 

‘Where one customer may have one application to run and a handful of users, another may have hundreds of users and applications, and the challenge for any HPC storage system is to be able to meet the demands of all of those users in the fastest and most efficient way possible’, said Corrigan.

Corrigan expanded on this by explaining that the need to push for exascale and what that might actually look like depends on the user and the type of applications they are running. 

‘Is it a research organisation doing one thing, or is it going to be used for national supercomputer services with lots of users and use cases? The challenge is how much of the budget you spend on storage. The challenge is in understanding the workflow and providing a storage solution that will meet this requirement’, said Corrigan.

This may be easy for organisations that have a well-defined workflow which supports incremental improvements towards a specific common goal. ‘It is more of a challenge for organisations that have a “general purpose” HPC solution with many different and often competing requirements. Making storage investments that will improve the solution for everybody will always be a challenge’, said Corrigan.

While Triendl noted that we are moving towards flash-based storage for the performance layer, it will not be the traditional flash array that users may have seen in the past.

‘Today, most – or almost all – flash arrays are still classical dual-controller or multiple-controller systems that use a classical data protection approach.’ He commented that, while DDN uses a form of erasure coding, it is done in a ‘very flexible’ way.

‘Classical block arrays always want to write in certain sizes or increments, so we have come up with an approach where there is no real size unit – we just write into buffers and do the erasure coding on flash data on a come-in/come-out basis’, said Triendl.
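
DDN does not spell out the details of its scheme, so the Python below is only a generic illustration of applying redundancy to buffers of arbitrary size rather than fixed block increments: it pads an incoming buffer, splits it into shards and adds a single XOR parity shard that can rebuild any one lost shard. Production erasure codes (Reed-Solomon and its relatives) tolerate multiple failures, but the buffer-in, parity-out flow is the same idea.

```python
def xor_parity(chunks):
    """Compute one XOR parity chunk over data chunks of equal length."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def encode(buffer, data_shards=4):
    """Split an arbitrary-sized buffer into `data_shards` equal pieces
    (padding the last one) and append a single parity shard."""
    shard_len = -(-len(buffer) // data_shards)  # ceiling division
    padded = buffer.ljust(shard_len * data_shards, b"\0")
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(data_shards)]
    return shards + [xor_parity(shards)]

def recover(shards, missing_index):
    """Rebuild one missing shard from the survivors (single-failure case only)."""
    survivors = [s for i, s in enumerate(shards) if i != missing_index and s is not None]
    return xor_parity(survivors)

# Any incoming buffer size works; there is no fixed block-size unit.
shards = encode(b"incoming write of arbitrary size", data_shards=4)
lost = 2
rebuilt = recover(shards[:lost] + [None] + shards[lost + 1:], lost)
assert rebuilt == shards[lost]
```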

‘I think you will see the industry move towards a software-defined storage world rather than a classical hardware dual-controller setup’, concluded Triendl.



Spectra Logic has announced that it has forged a partnership with ArcaStream, to integrate IBM Spectrum Scale, previously known as General Parallel File System (GPFS), in its Spectra BlackPearl Converged Storage System, delivering a solution that is not locked into a single vendor.

Combining ArcaStream’s certified client with BlackPearl enables GPFS users to store data using object storage on disk, tape, public and/or private cloud, within multi-user workflows. 

This provides an interesting approach for many science users who may want to archive data while still taking advantage of cloud deployment methods, particularly in research centres or universities with many users.

The solution provides scale-out storage and data management with intelligent automatic tiering, data analytics, and secure global collaboration tools. An archive for high performance computing (HPC) environments, this new system aims to eliminate the need for expensive proprietary hardware and hierarchical storage management technology.

IBM Spectrum Scale storage is a high-performance clustered file system, which overcomes complex challenges in the data centre that involve the integration of different technologies from multiple vendors. ArcaStream developed a tailored BlackPearl-based client for its file system, which has now achieved certification.  

‘Customers with highly complex deployments demand low cost, high performance, and ease of use and integration when they implement storage solutions,’ said Matt Starr, CTO at Spectra Logic. ‘BlackPearl is now GPFS enabled, which is a critical requirement for the HPC market. Customers who deploy this joint solution will not need to deploy any other storage tier.’
