
Setting store by reliability

Storing data reliably, and at as high a density as possible, is nowadays every bit as important as processing speed. Tom Wilkie considers the storage options.

With just over a week to go to the start of the International Supercomputing Conference in Leipzig (ISC’14), at which the latest Top500 rankings will be announced, two big announcements from storage manufacturers are a timely reminder that high-performance computing is about more than just processors.

Panasas’ launch this month of ActiveStor 16 with PanFS 6.0 is particularly interesting. Not only is it, as CEO Faye Pairman herself notes, the most important upgrade to the company’s flagship software in five years; the launch also focuses squarely on the issue of reliability in storage.

As developers contemplate what an exascale world would look like, one thing is becoming clear – that there will be so many processors within the machine that failure is to be expected routinely during operation. Alex Ramirez, the leader of the European Mont Blanc Project, is quite explicit that system and application software will have to be fault-tolerant if exascale machines are to be useful.

At the PRACEdays14 meeting held in Barcelona at the end of May, Jean-Francois Lavignon, who leads the European Technology Platform for HPC (ETP4HPC), stressed resilience and reliability as one of the key challenges on the road to exascale. Lavignon also cited the handling of exabytes of data – not just raw processing power – as a critical issue.

But processors are more reliable than hard drives – if only because processors have no mechanical moving parts. In Pairman’s view, traditional RAID approaches no longer deliver an adequate level of storage reliability. The new PanFS 6.0 file system offers RAID 6+ triple-parity protection which, the company claims, can improve reliability 150-fold compared to dual-parity approaches, while minimising drive rebuild times. Other features reinforce this emphasis on reliability.
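
To see why extra parity matters, consider a rough back-of-the-envelope calculation (in Python) of the chance of losing data while a failed drive is being rebuilt. The group size, annual failure rate, and rebuild time below are illustrative assumptions, and this simplified model will not reproduce the vendor’s 150-fold figure exactly; it merely shows how each additional parity drive cuts the risk by orders of magnitude.

    # Back-of-the-envelope comparison of dual- versus triple-parity protection.
    # All figures (group size, annual failure rate, rebuild time) are assumed
    # for illustration and are not Panasas' own reliability model.
    from math import comb

    DRIVES = 12            # drives in one RAID group (assumption)
    AFR = 0.04             # assumed annual failure rate per drive (4 per cent)
    REBUILD_DAYS = 2       # assumed time to rebuild one failed drive

    # Probability that any given surviving drive fails during one rebuild window
    p = AFR * REBUILD_DAYS / 365.0

    def loss_probability(parity_drives):
        """Chance that, with one drive already failed and rebuilding, enough
        further drives fail during the rebuild to exhaust the parity."""
        survivors = DRIVES - 1
        prob = 0.0
        for k in range(parity_drives, survivors + 1):
            prob += comb(survivors, k) * p**k * (1 - p)**(survivors - k)
        return prob

    dual, triple = loss_probability(2), loss_probability(3)
    print(f"dual parity   loss risk per rebuild: {dual:.2e}")
    print(f"triple parity loss risk per rebuild: {triple:.2e}")
    print(f"improvement factor: {dual / triple:,.0f}x")

With these assumed numbers, the extra parity drive cuts the per-rebuild loss risk by roughly three orders of magnitude, which is the intuition behind the company’s claim.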

At about the same time as the Panasas announcement, DataDirect Networks (DDN) launched what it claimed to be the world’s densest storage solution, with a capacity of more than 5 PB in a single rack. Although DDN of course recognizes that ‘performance remains a key storage requirement’, the emphasis of its announcement is directed towards offering ‘solutions that help reduce data centre sprawl, power consumption, administrative overhead, and total cost of ownership.’ So its SS8460 disk drive enclosures can each hold up to 84 6 TB drives in just 4U of rack space, nearly doubling the amount of storage in a single rack compared with what was previously available. In the company’s view, this could cut operating expenditure in half.
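
The arithmetic behind the headline figure is straightforward, assuming a standard 42U rack given over entirely to 4U enclosures (an assumption for illustration; a real deployment would also need space for controllers and networking):

    # Rough arithmetic behind the '>5 PB per rack' claim. The 42U rack height
    # and the assumption that the whole rack is filled with enclosures are
    # illustrative; real racks also house controllers and switches.
    DRIVES_PER_ENCLOSURE = 84
    DRIVE_CAPACITY_TB = 6
    ENCLOSURE_HEIGHT_U = 4
    RACK_HEIGHT_U = 42

    enclosures = RACK_HEIGHT_U // ENCLOSURE_HEIGHT_U        # 10 enclosures
    raw_tb = enclosures * DRIVES_PER_ENCLOSURE * DRIVE_CAPACITY_TB
    print(f"{enclosures} enclosures per rack -> {raw_tb} TB "
          f"(about {raw_tb / 1000:.2f} PB raw, before RAID overhead)")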

With the growth of big data, the demands being put on storage hardware and software are increasing. Indeed, it is arguable that storage is under more pressure than computation itself. Some of this comes from science and engineering – in research ranging from the life sciences to the European Particle Physics Laboratory, CERN, just outside Geneva. But a lot of demand for storage of data – and its quasi-instant retrieval – is coming from commerce and business, with the imperative to store data about customers and their shopping patterns and then process that data to improve a company’s products, delivery, and sensitivity to likely future demands.

It is no surprise therefore that the centre of gravity of Panasas’ business is moving from HPC to broader markets and a more diverse base of commercial customers. The step is not too big, though, for according to Pairman: ‘Even in enterprise applications, it’s technical workflows.’ However, enterprise clients have much higher expectations of reliability than traditional HPC users, she said.

So for Pairman the worry is that, without a conceptual change in the way that data is stored, large deployments will simply exacerbate existing vulnerabilities in traditional data preservation and protection schemes. Quoting figures from the market research company Gartner, she expects that increasing processor power (as mentioned earlier in this article) and increasing volumes of data from sensors – as ‘the internet of things’ moves from marketing hype to reality – will lead to an 800 per cent growth in data over the next five years, with 80 per cent of it being unstructured.

With this growth of unstructured data, she said: ‘The time for scale out is now. Storage for exascale requires a different architecture.’ Reliability worsens with scale – ten times as many drives means ten times as many potential drive failures.
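
A quick illustration of that linear scaling, again using an assumed per-drive annual failure rate of 4 per cent:

    # Expected drive failures per year scale linearly with the drive count.
    # The 4 per cent annual failure rate is an illustrative assumption.
    AFR = 0.04
    for drives in (1_000, 10_000, 100_000):
        print(f"{drives:>7} drives -> ~{drives * AFR:.0f} expected failures per year")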

According to Geoffrey Noer, senior director of product marketing, the new ActiveStor 16 system is designed ‘to make sure data storage has resiliency at very large scale’. The company believes that its per-file distributed RAID, based on erasure codes, will allow reliability and availability to increase as capacity grows, rather than decreasing at scale as traditional RAID approaches do. It also has extended file system availability, which keeps the file system online even in the event of ‘one too many’ drive failures – events that normally take storage products offline completely. The system has improved disaster recovery, because it allows administrators to restore a small, specific list of files quickly rather than having to restore the entire file system in the event of catastrophic failure. Another interesting feature is the doubling of the capacity of on-system solid state drive (SSD) resources. As Noer pointed out, this is used not just as a cache but also to improve metadata and small-file performance.
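
To make the idea of per-file, erasure-coded protection concrete, here is a toy sketch in Python. It stripes a single file across four notional drives plus one XOR parity shard and then rebuilds a lost shard. The real PanFS 6.0 erasure code is far more sophisticated (triple parity, distributed across the whole system), so this is illustrative only.

    # Conceptual sketch of per-file striping with parity. This is a toy
    # single-parity (XOR) scheme, not the actual PanFS 6.0 erasure code.
    from functools import reduce

    def split_into_shards(data, k):
        """Split file data into k equal-length data shards (zero-padded)."""
        shard_len = -(-len(data) // k)                # ceiling division
        data = data.ljust(k * shard_len, b"\0")
        return [data[i * shard_len:(i + 1) * shard_len] for i in range(k)]

    def xor_parity(shards):
        """Compute a parity shard as the byte-wise XOR of the given shards."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shards))

    def recover(shards, parity):
        """Rebuild a single missing shard (None) from the survivors and parity."""
        missing = shards.index(None)
        survivors = [s for s in shards if s is not None] + [parity]
        shards[missing] = xor_parity(survivors)
        return shards

    # Stripe one file across four "drives" plus a parity drive, then lose drive 2.
    file_data = b"per-file distributed RAID demo"
    shards = split_into_shards(file_data, 4)
    parity = xor_parity(shards)
    shards[2] = None                                  # simulate a drive failure
    restored = b"".join(recover(shards, parity)).rstrip(b"\0")
    assert restored == file_data
    print("recovered:", restored.decode())

Because protection is computed per file rather than per drive, a failed drive touches only the files that have a shard on it, which helps explain why recovery can focus on a short list of files rather than a whole volume.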

Pairman’s concern for the requirements of enterprise users is shared by Molly Rector, chief marketing officer at DDN: ‘In a world where data growth is outpacing the power and space needed to store and manage it, customers are relying on the vendors to minimise data centre footprint and power utilisation,’ so the company’s storage enclosures are ‘designed to meet customer requirements for massive density, performance, and value.’

Demands for more storage at higher reliability and greater density, in a smaller volume and using less electricity, call for creativity from the architects of data storage. There is a lot more in store.
