
The software-defined future

With enterprise users now picking up HPC technology, and storage requirements constantly increasing, Excelero's vice president of products, Josh Goldenhar, believes that the future of storage technology lies in software-defined architecture.

In the most basic sense, 'software-defined' means pooling resources and managing them with software; in the context of server storage, this allows users to deploy non-proprietary hardware – which can be much cheaper than proprietary solutions.

The use of non-proprietary hardware also gives system designers more freedom to select the hardware they want, as they are not locked into a single vendor.

‘The large web-scale giants have really woken up everybody else to say let’s go software-defined everything if we can – that’s the Holy Grail,’ said Goldenhar.

'Let's deploy racks of systems. We would love to get to completely homogenised servers but, if we can't, let's at least put in the same kind of thing – this defined entity that we stamp out and deploy over and over again. We then use software, intelligent software, to have this completely software-defined,' Goldenhar added.

This allows users to opt for much cheaper standardised hardware, as additional features are provided by the software layer rather than the hardware itself. 'Pick your buzzword, but let's put software on top of standard/commodity hardware. It's all the same thing,' said Goldenhar.

This approach to storage allows users to reduce costs by making the acquisition of hardware much simpler – even down to the switches used. 'Let's take white-box switches: Amazon is writing its own OS for its switches, but you could also use ICOS from Broadcom, or Cumulus. There are choices out there. Not everyone can go to that extreme, but there is some measure that they can go to in deploying the software-defined datacentre,' Goldenhar commented.

Goldenhar explained that this approach is not applicable to everyone – particularly small and medium businesses that don't have thousands of developers. This is where Excelero comes in, as it offers software that can be used in conjunction with any NVMe drive.

Giving software to the masses

The idea behind the company's software is to take the concept of software-defined storage and make it available to smaller companies, or to large enterprises that lack the application-development capacity of web-scale giants such as Facebook, Google or Amazon.

'We take all the concepts that we talked about before, that the big eight introduced, to smaller folks who do not have that army of application developers, so they can get these economies of scale – use NVMe better than anyone else, use all of the available IOPS and throughput as well as the capacity,' said Goldenhar.

‘Facebook, you might argue, is a single application – it is made up of a lot of smaller components for sure, but it was developed as this massive application to do one thing and they could customise each section of it to exactly the hardware that they are using,’ added Goldenhar.

‘If you have legacy applications you can’t just snap your fingers and move that over to a shared nothing architecture,’ Goldenhar concluded.

Excelero provides a software layer with a block-storage back end that is fully distributed across the system, using modern scale-out techniques with no centralised bottleneck. The result is a block-based storage system with a single global namespace that makes data access easy for users.
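By way of illustration, one way a fully distributed block layer can avoid a centralised bottleneck is to let every client compute block placement itself, for instance by hashing each block address across the available storage targets. The Python sketch below shows the general idea; the target names, block size and hashing scheme are assumptions for illustration, not Excelero's implementation:

```python
import hashlib

# Hypothetical pool of NVMe storage targets (names are illustrative only).
TARGETS = ["nvme-host-a", "nvme-host-b", "nvme-host-c", "nvme-host-d"]
BLOCK_SIZE = 4096  # bytes per logical block (assumed)

def place_block(logical_block: int) -> str:
    """Deterministically map a logical block to a storage target.

    Every client computes the same answer locally, so no central
    metadata server sits in the data path.
    """
    digest = hashlib.sha256(logical_block.to_bytes(8, "little")).digest()
    return TARGETS[int.from_bytes(digest[:4], "little") % len(TARGETS)]

# Blocks spread across targets without any coordinator involved.
for blk in range(4):
    print(blk, "->", place_block(blk))
```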

Goldenhar noted that customers often ask 'why did you create a block device?' – or, more accurately, 'do you do file and object as well?' – and the answer is 'no, we do block!'

‘We picked one thing and we want to do that better than anyone else and the reason is because block is at the heart of everything else. You can put a file system on block, you can put object storage on top of block through various different measures – it is really the central common denominator,’ explained Goldenhar.

Building blocks

While some HPC users may be unfamiliar with block storage, Goldenhar explained that the technology is used in many of the better-known parallel file systems seen in HPC today, such as Lustre or GPFS.

'At the end of the day users don't want block storage; they want a single namespace. The end scientific user doesn't care what is underneath – a lot of times they don't even know,' commented Goldenhar.

He explained that most users interact with a particular application, library or method. They want to submit a job and have it work; the underlying technology is unimportant – as long as it works.

'Block devices are things that look like a device in a machine (disk drives, flash devices). GPFS, for example, counts on block devices, in general, to be like a Storage Area Network (SAN); it counts on the fact that underneath the file system it sees what looks like a disk drive device,' said Goldenhar.

A logical block device looks just like a physical one, but with a key difference: a physical device is available only to the host in which it is installed, whereas a logical block device can be attached to multiple hosts (often through a Fibre Channel array).
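To make that concrete, the sketch below reads a raw block device the way a file system might: it simply opens the device, seeks to an offset and reads a block. This is a minimal illustration assuming a Linux device path such as /dev/nvme0n1 (hypothetical; it requires a real device and root privileges), not Excelero's code:

```python
import os

DEVICE = "/dev/nvme0n1"  # assumed device path; any block device would do
BLOCK_SIZE = 4096        # bytes per block (assumed)

# A file system such as GPFS relies on exactly this behaviour:
# underneath it, something that looks like a disk accepts seeks
# and reads at arbitrary block offsets.
fd = os.open(DEVICE, os.O_RDONLY)
try:
    os.lseek(fd, 2 * BLOCK_SIZE, os.SEEK_SET)  # jump to the third block
    data = os.read(fd, BLOCK_SIZE)             # read one block
    print(f"read {len(data)} bytes from block 2 of {DEVICE}")
finally:
    os.close(fd)
```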

The same concept is employed in parallel file systems such as GPFS, where several hosts share logical block devices. If one host dies, another can carry on updating the data, because it sees the same logical block device. Many logical block devices, shared across any number of hosts, make up a typical parallel file system of the kind attached to most HPC systems today.

Goldenhar noted that, just as with a car, most supercomputer users want results rather than a view of the system's inner workings. 'As an HPC user you are the driver and you just want to drive the car. You use the tools available to you, such as the steering wheel and the gas pedal, but you just use it.'

The idea behind this technology is not only to simplify data access for users, but also to create a distributed, high-performance storage system that can work with a completely standardised rack of servers with NVMe drives. 'We are trying to make this logical block device that performs as well as or better than proprietary hardware all-flash arrays. If you have that and you have RDMA – Mellanox InfiniBand or even 100G Ethernet – then you can just sprinkle our software into the mix and you get a virtual all-flash array that can be used underneath Lustre, GPFS, BeeGFS – take your pick,' stated Goldenhar.

'We take this pool of NVMe drives, sprinkled throughout the cluster or densely packed in a top-of-rack server, add our software and make those drives accessible not just as individual disk drives – you can make them appear to be one very big and fast drive,' commented Goldenhar.
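One generic way to make a pool of drives appear as a single large, fast device is to stripe the logical address space across them, so that successive stripe units land on different drives. The sketch below shows that mapping under assumed parameters (stripe size, drive count); it illustrates the general technique, not Excelero's actual algorithm:

```python
STRIPE_BLOCKS = 8  # blocks per stripe unit (assumed)
NUM_DRIVES = 24    # NVMe drives pooled across the rack (assumed)

def stripe(logical_block: int) -> tuple[int, int]:
    """Map a block on the 'one big drive' to (drive index, local block)."""
    stripe_unit, offset = divmod(logical_block, STRIPE_BLOCKS)
    drive = stripe_unit % NUM_DRIVES
    local_unit = stripe_unit // NUM_DRIVES
    return drive, local_unit * STRIPE_BLOCKS + offset

# Successive stripe units land on different drives, so a large
# sequential read draws bandwidth from many drives at once.
for blk in (0, 8, 16, 24):
    print(blk, "->", stripe(blk))
```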

The future of storage

However, there is more to this technology than creating a single namespace: the system is also physically disaggregated, which can be used to provide redundancy. 'We let you use and pool these resources, but retain those performance characteristics. We get full performance for bandwidth and less than one per cent degradation of latency performance – we only add about five microseconds of overhead to what a drive can do locally. You can basically access NVMe at the same speed as at the host, but it's not just speed, because you can add redundancy,' added Goldenhar.

For Goldenhar and his colleagues at Excelero, the future lies in software-defined computing architecture. The benefits for storage are clear: performance and redundancy on cheap commodity hardware, all managed by a single software layer to create a physically disaggregated storage architecture.

‘I don’t think there is anyone arguing against moving towards software and away from proprietary hardware arrays. That was part of our strategy to focus on the emergent market rather than to steal some of the shrinking market from the big traditional players,’ concluded Goldenhar. 


