PRESS RELEASE

Cray CS300 Hadoop solution

Cray has announced the launch of a new Hadoop solution which, it says, will allow customers to apply supercomputing technologies and an enterprise-strength approach to 'big data' analytics to high-value Hadoop applications.

Available later this month, Cray cluster supercomputers for Hadoop will pair Cray CS300 systems with the Intel Distribution for Apache Hadoop (Intel Distribution) software.

Built on optimised configurations of the Cray CS300 systems, the Cray cluster supercomputers for Hadoop are designed for turnkey performance, reliability and maintainability. The solution will include a Linux operating system, workload management software, the Cray Advanced Cluster Engine management software, and the Intel Distribution, which provides enhanced security, improved real-time data handling, and improved performance throughout the storage hierarchy.

'More and more organisations are expanding their usage of Hadoop software beyond just basic storage and reporting. But while they’re developing increasingly complex algorithms and becoming more dependent on getting value out of Hadoop systems, they are also pushing the limits of their architectures,' said Bill Blake, senior vice president and CTO of Cray.

'We are combining the supercomputing technologies of the Cray CS300 series with the performance and security of the Intel Distribution to provide customers with a turnkey, reliable Hadoop solution that is purpose-built for high-value Hadoop environments. Organisations can now focus on scaling their use of platform-independent Hadoop software, while gaining the benefits of important underlying architectural advantages from Cray and Intel.'
