PRESS RELEASE

MaxCloud

Maxeler Technologies has launched MaxCloud, a cloud implementation of a high-performance dataflow computing system.

MaxCloud offers businesses a scalable, on-demand, high-availability off-site resource that eliminates upfront hardware costs and deployment time while delivering the high performance of dataflow computing.

MaxCloud provides a pool of accelerated compute nodes, each running an industry-standard Linux distribution and combining multi-core CPUs with multiple Maxeler dataflow engines. Each compute node typically provides the performance of 20-50 standard x86 servers.

MaxCloud comprises multiple compute nodes, each with 12 Intel Xeon CPU cores, up to 192GB of RAM for the CPUs and four MAX3 dataflow engines. Each MAX3 engine uses Xilinx Virtex-6 FPGAs directly connected to up to 48GB of DDR3 DRAM, giving a combined total of up to 384GB of memory in a single 1U server (192GB of CPU RAM plus 4 x 48GB of dataflow engine memory). Dataflow engines within the same compute node are directly connected via PCI Express and also via Maxeler's high-bandwidth MaxRing interconnect.

Maxeler provides a complete service to migrate applications to the MaxCloud and a comprehensive suite of software tools to develop, accelerate and maintain applications for Maxeler systems, whether deployed in the cloud or on-site.
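
Maxeler's development tools centre on its MaxCompiler toolchain, in which dataflow kernels are expressed in MaxJ, a Java dialect that adds operator overloading on stream variables. As an illustrative sketch only (the class name, stream names and package paths below are assumptions in the style of the MaxCompiler 2.x API, not taken from this release), a simple three-point moving-average kernel might look like this:

    // Illustrative MaxJ kernel sketch (assumed MaxCompiler 2.x-style API).
    // MaxJ arithmetic on DFEVar values builds a dataflow graph that is
    // compiled to the FPGA fabric; it does not execute on the CPU.
    import com.maxeler.maxcompiler.v2.kernelcompiler.Kernel;
    import com.maxeler.maxcompiler.v2.kernelcompiler.KernelParameters;
    import com.maxeler.maxcompiler.v2.kernelcompiler.types.base.DFEVar;

    class MovingAverageKernel extends Kernel {

        MovingAverageKernel(KernelParameters parameters) {
            super(parameters);

            // Input stream of 32-bit floats (8-bit exponent, 24-bit mantissa).
            DFEVar x = io.input("x", dfeFloat(8, 24));

            // stream.offset reads neighbouring stream elements, turning a
            // sliding window into pure dataflow: one result per clock cycle
            // once the pipeline has filled.
            DFEVar prev = stream.offset(x, -1);
            DFEVar next = stream.offset(x, 1);

            // Three-point moving average, computed in the FPGA fabric.
            DFEVar result = (prev + x + next) / 3;

            io.output("y", result, dfeFloat(8, 24));
        }
    }

Because the same tools target both deployment models, a kernel like this would run unchanged whether the dataflow engines sit in MaxCloud or in an on-site Maxeler system.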

Maxeler systems are currently deployed by investment banks to speed up pricing and risk calculations for complex portfolios. The company's systems are also used to optimise reservoir modelling and seismic imaging programs in oil and gas exploration. Other application areas that will benefit from MaxCloud and Maxeler dataflow computing technology include computational fluid dynamics and bioinformatics.
