Maxeler Technologies has launched MaxCloud, a cloud implementation of a high performance dataflow computing system.

MaxCloud offers businesses a scalable, on-demand, high-availability off-site resource that eliminates upfront hardware costs and deployment time while delivering the high performance of dataflow computing.

MaxCloud provides a pool of accelerated compute nodes running an industry-standard Linux distribution, each combining multi-core CPUs with multiple Maxeler dataflow engines. A single compute node typically provides the performance of 20-50 standard x86 servers.

MaxCloud comprises multiple compute nodes, each with 12 Intel Xeon CPU cores, up to 192GB of RAM for the CPUs, and four MAX3 dataflow engines. Each MAX3 engine uses a Xilinx Virtex-6 FPGA directly connected to up to 48GB of DDR3 DRAM, giving a total of up to 384GB of memory in a single 1U server. Dataflow engines within the same compute node are directly connected via PCI Express and also via Maxeler's high-bandwidth MaxRing interconnect.

Maxeler provides a complete service to migrate applications to the MaxCloud and a comprehensive suite of software tools to develop, accelerate and maintain applications for Maxeler systems, whether deployed in the cloud or on-site.
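To illustrate the dataflow model underlying these tools, the sketch below mimics a computation as a fixed graph of operations that data streams through, rather than an instruction-by-instruction CPU loop. This is a conceptual Python illustration only; Maxeler's actual toolchain compiles kernel descriptions to FPGA configurations, and the function names here are hypothetical.

```python
# Conceptual sketch of dataflow computing (illustration only, not
# Maxeler's API): each function below is one node in a dataflow graph,
# and values stream through the graph as they are produced.

def stream_scale(values, factor):
    """One graph node: scales each value as it flows past."""
    for v in values:
        yield v * factor

def stream_add(a_values, b_values):
    """Another node: element-wise sum of two input streams."""
    for a, b in zip(a_values, b_values):
        yield a + b

# Build a small dataflow graph computing out = 2*x + y.
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = list(stream_add(stream_scale(x, 2.0), y))
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

On a real dataflow engine, every node in such a graph operates simultaneously in hardware, so a new result emerges every clock cycle once the pipeline is full.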

Maxeler systems are currently deployed by investment banks to speed up pricing and risk calculations for complex portfolios, and are also used to optimise reservoir modelling and seismic imaging programs in oil and gas exploration. Other application areas expected to benefit from MaxCloud and Maxeler dataflow computing technology include computational fluid dynamics and bioinformatics.
