ANALYSIS & OPINION
New agreement improves HPC simulations

Maple users will now be able to run their models in parallel alongside other applications, across the many nodes of a cluster

For some time engineers using Maplesoft’s Grid Computing Toolbox have been able to use parallel and distributed computing techniques to increase the speed of Maple simulations, but only if no other software is running. These techniques split the processing work into separate jobs, which are then sent to the different processors within the cluster.
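The divide-and-distribute pattern described above can be illustrated with a short sketch. This is plain Python, not Maplesoft's API: it splits one computation into independent jobs and hands them to a pool of worker processes, the same idea as sending jobs to separate processors in a cluster. The function names and the workload are assumptions chosen for illustration.

```python
# Illustrative sketch (not Maplesoft's API): split one computation into
# independent jobs and run them on a pool of worker processes.
from multiprocessing import Pool

def evaluate(segment):
    """One independent job: sum of squares over a segment of the domain."""
    start, stop = segment
    return sum(x * x for x in range(start, stop))

def run_in_parallel(n, jobs=4):
    # Split the domain [0, n) into `jobs` separate chunks,
    # the last chunk absorbing any remainder.
    step = n // jobs
    segments = [(i * step, (i + 1) * step if i < jobs - 1 else n)
                for i in range(jobs)]
    # Dispatch each chunk to a worker, then combine the partial results.
    with Pool(jobs) as pool:
        partials = pool.map(evaluate, segments)
    return sum(partials)

if __name__ == "__main__":
    print(run_in_parallel(1000))  # same answer as the serial computation
```

In a real cluster the role of `Pool` is played by the grid middleware, which must also keep the chunks balanced across nodes so no processor sits idle while another is overloaded.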

This typically requires sophisticated coordination to ensure that the processing runs smoothly and that bottlenecks don’t build up. The Grid Computing Toolbox has always been able to manage this flow of information for models run from within Maple, but until now it couldn’t coordinate Maple jobs with jobs from other software. This excluded other programs from benefiting from high-performance computing while Maple was running.

‘Users may want to run Fortran and C as well as Maple, but the processing must be balanced across the different nodes,’ Laurent Bernardin, chief scientist at Maplesoft, told scientific-computing.com. ‘In the past, you couldn’t guarantee that a node would not be running both C and Maple at the same time.’

A new agreement between Maplesoft and Altair Engineering could solve this. Maple will soon be compatible with Altair’s PBS Gridworks, which can coordinate jobs from both Maple and other applications at the same time. A new version of the Grid Computing Toolbox that includes this support will be on display at the Supercomputing 2007 event in Reno this November.
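In PBS-style schedulers, the kind of coordination the article describes is requested through a batch job script. The sketch below is hypothetical: the `#PBS` directives are standard PBS syntax, but the queue name, resource figures, and the Maple invocation (`model.mpl` as an input file) are assumptions for illustration, not details from the agreement.

```shell
#!/bin/bash
# Hypothetical PBS batch script: ask the scheduler for dedicated
# resources so a Maple job is placed on its own nodes rather than
# sharing them with, say, a concurrent C or Fortran job.
#PBS -N maple_model
#PBS -l nodes=4:ppn=2       # request 4 nodes, 2 processors each
#PBS -l walltime=01:00:00   # one-hour run-time limit
#PBS -q batch               # assumed queue name

cd "$PBS_O_WORKDIR"
# Run the Maple computation in quiet batch mode on the allocated nodes;
# "model.mpl" is a placeholder input file.
maple -q model.mpl
```

The scheduler reads the `#PBS` directives when the script is submitted with `qsub`, and only starts the job once nodes matching the request are free, which is how it prevents two applications from contending for the same processor.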

Not every cluster will run PBS Gridworks, and Bernardin says it’s possible that Maplesoft will investigate similar collaborations in the future to make Maple compatible with other kinds of clusters. ‘We want Maple to be available as a tool for high-performance computing for anyone who wants to use it,’ he says.
