Anybody working in the software industry will be aware of the changes in hardware that are causing us to refactor and, in many cases, rewrite our software. As the number of cores increases and their clock speeds decrease, a great deal of attention has focused on the technologies that we can use to address these developments. Should we stick with tried and well-understood technologies such as OpenMP or MPI, or switch to one of the many new languages that are springing up?
December 2009/January 2010
In the not-so-distant past, mechanical CAD and CAE were the domains of two different groups of engineers, performed with different toolsets. Moving data back and forth between them was a nagging problem, and investigating a change in geometry required the time-consuming process of sending the new geometry to the CAE environment, repairing it once again, reapplying the physics and then re-running the simulation.
The Shanghai Supercomputer Center (SSC) was founded in December 2000. It is the first high-performance computing platform in China open to the general public, and is currently the country’s leading supercomputer centre.
As consumer products go, the modern automobile is subject to more design constraints than most. Automobile designers produce works of aesthetic beauty, capable of slicing efficiently and silently through the wind, as their powerful motors turn the world beneath them. Engineers, on the other hand, must make this creativity practical, packing functionality into the constraints of designers’ outlines.
In the past few years, operators of HPC facilities have become keenly aware of power issues; they often spend as much on removing heat as on powering the servers. While once upon a time power was 15 per cent of the cost of a data centre, today it’s roughly 50 per cent. Although much work is being done on how to better handle and remove heat from facilities, server manufacturers are busy combating the source of the problem – optimising the units that generate the heat in the first place.
‘The problem with energy,’ says the earnest energy department civil servant across the café table from me, ‘is entropy. Actually, three problems. Where the energy comes from is a problem. Where it goes to when we’re done with it is a problem. And the process of using it is a problem. Those three things will always be true. They will always cost us in money, resources and consequences.’ Then she adds, looking nervously about her: ‘But saying so is a shortcut to a short career.’
Gemma Church finds out how astronomers are using simulations to investigate the extremities of our universe
Turning data into scientific insight is not a straightforward matter, writes Sophia Ktori
The Leibniz Supercomputing Centre (LRZ) is driving the development of new energy-efficient practices for HPC, as Robert Roe discovers
William Payne investigates the growing trend of using modular HPC, built on industry standard hardware and software, to support users across a range of both existing and emerging application areas
Robert Roe looks at developments in crash testing simulation – including larger, more intricate simulations, the use of optimisation software, and the development of new methodologies through collaboration between ISVs, commercial companies, and research organisations