
HPC for the desktop

Harnessing high-performance computing power at the desktop might sound like a nightmare to scientists. Imagine moving what usually stays within the confines of a data centre into a controlled, peaceful laboratory: the prospect of a noisy, hot, power-hungry piece of kit disturbing researchers and their sensitive experiments could make HPC seem like a non-starter for the science community.

But as Moore’s law has stalled on the desktop, scientists and researchers are turning to HPC to solve some of today’s most important and complex problems. Simulation is increasingly replacing physical testing to save money and time, and as those simulations grow more complex, technical computing is becoming more prominent for researchers.

Software and hardware vendors face a number of challenges in keeping up with scientists’ demands. While HPC hardware is becoming more powerful and affordable, there is a software gap between the hardware’s capabilities and the benefits that can actually be extracted through programming. Roger Germundsson, director of research and development at Wolfram, says: ‘There is a barrier – the trend we are on with multiple cores will not necessarily scale to hundreds of cores; the scaling problems are everywhere in operating systems, programming languages and the hardware architecture.’

Worlds apart

Today, there are two distinct environments for scientific computing: the desktop and the high-performance computer. While both have much to offer, there is a real drive to produce some common ground where a scientist can use HPC power on a desktop.

One of the ways to achieve this is through parallel processing, which allows a computer to perform many different tasks simultaneously, in sharp contrast to the serial approach employed by conventional desktop computers. While parallel processing on a massive scale, based on interconnecting numerous chips, has been done for years to create supercomputers, its application to desktop systems has been a challenge due to programming complexities.
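
To make the contrast between serial and parallel execution concrete, here is a minimal, hedged Python sketch (not tied to any product discussed in this article): the same set of independent tasks is computed first one after another on a single core, then farmed out to a small pool of worker processes. The work function and problem sizes are illustrative placeholders.

```python
# Minimal sketch: the same workload computed serially and then in parallel.
# The work function and problem sizes are illustrative placeholders.
from multiprocessing import Pool

def simulate(parameter):
    """Stand-in for an expensive, independent simulation step."""
    return sum(i * i for i in range(parameter))

if __name__ == "__main__":
    inputs = [2_000_000 + n for n in range(16)]

    # Serial: one core works through the tasks one after another.
    serial_results = [simulate(p) for p in inputs]

    # Parallel: the same independent tasks are farmed out to four worker
    # processes, much as a multicore desktop or small cluster would do.
    with Pool(processes=4) as pool:
        parallel_results = pool.map(simulate, inputs)

    assert serial_results == parallel_results
```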

One approach is to produce a personal supercomputer that sits under the desk like a conventional PC, but uses parallel processing to boost the machine’s power. There has been some progress on this front, with researchers at the University of Maryland’s A. James Clark School of Engineering recently unveiling a prototype ‘desktop supercomputer’ that uses technology based on parallel processing on a single chip.

The prototype has been developed by Uzi Vishkin and his Clark School colleagues and uses a circuit board about the size of a car’s license plate on which 64 parallel processors have been mounted. ‘The single-chip supercomputer prototype built by Professor Uzi Vishkin’s group uses rich algorithmic theory to address the practical problem of building an easy-to-program multicore computer,’ says Charles E Leiserson, professor of computer science and engineering at MIT.

Tyan Computer Corporation introduced its Typhoon personal supercomputer (PSC) in 2006: a 16-core system consisting of four removable motherboards, each sporting a pair of dual-core Intel Xeon 5100-series LV processors, up to 12GB of DDR2 SDRAM, dual Gigabit NICs and a single SATA device. The company has released subsequent generations of the Typhoon, also known as the TyanPSC T-600 Series, with prices starting at $20,000. One of the machines within this group is the T-650 Series, which uses 40 CPU cores per system, has up to 60GB of RAM and can be plugged into a standard wall socket.

Until personal supercomputers become more widespread, another approach is to use the parallel-processing power already contained within desktop computers, and a variety of vendors cater to scientists looking to increase their HPC power without buying or building a personal supercomputer.

The main players

Microsoft upped its HPC ante in 2006 with the launch of Windows Compute Cluster Server (CCS) 2003, which is based on the Windows Server 2003 operating system (hence the somewhat outdated name). Windows CCS runs on a group of interconnected computers and is designed to allow multiple servers to work together on HPC tasks; its successor, Windows HPC Server 2008, is already in the pipeline. Michael Newberry, product manager of HPC at Microsoft UK, says: ‘Our vision is that HPC power (whether at the desk, or in the back room) be available seamlessly, just like print servers are today.’

Scientists at the NorthWest Institute for BioHealth Informatics (NIBHI) have been using Windows CCS. Based at the University of Manchester, NIBHI’s researchers study the effects of molecular, personal and community factors on common diseases; these studies generate large volumes of genetic data and require HPC power to handle the complex calculations.

NIBHI switched from its Linux systems and developed an HPC environment based on Windows CCS 2003, and Microsoft also helped the team to use a web portal to access their HPC facilities by integrating Microsoft Office SharePoint Server 2007 with the CCS. This gave scientists a central place to present information such as calculations, generate workflows, collaborate, and share information with other researchers. Newberry adds: ‘The essential message is that they [the scientists at NIBHI] have integrated HPC into their everyday systems, so researchers can be researchers, not IT technicians, and use HPC by pressing a button on the desktop.’

But the fact of the matter is that scientists and researchers do have to understand IT more and more, as it becomes increasingly cost-effective for them to simulate experiments rather than physically build and run them. As Wolfram’s Germundsson says: ‘Scientists do very few experiments or prototypes nowadays; instead we simulate, [and] this will also accelerate HPC over the desktop adoption, provided we can automate this for people.’

Automation, and therefore cost, are sticking points for getting more scientists to use HPC power over a desktop, as Germundsson adds: ‘Using HPC, and HPC on a desktop, is getting to be very much more mainstream. What will make it get more mainstream is the price going down [and] there are much higher levels of automation needed to make this cost effective for larger groups of users.’

Wolfram has two products that can help scientists harness HPC: gridMathematica and the Mathematica Personal Grid edition. gridMathematica is based on Wolfram’s flagship product Mathematica, but has additional features that allow a cluster of computers to work in parallel to solve problems in less time than would be possible on a single computer. One front-end communicates with multiple kernels, which perform the computations, and the computers do not even need to run the same operating system, making it easier to hook up a range of machines.

Mathematica Personal Grid edition combines the computational capabilities of Mathematica with the high-level parallel language extensions of Wolfram’s Parallel Computing Toolkit to make personal supercomputing more of a reality, at a user’s own desk and at their own convenience. It features a parallel-processing framework that can take advantage of quad-core machines and also supports 64-bit computing, which means it can address larger amounts of memory on systems with lots of RAM. Other features include built-in universal database links, web services and numerous file format converters, along with hundreds of new numerical and symbolic algorithms.

And the two products complement one another, as Germundsson adds: ‘gridMathematica is the next step up from the Mathematica Personal Grid product. It is giving users access to much larger compute power, but you can transition your work from Mathematica Personal Grid edition directly.’

Another company harnessing the power of the desktop is Tech-X, which released its GPULib software library earlier this year. The library executes vectorised mathematical functions on graphics processing units (GPUs), bringing high-performance numerical operations to everyday desktop computers.
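
As a rough illustration of the idea, the hedged sketch below evaluates a vectorised mathematical expression on the GPU when one is available and falls back to the CPU otherwise. It uses the CuPy and NumPy libraries purely as stand-ins for the concept; it is not GPULib’s own API, the array sizes are arbitrary, and a working CUDA installation is assumed whenever the GPU path is taken.

```python
# Concept sketch only, not GPULib's API: the same vectorised expression is
# evaluated on the graphics card if CuPy (and a CUDA-capable GPU) is present.
import numpy as np

try:
    import cupy as xp          # GPU-backed arrays when CUDA is available
    on_gpu = True
except ImportError:
    xp = np                    # fall back to the CPU so the sketch still runs
    on_gpu = False

# One vectorised call operates on millions of elements at once.
x = xp.linspace(0.0, 10.0, 10_000_000)
y = xp.sin(x) * xp.exp(-0.1 * x)

# Bring the result back to an ordinary NumPy array for further analysis.
result = xp.asnumpy(y) if on_gpu else y
print("computed on", "GPU" if on_gpu else "CPU", "- first value:", result[0])
```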

The product is very much aimed at the scientific market and at bringing down costs, as Peter Messmer, vice president of the Space Applications Group at Tech-X Corporation, says: ‘Until recently, high-performance computing meant a significant investment in large, specialised computer systems. Now, we can perform these operations quickly on GPUs costing just a couple of thousand dollars that may already be part of your desktop computer.’ And letting scientists use HPC power on their everyday machines to analyse complex scientific data is a must, according to Messmer, who adds: ‘It is necessary to offer scientists the mechanisms to process these large data volumes efficiently. However, not all scientists have access to clusters and yet need access to increased processing power. Or they do have access to clusters, but are often travelling and want to perform data analysis on the plane. Or they are annoyed by the queuing systems and unfamiliar compilers, and slow connections to the HPC centre.

‘This is where we decided to concentrate on GPUs. Even laptops have a huge amount of processing power in their graphics cards and scientists should be able to harvest this power for their needs.’

Despite this novel approach by Tech-X, clusters remain one of the fastest-growing segments within HPC, which has led them to evolve in a rather hotchpotch manner. James Reinders, chief product evangelist and director of marketing for Intel’s software development products division, says: ‘Many people don’t realise how non-standard HPC systems have been – including clusters – but we are quickly fixing that so clusters can be as “off the shelf” as a desktop, which doesn’t mean they are all the same, it just means that the differences are abstracted so the software just works regardless of the differences.’

The Stanford University HPC Centre nearly doubled the performance of its existing system by implementing the Intel Cluster Ready Program over a 1,696-core cluster solution. The solution integrates Clustercorp, Dell and Panasas technologies to give the centre the flexibility to meet its ever-expanding computational and application requirements and to enable Stanford researchers to achieve faster time-to-results.

The system supports more than 200 researchers and effectively enables computational fluid dynamics (CFD) on demand.

Researchers can run routine jobs in-house, which lets them do more extensive verification and validation of code using external resources. Desktops remain a popular way to access HPC resources. Reinders says: ‘Desktop is often the desired front-end for the HPC clusters. Powerful desktops with multi-core technologies can offload some work from backend clusters and provide more interactivity and visualisation. Another trend we are seeing is remote visualisation, where the desktop becomes more important. We are likely to see powerful desktops continue to evolve and do more of the backend tasks, thus empowering the engineers.’

Scientists are also more likely to experiment with HPC on their own machines, as they feel more comfortable simulating on their desktops than sitting in a dedicated HPC centre. As Reinders adds: ‘Scientists are definitely more likely to experiment, in a sense “mess around” with half-baked ideas they have on a desktop. It is a place for prototyping before launching large jobs on production machines and so is fertile ground for learning and discovery.’

Launched in 2004, Star-P, the interactive parallel computing platform from Interactive Supercomputing (ISC), offers researchers a way to combine the two environments of desktop computers and high-performance servers into one entity.

Star-P is being used by researchers at the University of Texas at San Antonio (UTSA), who are using ISC’s software to reverse-engineer brain neurons in an effort to build better computers. By understanding how neurons process chemical signals when a person learns and remembers information, the researchers believe they can create more reliable computers. UTSA’s biology department purchased a Star-P licence to link its desktop computers to an eight-processor parallel cluster, with funding support from the Cajal Neuroscience Research Center and the National Institutes of Health. To further accelerate the team’s research, ISC has granted UTSA an additional licence to deploy Star-P on a 120-processor cluster in the near future.

Scientists at Arizona State University (ASU) are using Star-P in quite a novel way to look into improving HPC for scientists. ASU is ditching the idea of simply buying more and bigger kit and is instead focusing on the human side of supercomputing, studying new tools and techniques that help researchers work faster, more easily and more productively. The High Performance Computing Initiative (HPCI) at ASU’s Ira A Fulton School of Engineering is exploring a range of future programming paradigms for HPC systems, comparing them against traditional parallel programming methods. The study is funded under the US Department of Defense (DoD) programme called User Productivity Enhancement and Technology Transfer (PET), a project that aims to gather and deploy the best ideas, algorithms and software tools emerging from the national HPC centres into the DoD user community.

The HPCI is using Star-P in a user-productivity-based study of large-scale computing, investigating how parameters such as user interface, ease of use, interactive discovery and time-to-solution factor into an optimal computing paradigm. The university has deployed Star-P on a 2,000-processor, multicore parallel system and made it available to more than 100 students and faculty members for a variety of complex modelling, simulation and analytical applications.

Star-P enables students and faculty members to build algorithms and models on their desktops using familiar mathematical tools, such as Matlab, Python and R, and then run them instantly and interactively on the parallel system with little to no modification. The HPCI is comparing this approach against traditional parallel methods that use low-level languages such as C with MPI. Star-P eliminates the need to reprogram applications in these low-level languages in order to run on parallel systems; reprogramming in traditional languages can take months for large, complex problems, so Star-P yields dramatic improvements in productivity and makes problem solving an iterative, interactive process.
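
The productivity gap being described can be pictured with a hedged sketch in generic Python (standing in for the high-level workflows mentioned above, not Star-P’s actual interface): the high-level version is a single vectorised expression, while an equivalent hand-parallelised version written against MPI, here via the mpi4py library, needs explicit data decomposition and communication.

```python
# Hedged illustration, not Star-P's API: the same reduction written first as
# one high-level vectorised expression, then hand-parallelised with mpi4py.
# Run the parallel version with something like: mpiexec -n 4 python demo.py
import numpy as np
from mpi4py import MPI

# --- High-level, desktop-style version: a single vectorised expression ----
data = np.random.rand(1_000_000)
mean_of_squares = np.mean(data ** 2)

# --- Hand-coded MPI version: explicit decomposition and communication -----
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank generates (or would read) only its own slice of the data.
local = np.random.rand(1_000_000 // size)
local_sum = np.sum(local ** 2)

# Combine the partial sums on rank 0, then divide by the global count.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("high-level result:", mean_of_squares)
    print("MPI result:", total / (size * local.size))
```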

‘Our mission at the university is to not only provide HPC resources for research, but to also innovate new approaches to high performance computing,’ says Dr Dan Stanzione, director of the High Performance Computing Initiative at the Ira A Fulton School of Engineering. ‘Star-P will help us develop new programming paradigms that remove the complexity and other productivity-hindering roadblocks from our HPC resources, making them available to a wider group of users.’

Connecting computers and using their combined processing power is another way a group of scientists can harness HPC power over their desktops, as James Coomer, HPC architect at Sun Microsystems, says: ‘The element of compute power on the desktop can provide a significant resource, with today’s multi-core chips. Sun Grid Engine can be used to join up the compute power across a department’s desktops to provide a significant resource.’ Sun Grid Engine (SGE) is a free, open-source offering, with additional paid support available from Sun. SGE gathers up a department’s desktop cycles and integrates the grid with the user’s application environment, so time-to-results can be drastically reduced.
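
As a hedged illustration of how work is typically handed to a scheduler such as Sun Grid Engine, the sketch below writes a small batch script and submits it from Python through the standard qsub command. The script contents, job name and workload are hypothetical placeholders that would depend on the local grid configuration.

```python
# Hypothetical example: submit a batch job to a Sun Grid Engine queue by
# calling the standard qsub command. The job script contents, job name and
# the workload it launches are placeholders, not a real site configuration.
import subprocess
from pathlib import Path

job_script = Path("render_frames.sh")
job_script.write_text(
    "#!/bin/sh\n"
    "#$ -N desktop_grid_demo\n"            # job name shown by qstat
    "#$ -cwd\n"                            # run in the submission directory
    "echo \"Running on $(hostname)\"\n"
    "./my_simulation --input params.dat\n"  # placeholder workload
)

# qsub prints a confirmation such as: Your job 12345 ("desktop_grid_demo") has been submitted
result = subprocess.run(
    ["qsub", str(job_script)], capture_output=True, text=True, check=True
)
print(result.stdout.strip())
```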

Researchers could even be using HPC over the desktop without realising it, as Coomer adds: ‘A user can sit at a desk and see his desktop – but this might be served from a remote resource, for example using Sun Global Desktop. The desktop server might be a computer sited within a large data centre – even a computer that’s part of a large, dynamic high performance computing centre.’

And this is one of the main goals across the companies trying to get scientists to use HPC power over their desktop – it needs to be a seamless process where researchers can run simulations without realising what is going on behind their computer screens. Microsoft’s Newberry adds: ‘We can do wonderful things with HPC, but it’s not easy. It [HPC] should be as easy to run as it is to buy a PC to run the internet.’

But things certainly seem to be pointing in the right direction for getting scientists to seamlessly use HPC over their desktops, whether through a dedicated personal supercomputer, accessing an HPC resource through a desktop computer or using parallel processing on a conventional desktop or connected group of desktops.

The programming barriers are being lowered as more scientists and researchers experience parallel computing for the first time; the software gap will begin to close as the time to develop custom HPC codes falls from months or years to weeks or days; and the high-performance computer will, one day, be used by scientists just as interactively as our desktop computers are today.


