Clusters bring change

Silicon Mechanics’ David Manning discusses how high-performance computing clusters are enabling next-generation academic research

It’s a fact of academic life that research ideas are abundant and computing resources are always too scarce. One academic institution is now poised to step up its research leadership with the assistance of a complete high-performance computing cluster, awarded as part of a highly competitive research grant programme. The cluster will help its researchers handle the huge amounts of data they collect as they tackle issues as varied as climate change in the sensitive Andes region, molecular structures that may some day play a role in a cure for cancer, and disease surveillance in cities. Not to mention helping scholars compare scribes’ handwriting in medieval manuscripts!

The coveted cluster was awarded to Saint Louis University (SLU), of St. Louis, Missouri, based on a proposal that takes the concept of interdisciplinary computer resource sharing to new heights. Nine different departments will share the high-performance computing cluster, which includes hardware donated by Kingston Technology, AMD, Nvidia, QLogic, Supermicro and Seagate. Software supplied by Bright Computing uses image-based provisioning, which is important for this type of shared solution, since different research applications have varying configuration requirements to run optimally.

The computer cluster was developed by Silicon Mechanics and originally used in 2011 by Boston University to compete in the SC11 Student Cluster Competition. Silicon Mechanics later sponsored a grant programme in which educational institutions competed for the cluster. SLU was awarded the grant from a pool of 190 applicants, made up of US and Canadian universities and research institutions. According to Art Mann, education, research and government account manager at Silicon Mechanics, who championed the grant programme: ‘We had a great variety of proposals, but Saint Louis University’s stood apart, benefitting an incredibly wide range of academic pursuits.’

SLU’s strategy in preparing their application was summed up by Keith Hacke, SLU’s interim CIO and vice president of Information Technology Services: ‘We knew that this type of hardware is not used by just one division, and a single department can’t keep that much computing busy all the time. Different departments are using it at different times in the year; there are many varied workload schedules for research. So this gives us the ability to add a lot of computational power across multiple departments.’

In addition, the cluster will help SLU reach its overarching, long-term goals for research. ‘One of our university goals is to be a Top-50 research institution, and having these kinds of computer systems is mandatory for a move like that,’ he commented. The new hardware will also be used to help update Saint Louis University’s current HPC clusters.

The breadth of research data that the cluster will handle is awe-inspiring. For example, Dr Gerardo Camilo, of the Biology Department, explained that they will be using the high-performance cluster to further research on climate change, based on modelling of high-elevation plant community ecology in the Andes Mountains. ‘High-performance computing will ultimately be used to validate the many projections made by the Intergovernmental Panel on Climate Change (IPCC) on what is likely to happen to this important area as a function of specific scenarios, or “story lines”.’

SLU researchers conducted exhaustive research on plants in the Andes, gathered by viewing and documenting collections in South American universities, as well as trekking through the mountainous region. Using all of the data collected, combined with long-term data from world climate observatories, researchers developed a climate model showing plant distribution ranges and later developed real-world models of the local situation.

The HPC cluster will be used to compare 100 different IPCC scenarios, including alternatives looking at the years 2020, 2040 and 2060 – about 2.7 terabytes of raw data! Future work will focus on additional global situational models based on the IPCC’s expected upcoming projections of a minimum four-degree rise in world temperatures.
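Workloads of this kind are well suited to a cluster because each scenario run is independent of the others. As a rough illustration only, the sketch below uses a Python process pool to stand in for cluster nodes; the model function and its output are invented placeholders, not the SLU group’s actual code.

```python
# Hypothetical sketch: comparing many independent climate scenarios is an
# "embarrassingly parallel" job, so runs can be farmed out concurrently.
# A multiprocessing pool stands in here for the cluster's compute nodes.
from multiprocessing import Pool


def run_scenario(scenario_id: int) -> tuple[int, float]:
    """Stand-in for one model run: returns a placeholder projected
    plant-range shift (in km) for the given scenario."""
    # A real run would read gridded climate data and fit a
    # species-distribution model; here we just compute a dummy value.
    projected_shift_km = 0.5 * scenario_id
    return scenario_id, projected_shift_km


if __name__ == "__main__":
    scenarios = range(100)  # the article mentions 100 IPCC scenarios
    with Pool() as pool:
        results = dict(pool.map(run_scenario, scenarios))
    print(f"completed {len(results)} scenario runs")
```

Because each run touches its own slice of the data, adding nodes shortens the wall-clock time almost linearly – which is exactly why a shared cluster helps here.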

The Chemistry Department is using the HPC cluster for National Institutes of Health-funded research to probe RNA stability and DNA-drug binding. The investigations include basic research that may advance cancer therapeutics using small interfering RNA (siRNA) to silence genes by binding to messenger RNA and inhibiting its ability to produce certain proteins. The new cluster will enable researchers to get their data more quickly than before, and to address more research questions. They also hope to use the cluster to study carbon nanotube solubility if they are successful in receiving funding from the National Science Foundation and the Department of Energy.

The HPC cluster is not just a tool for the hard sciences; it is also being used in the social sciences at SLU. One fascinating project that will make use of the HPC cluster is jointly sponsored by the Sociology & Criminal Justice, and Environmental & Occupational Health departments. The project, known as Mapping Risk and Resilience in St. Louis (MaRRS), will look into the spatial aspects of social and economic phenomena.

‘This project is an exciting collaboration that is going to develop a model to help the City of St. Louis come up with a decision matrix on where to focus resources and how to use the laboratory to incorporate surveillance of diseases in the future,’ said Dr Kee-Hean Ong, of the Environmental & Occupational Health department.

Analysing large data sets is difficult with traditional resources. By enabling large-scale parallel processing, the high-performance cluster will make data analysis quicker and more efficient. Perhaps as important, SLU researchers expect to improve the quality of their social science research by improving research methodology, as social sciences have typically not had much direct access to HPC clusters.

A final example comes from the Center for Digital Theology, which is using the HPC cluster to help process large sets of digital images of pre-modern, hand-written, and unpublished manuscripts to support existing research in the field of paleography.

The researchers are developing tools to help scholars analyse medieval European manuscripts, usually those with religious or theological significance, written between AD 500 and 1500. Roughly one year ago, they developed a tool called T‑PEN (Transcription for Paleographical and Editorial Notation), a web-based tool for working with images of manuscripts, which allows transcriptions to be created, manipulated and viewed in many ways. Scholars can now combine the benefits of digitisation with the expert scholarly eye needed to review and compare the aspects of a manuscript they are interested in, such as handwriting, script or page layout.

Since scholars are looking at thousands of manuscripts, each of which could run from 200 to 600 pages, the data sets quickly become enormous. The HPC cluster will let the CDT expand the tool in several possible ways, including pre-processing for manuscript repositories now seeking to digitise their collections. This would allow repositories to provide immediate access to their manuscripts without the conservation risk that comes from handling the physical documents.

The success of this first campaign, for both Saint Louis University and Silicon Mechanics, has inspired the start of a second annual research grant. Silicon Mechanics’ Art Mann commented: ‘Seeing how positively this grant has affected the university, Silicon Mechanics will be awarding another HPC cluster grant in 2012. Research driven by today’s educational institutions can change the world, and we want to be part of that change.’

