Taking on the student cluster challenge

Dan Olds takes a look behind the scenes at the ISC’13 Student Cluster Challenge

A special computer industry event took place at the 2013 International Supercomputing Conference in Leipzig: the second annual ISC Student Cluster Challenge (SCC), a friendly competition in which eight teams of undergraduate students, representing universities from around the globe, test their supercomputing prowess in a live face-off.

The ISC’13 event is one of three major student cluster competitions taking place in 2013. The first annual Asian Student Supercomputer Challenge recently concluded with the Shanghai final competition in April, and the US-based SC’13 Student Cluster Competition will take place in Denver this November.

What makes these competitions unique is that they give students the chance to demonstrate that they know their way around system design and software, and that they know how to tune a system to run HPC applications.

These competitions are deceptively simple on the surface. Students design and build clustered mini-supercomputers and race against each other to see which system can run a set of scientific workloads and benchmarks the fastest. The only limit on their design is a hard power cap of 3,000 watts.
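To get a feel for how constraining that power cap is, here is a back-of-envelope sketch. All of the per-component wattages and the node make-up below are illustrative assumptions, not figures from the competition rules:

```python
# Back-of-envelope power budget for a hypothetical student cluster.
# Every wattage here is an assumed, illustrative value.
POWER_CAP_W = 3000  # the hard limit imposed by the competition

node = {
    "cpus":   2 * 95,   # two CPUs at an assumed 95 W TDP each
    "gpus":   1 * 225,  # one accelerator card at an assumed 225 W
    "memory": 8 * 5,    # eight DIMMs at roughly 5 W each (assumed)
    "other":  60,       # motherboard, disks, fans, NIC (assumed)
}

per_node_w = sum(node.values())
max_nodes = POWER_CAP_W // per_node_w

print(f"Estimated draw per node: {per_node_w} W")
print(f"Nodes that fit under the {POWER_CAP_W} W cap: {max_nodes}")
```

Under these assumptions a team could field only five such nodes, which is why decisions about accelerators and memory sizing are so consequential.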

The first step in the process is to provide event organisers with a detailed project proposal. This needs to cover the make-up of the team, the hardware/software they’re going to use (which is typically supplied by a vendor sponsor), and how they are going to approach the various benchmarks and scientific workloads they will be required to run in the competition. But even the best proposal doesn’t guarantee acceptance into the event. The major competitions have had to limit their fields to eight or nine teams. However, putting together a solid proposal goes a long way towards getting your school considered for a berth.

Assuming the school gets into a competition, the next step is to line up sponsors and design the system. Students have many decisions to make. Do they want to build a conventional system with CPUs only, or mix in accelerators like GPUs, FPGAs, or Intel Xeon Phi coprocessors? How much memory per node do they need? What are they going to use as a software stack?

While the students are making these decisions, they also have to keep the applications in mind. Every competition has students run the traditional HPCC and Linpack benchmarks, but they also have them run a series of scientific applications covering areas like molecular dynamics, weather forecasting, and fluid dynamics. Students need to learn the applications and, just as importantly, how to make each one run at maximum efficiency.

Universities have found that these competitions serve as a vehicle to expand their HPC curriculum. Several have added dedicated courses geared towards giving their students exposure to the scientific domains (like atmospheric science or physics) they will see in an upcoming cluster competition. Schools have also added courses on advanced system design and tuning, which gives students deeper understanding of how to optimise a system for different workloads.

During their training time and in the actual events, students have to work effectively as part of a team. The requirements of the competitions are large and complex enough that teams are forced to divide up the tasks to even out the workload. Usually a team member is assigned to each application and is responsible for thoroughly understanding the workload and how to wring the most performance out of it. Others might be in charge of hardware management, operating system optimisation, or other tasks.

The benefits for the students are substantial. For some, these competitions have caused them to change career plans, often leading them to embark on an HPC-related career track. Many students have found internships and full-time jobs as a direct result of their participation. We hear a lot about how there is a worldwide shortage of young, yet highly skilled, workers who can quickly add value to both research and commercial organisations. Employers have found that the students who participate in these competitions are highly motivated self-starters who work well with others and, as if that's not enough, also have technical skills way above the norm.

Just being selected to participate in a student cluster competition is quite an achievement for both the university and the students on the team. However, these are competitive affairs and all of the teams are looking to prove that they are the best.

Student teams are scored on how fast their systems process the various workloads and the accuracy of their computations. But these competitions aren’t only about raw machine performance. The major goal of these competitions is to expose students to HPC as a career and give them an interesting, educational, and fun experience in a friendly competitive setting.

With this in mind, scientific domain experts and HPC practitioners interview each team to ensure that the students truly understand the applications and the science behind them. Scores from these interviews account for a large percentage of the final score for the teams.

The ISC’13 Student Cluster Challenge in Leipzig had a very geographically diverse field, with entrants from every continent except Australia and Antarctica. Participating institutions included:

  • Europe: Chemnitz University of Technology from Germany and the University of Edinburgh from Scotland;
  • North America: Purdue University and the University of Colorado, both from the US;
  • Asia: Tsinghua University and Huazhong University of Science & Technology from China;
  • Africa: a team formed by South Africa's Centre for High Performance Computing, the first team from Africa in any cluster competition; and
  • Central/South America: rounding out the field was the Costa Rica Institute of Technology team, which nicknamed itself the Rainforest Eagles 2.0.

Sponsored by Dell, the team from South Africa’s CHPC claimed victory. Not only was this impressive team the youngest in the competition – composed entirely of undergraduate students – it also had a number of obstacles to overcome, including no access to its configuration until a day after arriving in Leipzig. The configuration in question comprised Dell PowerEdge R320/R720 servers and made use of eight Nvidia K20 accelerator cards. The team’s cluster had eight nodes, each with dual Xeon E5-2660 processors, for a total of 128 CPU cores, and had 512 GB of memory (64 GB per node). It also featured a Mellanox FDR InfiniBand interconnect.
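The published totals for the winning system are easy to verify from its node specs. A quick sanity check, using only figures from the article plus the Xeon E5-2660's eight-core count:

```python
# Sanity-check the published totals for the winning CHPC cluster.
nodes = 8              # eight-node cluster, per the article
sockets_per_node = 2   # dual Xeon E5-2660 per node
cores_per_cpu = 8      # the Xeon E5-2660 is an 8-core part
mem_per_node_gb = 64   # per the article

total_cores = nodes * sockets_per_node * cores_per_cpu
total_mem_gb = nodes * mem_per_node_gb

print(total_cores)   # 128 cores, matching the article
print(total_mem_gb)  # 512 GB, matching the article
```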

On a final note, one of the things that observers always comment on is how much fun the students seem to be having. They are all friendly and personable, even when there’s a language barrier hindering full-bandwidth communications. They’re also eager to learn and will engage show attendees in conversations about HPC, science, and technology. While every participant wants to win, the rivalry is always friendly – as was the case this year. Towards the end of each event, teams mix together and form long-term relationships. And sights are now set firmly on the SC’13 Student Cluster Competition in Denver this November.


Dan Olds is founder and principal analyst at Gabriel Consulting Group, and self-appointed Student Cluster Competition chronicler. Visit for more about these competitions.

