
Responsibilities beyond technology


In the first of our new regular features on HPC directors, Stephen Winter, Dean of Informatics at the University of Westminster, explains how his role is as much about advocacy and marketing as it is about choosing and managing the technology.

I am Dean of Informatics at the University of Westminster and I am head of the HPC research team (which is managed by the university’s Centre for Parallel Computing).

My interests lie in a whole host of information and communication technology areas from grid computing and clustered environments to parallel computing. That said, you might be surprised to learn that my own role is not so much technology management (although this remains an important issue) but more an advocating role, encouraging the adoption of our facility from within neighbouring faculties and also outside organisations looking to advance their research, technical developments, and operations. This includes both academic institutes throughout the UK and private sector enterprises.

At Westminster, and within our HPC research team, we have adopted a philosophy that puts us firmly in the space between the technology and the user – a space of huge importance to the IT profession, and to the users themselves, but one which is under-researched, under-populated and generally under-addressed.

The whole UK e-science research programme over the past five years has been aimed at bringing together communities of scientific researchers in biological science, chemistry, automotive, finance... all the non-IT industries, and encouraging them to harness the many benefits of HPCC and grid computing created by the IT research community.

The fact is, little has been done to address the processes by which engagement actually happens. Encouraging non-IT people to engage with any technology in a sustainable way can be very challenging, even for the experts; to try to communicate and deliver a service to those people all in one hit is a big ask, especially for a research-orientated IT community.

We have positioned ourselves, as a team, to consciously tackle those challenges. Within the university, we are regarded as initiators of HPCC and wider grid computing adoption. My role is to encourage take-up of our HPCC and the National Grid Service facility, not only among our nearest neighbours, but among those non-IT-literate groups outside the university.

Of course, that means we have to find those groups and seek out computer activities they might be undertaking, such as simulation, that are amenable to large-scale performance enhancements, and encourage them to look at grid computing – initially via our grid!

By the time the ‘light has gone on in their heads’, they change from being sceptical to proselytising on our behalf. But it is hard work switching on the light.

This is where Westminster is unique. Most other universities do not take on this role because the funding councils do not make the funds available for this type of activity. Most research funding tends to go to leading technical or scientific research projects rather than into this hugely important knowledge transfer area.

Maintaining use of the facility

Academic institutes need to be more entrepreneurial and approach business in a similar vein to the private sector.

In essence, if you are to encourage adoption of any service – technology driven or not – you must offer certain levels of quality of service (QoS) and availability. It is a mistake to offer something that is ‘here today and gone tomorrow’.

This scenario happens all too frequently in the academic space. Once the funds for a research project have ‘dried up’, universities tend to dismantle an infrastructure and move onto the next project. This is far from ideal for knowledge transfer; it does not create the sustainable level of engagement conducive to successful relationships, academic or private.

We have consciously invested in a production-level quality of service – without it, people become disillusioned with technology very quickly.

Selecting your HPC partner

Likewise, any university that has invested in an HPCC must also demand a QoS from its supplier to ensure the HPCC has 100 per cent uptime for its users. All too frequently we read about IT suppliers leaving public sector customers ‘high and dry’ – think NHS IT!

Be very specific about your requirements. When we tendered our upgrade, we received half a dozen proposals – this was whittled down very quickly to our current providers, IBM and OCF. The latter is responsible for the solution’s entire design, installation and maintenance. HPC by its very nature has high barriers to entry, so do not look at some ‘new kid on the block’; go with experience, QoS agreements and those suppliers that can demonstrate an ability to be flexible.

The HPCC

While our hardware is provided by IBM, we have embraced the open source software community. To provide a production level service – which we are required to do to be part of the UK National Grid Service – would be very difficult on deeply proprietary software.

In addition, one of our primary remits is to encourage the use of the facility by non-tech-savvy individuals – academic and private. As a result, we have developed our own software, which provides an easy-to-use interface that includes graphical computational workflow design and management tools. We have found that the workflow concept is something with which most non-technical users are familiar, and this provides a useful language in which to conduct a computationally-oriented dialogue between ourselves and non-technical clients. Some clients can even design their own workflows using our graphical interface. We are in the process of developing bespoke interfaces for different industry areas – biological, chemical and finance, for example. An easily understood interface helps non-IT experts understand the system and its benefits.

Types of research

Today, the University of Westminster is offering private sector companies – such as banks or insurance companies looking to benefit from the power of HPCC – direct usage time and support on our newly upgraded cluster. Finance houses are quickly discovering that the quality of their decision making is directly proportional to the amount of computing power they have available to them.

Among other projects, researchers from around the UK and beyond are accessing the NGS and drawing on its new additional power to understand more quickly how molecules – such as those involved in cancer or HIV – interact with each other under certain stimuli. The faster researchers can do this, the more possibilities they can evaluate and the quicker potential life-saving treatments can be discovered.

The future

HPCs play an invaluable role in UK scientific research; without them, the time taken to introduce new life-saving drugs, or to make cars safer, would be significantly increased.

However, I do see that the way in which HPC is used will change in the near future. Sustainability is key; purchasing the hardware is merely the first step in the game. Many think that managing a cluster is as straightforward as managing your desktop – it is not. I envisage an increasing number of specialist HPC system service providers, offering HPC as a service.

It is the logical way forward because of all the support you need to wrap around the cluster. This is all too often underestimated – and we do hear of people purchasing clusters, only for them to remain virtually abandoned and unused. That is not sustainable.

About Westminster's HPC

In June of this year the University of Westminster upgraded its previous 32-node high performance compute cluster (HPCC) with a new 96-node HPCC. Built on 82 IBM System X 3455s – 328 cores of AMD Opteron – using Cisco InfiniBand high-speed interconnects, supported with IBM GPFS (General Parallel File System) and IBM DS4200 storage controllers, the cluster is providing the UK research community and private sector companies with significant computing power to complete more complex simulation experiments far more quickly. The University of Westminster is also a partner of the UK National Grid Service (NGS), and its HPCC is providing a significant threefold increase in capacity to this widely used grid computing facility.