
Everybody dreams of faster applications, but …

Whatever the application, whatever the field of science, in both the private and public sectors – it is rare to find a user of modelling/simulation who doesn’t wish their application could run faster, or solve a bigger problem in a feasible timescale.

Solve that stress calculation in hours instead of days. Model a race track at 1cm resolution instead of 10cm. Follow the dynamics of 100,000 molecules instead of 10,000. Simulate the performance of an entire wind farm rather than just a single turbine. Find one face in a million, not just one in a thousand. Test 10,000 formulation candidates rather than merely 100.

Well, dreamers, be happy, for the good news is that this is achievable. There are two ways to get applications running faster or solving bigger problems: either run the application on a more powerful computer, or make the application run more efficiently on the same hardware.

Well, alright, you’ve got me – there might be one or two catches. But, the prize – much better science/engineering throughput and capability – is worth braving a few catches. So let’s explore.

While we’re admitting it might not be so simple, this is a good place to note that some variation of ‘lack of access to knowledge/skills’ or ‘don’t know where to start’ regularly shows up in user surveys and analyst studies as one of the top barriers to undertaking projects to get better application performance. So, as well as discussing the challenges here, I will also present some solutions to that ‘skills barrier’.

Running on a more powerful computer means choosing a more powerful computer. Will you buy a new computer or use someone else’s? What matters when choosing between the two? If buying a new computer, will you navigate the complexities of technology options, vendors, and configuration yourself, or ask an expert to help? If using someone else’s computer, will that be a supercomputer centre or ‘the cloud’? Which centre, or which cloud?

Even once you have access to a more powerful computer, there is no guarantee that your application will simply drop onto it and churn out results at a dramatically better pace. It turns out that the software usually needs to be adapted to extract the promised performance from the new hardware. This is especially true if you are looking at some form of supercomputer or high-performance computing (HPC) system, where many compute nodes are combined to create a more powerful computing platform.
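To make that concrete, here is a minimal sketch of the kind of adaptation involved, written in Python with the mpi4py library: spreading a loop of independent work items across the many processes of an HPC system. The work_item() function is a hypothetical stand-in for one unit of your application’s real computation.

    # Minimal sketch: distributing independent work across MPI processes.
    # Assumes the mpi4py library; work_item() is a hypothetical placeholder.
    from mpi4py import MPI

    def work_item(i):
        # Stand-in for one unit of real computation.
        return i * i

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID (0 .. size-1)
    size = comm.Get_size()   # total processes across all nodes

    n_items = 1_000_000
    # Each process computes a strided share of the work.
    local_sum = sum(work_item(i) for i in range(rank, n_items, size))

    # Combine the partial results on process 0.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print('total =', total)

Launched with something like ‘mpirun -n 256 python sketch.py’, each process computes its own share and only the small partial results cross the network. Real applications rarely decompose this cleanly – which is precisely why the adaptation takes skill and effort.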

No, don’t give up yet. Yes, this dream is turning into complexity and the whiff of money plus effort. However, there is an exciting wealth of success stories about step changes in business performance or science capability from those who have trodden this path. For some businesses, a few per cent improvement in application performance is a business-changing result; there are examples in sectors such as finance, sports, oil and gas, security, and manufacturing. For others, the application performance must improve two-fold, ten-fold, or even more, to deliver a meaningful impact on the business or science. The clear evidence from those who have tried is that the result is worth the effort.

So, motivational bit over, back to those catches. We explored how getting more powerful hardware can be hard, but making the software smarter is often an even bigger challenge. If you use proprietary application software, then you are reliant on your software vendor to solve this problem for you. Some independent software vendors (ISVs) have a positive track record here, but the hard-earned reputation of many ISVs is one of slow adoption of new technologies and performance-enhancing methods such as algorithmic innovation or scalable parallel implementations.

If you are using open-source application software (see my article in the April/May issue on open-source vs. proprietary) or software developed in-house, then you have more flexibility. The software can be tuned to use new hardware technologies optimally, or gain faster or more scalable algorithms and other software performance enhancements.
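As a hedged illustration of what such tuning can look like, the sketch below (Python with NumPy; the array sizes are arbitrary) contrasts a naive element-by-element loop with a vectorised equivalent that hands the work to an optimised kernel exploiting the processor’s SIMD units and memory bandwidth.

    # Illustrative only: the same calculation, naive vs. tuned.
    import time
    import numpy as np

    n = 10_000_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    # Naive version: an interpreted loop over every element.
    def dot_naive(x, y):
        total = 0.0
        for i in range(len(x)):
            total += x[i] * y[i]
        return total

    # Tuned version: one call into an optimised, vectorised kernel.
    def dot_tuned(x, y):
        return float(np.dot(x, y))

    for fn in (dot_naive, dot_tuned):
        start = time.perf_counter()
        fn(a, b)
        print(fn.__name__, time.perf_counter() - start, 'seconds')

On typical hardware the tuned version is often one or two orders of magnitude faster, for the same answer on the same machine – the ‘smarter software’ route in miniature.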

The catch is that this requires skills and experience that are not part of the normal toolset of most software developers, scientists, or users. The choice becomes one of either investing time, effort and money into learning the skills to do it yourself, or paying an appropriate professional to do the work. This is especially true if chasing the real step changes offered by HPC systems. So, it’s not easy. But remember the motivational bit? I promised I’d talk about solutions to the skills barrier, and this is a good time to start.

There are two skills issues to address – getting the right hardware and making the software smarter. As we noted above, making the software smarter is often needed to benefit properly from new hardware too.

At one level, getting access to more powerful computing isn’t a difficult skills barrier to overcome – simply let your favourite vendor know you want to spend more money on a bigger system and I’m sure they will be delighted to pop in for a chat about the options. However, you may wish to be more certain that you are getting the best value for money and the right capability for your organisation.

First, you will need to become a regular reader of the various HPC media (such as this publication) and maybe some HPC bloggers. Social media, especially Twitter, can help point to content – for instance, I am active as @hpcnotes and there is the prolific but anonymous @HPC_Guru. If the investment will be significant, then reading is probably not enough. You will also want to attend several HPC conferences to research the technology choices and network with peers to learn what works and what doesn’t. The two main HPC conferences are ‘SC’, held annually in November in the USA, and ‘ISC’, held each year in June in Germany. There are many more, including some sector-specific HPC workshops – one list of such events can be found at www.hpcnotes.com/p/hpc-events.html.

If this sounds like a commitment of hard work, time, travel and money – that’s because it is. It can also be fun. But, if you don’t like the sound of that, or your organisation can’t spare the resource to do it properly, then there are a couple of options: either hire an impartial (not vendor-aligned) HPC consultant to help; or subscribe to services such as NAG’s HPC Technology Intelligence Service (www.nag.com/hpc-technology-intelligence) or analyst publications (such as Intersect360 or IDC) that can do the hard work for you.

How much effort to put into this depends on how much you plan to invest in an HPC system, and how much HPC experience you have in-house. The cost of attending conferences or hiring consultants for a $50k HPC investment will probably be out of proportion to the solution optimisation gained or the risk mitigated. If you can justify a $1m HPC investment, then proper research is a must and the experience of an impartial HPC consultant will be money well spent. Once the investment gets beyond a few million dollars, you would need a clear justification for not involving experienced HPC consultants, as well as attending HPC events yourself.

Where can you find these magic HPC consultants? There are a few – very few – firms that offer genuinely impartial HPC experience, such as NAG, Red Oak Consulting, and similar. Some of the major consulting firms might be able to help, but many don’t have true HPC experience or skills. Some academic supercomputer centres will be happy to help. Another excellent source of such consultants is semi-retired HPC professionals, especially former HPC centre directors.

That should act as a workable guide to the hardware side of solving your application performance dreams. The real challenge for application performance – and the skills barrier – is the software side.

If you don’t have access to your application source code – for instance, if you are using proprietary software – then this skills barrier is purely a matter of your business negotiations with your ISV to get them to adapt the software to the new hardware, improve scalability, and so on.

If you do have access to your application source code, then securing performance improvements is within your grasp. There are many providers of training in software performance tuning, scaling, and related skills. A good place to look is the academic supercomputer centres, although there are some commercial providers too. Your HPC system vendor might also be able to make a recommendation.

There are several tools to help with this task – for instance, Allinea provides tools to analyse application performance, Ellexus provides tools to understand I/O performance, and most HPC vendors offer some level of performance analysis tooling on their systems. Analysing your application’s performance limitations and identifying possible solutions is also available as a service, free of charge to EU-based organisations, via the EC-funded POP project at www.pop-coe.eu.
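Whichever tool you use, the first step is always the same: measure where the time actually goes before changing anything. Here is a minimal sketch using only Python’s built-in cProfile module – main() is a hypothetical entry point standing in for your application’s real work.

    # Minimal profiling sketch using Python's built-in cProfile.
    # main() is a hypothetical placeholder for your application's work.
    import cProfile
    import pstats

    def main():
        sum(i * i for i in range(10_000_000))

    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()

    # Report the ten functions with the largest cumulative time.
    pstats.Stats(profiler).sort_stats('cumulative').print_stats(10)

The output ranks functions by the time they consume, so your optimisation effort lands on the few hotspots that dominate the runtime rather than on guesswork.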

If you need outside help to deliver your application performance improvements then, contrary to the hype, there are many choices. Specialist providers serve both private and public sector clients, ranging from broad services such as NAG’s Software Modernisation Service to focused offerings such as StreamComputing’s OpenCL consulting for GPUs. Several academic HPC centres will provide software scaling and optimisation services, although often preferably as part of an overall research collaboration. And again, individuals and small collectives of individuals can be an excellent choice for such work – for example, www.sourceryinstitute.org.

So why is there hype about a skills shortage? It is partly justified – there are fewer experienced HPC professionals and software performance engineers than there is demand for – but there are also more sources of such skills than are sometimes acknowledged. Part of the explanation is that HPC and software performance skills are a specialist niche, and so command higher fees than generic IT services. Unfortunately, ‘shortage of staff/candidates’ often actually means ‘I’m not willing to pay the going rate for HPC skills’, usually based on a false comparison of costs against commodity IT staff. Recruiting in-house staff remains the best and cheapest long-term solution. However, for short-term needs, ‘burst capacity’ of such skills, or specialist skillsets, contracting an HPC/application performance consultant or service is a very practical and cost-effective solution.

In summary, the dream of your application running faster or solving bigger problems is very achievable. The much-quoted and feared skills barrier need not be a show-stopper. There are plenty of training and self-learning opportunities, and there are proven providers of HPC experience and application performance engineering. Keep dreaming – but now you know how to make those dreams reality, and to become another success story of application performance improvements delivering significant business innovation.


