Programming difficulty is killing engineers' productivity

While supercomputers are thought to help accelerate engineering and scientific discovery, a new study sponsored by Interactive Supercomputing suggests that the difficulty of programming them is increasingly becoming one of researchers' biggest productivity killers.

The Development of Custom Parallel Computing Applications study conducted by the Simon Management Group surveyed more than 500 users of parallel high-performance computers (HPCs) from a range of industries including education, government, aerospace, healthcare, manufacturing, geo-sciences, bio-sciences and semiconductor. The report examines the software tools currently used, probes current application development environments, practices, and limitations, and catalogues critical issues and bottlenecks.

The study indicates that parallel code writing, programming efficiency, translation, debugging and limits of HPC software are the most frequently cited bottlenecks across all industries. Respondents indicated there is an urgent need to shorten the application development time of custom algorithms and models.

The largest category of respondents (42.3 per cent) said that a typical project takes six months to complete, yet nearly 20 per cent of respondents' projects consume two to three years of their time.

The majority of parallel application prototypes (65 per cent) are developed in very high-level languages (VHLLs) such as Matlab, Mathematica, Python, and R. While C and Fortran are frequently used to prototype, respondents overwhelmingly said they would prefer to work with an interactive desktop tool if the prototype could be easily bridged to work with HPC servers.
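As a hypothetical illustration of the bridging respondents said they wanted, the sketch below parallelises a desktop-style Python prototype across local CPU cores with the standard-library multiprocessing module; the `simulate` workload and its parameters are invented for the example, not drawn from the study.

```python
# Hypothetical sketch: a VHLL prototype parallelised with Python's
# standard-library multiprocessing, the kind of desktop-to-parallel
# bridging the survey's respondents said they wanted made easier.
from multiprocessing import Pool

def simulate(seed):
    # Stand-in for an expensive model evaluation (invented workload):
    # iterate a simple linear-congruential recurrence from the seed.
    x = seed
    for _ in range(10_000):
        x = (x * 1103515245 + 12345) % 2**31
    return x % 100

if __name__ == "__main__":
    seeds = range(8)
    with Pool() as pool:              # one worker process per CPU core
        results = pool.map(simulate, seeds)
    print(results)
```

Scaling the same pattern from a multicore desktop to an HPC cluster is exactly the step the study identifies as painful: `Pool.map` has no drop-in equivalent that spans nodes, so the prototype is typically rewritten.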

The disconnect stems from the fact that desktop computers cannot handle the processing and memory requirements of the huge amounts of data that many scientific and engineering problems analyse. The problem is only getting worse; according to the study, the median data set used in a technical computing application today ranges from 10 to 45GB, and is expected to swell to 200 to 600GB in just three years.
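One common desktop workaround for data that exceeds RAM is out-of-core processing: streaming the file in fixed-size chunks rather than loading it whole. The sketch below is a minimal, hypothetical example (the function name and chunk size are illustrative, not from the study).

```python
# Hypothetical sketch: out-of-core processing of a file too large for
# RAM, reading it in fixed-size chunks instead of loading it whole.
def chunked_sum(path, chunk_bytes=64 * 1024 * 1024):
    """Sum the byte values of a file without holding it in memory."""
    total = 0
    with open(path, "rb") as f:
        # Read at most chunk_bytes at a time until EOF.
        while chunk := f.read(chunk_bytes):
            total += sum(chunk)
    return total
```

Techniques like this keep a prototype alive on the desktop a little longer, but as data sets grow toward hundreds of gigabytes they trade memory pressure for long runtimes, which is where HPC servers come in.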

'This study demonstrates that programming tools have not kept pace with the advances in the computing hardware and affordability of high-performance computers,' said Peter Simon, president of Simon Management Group.
