Scientists and engineers find that computing is an important aspect of their day-to-day work, but for the most part the computing itself is just a means to an end: it enables their scientific research and engineering development. Of course, some scientists become fascinated by computing itself, and become as expert in programming as they are in their scientific discipline.
In the past few years, there has been a significant shift in the development of computing hardware: individual processors have reached a performance plateau, so further gains now come from increasing the number of processors – or cores. This, in turn, has led to the increased importance of parallel computing, which can be broadly defined as harnessing the processing power of multiple processors, whether within the same computer or across multiple computers.
There are three very different kinds of parallel computing: multicore, multi-computer, and attached processors. At The MathWorks, like other software companies, we are embracing this shift in technology to ensure that our programs take advantage of all these forms of parallel computing.
With multicore, there are several processors on one chip. Here, the user can run several computational threads at once, all within the same program, and thereby parallelise low-level portions of the computation. However, writing multi-threaded code by hand is a lengthy, complicated and laborious process – unless, of course, you use a software environment that does this all for you. Our basic version of Matlab includes multicore support for certain functions without the user having to know anything about parallel programming at all – all of the multicore optimisation takes place ‘under the hood’ automatically.
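To make this concrete, here is a minimal sketch of what implicit multithreading looks like from the user’s side (the matrix sizes are arbitrary, chosen only to be large enough for threading to pay off):

    % No parallel syntax appears anywhere in this code.
    A = rand(4000);          % two large random matrices
    B = rand(4000);
    C = A * B;               % built-in linear algebra is multi-threaded automatically
    D = sin(A) + exp(B);     % many element-wise functions are threaded too on large arrays

The point is that this is ordinary serial-looking code; the environment decides how to spread the work across the available cores.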
Multi-computer involves, as the name suggests, multiple computers, each with its own processors and memory. Here, some understanding of programming is required to get the best performance, usually with additional software such as Matlab’s Parallel Computing Toolbox and Distributed Computing Server. This form of parallel computing enables many different tasks to be run simultaneously. It involves more set-up on the part of the user, but is still not an onerous task, especially if the environment is intuitive and easy to use.
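As a rough illustration of this style of task-level parallelism, consider a parfor loop from the Parallel Computing Toolbox. Here heavyComputation is a hypothetical stand-in for the user’s own independent piece of work:

    parpool;                       % start a pool of local or remote workers
    results = zeros(1, 100);       % preallocate the output
    parfor i = 1:100
        % heavyComputation is a hypothetical user-supplied function;
        % each iteration is farmed out to one of the workers
        results(i) = heavyComputation(i);
    end

With the Distributed Computing Server, the same loop can draw on workers running on other machines in a cluster rather than only the local cores.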
Finally, ‘attached processors’ is a term covering the use of field-programmable gate arrays (FPGAs) or graphics processing units (GPUs) to carry out computing tasks. Vendors such as Nvidia provide libraries and plug-ins that enable software environments, such as ours, to utilise the processing power of these devices. Again, some programming expertise is required here to take best advantage of the power on offer.
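By way of example, here is a minimal sketch of GPU offload using the gpuArray type from the Parallel Computing Toolbox; it assumes a supported Nvidia GPU and driver are present:

    G = gpuArray(rand(4000));   % copy a large matrix into GPU memory
    H = G * G + sin(G);         % overloaded operations execute on the GPU
    result = gather(H);         % copy the result back to host memory

The programming burden is modest – the same operators apply – but the user does have to decide what lives in GPU memory and when to bring results back to the host.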
The Roadrunner supercomputer at Los Alamos National Laboratory is one example that uses all three types of parallel computing, so it’s not necessarily about choosing one method over another. Rather, it is important for today’s scientist and engineer to understand that parallel computing is here to stay, and that it can deliver huge performance benefits. There are, of course, different levels of acceptance among groups of users, and, as previously stated, some scientists are more at home with programming than others. At The MathWorks, we pride ourselves on offering a portfolio of products that ensures we have a solution for everyone, regardless of their programming capabilities.
And this is my recommendation to scientists and engineers: you have a choice of software available to you, regardless of your particular area of expertise. As part of your evaluation process, though, I encourage you to assess each program’s ability to take advantage of parallel computing, and to gain a thorough understanding of just how much programming knowledge you will need to make the most of that advantage. It’s worth noting that the greater the control you want over your large data sets when running in parallel, the more programming expertise you will require.
Choose wisely – and don’t be afraid of parallel computing. It’s here to stay and will make your research and development more efficient and more effective.