Microsoft releases major update to Windows HPC Server

There's an industry joke that Microsoft gets things right the third time around, and the third release of Microsoft Windows HPC Server offers some evidence for it. The main goal, explains Bill Hilf, general manager of Microsoft's Technical Computing Group, is to help mainstream users take advantage of parallel hardware, whether on a client, a cluster or in the cloud. 'We want to focus on the user rather than just the "plumbing" alone,' he explains.

At the client level, Hilf points out, advanced debugging tools and compilers are improving performance. By changing just one line of code in an example program, he notes, it's possible to access multiple cores and achieve a 4x performance boost. With such tools, Microsoft is trying to 'raise the boat' for all programmers.

At the cluster level, he notes the trouble typical users have in deploying and managing HPC clusters. Here R2 allows the deployment of more than 1,000 nodes using a number of enhanced cluster-management features. Further, with node templates you can also create GPGPU node groups within the same environment. It's also short work to add Windows 7 desktops to a cluster via node templates in which you can set up policies; it's even possible to do 'cycle scavenging' on desktop systems within a cluster for additional compute power.

He points to HPC Server Services for Excel as a prime example. R2 now enables running multiple instances of Office Excel 2010 in a Windows HPC cluster, where each instance executes an independent calculation or iteration from the same workbook with a different dataset or parameters. Using a special tab in the Excel user interface, you can have that software spread the work from a workbook among a large number of nodes, and it's even possible to close the client software and have 'broker nodes' send an e-mail notification when their work is done, so when you re-open the workbook the results are waiting for you. Further, when stealing cycles from desktop systems, you can set a policy that makes them available for HPC tasks only at night or at weekends, or even during the day – without disrupting desktop users – as long as there's no keyboard/mouse activity or as long as CPU activity is below a certain threshold.
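The availability policy described above amounts to a simple predicate over the clock and the machine's activity. The sketch below illustrates that logic in Python; the function name, parameter names and thresholds are hypothetical, not taken from the product.

```python
from datetime import datetime, time

def desktop_available(now: datetime, idle_seconds: float, cpu_percent: float,
                      min_idle: float = 600.0, cpu_threshold: float = 20.0) -> bool:
    """Hypothetical cycle-scavenging policy: a desktop may run HPC
    tasks at night, at weekends, or at any time the user has been
    idle long enough and the CPU is lightly loaded."""
    is_weekend = now.weekday() >= 5                               # Saturday or Sunday
    is_night = now.time() >= time(20, 0) or now.time() < time(6, 0)
    is_quiet = idle_seconds >= min_idle and cpu_percent < cpu_threshold
    return is_weekend or is_night or is_quiet
```

For example, a midweek afternoon with the user away and the CPU near idle would qualify, while an active midweek session would not; a real scheduler would evaluate such a predicate continuously and evict HPC jobs the moment the user returns.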

At the cloud level, Hilf notes that it's convenient for some users to reach external resources across the public network only when needed. He points to examples of 'bursting out to Azure' (Microsoft's own cloud-computing platform) when needed, sometimes even in a mixed-mode burst from on-premises to public resources when the workload demands it.
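The 'bursting' idea is, at its simplest, an overflow rule: fill on-premises capacity first and spill the remainder to rented cloud capacity. A minimal sketch of that decision, with entirely hypothetical names and units (tasks and slots), might look like this:

```python
def plan_burst(queued_tasks: int, local_slots: int, cloud_slots: int) -> dict:
    """Hypothetical burst scheduler: assign queued work to local
    slots first, overflow the excess to cloud slots, and defer
    whatever still doesn't fit."""
    local = min(queued_tasks, local_slots)
    cloud = min(queued_tasks - local, cloud_slots)
    deferred = queued_tasks - local - cloud
    return {"local": local, "cloud": cloud, "deferred": deferred}
```

So a queue of 1,500 tasks against 1,000 local slots would send 500 to the cloud, while a queue that fits locally would burst nothing; real policies would also weigh cost, data locality and transfer time before spilling over.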

Hilf presented much of this information at the High Performance Computing Financial Markets conference, but the benefits for scientific users are also very promising. For instance, Josh Kunken of the Scripps Research Institute has been transitioning to R2 for high-throughput image analysis in the institute's cancer initiative, and notes that they've experienced an 800 per cent increase in performance with R2's parallel capabilities and that doing so was quite straightforward.
