Texas cancer centre turns to supercomputing technology

Researchers at The University of Texas' MD Anderson Cancer Center are applying new supercomputing technology to biomedical imaging in the quest to detect and eliminate cancer.

MD Anderson's Department of Imaging Physics is experimenting with advanced image registration, reconstruction and segmentation techniques that combine different imaging modalities into multidimensional image representations, with the ultimate goal of improving the diagnosis and treatment of disease.

As a stepping stone toward more advanced infrastructures such as large-scale clusters and SMPs, staff in MD Anderson's Department of Imaging Physics will first address this computational hurdle by parallelising their imaging software to run on a Sun Fire X4600, a powerful server with eight multi-core Opteron processors and 32GB of memory. The researchers' primary interest is developing improved algorithms and techniques that lead to useful clinical discovery, which is why they have chosen to do their development in a very high-level language (VHLL). The algorithms will be developed on desktop PCs using Matlab, then migrated and run transparently on the departmental server using Star-P software from Interactive Supercomputing. When more speed is needed, the same code can be moved directly to the institutional cluster, which also runs Matlab and Star-P.
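In broad terms, the workflow looks something like the following hedged sketch: ordinary Matlab code is parallelised by tagging array dimensions with Star-P's `*p` symbol, which turns them into distributed arrays on the server, while user-defined functions can be farmed out across cores with `ppeval`. The variable names, sizes and the `myfilter` function here are illustrative assumptions, not details from the MD Anderson project.

```matlab
% Illustrative Star-P usage (sketch, not project code).
n = 4096;
A = rand(n*p, n*p);       % *p marks distributed dimensions (Star-P syntax)
B = fft2(A);              % the same Matlab call, executed in parallel on the server
C = ppeval(@myfilter, B); % ppeval applies a user function across server cores
```

The appeal of this model is that the desktop prototype and the server version are the same code; only the `*p` annotations change where the data lives and where the work runs.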

Image registration and fusion place images produced on different modalities – such as computed tomography (CT), magnetic resonance (MR) and positron emission tomography (PET) – into a common spatial reference system so that the different kinds of information they contain can be optimally integrated, visualised and understood. The challenge is dealing with the enormous data sets generated by the most modern implementations of these imaging techniques. With the sub-millimetre spatial resolution afforded by modern CT scanners, registering image volumes potentially made up of billions of voxels would take hours or even days to process on the department's traditional computers. Such processing times limit the ability to employ more advanced mathematical techniques to improve the accuracy of registration and of other processes such as enhanced reconstruction, segmentation and parameter extraction.
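The core operation described above can be sketched in Matlab terms: a rigid registration maps each voxel coordinate of one modality (say, PET) into the reference frame of another (say, CT) via a homogeneous transformation matrix. The rotation angle, translation and coordinates below are made-up example values; in practice the matrix would come from an optimiser maximising a similarity metric between the two volumes, which is where the per-voxel cost multiplies into billions of operations.

```matlab
% Sketch of rigid registration: map a PET voxel into CT space.
% All numeric values are illustrative assumptions.
theta = pi/36;                                 % example 5-degree rotation about z
R = [cos(theta) -sin(theta) 0; ...
     sin(theta)  cos(theta) 0; ...
     0           0          1];
t = [2.5; -1.0; 0.4];                          % example translation in mm
T = [R t; 0 0 0 1];                            % 4x4 homogeneous transform

pet_voxel = [120; 85; 40; 1];                  % homogeneous voxel coordinate
ct_coord  = T * pet_voxel;                     % same anatomical point in CT space
```

Applying such a transform (and resampling) to every voxel of a billion-voxel volume, repeatedly inside an optimisation loop, is what makes the problem a natural candidate for parallelisation.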

Star-P delivers interactive parallel computing power to the desktop. It enables faster prototyping and problem solving across a range of biomedical – as well as financial, scientific and engineering – applications. Star-P eliminates the need to re-program applications in low-level languages and libraries such as C, Fortran and MPI (message passing interface) to run on an advanced server for parallel processing – a task that would otherwise require arcane programming knowledge and months of work.

'Star-P will enable us to go beyond the limitations of our conventional computers,' said Luc Bidaut, PhD, associate professor in the Department of Imaging Physics and Director of the Image Processing and Visualisation Lab, where the project is spearheaded. 'Achieving high image resolution in every relevant dimension is critical to our success. Star-P enables my team and collaborators to easily code algorithms using their familiar desktop environment while inherently providing the instant capability for that same application to run on a high grade Sun multi-core server or even more complex infrastructures.'

When the project is complete, the Department of Imaging Physics plans to utilise the resource for other experimental imaging applications, as well as extend it to other groups and areas of research within the institution.
