
Visualising the future of medicine

Chris Johnson, director, Scientific Computing and Imaging Institute, and Distinguished Professor, School of Computing at the University of Utah

I direct the Scientific Computing and Imaging Institute at the University of Utah. We’re an interdisciplinary research institute of around 200 faculty, staff and students, and we specialise in visualisation research, image analysis and scientific computing. I also direct the NIH/NCRR Center for Integrative Biomedical Computing, which is in its 14th year and focuses on image-based modelling, simulation and visualisation.

The work we do is essentially a piece of the personalised medicine pie: creating subject-specific biological models from images such as MRIs, X-rays and CT scans. Those images are then segmented to obtain the geometry of the patient, or of the parts we’re focusing on, and used to create patient-specific computer-generated models. From the models we perform functional simulations, such as simulating the electrical activity of the heart, localising epileptic seizures within the brain, and calculating stresses and strains on artificial joint replacements. The results of the simulations and models are then visualised and ultimately taken to clinics to be used for diagnosis and treatment.

Behind the scenes we write all the software involved in this process, and it’s taken us at least 10 years of hard work to get to the point where we’re able to carry out the whole process of image-based modelling, simulation and visualisation for real-world clinical applications. Currently, we’re working with a cardiac surgeon on the simulation and visualisation of atrial fibrillation for surgical planning, working with neuroscientists and neurologists on localising the source of epileptic seizures, and designing internal, implantable defibrillation electrodes and optimising their efficacy.

The software we’ve created is a set of tools that take you through that entire process. Each one is open source, available to download from our website, and runs on multiple platforms. The first piece of software, used to segment the images, is called Seg3D. To simulate an epileptic seizure, for example, we would first take an MRI of the head. Seg3D has automatic algorithms that locate the boundaries of the skull and the cortex of the brain, and then pull that information out from each slice of the image, which provides us with sets of surfaces. This can also be done in three dimensions if 3D images are used. To run the simulation we need volumes, so the next piece of software we use is BioMesh3D, which takes those surfaces and creates full 3D geometric meshes, the digital geometry necessary for the simulation. Our next piece of software, SCIRun, does the finite element or boundary element simulation required in electrical source modelling. Our final piece of software, ImageVis3D, provides visualisation of the simulation.
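To make the flow of that tool chain concrete, here is a minimal sketch of the pipeline in Python. Every function below is a hypothetical placeholder: Seg3D, BioMesh3D, SCIRun and ImageVis3D are interactive applications, not Python libraries with these APIs, so the code only mirrors the order and role of each stage.

```python
"""Conceptual sketch of the image-based modelling pipeline described above.

The stages mirror the roles of Seg3D, BioMesh3D, SCIRun and ImageVis3D,
but every function here is an invented stand-in, not the tools' real APIs.
"""
import numpy as np


def segment(mri_volume: np.ndarray, threshold: float) -> np.ndarray:
    """Stage 1 (Seg3D's role): label voxels belonging to a tissue of interest.

    Simple intensity thresholding stands in for the automatic
    boundary-finding algorithms described in the text.
    """
    return (mri_volume > threshold).astype(np.uint8)


def build_mesh(labels: np.ndarray) -> dict:
    """Stage 2 (BioMesh3D's role): turn the labelled volume into 3D geometry.

    Here we just collect the voxel centres of the segmented region as mesh
    nodes; a real mesher fits tetrahedral elements to the anatomy.
    """
    return {"nodes": np.argwhere(labels == 1).astype(float)}


def simulate(mesh: dict) -> np.ndarray:
    """Stage 3 (SCIRun's role): run a finite element simulation on the mesh.

    A zero field stands in for solving, e.g., the bioelectric forward
    problem used in epileptic source localisation.
    """
    return np.zeros(len(mesh["nodes"]))


def visualise(mesh: dict, field: np.ndarray) -> None:
    """Stage 4 (ImageVis3D's role): render the simulated field."""
    print(f"{len(field)} nodal values ready for volume rendering")


if __name__ == "__main__":
    mri = np.random.rand(64, 64, 64)   # stand-in for a real head MRI
    labels = segment(mri, threshold=0.9)
    mesh = build_mesh(labels)
    field = simulate(mesh)
    visualise(mesh, field)
```

The point of the sketch is the hand-off between stages: each tool consumes the previous tool’s output (images, then surfaces, then meshes, then fields), which is why the whole chain has to exist before any clinical application is possible.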

To create software that’s usable is a multi-year process. I think a lot of people underestimate how difficult and challenging it is to do, and we have a number of gifted and talented software engineers who work incredibly hard at achieving just that. The challenges along the way are many, especially because in biomedical modelling and simulation we need to recognise that our biology is incredibly complicated. We can never truly address all that complexity, so we need to simplify things in some ways and decide on the level of complexity we do want to address. Over the past few years we’ve been continually addressing more complexity, which means larger-scale models and larger amounts of data that need to be dealt with in order to compute and run simulations and visualisations.

We work with a lot of clinical and biomedical collaborators, and one important fact is that they’re not computer scientists; they don’t want to write programs, they want a software package that’s easy to use. Because of this, a good deal of effort goes into the design of the user interface, and there are a lot of iterations between us and our collaborators to make everything as intuitive as possible. We then put everything through testing phases; however, no matter how much time or effort you put into making the software work, users will find ways to break it! We receive emails from people saying that it’s not working for them and, in many cases, we would never have dreamed of the software being used in those particular ways! But we work to correct the issue and enable them to use the software.

The ultimate goal is to create this system of visualisation and simulation modelling techniques and corresponding software. I think there is a tremendous movement towards personalised medicine. People are recognising that one size doesn’t fit all and that we can give a better level of diagnosis and treatment if we tailor things to an individual. That’s really what we’re working towards and while it’s taking longer than we thought, we’re making significant progress. I do believe that in the near future it will be possible to create treatments like implantable defibrillators and optimise their placement, and that we’ll create custom-fit joint replacements. And I think that’s going to be a very exciting time.

 

Alejandro Frangi, Professor of Biomedical Image Computing in the Department of Mechanical Engineering at the University of Sheffield, and Director of the INSIGNEO Institute for Biomedical Imaging and Modelling

I am part of a collaborative project at the Institute for Biomedical Imaging and Modelling (INSIGNEO) that aims to develop computational models of organ systems and, eventually, of human beings in their entirety. Essentially, we are creating the Virtual Physiological Human (VPH). To achieve this, the models will need to capture the physiological mechanisms that underlie those organ systems in a descriptive and predictive manner, to aid studies of, for example, the evolution of diseases or their course under various treatment strategies. Finally, the models will need to be personalised to enable both description and prediction of subject-specific (patient-specific) responses. It’s the combination of these three features that makes the VPH not only a cutting-edge research topic but also a new approach to healthcare. A multitude of biomedical data sources will be incorporated into the models, such as data from imaging examinations, physiological signals from body sensors, genetic information and laboratory results, in order to generate simulations of individuals that can then be interrogated with specific questions of diagnostic or prognostic value.

This vision of the virtual human is a long-term undertaking, and currently we are focusing our attention on specific organ systems; namely, the cardiorespiratory and neuromusculoskeletal systems, which play a critical role in major current diseases. In addition to developing models of those areas, we are creating models of a number of devices that correspond to specific treatments. For example, in the cardiovascular system we are modelling prosthetic valves, cerebral aneurysm embolisation devices and coronary stents. The biomedical data we gather informs the models and allows clinicians to assess the performance of the devices and the expected recovery of the patient.

In the case of the musculoskeletal system, we are trying to assess the risk of bone fracture, particularly of the femur and the spine, and attempting to understand all the elements involved so that we can develop computational models displaying the risk a patient has of sustaining injury. To do this we take into account details of the bones from CT scans, muscle information from MRI images, and gait measurements to provide the dynamics. These three sources are then integrated through finite element models that supply the anatomical detail; the resulting prediction reflects all these factors and yields variables that cannot be measured directly. Models at the cellular level provide a further layer of information about how the bone is able to regenerate or absorb forces.
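As a toy illustration of how those three data sources might feed a single risk estimate, the sketch below combines a CT-derived cross-section, an MRI-informed muscle force and a gait-derived load factor into a crude stress and safety-factor calculation. Every number and the closed-form formula are illustrative assumptions; INSIGNEO’s actual approach uses full multi-scale finite element models, not a hand calculation like this.

```python
"""Toy fracture-risk estimate combining the three data sources in the text.
All values are hypothetical; a real assessment solves a finite element
model over the CT-derived geometry rather than a single stress formula."""


def femoral_neck_stress(axial_load_n: float, cross_section_mm2: float) -> float:
    """Very crude axial stress estimate in MPa: load over cross-section.

    cross_section_mm2 would come from CT-derived geometry; axial_load_n
    combines body weight (scaled by gait dynamics) with muscle forces
    estimated from MRI.
    """
    return axial_load_n / cross_section_mm2


# Illustrative, invented inputs:
ct_cross_section = 580.0    # mm^2, from CT segmentation
muscle_force = 1500.0       # N, from an MRI-informed muscle model
gait_peak_factor = 2.5      # dimensionless peak-load multiplier, from gait data
body_weight = 700.0         # N

load = gait_peak_factor * body_weight + muscle_force
stress = femoral_neck_stress(load, ct_cross_section)
bone_strength = 110.0       # MPa, rough cortical-bone figure, illustrative only
print(f"Estimated stress: {stress:.1f} MPa, "
      f"safety factor: {bone_strength / stress:.2f}")
```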

Traditionally, clinicians would make these assessments by examining the scans and deciding what risk factors are present. We are trying to create a tool that will aid them by integrating and visualising all that information within a multi-scale model. The difficulty is that the data we use is captured from National Health Service (NHS) information systems that span many different databases across a range of departments, and sometimes a variety of formats, from digital records to paper documents. We therefore work within those IT infrastructures to achieve a more seamless integration of the information, while respecting the data privacy and ethical requirements of current legislation, for the benefit of improved diagnosis and treatment for patients.
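As a minimal illustration of the kind of harmonisation this requires, the sketch below pulls patient records from two hypothetical sources into one common structure. All field names and both source formats are invented for the example; real NHS integration works with standards such as DICOM and HL7, under strict information-governance controls.

```python
"""Minimal sketch of harmonising records from heterogeneous sources.
Field names and source formats are hypothetical, for illustration only."""
from dataclasses import dataclass


@dataclass
class PatientRecord:
    patient_id: str
    modality: str   # e.g. "CT", "MRI", "gait"
    acquired: str   # ISO date


def from_imaging_db(row: dict) -> PatientRecord:
    # Invented column names for a departmental imaging database.
    return PatientRecord(row["nhs_number"], row["scan_type"], row["scan_date"])


def from_paper_transcription(text: str) -> PatientRecord:
    # Invented "id | modality | date" layout for transcribed paper notes.
    pid, modality, date = text.split("|")
    return PatientRecord(pid.strip(), modality.strip(), date.strip())


records = [
    from_imaging_db({"nhs_number": "0000000000",
                     "scan_type": "CT", "scan_date": "2012-03-14"}),
    from_paper_transcription("0000000000 | gait | 2012-04-02"),
]
print(records)  # both sources now share one queryable structure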

Working as part of such a broad inter-disciplinary team is challenging as clinical researchers and those on the engineering side have very different languages and priorities. It definitely takes patience on each side to be able to understand the other’s perspective. But we are making great strides and in the future we believe we’ll have a more holistic understanding of how the different physiological systems interact and will be able to create personalised models that take biological and environmental variables into account in order to treat, and indeed prevent, disease and degradation.



The ScalaLife Project

Scalable Software Services for Life Science (ScalaLife) is a European initiative launched in 2010 with the ambitious mission of implementing new techniques for efficient parallelisation combined with throughput and ensemble computing for the life science community.
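The ‘ensemble computing’ the project refers to can be illustrated with a minimal sketch: many independent simulation replicas run concurrently, so throughput scales with the number of cores even when a single simulation does not. The run_replica function below is a hypothetical stand-in; in practice each replica would launch a production engine such as GROMACS.

```python
"""Minimal sketch of the ensemble-computing pattern: run many independent
simulation replicas in parallel. run_replica is an invented stand-in for
launching a real simulation engine."""
import random
from multiprocessing import Pool


def run_replica(seed: int) -> float:
    """Pretend simulation: each replica explores with its own random seed
    and returns a scalar observable (e.g. a binding-energy estimate)."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(10_000)) / 10_000


if __name__ == "__main__":
    with Pool() as pool:                              # one worker per core
        results = pool.map(run_replica, range(32))    # 32-member ensemble
    mean = sum(results) / len(results)
    print(f"Ensemble mean observable: {mean:.4f} "
          f"over {len(results)} replicas")
```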

ScalaLife seeks to establish a new approach that targets the entire pipeline, from problem formulation through algorithms, simulations and analysis, by focusing heavily on actual application issues. In particular, the project will provide long-term support for major European software by establishing a pilot Competence Centre for scalable software services for the life science community, to foster Europe’s role as a major software provider.

The Barcelona Supercomputing Center (BSC) is participating as a main partner of ScalaLife and holds responsibility for connecting the latest research on scalability and hardware design with application software work, and for properly documenting algorithms and optimisation techniques so that they can also be applied to other life science simulations. Professor Modesto Orozco, BSC director of Life Sciences and leader of the Databases Work Package of the ScalaLife project, explained that the future of structure-based drug discovery, for example, relies on software tools capable of scaling on multi-core supercomputers.

‘ScalaLife will provide the life science community with fast and flexible access to high-performance computing resources,’ he said. Additionally, Professor Orozco is coordinating the development of standards for handling both the storage and exchange of the ever-increasing amount of life science simulation data.

 

NeuroApps

Elsevier has released the first in a new series of apps created for the iPad. This interactive application, dubbed NeuroApps: MRI Atlas of Human White Matter, is based on the MRI Atlas of White Matter by Kenichi Oishi, Andreia V. Faria, Peter C. M. van Zijl and Susumu Mori, and essentially remakes the atlas for a new generation of neuroscience researchers and clinicians.

‘Traditionally, brain atlases have comprised a collection of 2D panels with anatomical annotations. Even the bulkiest atlas book has severe limitations in the number of panels, magnifications and information it can carry,’ explains Dr Susumu Mori, one of the creators of the atlas. ‘NeuroApps is the first brain atlas to remove this physical barrier. With NeuroApps, more than 100 brain sections, two contrasts and free magnification are packed into the small dimensions of the iPad, with a sophisticated interface, and can be carried everywhere.’

Using NeuroApps: MRI Atlas of Human White Matter, the user is able to find, visualise and learn to identify the major pathways through the brain and their proximity to key neuroanatomical structures. The interactive tools provide the ability to compare coronal, horizontal and sagittal sections in one view, and to switch between MRI and DTI images for any location in the brain.

Brain structures can be superimposed on the MRI/DTI images: altogether there are 53 white matter structures, 38 cortical areas and 22 deep grey matter structures defined and labelled. In addition, the locations of 11 white matter tracts and 36 cytoarchitectonic areas are defined. The app also features the ability to scroll through the brain, following a tract as it moves past brain structures.

While the images are based upon those from the print book, they have been digitally enhanced, and the resulting four-colour images are much sharper on iPads, which allow image zooming for more detailed study. It is the first app of its kind and aims to move biomedical modelling forward.
