
Saving lives with supercomputers

Hospital dramas are a staple of popular television, and images of high-tech medicine form a backdrop to the human drama of staff and patients, yet no programme has yet featured a surgeon calling out: ‘scalpel . . . swab . . . supercomputer’. HPC, however, is already playing a life-enhancing role in the lives of patients undergoing kidney dialysis, and it promises to hasten the post-genomic era of individualised medicine.

The work of Paul Fischer, a computer scientist at the Argonne National Laboratory, in Illinois, USA, is improving the quality of life and the long-term survival of patients with kidney disease. Fischer uses some of the 65,536 cores of the IBM Blue Gene/P at the laboratory to carry out computational fluid dynamics simulations of how best to arrange the flow of blood from the patient’s body to the external dialysis machine.

For patients suffering from renal failure, the only treatments available are dialysis or a transplant. Haemodialysis requires routing the blood in a patient’s body through a dialysis machine, which carries out the function of the patient’s failing kidneys by removing waste products such as urea and potassium. This process must be carried out several times per week to keep a patient healthy, but each dialysis session may take a long time – which is inconvenient – and also requires connecting the patient’s blood vessels to the dialysis circuit – which can cause scarring or infection. To make the process as quick and as comfortable as possible, surgeons often make use of arteriovenous (AV) grafts. These are pieces of synthetic tubing surgically implanted into a patient to connect an artery to a vein, usually in the wrist or forearm. By connecting a high-pressure artery to a low-pressure vein, surgeons can ensure that most of the patient’s blood will flow through the synthetic graft within a short period of time, which makes it an ideal diversion point for the haemodialysis machine. An AV graft can cut an eight-hour dialysis session down to two hours or less, which significantly improves a patient’s quality of life.

AV grafts are, however, unreliable. Currently, grafts can fail within six months of being implanted, meaning that further surgery is required to fit another one. Although the mechanism of failure is yet to be characterised definitively, it is widely thought to result from turbulence of the blood flow within the graft. Under normal circumstances, blood flow in the body’s vessels remains laminar, i.e. it flows smoothly, in one direction and at a fairly constant velocity, dominated by viscosity. Blood flowing through the graft, however, moves very quickly relative to its viscosity (i.e. it has a significantly higher Reynolds number), and parts of the laminar flow become turbulent. Turbulent flow causes vibrations, and it is these vibrations that stimulate the smooth muscle within the lining of the vein, causing it to grow rapidly and ultimately occlude the vein somewhere downstream of the anastomosis, the point at which the graft joins the vein. This occlusion eventually leads to failure of the graft.
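The scale of the effect Fischer is modelling can be illustrated with a rough Reynolds number estimate. The sketch below is not taken from his study: the density, viscosity, velocities and diameters are textbook-style assumptions chosen only to show why arterial-speed flow through a vein-sized tube approaches the turbulent regime.

```python
# Back-of-envelope Reynolds number estimate for blood flow.
# All property values below are illustrative assumptions,
# not figures taken from Fischer's simulations.

def reynolds(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu (dimensionless)."""
    return density * velocity * diameter / viscosity

RHO = 1060.0     # blood density, kg/m^3 (assumed)
MU = 3.5e-3      # blood dynamic viscosity, Pa*s (assumed)

# A normal vein: slow flow, comfortably laminar.
re_vein = reynolds(RHO, velocity=0.1, diameter=0.006, viscosity=MU)

# An AV graft: arterial pressure drives much faster flow through a
# similar-sized tube, pushing Re towards the transitional regime.
re_graft = reynolds(RHO, velocity=1.5, diameter=0.006, viscosity=MU)

print(f"Vein:  Re ~ {re_vein:.0f}")    # ~180, laminar
print(f"Graft: Re ~ {re_graft:.0f}")   # ~2700, near the usual pipe-flow transition
```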

Fischer’s simulations focus on the join between the graft and the vein. At this point, high-pressure blood from the artery is introduced into the slower, low-pressure blood of the vein, and this is where turbulence is known to have a detrimental effect. In the simulations, the vein is 6mm in diameter, and the simulation extends around 35mm up the graft, 25-30mm upstream of the anastomosis and 35-40mm downstream of it, where failures are most likely to occur. Fischer explains that turbulence occurs on a much smaller scale than these dimensions, typically on the order of 0.25-0.5mm: ‘There are a lot of small scale structures generated by this turbulence, and the simulation mesh we use has to be smaller than the smallest length scale,’ he says. ‘Typically, for the geometry described, we’re looking at two or three million grid points.’
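Fischer’s figure of two or three million grid points can be loosely sanity-checked with a back-of-envelope estimate. The sketch below simplifies the geometry to a single straight tube and assumes a uniform grid spacing fine enough to put several points across the smallest 0.25mm structures; the real mesh is body-fitted and non-uniform, so these numbers are only indicative.

```python
import math

# Crude estimate of how many grid points a uniform mesh would need.
# Geometry is simplified to a single 6mm-diameter tube made of the
# graft segment plus the vein upstream and downstream of the anastomosis.
diameter_mm = 6.0
length_mm = 35 + 30 + 40        # graft + upstream + downstream (mm)

# Resolving 0.25mm turbulent structures needs several points across each
# one; assume roughly 0.1mm spacing between grid points.
spacing_mm = 0.1

volume_mm3 = math.pi * (diameter_mm / 2) ** 2 * length_mm
points = volume_mm3 / spacing_mm ** 3

print(f"Domain volume: {volume_mm3:.0f} mm^3")
print(f"Estimated grid points: {points / 1e6:.1f} million")  # roughly three million
```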

The team looked at varying the diameters of the graft. The grafts are tapered, from 4mm at one end to 6mm at the other end, in order to match the diameter change between the artery and the vein. ‘We have found that this is one of the key ways in which to avoid turbulence; we want to avoid having any sudden transitions [in tube diameter].’

For a single graft study, or simulation, a computational domain is created in the form of a mesh. The mesh covers not only the surfaces of the graft and blood vessels but also fills their interior. That set of three million points is partitioned over 256 processors of the IBM Blue Gene. Each processor advances its own subdomain, and boundary data is exchanged at the beginning of each time step to keep each piece consistent with its neighbours. ‘We partition the domain, or the grid points, onto separate processors, and then we time-march the unsteady blood flow. At every time step we have to update the velocity and the pressure, and we have to update a set of equations to achieve that. Typically it turns out that we’re solving a large system of equations, for three million grid points, with roughly 10 or 11 million unknown values.’
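The partition-and-exchange pattern Fischer describes can be sketched in miniature. The toy below splits a one-dimensional grid into a handful of subdomains, swaps a single layer of boundary (‘ghost’) values between neighbours at the start of each time step, and then time-marches a simple diffusion equation. It stands in for, and is far simpler than, the unsteady Navier-Stokes solve performed in the real simulations, and it runs serially rather than across 256 Blue Gene processors.

```python
import numpy as np

# Toy illustration of the pattern Fischer describes: partition the grid,
# exchange boundary data each time step, then advance each subdomain.
# The "physics" here is 1D diffusion, standing in for the real solver.

N_POINTS = 120          # total grid points
N_PARTS = 4             # number of subdomains ("processors")
DT, DX, NU = 0.1, 1.0, 0.2
N_STEPS = 200

chunk = N_POINTS // N_PARTS
full = np.zeros(N_POINTS)
full[N_POINTS // 2] = 1.0                      # initial blob in the middle

# Each subdomain keeps one ghost point on each side for neighbour data.
parts = [np.zeros(chunk + 2) for _ in range(N_PARTS)]
for i, p in enumerate(parts):
    p[1:-1] = full[i * chunk:(i + 1) * chunk]

for step in range(N_STEPS):
    # 1. Exchange boundary data with neighbours (the ghost/halo update that
    #    would be message passing on the Blue Gene).
    for i, p in enumerate(parts):
        p[0] = parts[i - 1][-2] if i > 0 else 0.0              # left ghost
        p[-1] = parts[i + 1][1] if i < N_PARTS - 1 else 0.0    # right ghost
    # 2. Time-march each subdomain independently (explicit diffusion update).
    for p in parts:
        p[1:-1] += NU * DT / DX**2 * (p[2:] - 2 * p[1:-1] + p[:-2])

result = np.concatenate([p[1:-1] for p in parts])
print(f"Sum after {N_STEPS} steps: {result.sum():.3f} (conserved away from the ends)")
```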

Using its simulations, the team has found that if the blood faces an adverse pressure gradient because of a change in velocity, a transition to turbulence will occur at the anastomosis. To reduce the effects of this transition, the graft is angled at the joint: ‘Where the graft joins into the vein, it is pointed towards the proximal segment [the downstream side],’ says Fischer. ‘Even so, the pressure can be sufficient to reverse the flow of blood, even to the extent that blood from the graft can end up moving in both directions within the vein. In our study, this effect [of a split flow] was found to be the most sensitive parameter.’

The team found, based on simulations and on statistical research, that the cross-sectional area of the join between the graft and the vein is a key parameter in reducing turbulence. ‘That gave us a fair amount of confidence that we had discovered one of the mechanisms for turbulence in these AV grafts,’ says Fischer. Lessons learnt through this study are already being implemented in the AV grafts inserted into patients, making a difference not only in the virtual world, but also in the real one.

Storing one’s genome

Fischer’s simulations demonstrate how the number-crunching characteristics of HPC can make a difference in contemporary medicine. But the ability of supercomputing systems to store and process data is just as important in applying today’s research biology to the medical practice of the future. The first sequence of the human genome was rightly lauded as a landmark in biology and medicine when it was completed. However, no single individual’s genome was sequenced – it was rather a generic, ‘reference’ genome. Now, the interest is in sequencing individuals and correlating variations in their DNA with their individual health outcomes (and also their individual reactions to pharmaceutical drugs). In this way, genomics may lead to personalised medicine. But first, all that genetic data must be acquired, stored and processed.

The human genome project spurred a drive within research institutions around the world to find ever faster ways of achieving accurate genetic sequencing. Now, two decades after the project began, there are many research groups specialising in sequencing genomes, with a view to learning more about the roles and effects of individual genes within each person. The process of sequencing a genome has been automated to a high level, and modern sequencers are able to complete the task in a fraction of the time it once took. As a result, the demands upon a sequencing centre’s storage facilities have increased.

A single sequencing run on a state-of-the-art machine produces 7TB of data. Of this, only 100-200GB of usable data may be kept afterwards, but with thousands of sequencing runs performed every year, the demands on storage soon add up. One run can take anything between a few hours and a few days. Multiple sequencers are routinely linked to a single storage system, meaning that the storage system must be fast enough to take in data as quickly as the sequencers can produce it, in order to ensure that the sequencers are utilised to the limits of their capabilities. ‘The users do not want to have a room full of expensive sequencing machines standing around doing nothing, as they wait for the storage solution to catch up,’ says Ingo Fuchs, senior product manager with Data Direct Networks, a provider of customised storage systems for HPC. ‘In addition, multiple disparate storage solutions scatter data over several systems, making it more difficult to manage, and so a single consolidated system is preferable.’

TGen (Translational Genomics Research Institute) is a non-profit organisation based in Arizona. It is expanding its line-up of sequencers to include five next-generation ABI Solid systems, and more sequencers will be added thereafter as demand increases. It has installed a storage system, designed by Data Direct Networks, capable of taking in 5GB per second, which is fast enough to handle the data output from the current sequencers. As the organisation continues to buy additional sequencers, it will be able to scale the system up to a maximum bandwidth of 200GB per second by adding additional nodes and scaling up their storage. This means that TGen will not face bottlenecks as it continues to expand its sequencing capacity.
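Putting the figures from the previous two paragraphs together gives a sense of the scale involved. The sketch below uses only the numbers quoted in the article plus two explicitly assumed values (a 24-hour run length and a round figure for ‘thousands of runs per year’) to estimate sustained write rates and the volume of data retained annually.

```python
# Illustrative back-of-envelope on sequencing storage demands.
# Run length (24h) and runs per year (3,000) are assumptions;
# the other figures are those quoted in the article.

TB = 1e12
GB = 1e9

raw_per_run = 7 * TB           # raw output per sequencing run
kept_per_run = 150 * GB        # midpoint of the 100-200GB retained
runs_per_year = 3000           # "thousands of runs per year" (assumed round figure)
run_hours = 24                 # assumed run length; real runs take hours to days
sequencers = 5                 # the next-generation systems being installed

# Sustained write rate per sequencer and for the whole fleet.
per_seq_rate = raw_per_run / (run_hours * 3600)
fleet_rate = per_seq_rate * sequencers

# Retained data accumulating over a year of runs.
kept_per_year = kept_per_run * runs_per_year

print(f"Per sequencer: {per_seq_rate / GB:.2f} GB/s sustained")
print(f"Fleet of five: {fleet_rate / GB:.2f} GB/s vs 5 GB/s installed ingest")
print(f"Retained per year: {kept_per_year / TB:.0f} TB")
```

The headroom above the sustained rate matters because the figure is an average: burst writes at the end of a run, and the compute cluster reading the same data back for analysis, all land on the same consolidated system.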

The difficulties involved in obtaining such high performance become apparent from the fact that TGen’s storage system serves a compute cluster with 4,680 nodes, each of which has multiple cores. When it comes to analysing the terabytes of raw data produced by a single sequencer run, the massive parallelism of the compute cluster must be matched by the speed of the single storage solution in order to avoid potential performance bottlenecks.

Fuchs states that each high-performance storage system must be designed for performance from the start if it is to reach an adequate level, and that this consideration becomes the driving force behind all design choices. The design of the RAID system, the development of the control algorithms, and the physical enclosures themselves are all engineered for performance. ‘This approach gives us a tremendous performance advantage over solutions based on a Windows kernel or similar,’ explains Fuchs. The system is designed to provide predictable performance, even if failed disks are being rebuilt by the RAID controller in the background.

As the demand for sequencing increases, ever more computing power is required by the institutes performing the work and the subsequent analysis. Data centre costs represent a major expense for such organisations, in terms of floor space, cooling and power supply (including UPSs). Minimising the physical size of the storage system allows research institutions to manage these costs. The solution used by TGen stores a petabyte in a standard server cabinet. ‘All of these infrastructure considerations may not have that whole life science “flair” to them,’ Fuchs acknowledges, ‘but in the end you need the right tools to get the results, and we’re providing those; they are the tools in the background that make it happen.’ Perhaps the TV dialogue of ‘scalpel . . . swab . . . supercomputer’ is not so far distant after all.


