
The future of computational science

Computational science has progressed steadily over the past decade. Improvements in computing performance allow us to model much more complex chemical, biological, and materials systems, and to understand increasingly large quantities of data. But these advances require finesse, as well as power. Scientists continue to develop novel algorithms, extend existing ones, and combine them in new ways.

We are also witnessing a profound cultural change. A new generation of scientists expects the computer to be embedded in the research process. In this, science increasingly reflects the business world. In the 1990s, the term 'e-business' was coined for the use of integrated information systems to create efficiencies throughout business value chains. It is perhaps curious that, for all its 'hi-tech' gloss, this is an area in which R&D lags behind such mundane endeavours as banking and retailing. Yet it is not surprising. The complexity and niche focus of scientific tools can mean they are developed to extremely specific requirements. Different scientific domains often have distinct cultures, approaches, and even 'languages'. Research has a tendency to resolve itself into 'silos', and its supporting IT has followed that pattern.

Breaking down barriers
So what can we expect over the next decade? Some trends will continue. Computation will solve more complex research problems with greater accuracy - it will go deeper. It will be applied by more scientists, more routinely - it will go wider. But the greatest change may be its role in breaking down barriers between scientific domains and enabling real 'e-research'.

First, consider the trend towards deeper insights. A recent grid-computing project, involving partners including Oxford University, United Devices, and Accelrys, signed up 2.5 million individual PCs to screen 3.5 billion compounds against protein targets involved in the pathogenesis of cancer, anthrax, and smallpox. Here are many of the elements that will drive this trend - advanced software methods, increased compute power, and a novel way to harness and apply them. The result is a glimpse of a future in which virtual screening roams chemical space, seeking likely drug molecules. Such studies are also critical in validating the core scientific methods.
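To make the distributed-screening idea more concrete, here is a minimal, purely illustrative sketch of the general pattern such a project relies on: batches of compounds are handed out as work units, each machine scores its batch against a target, and the best-scoring candidates are gathered centrally. The function, target, and compound names are hypothetical placeholders, not the actual project's software.

```python
# Toy sketch of the grid-screening pattern: work units scored in parallel,
# best hits collected. All names and the scoring function are hypothetical.
from concurrent.futures import ProcessPoolExecutor
import random

def dock_score(compound_id: int, target: str) -> float:
    """Placeholder for a real docking/scoring calculation."""
    rng = random.Random(hash((compound_id, target)))
    return rng.uniform(-12.0, 0.0)  # pretend binding energy, kcal/mol

def screen_batch(compound_ids: list[int], target: str) -> list[tuple[int, float]]:
    """Score one work unit (a batch of compounds) against one target."""
    return [(cid, dock_score(cid, target)) for cid in compound_ids]

if __name__ == "__main__":
    target = "hypothetical_protein_target"
    work_units = [list(range(i, i + 1000)) for i in range(0, 10_000, 1000)]
    # A local process pool stands in for the grid of volunteered PCs.
    with ProcessPoolExecutor() as pool:
        results = pool.map(screen_batch, work_units, [target] * len(work_units))
    scored = [hit for batch in results for hit in batch]
    top_hits = sorted(scored, key=lambda pair: pair[1])[:5]
    print("Top (toy) hits:", top_hits)
```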

Validation leads us to the second trend: opening up to the wider science community methods previously available only to discipline experts. Such access demands rigorous validation and smart packaging. Many methods will not suit such requirements, but some are ideal. For example, pharmacophore modelling, conformational analysis, and biological activity prediction can be bundled to provide tools enabling medicinal chemists to evaluate potential drugs. Wider access is also enabled by hardware and operating system developments. Our recent surveys show modellers moving away from their traditional reliance on proprietary UNIX workstations towards Windows and Linux. Vendors are responding by delivering solutions on these cheaper, more readily available, and open systems. This is a key factor in our third trend, integration.

Over the next decade, research organisations will demand less reliance on proprietary and stand-alone software. They will want tools that integrate with each other and with corporate systems. For example, a pharmaceutical company wants its geneticists to use gene-sequence analysis to identify interesting protein sequences and then pass these, via the company database, to structural biologists and modellers. These scientists solve protein structures and characterise targets, in turn making this information available so that medicinal chemists can plan their lead compound chemistry with computational and experimental methods. Of course, in global organisations with thousands of scientists the workflow is nowhere near this simple, and so knowledge-management tools will be needed to collate and connect such diverse data and information across the discovery process.
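As an illustrative sketch of that hand-off, the snippet below models each target as a single shared record that successive groups enrich, rather than as files locked inside stand-alone applications. The class, field, and identifier names are hypothetical, chosen only to mirror the workflow described above.

```python
# Hypothetical sketch of an integrated discovery workflow: each discipline
# reads from and writes to the same shared record. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class TargetRecord:
    gene_id: str
    protein_sequence: str = ""
    structure_pdb: str = ""                  # e.g. path or ID of a solved structure
    candidate_compounds: list[str] = field(default_factory=list)

class DiscoveryDB:
    """Stand-in for the corporate database that every group shares."""
    def __init__(self) -> None:
        self._records: dict[str, TargetRecord] = {}
    def put(self, rec: TargetRecord) -> None:
        self._records[rec.gene_id] = rec
    def get(self, gene_id: str) -> TargetRecord:
        return self._records[gene_id]

db = DiscoveryDB()

# Genetics: sequence analysis flags an interesting protein.
db.put(TargetRecord(gene_id="GENE0001", protein_sequence="MKT...LLE"))

# Structural biology / modelling: attach a solved structure to the same record.
rec = db.get("GENE0001")
rec.structure_pdb = "structures/GENE0001_model.pdb"

# Medicinal chemistry: plan lead compounds against the characterised target.
rec.candidate_compounds.append("CPD-042")
print(rec)
```

In practice the 'database' would be the organisation's own informatics platform; the point is simply that each discipline extends the same record rather than exporting and re-importing files between silos.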

Nanotechnology
No area brings home the inevitability of these trends more strongly than nanotechnology. Overhyped? Perhaps. I predict that in 10 years' time our health will not depend upon tiny robots in our bloodstreams. However, billions of dollars' worth of new products, from flat-screen televisions to sports equipment, will depend on molecular engineering. This demands a deeper understanding of molecular processes, driving development of methods, such as quantum mechanical calculations, that deliver such insight. It requires a wider application of such methods downstream in the engineering process, as molecular behaviour determines more device properties. Finally, nanotechnology means integration. The term was invented as shorthand to describe a highly interdisciplinary effort. Nanotechnology will need today's chemistry, biology, and materials science software to connect more effectively, and to work alongside engineering systems.

Don't look for a sudden 'big bang', but when Scientific Computing World celebrates its twentieth anniversary, expect us to be living in a more nano-centric world in which computation is a key driver of 'joined-up' research.

Dr Scott Kahn is Chief Science Officer at Accelrys



