Integrating for success
The process of drug discovery is long and expensive. It can take 12-15 years for a new drug to be approved for medicinal use, and along the way hundreds of thousands of potential compounds are considered.
A couple of decades ago the idea of high-throughput screening began to gain popularity. The idea is to test a vast array of chemical compounds and see how they interact with a particular protein, for example. As Robert Scoffin, CEO of UK-based Cresset BioMolecular Discovery, explained: ‘When you have no information you don’t have anything to hang a design on, so you want a diverse range of options. The argument from the mid 1990s until about four or five years ago was to just screen everything. The more drug-like the better, but at the end of the day without any information you are just screening by catalogue.’
And this is a tall order, as he went on to note: ‘You can probably buy 10-15 million compounds and could make many millions more, so even screening 100,000 is just scratching the surface. The reality is that there is so much chemistry out there and so many possibilities that you are fishing in a very big pond.’
With such a scattergun approach the resulting hit rate is inevitably very low. What’s more, it is a costly exercise for pharmaceutical companies that have to buy or make every compound they screen. More recently, however, this approach has become more effective thanks to the increased role of computational tools in the process.
For Sander Nabuurs, head of computational chemistry at the Netherlands-based pharmaceutical startup Lead Pharma, computational tools such as molecular modelling are essential for the drug discovery process. ‘Our company is relatively small – 32 people – but there is a substantial effort in computational chemistry because we believe it can make a big difference and that it needs to be as integrated as possible,’ he explained.
Indeed, for a company only four years old and small compared with the pharmaceutical giants, blanket screening hundreds of thousands of compounds in the hope that a small percentage might prove promising against ageing-associated diseases – the company’s focus – is impossible.
‘Using computational models to screen can really help. We use in-house virtual screening to select promising compounds. We can screen a one-million-compound library of very diverse molecules and identify subsets of potentially bioactive molecules. These “in silico” hits will then be moved forward,’ he explained. ‘It allows us as a small company to screen a much larger library than we really could with wet screening.’
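The triage Nabuurs describes – scoring a large virtual library cheaply and carrying only a promising subset forward to wet screening – can be sketched in a few lines. The compounds, scores, and cut-off below are invented for illustration; a real pipeline would use docking, pharmacophore, or QSAR models to produce the scores.

```python
# Illustrative sketch of in silico triage: rank a virtual library by a
# predicted-activity score and keep only the top hits for wet screening.
# Compound names and scores are invented placeholders.

def virtual_screen(library, score_fn, top_n):
    """Rank compounds by predicted activity; return the top_n in silico hits."""
    scored = sorted(((score_fn(c), c) for c in library), reverse=True)
    return [compound for _, compound in scored[:top_n]]

# Toy library: compound name -> hypothetical predicted-activity score.
library = {
    "cmpd-001": 0.91,
    "cmpd-002": 0.12,
    "cmpd-003": 0.78,
    "cmpd-004": 0.55,
}

hits = virtual_screen(library, score_fn=library.get, top_n=2)
print(hits)  # → ['cmpd-001', 'cmpd-003']
```

The point of the sketch is the economics Nabuurs mentions: the scoring step is cheap enough to run over a million-compound library, while only the short list of hits incurs the cost of synthesis and assay.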
In addition, virtual screening enables companies to screen compounds that have not yet been made and this can be used to guide the strategies for medicinal chemists. ‘In lead optimisation we typically use docking methods to prioritise what to synthesise from newly-designed libraries,’ Nabuurs commented.
Elmar Krieger, founder of YASARA Biosciences in Austria, agreed on the benefits of computational techniques: ‘Being able to investigate and manipulate virtual biomolecules on a computer screen, atom by atom, has become an essential tool for drug discovery. Even though the required approximations lead to a sort of “reality gap” (meaning that in silico predictions hardly ever match the real wet lab experiments exactly), none of the major pharmaceutical companies could survive without molecular modelling,’ he said.
Throughout the process
Computational techniques are used at various stages of drug discovery. As Krieger noted, the first step is in identifying the 3D structure of a suitable drug-target protein. ‘If the structure of a highly-homologous protein is known, then the target structure can be predicted in silico using “homology modelling”; otherwise one needs to solve it via X-ray crystallography,’ he said.
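Homology modelling rests on the observation that proteins with similar sequences tend to adopt similar structures, so the first question is how closely the target resembles a template of known structure. Purely as an illustration, the snippet below computes naive percent identity between two pre-aligned sequences; real pipelines use proper alignment and search tools rather than this position-by-position comparison, and the sequence fragments are made up.

```python
def percent_identity(target, template):
    """Naive position-by-position identity between two equal-length,
    pre-aligned protein sequences given as one-letter codes."""
    if len(target) != len(template):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(target, template))
    return 100.0 * matches / len(target)

# Toy aligned fragments; a real template would come from a structure database.
print(percent_identity("MKTAYIAK", "MKSAYIAR"))  # → 75.0
```

In practice a high identity to a solved structure makes homology modelling viable, while a low one pushes the project towards experimental structure determination, as Krieger notes.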
Computational tools also help in the next step of docking ligands with the target. YASARA, for example, includes a customised version of the open-source docking software AutoDock, developed at the Scripps Research Institute. Setup and analysis steps are automatic in this tool, including ‘often missed ones, like optimising the receptor hydrogen bonding network, or generating an ensemble of receptor structures to consider receptor flexibility,’ according to Krieger.
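The ensemble idea Krieger mentions – docking against several receptor conformations rather than one rigid structure – amounts to scoring the ligand against each member of the ensemble and keeping the best result. The sketch below uses placeholder energy values; AutoDock and YASARA compute physics-based binding energies rather than looking them up.

```python
def ensemble_dock(ligand, receptor_ensemble, score_fn):
    """Score one ligand against every receptor conformation and keep the
    best (lowest-energy) result, crudely approximating receptor flexibility."""
    results = [(score_fn(ligand, conf), conf) for conf in receptor_ensemble]
    return min(results, key=lambda pair: pair[0])

# Placeholder binding energies (kcal/mol) standing in for real docking runs.
fake_energies = {"conf-A": -6.2, "conf-B": -8.9, "conf-C": -7.4}
best = ensemble_dock("ligand-1", fake_energies,
                     score_fn=lambda lig, conf: fake_energies[conf])
print(best)  # → (-8.9, 'conf-B')
```

Taking the minimum over the ensemble is the simplest possible treatment; it captures why a ligand rejected by one rigid structure can still be a hit against a slightly different conformation.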
Other ways that such tools can help include refining receptor-ligand complexes, performing ‘induced fit docking’ and allowing users to modify the drugs interactively and recalculate binding energies. In addition, some tools enable users to use database knowledge about common synthetic pathways to ensure that the ligands designed by molecular modelling can be synthesised readily.
Another interesting area, according to Robert Scoffin of Cresset BioMolecular Discovery, is to allow people to scaffold hop with their compounds. This means generating a set of alternative compounds with different underlying structures but predicted to have the same biological activity. ‘Often you want a backup – a completely different chemical series in case you discover an inherent limitation with the chemical series you have been studying. Having one or more backups saves you having to go back to the start,’ he explained. ‘Our tools allow people to jump from one core chemistry to another while retaining the same biological activity.’
Molecular modelling tools are primarily used by computational chemists but they are also used in different ways by medicinal chemists. As Scoffin explained: ‘You could run a model at the start that would generate ideas for a month or so and then feed back experimental results into the process. In that case the application is not used every day, but maybe once or twice a month. At the other extreme, the application might be used every day to see why one compound is active and another is not, in order to enable chemists to understand what they are seeing on the bench.’
This reveals an interesting issue with molecular modelling tools: the products themselves are the starting point, but the ways in which they are used vary from company to company. ‘We have customers who mix the models in quite interesting ways,’ observed Scoffin. He gave the example of one company that does high-throughput screening in the traditional way, but then uses the Cresset software to recover false negatives and thereby enrich the hit rate from the confirmation screening.
‘The error bars on single-shot wet screening are pretty high and there are many possible factors for failure, such as a compound degrading or interacting badly with some component of the assay. Those compounds could be inherently positive. If you miss compounds as false negatives then you’re missing both information and leads,’ he said.
In contrast, there are also situations when wet screening reveals that a compound is promising but this was not identified with in silico screening. ‘These compounds are interesting in themselves,’ noted Scoffin. ‘Perhaps they bind in different ways.’
In the pharmaceutical industry it is important to look at the big picture, believes Sander Nabuurs of Lead Pharma. ‘We try to cover the whole spectrum of computational techniques and, early in the process, aim to look more broadly than other companies do at physicochemical properties such as solubility,’ he said.
‘A trend as far as we are concerned is to try to identify potential side effects early on to decrease the chances of a drug failing in clinical trials. We try to do molecular profiling at the early stages, looking at, for example, compound-related gene expression effects on unrelated or unwanted genes. If we do this at an early level we can steer away from undesired effects,’ added Nabuurs.
Such an approach requires detailed interaction between computational tools and the chemists using them. ‘We get the best results if we integrate experimental results with molecular modelling techniques. We gather as many experimental results as possible in setting up virtual screening techniques and keep the feedback loop alive. When we identify a set of active compounds we feed what we learn back into the models,’ said Nabuurs. ‘One should be careful about using computational tools as a black box. You need to understand how to use them.’
This is something that Danish cheminformatics company Molegro is well aware of in its product development plans. ‘What people like to do is to take the knowledge of the chemists – such as replacing a certain fragment of a molecule – and see the effect of this design change immediately,’ said René Thomsen, CEO and one of the founders of Molegro. ‘So far these techniques have been very advanced and required a lot of prior knowledge, so these types of tools are typically used by computational chemists. However, the trend is for software to move to the area of medicinal chemists, making it easier for them to use.’
Scoffin sees intelligent or iterative screening, where models are run and refined with input from experimental results, as the way to get the best from the tools. ‘The computational process has to be part of the start of the process,’ he argued. However, he identified a challenge: a communication gap between computational and synthetic chemists. ‘Techniques need to be understood across the two communities but the two groups come from different perspectives,’ he noted.
And the problem is exacerbated because of the traditional hierarchy in pharmaceutical companies, he believes. ‘Medicinal chemists traditionally progress more quickly in companies. Synthetic medicinal chemists (or biologists, pharmacologists or biochemists) – rather than computational chemists – are likely to become head of R&D in these companies, so it’s almost baked into the system that you don’t have computational chemistry in the big picture.’
Other industry changes may not help to bridge that communication gap either. Scoffin observed that cost-cutting measures have led many pharmaceutical companies to reduce their employee headcount in the area of computational chemistry. ‘Computational chemists are very expensive because they have big toys so there is an increasing trend towards taking that cost out of the running costs of the business,’ he explained.
Cresset’s business model is to provide both software and consulting services, and the company has noticed a shift towards the consulting side of its business in recent years. ‘Pharmaceutical companies still need and bring in expertise, so we drop consultants into a company at the point they need them. Often these are computational chemists who set themselves up as one-man-band consultants after being made redundant by those same pharmaceutical companies, and we can act as the business-development front end for a group of such experts,’ he said.
There are other trends too: ‘At the moment the overall market is in turmoil, with a shift in emphasis from large companies back down to small- and medium-sized companies,’ Scoffin added. ‘It is easy to look at the overall pharmaceutical market and be depressed about headcounts going down, but the research is still going on; just in a different way. Today, discovery is going on in academic and not-for-profit settings and charities, as well as pharmaceutical companies. There is a need for flexibility.’
There are still challenges and opportunities with molecular modelling tools too. According to Krieger of YASARA Biosciences: ‘Today’s docking software is already pretty good at sampling (generating plausible receptor-ligand complexes), but two major difficulties remain unsolved: First, the accurate prediction of induced fit, how the receptor adapts while binding the ligand. And second, the reliable prediction of the free energy of binding, which in reality includes many more contributions than current software tries to consider. For example, subtle entropic and quantum effects.’
‘We’ve come on in leaps and bounds in terms of sheer computational capacity and are definitely seeing the benefits of that in terms of the number of molecules we can look at and the detail, but there is still a limitation,’ agreed Scoffin of Cresset. ‘We could do quantum chemistry calculations, but we can’t do that over a series of molecules in a reasonable time frame because of computing capacity. There is still a limitation in terms of turnaround time too. The software needs to be able to provide a rapid answer, not necessarily the best answer.’
This is a challenge that Molegro has been working on with its software as well. ‘The aim is to reduce the time spent doing simulations, but at the same time keep a certain level of accuracy,’ noted René Thomsen of the company. ‘We’ve recently introduced a way to run simulations on graphic cards. This has increased the speed by a factor of 30. Previously, one model of a protein and a ligand typically took a few minutes, but with a graphics card it takes a few seconds.’
Such developments are needed as the demands on computational tools increase. ‘There are many degrees of freedom. You need a computer algorithm to search the possible solutions and a good scoring function that can rank and predict binding modes,’ Thomsen said. In addition, he identified challenges in, for example, handling the flexibility of proteins and predicting binding affinity.
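The two ingredients Thomsen names – a search algorithm over the possible solutions and a scoring function to rank them – can be illustrated with a deliberately simplified random search over a single ‘pose’ variable. Real docking optimises six-plus rigid-body degrees of freedom plus ligand torsions with far more sophisticated search methods; the toy scoring function below merely stands in for a binding-energy estimate.

```python
import random

def score(pose):
    """Toy scoring function: a smooth well with its minimum at pose = 2.0,
    standing in for a real binding-energy estimate."""
    return (pose - 2.0) ** 2 - 9.0

def random_search(score_fn, trials=10_000, lo=-10.0, hi=10.0, seed=42):
    """Sample candidate poses uniformly and keep the best-scoring one."""
    rng = random.Random(seed)
    best_pose, best_score = None, float("inf")
    for _ in range(trials):
        pose = rng.uniform(lo, hi)
        s = score_fn(pose)
        if s < best_score:
            best_pose, best_score = pose, s
    return best_pose, best_score

pose, s = random_search(score)
print(pose, s)  # best pose found lies close to 2.0, with score near -9.0
```

Even this crude search shows the trade-off Scoffin raises: more trials (or a costlier scoring function) buys accuracy at the price of turnaround time.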
Another area of interest for Thomsen is modelling water in the systems. Proteins and their potential ligands are soluble in water, but this has been a difficult thing to model accurately. Thomsen and collaborators at Aarhus University, Denmark, have recently published a paper about incorporating water molecules in protein-ligand docking.
These themes were echoed in discussions at the recent Discovery Chemistry Congress 2012 held in Munich, according to Nabuurs of Lead Pharma. ‘Impressive progress has been made,’ he said. ‘Future trends will include better integration of tools and looking more broadly than protein-ligand interaction to make the transition from animal trials to humans successfully.’
‘I don’t see clear limitations with computational tools per se,’ he concluded. ‘It’s up to the creativity of the person using the tools to get the most out of them and use them in a way that produces something useful.’