
The future of pharmaceutical development: Optimising drug design with quantum computing

Dr Gian-Luca Romano Anselmetti

Quantum computers have made significant progress over the last few decades, evolving from experimental novelties in scientific labs to large-scale industrial efforts aimed at developing a machine capable of tackling problems currently intractable for classical computers. The selection of problem types where we can expect an advantage from transitioning from classical to quantum computing is still modest. However, drug design is usually among the select few with industrial relevance where quantum computers could make a difference. What problems can already be translated into a quantum algorithm that scales favourably over current classical methods, and what further research is needed to make the dream of quantum computers helping to develop a new treatment a reality?

Dr Gian-Luca Romano Anselmetti obtained his PhD in physics from the University of Cologne, developing quantum algorithms for chemistry and error mitigation. Before joining Boehringer Ingelheim, he held previous affiliations with Covestro and Microsoft Quantum.

How is Boehringer Ingelheim planning to use quantum computing?

Anselmetti: There's a team of five of us inside the company working on a wide range of projects. These range from sketching out how large a machine one would need to tackle the industry-relevant problems we have inside our company, to going further and developing the algorithms themselves, showing other scientists where our needs lie, what kind of projects would be interesting to us, and what information to extract from these kinds of calculations. To kick us off: what's the main problem a pharmaceutical company has? I would argue it's drug design.

All these processes start with a target, which is usually associated with some molecular pocket related to a disease. Sometimes it's inside your body, sometimes it sits on a parasite or a virus, and so on. And your "only task" is to find a molecule that fits inside this pocket, binds to it and acts as an inhibitor, and doesn't kill you in the process. Sounds easy enough.

It turns out to be a very hard problem, and every little bit helps in finding the right molecule for each target. Typically, the drugs sold in the world today are small organic molecules, made up of tens of atoms. You usually don't have particularly exotic atoms in them: a couple of hydrogens, a couple of oxygens, and some carbons.

What is the existing process for drug discovery?

Anselmetti: The motivation to find new ways of designing drugs is that the cost of developing new drugs has exploded over the last decades, even when adjusted for inflation. So there is a need to bring more cost-effective ways of finding new drugs to the market, and quantum computing is proposed as one way of bending that cost curve back down.

This whole pipeline is very costly. It is usually very long, about 10 years, and nowadays costs about 2 billion from start to finish. You start by finding the target you're interested in and then, out of the roughly 10⁶⁰ drug-like molecules that are in principle possible, you have to narrow the search down, step by step, to a handful of molecules you can send into clinical trials. In doing so, you try to leverage every technology available to you, including chemistry, biology and materials science. Everything that can be used will be utilised in the search for the right molecule. Viewed step by step, the process up until the clinical trials begins by imaging tissues to gain insight into how the disease works.

Then chemistry happens, which we will come to in a second, where we are concerned with the dynamics of binding. How does the potential drug bind to this target? Does it float away? We don't know. And then, how strongly does it bind, which is the binding affinity prediction. Further along, we delve into understanding how drugs pass through the body: how they're absorbed, pass through, and at some point get excreted. And at the very end: are there any major side effects, or could we have predicted side effects when looking at these drugs back when we started out?

The group has existed since 2020, focusing on chemistry as it seemed the most natural way to integrate quantum computing into this pipeline, given the clear use cases and guarantees that quantum computing would have an edge over classical computers.

So when you're interested in modelling these systems on a computer, you are working at different length scales, trading accuracy against system size: the higher the accuracy you need, the fewer atoms you can treat in your system.

At the very far end, when you want the exact solution to your problem, you can only treat a couple of handfuls of atoms. Then there's coupled cluster, which is usually referred to as the gold standard in chemistry, but it also only allows you to expand the number of atoms you can treat by a bit.

Then there's density functional theory (DFT), which is one of the major workhorses, especially in the pharmaceutical industry, as it allows for the treatment of even larger systems. And then there are empirical methods, which are computationally inexpensive but also crude. Sometimes that's good enough, allowing you to model large chunks of your molecular system. Quantum computers will be slow in some respects, because their clock cycles are currently around 1,000 times slower than those of current classical computers.
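To make the trade-off concrete, here is a small, purely illustrative sketch in Python. The scaling exponents (exponential for the exact solution, roughly N^7 for coupled cluster, roughly N^3 for DFT, roughly linear for empirical force fields) are textbook ballpark figures, not numbers taken from the interview.

```python
# Illustrative only: rough asymptotic cost scalings for the method hierarchy
# discussed above. The exponents are textbook ballpark figures.

def relative_cost(method: str, n_basis: int) -> float:
    """Very rough relative cost as a function of system / basis-set size."""
    if method == "exact":          # exact solution: exponential wall
        return 2.0 ** n_basis
    if method == "CCSD(T)":        # 'gold standard' coupled cluster: ~N^7
        return float(n_basis ** 7)
    if method == "DFT":            # density functional theory: ~N^3
        return float(n_basis ** 3)
    if method == "empirical":      # force fields: roughly linear
        return float(n_basis)
    raise ValueError(method)

for n in (10, 50, 200):
    row = {m: relative_cost(m, n) for m in ("exact", "CCSD(T)", "DFT", "empirical")}
    print(n, {m: f"{c:.1e}" for m, c in row.items()})
```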

How can quantum computing make a difference?

Anselmetti: Back in the day, with Google's hardware, you needed around 4 million physical qubits, which then corresponded to more than 1,000 logical qubits and had a runtime of three days, which was too slow for competitive modelling at the time. However, it was the first stick in the ground that provided an estimate of how expensive these calculations could become for problems of industrially relevant size.
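As a rough back-of-envelope (an editorial illustration, not a calculation from the interview), the jump from millions of physical qubits to around a thousand logical qubits is consistent with each logical qubit being encoded in a surface-code-style patch of roughly 2d² physical qubits:

```python
# Back-of-envelope sketch (assumption: surface-code-style encoding with
# ~2 * d^2 physical qubits per logical qubit). Not from the interview.

import math

physical_qubits = 4_000_000
logical_qubits = 1_000

physical_per_logical = physical_qubits / logical_qubits   # ~4000
code_distance = math.sqrt(physical_per_logical / 2)        # d from 2 * d^2
print(f"physical qubits per logical qubit: {physical_per_logical:.0f}")
print(f"implied code distance d ~ {code_distance:.0f}")
```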

The main issue with this is that it is quite a lengthy process, because the ground-state energy calculation it represents is not an application per se. Typically, on the classical side, this calculation is run millions of times to enable dynamics and modelling of how things work. To know whether these two systems bind or not, you have to compute thermodynamic properties, which is much more expensive; the energy calculation is just a routine used inside those calculations.
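To see why a single energy evaluation is only a subroutine, consider the toy sketch below. The `ground_state_energy` function is a hypothetical placeholder for whatever solver supplies the energy (DFT, coupled cluster or, one day, a quantum computer); the point is simply that a free-energy or binding estimate consumes many such calls.

```python
# Toy sketch: a binding-style free-energy estimate consumes many energy calls.
# `ground_state_energy` is a hypothetical placeholder; the numbers are stand-ins.

import math
import random

K_B_T = 0.593  # roughly kT in kcal/mol at 298 K

def ground_state_energy(configuration) -> float:
    """Placeholder: pretend each call is an expensive energy calculation."""
    return random.gauss(mu=-10.0, sigma=1.0)

def free_energy(configurations) -> float:
    """Boltzmann-weighted free energy over sampled configurations."""
    energies = [ground_state_energy(c) for c in configurations]   # many calls
    z = sum(math.exp(-e / K_B_T) for e in energies)
    return -K_B_T * math.log(z / len(energies))

samples = range(1_000)   # thousands to millions of samples in practice
delta_g = free_energy(samples) - free_energy(samples)
print(f"toy binding free-energy estimate: {delta_g:+.2f} kcal/mol")
```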

So what we then did was take this first estimate and optimise it, because we knew there were different knobs you could turn. One of the main knobs is how the problem is represented: as in classical computing, the problem is modelled by a Hamiltonian matrix. The task then becomes massaging this matrix into another one that, if done carefully, still encodes the same problem you had before, but at a lower cost to your quantum computer. One thing you can do is called double factorisation, where you use tricks from linear algebra to find a matrix that still encodes the same problem.
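The sketch below illustrates the double-factorisation idea in plain NumPy on a random stand-in for the two-electron integral tensor: factorise the tensor once, eigendecompose each resulting piece again, and truncate small eigenvalues. It is a simplified illustration of the linear-algebra trick, not the exact procedure used in the resource estimates discussed here.

```python
# Simplified double-factorisation sketch on a random stand-in tensor.
# Real electronic-structure integrals have more symmetry; this only shows
# the two rounds of eigendecomposition plus truncation.

import numpy as np

rng = np.random.default_rng(0)
n = 6                                     # number of orbitals (toy size)

# Symmetric positive-semidefinite stand-in for the two-electron tensor,
# reshaped as an (n*n, n*n) matrix.
a = rng.standard_normal((n * n, n * n))
V = a @ a.T

# First factorisation: V ~ sum_t w_t * u_t u_t^T (eigendecomposition),
# dropping numerically negligible terms.
w, U = np.linalg.eigh(V)
keep = w > 1e-8 * w.max()
w, U = w[keep], U[:, keep]

leaves_per_term = []
for t in range(len(w)):
    L_t = U[:, t].reshape(n, n)           # one-body-like matrix for this term
    L_t = 0.5 * (L_t + L_t.T)             # symmetrise (real integrals already are)
    # Second factorisation: eigendecompose each L_t and truncate again.
    lam, _ = np.linalg.eigh(L_t)
    keep_t = np.abs(lam) > 1e-6 * np.abs(lam).max()
    leaves_per_term.append(int(keep_t.sum()))

print("terms kept after first factorisation:", len(w))
print("eigenvalues kept per term after second factorisation:", leaves_per_term)
```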

On the other side, you can also find more innovative ways of compiling these algorithms. One is known as "active volume", a technique developed with PsiQuantum. They utilise a photonic architecture, which allows them to make more connections between their individual elements: it's easier to route another fibre-optic cable to another box than, for example, to create another connection on a chip.

So by exploiting how heavily their systems are interconnected compared with others, they can structure their problems in different ways. Take quantum addition as a circuit, which would typically scale quadratically because it requires back-and-forth operations. In their approach to compiling this algorithm, however, the scaling is closer to linear, resulting in a significant saving.
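The toy comparison below captures that scaling argument. The counts (about n qubits occupied for about n time steps in a conventional volume count, versus only a handful of qubits busy at each step in an active-volume-style count) are a simplified reading of the point above, not figures from the active-volume papers.

```python
# Toy scaling comparison for an n-bit ripple-carry-style adder (illustrative
# counts only). Conventional circuit volume = qubits x depth grows ~n^2,
# while counting only the qubits that are busy at each step grows ~n.

def circuit_volume(n_bits: int) -> int:
    qubits = 2 * n_bits + 1        # inputs plus carry (rough count)
    depth = n_bits                 # carries ripple through one bit at a time
    return qubits * depth          # ~O(n^2)

def active_volume(n_bits: int) -> int:
    busy_per_step = 3              # only a few qubits interact at each step
    depth = n_bits
    return busy_per_step * depth   # ~O(n)

for n in (8, 64, 1024):
    print(n, circuit_volume(n), active_volume(n))
```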

And then you can combine these two techniques to achieve another 108x speed-up in the runtime of these systems, reducing the time to 14 hours or even one hour, depending on the size of the factory you want to build.

We've seen the size of the system they want to build, and it sits right between the two shown here. You still need a large system to host these calculations, but at least in terms of runtime we have gained another factor of 108 in three years. That is still too long, but it is the direction we want to go in. As we continue to shave more and more orders of magnitude off the runtime of these calculations, it becomes increasingly feasible for us to compete with current classical computing.

Gian-Luca Romano Anselmetti is a Quantum Computing Scientist at Boehringer Ingelheim
 
