At University College London (UCL), Professor Peter Coveney holds a chair in Physical Chemistry, is Director of the Centre for Computational Science (CCS), is an Associate Director of the Centre for Advanced Research Computing, and is an Honorary Professor in Computer Science. He is a Professor in Applied High Performance Computing at the University of Amsterdam (UvA) and Professor Adjunct at Yale University School of Medicine (USA).
Coveney's research spans a broad range of interdisciplinary fields, including condensed matter physics and chemistry, materials science, and the life and medical sciences, in all of which high-performance computing plays a significant role. He has led numerous large-scale projects, including the EPSRC RealityGrid e-Science Pilot Project (2001-2005), its extension as a Platform Grant (2005-2009), and the EU FP7 Virtual Physiological Human (VPH) Network of Excellence (2008-2013).
What are the current limitations of quantum computing?
Coveney: The scale of electronic structure calculations feasible on current or near-term quantum hardware is constrained by several inherent limitations, including coherence time, qubit count and connectivity, and device noise.
All these limitations taken together severely restrict the number of qubits that can be put to work constructively for chemical applications. While we have access to quantum computing devices with 100 or more qubits, only a fraction of these can be utilised effectively. However, if those resources are targeted at a key sub-component of a molecular system, they can still be of significant value. This motivates the integration of conventional HPC resources with quantum processing units to address problems of scientific interest in which only that key sub-component of the system is analysed on the quantum device.
My talk at ISC focused on integrating quantum computing and quantum processing units with pipelines and computing systems, not for their own sake, but to enable types of research that would not be possible otherwise. I want to convey why that's actually important today, and inevitably it's quite a challenging topic because it involves people with interesting scientific applications as well as a quantum computing company. Most of the work I'm going to discuss is being done with IQM, a Finnish/German company, collaborating directly with both us and the Leibniz Supercomputing Centre (LRZ), which is in Garching near Munich.
What is the purpose of this collaboration?
Coveney: They have quite an advanced approach to integrating quantum processing units with one of their supercomputers. The collaborators include participants from different institutions in the UK, the US, and Munich.
This particular application scenario exploits a natural affinity between the problem and the required hardware, specifically the quantum computing infrastructure, as it's fundamentally focused on advancing capabilities in quantum electronic structure calculations.
Quantum computing for quantum chemistry has been a long-standing area of research and is a significant application. Why is that? It's because conventional computers, when attempting to solve electronic structure problems for molecular systems, hit a wall of intractability very quickly. They're solving the Schrödinger equation for a many-body system. The parameter of interest here is N: you could think of it as the number of electrons in the problem or, more accurately, the number of spin orbitals used to describe the electronic configuration.

The problem is that conventional algorithms scale very badly with that number N. The most accurate calculations you can do, known as full configuration interaction, are worse than exponential in N; the cost grows factorially in N. This means that you can't obtain exact solutions to electronic structure problems unless you're working with very small molecular systems.
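As a rough illustration of that combinatorial growth, here is a minimal Python sketch (the orbital and electron counts are chosen purely for illustration) that counts the number of Slater determinants entering a full configuration interaction expansion:

```python
from math import comb

def fci_dimension(n_spatial_orbitals: int, n_alpha: int, n_beta: int) -> int:
    """Number of determinants in a full configuration interaction (FCI) expansion:
    choose the spin-up and spin-down electron occupations among the spatial orbitals."""
    return comb(n_spatial_orbitals, n_alpha) * comb(n_spatial_orbitals, n_beta)

# Illustrative values only: even modest basis sets grow combinatorially.
for n_orb, n_elec in [(10, 10), (20, 20), (30, 30)]:
    dim = fci_dimension(n_orb, n_elec // 2, n_elec // 2)
    print(f"{n_orb} spatial orbitals, {n_elec} electrons -> {dim:.2e} determinants")
```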
There's a whole hierarchy of approximations and so-called levels of theory that guide the field in its attempt to examine systems of interest. And what kind of systems might we be concerned with? Most quantum computing applications for quantum chemistry have been limited to examining gas-phase molecules and small molecules. But how small are we talking about? Well, if it's a gas-phase molecule, it's not usually very large. The type of molecule that might interest a pharmaceutical company is a small molecule, which is a small organic compound that can have tens of atoms in it. The number of electrons becomes extremely large, rendering a full configuration interaction calculation impractical. We have to use approximate methods.
How can quantum computing solve this problem?
Coveney: Quantum computing is intended to solve larger problems of this kind in contexts where they are intractable on conventional computers. Our main challenge in electronic structure theory is to understand electron correlation, which refers to how electrons interact with one another and create structures within molecules, thereby determining their behaviour.
You can think of cases where you have a single covalent bond between two nuclei; that would typically be two electrons in a bond. But there are cases where you have more electrons: you could have a double bond with four electrons, or, to take a nitrogen molecule, a triple bond with six electrons crammed between the nuclei, and the correlation effects there become very important to capture.
If you don't perform the calculations accurately, you won't obtain meaningful results. But the problem is that this accuracy comes with a high computational cost, and the hope is that you can deal with it on a quantum computer more effectively than you can on a conventional machine.
The reason for this, which I wasn't intending to dwell on particularly, is that, because the molecular system is quantum mechanical, you represent its state by a wave function. That wave function has to be represented on a conventional, or in the jargon, classical computer in terms of two to the power N coefficients, where N, as I mentioned earlier, is the number of spin orbitals. So the storage is exponential in N, and you can't handle molecules of any real size very effectively, whereas on a quantum device the same Hilbert space can be represented with a number of qubits that grows only linearly in N, so you're suddenly going from an exponentially large problem to something that can be described linearly.
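To put rough numbers on that exponential storage cost, here is a small illustrative Python snippet (assuming one complex double, 16 bytes, per amplitude) comparing the classical memory needed for a full state vector with the linear qubit count on a quantum device:

```python
# Classical memory to store a full state vector over N spin orbitals
# (2**N complex amplitudes at 16 bytes each) versus the qubit count,
# which grows only linearly in N on a quantum device.
for n in (30, 40, 50, 60):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"N = {n}: {amplitudes:.1e} amplitudes ~ {gib:,.0f} GiB classically vs {n} qubits")
```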
What are the limitations of today’s quantum systems?
Coveney: That sounds good, and that's what all the excitement about quantum computing revolves around. But the reality is that today's computers, in this NISQ (noisy intermediate-scale quantum) era, carry a very high degree of noise and unreliability, which rapidly prevents you from exploiting that potential benefit. So you have a huge amount of noise. You need to control the noise. You need to mitigate all the errors that take place on these qubit devices. And there's also a problem with the coherence time, the time you have available while the wave function is in its superposition state, when you can exploit the quantum parallelism; that can often be in the microsecond domain or less.

So your calculations can never last very long, and you have to do a large number of them. Additionally, designers of quantum devices choose qubit topologies based on what is beneficial for them to build, but those topologies are not optimal for the computations you're interested in. As a result, the design of the topology can, on its own, hinder the application.
And this is what often happens. We are in the situation where we have access to the entire IBM Cloud, which now has 156-qubit chips, the Heron chips, with a plan for IBM to take qubit counts into the thousands by using quantum interconnects between those chips, in the sense we are used to with conventional computers. But the reality is that, at the moment, we can't make more than a small handful of the qubits on a single chip play a tune.

So it doesn't matter if you've got a 127-qubit device; you can't make use of most of those qubits together. The fidelity is so low that the struggle is to reduce the size of the calculation, in terms of the number of qubits needed, to get it onto the device, while constantly controlling the noise and dealing with error mitigation.

So there's this problem, which is paradoxical and ironic: you've got this quantum device, but what you want to do is touch it, as it were, as infrequently as you possibly can, and make sure that when you do use it, you're using it without being overwhelmed by the noise, and that means the quantum capabilities are rather diminished.
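As one concrete example of the kind of error mitigation referred to here (offered purely as an illustration; the interview does not specify which techniques the group uses), zero-noise extrapolation runs the same circuit at deliberately amplified noise levels and extrapolates the measured expectation value back to the zero-noise limit. A minimal sketch with made-up measurement values:

```python
import numpy as np

# Zero-noise extrapolation, schematically: measure an observable at several
# artificially amplified noise levels, fit a low-order polynomial in the noise
# scale, and evaluate the fit at zero to estimate the noiseless value.
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])       # noise amplification factors
measured = np.array([-1.02, -0.95, -0.88, -0.74])   # hypothetical noisy expectation values

coeffs = np.polyfit(noise_scales, measured, deg=2)  # quadratic fit
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"Extrapolated zero-noise value: {zero_noise_estimate:.3f}")
```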
However, if you attempt to apply those quantum capabilities to an interesting problem that involves more than a single gas-phase molecule, you will need to interact with a conventional computer, which can handle that scale of problem much more effectively.

Quantum hardware on its own in a cloud is unable to support larger and more interesting scientific problems. You have to shift data, and you have all the associated problems, including the difficulty of moving data from the classical computer to the remote site, latency issues, and so on. Many people who start using a quantum computer simply have a laptop and attempt to run qubit jobs on, for example, the IBM Cloud or a public version of it, which restricts their capabilities.
How can HPC help to accelerate quantum computing?
Coveney: What we're after is where the two things come together closely. I would say that LRZ is one of the world leaders in trying to make this sort of thing work with the IQM devices that I spoke about earlier. The work I'm referring to here involves 20-qubit devices, but IQM has now developed a 54-qubit series of devices called Emerald.
The interesting thing about the IQM architecture is that it has a square-lattice topology, whereas IBM's heavy-hex topology restricts, as I mentioned earlier, the capability to perform longer-range computations with more qubits. And the IQM chip is one we can link to the SuperMUC-NG machine located at the LRZ facility.
One of the problems we've been interested in is understanding how protons hop between water molecules in bulk water. Note, I'm not talking about a single water molecule anymore. I'm talking about a group of these molecules and atoms within them.
This scale of calculation is amenable to a quantum computer, as I described, but it sits in the context of the bulk fluid, because it is water molecules clubbing together, through the interactions they exhibit, that make water what it is, not individual molecules on their own, and some of those interactions we connect to through the multiscale element. If you're into multiscale modelling and simulation, you'll be familiar with the idea that you can connect different levels of, as it were, physical representation of a system. You do some parts of the problem in more detail, others in less, and you couple the two together, and that's the spirit in which we're now adding a quantum computational component.

In the situation I described, I have a proton that is hopping between two water molecules. We can take that out of the broader quantum mechanical representation, which is performed on a classical machine using density functional theory, and do a wave function calculation of what that proton is doing as it moves across.
That's where the quantum computer excels: in representing that wave function effectively. We can sample the wave function on the quantum computer, retrieve the sampled results, and perform the rest of the calculation on the classical device.
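A deliberately schematic sketch of that kind of coupled loop is shown below in Python; every function name and body here is a hypothetical placeholder standing in for the real DFT code, the QPU sampling step, and the data exchange between them, not the group's actual software:

```python
import random

def classical_dft_step(state):
    """Placeholder: advance the bulk-water calculation on the conventional HPC system."""
    state["embedding"] = 0.1 * state["step"]  # mock embedding potential felt by the proton
    return state

def sample_proton_wavefunction_on_qpu(embedding):
    """Placeholder: prepare and sample the proton sub-problem on the quantum device."""
    return [embedding + random.gauss(0.0, 0.05) for _ in range(100)]  # mock samples

def fold_back(state, samples):
    """Placeholder: feed the sampled results back into the classical calculation."""
    state["proton_energy"] = sum(samples) / len(samples)
    return state

state = {"step": 0}
for step in range(3):  # a few coupled classical/quantum iterations
    state["step"] = step
    state = classical_dft_step(state)
    samples = sample_proton_wavefunction_on_qpu(state["embedding"])
    state = fold_back(state, samples)
    print(f"step {step}: mock proton energy = {state['proton_energy']:.3f}")
```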
Professor Peter Coveney is Director of the Centre for Computational Science at UCL and Professor in Applied High Performance Computing at the University of Amsterdam.