Andrew Childs is a professor in the Department of Computer Science and the Institute for Advanced Computer Studies (UMIACS) at the University of Maryland, College Park. He serves as interim director of UMIACS and was co-director of the Joint Center for Quantum Information and Computer Science (QuICS) from 2014 to 2024. Currently, he directs the NSF Quantum Leap Challenge Institute for Robust Quantum Simulation, a prestigious national initiative advancing quantum simulation technologies.
Childs has been recognised for his contributions to quantum simulation and Hamiltonian dynamics. His work has focused on developing fast quantum algorithms for simulating Hamiltonian dynamics, which describes how quantum systems evolve in time. His research includes efficient simulation methods based on truncated Taylor series approximations, as well as time-dependent Hamiltonian simulation with optimal scaling.
Before joining the University of Maryland, Childs was a DuBridge Postdoctoral Scholar at Caltech from 2004 to 2007 and held faculty positions in Combinatorics & Optimization and the Institute for Quantum Computing at the University of Waterloo from 2007 to 2014. He received his doctorate in physics from MIT in 2004.
In 2024, Childs was awarded the Kirwan Faculty Research and Scholarship Prize, recognising his contributions to quantum computing research. His most recent work includes developing optimal routing protocols for quantum atom arrays and advancing quantum symmetrisation techniques.
How did you first get involved with quantum computing and algorithm development?
Childs: I am based in the Computer Science Department at the University of Maryland (UMD), where we have a quantum computing centre. I came to UMD about 10 or 11 years ago, when the centre was just being established, to help set it up. It is a joint centre with the National Institute of Standards and Technology (NIST). Their interests include frequency standards, time standards, and cryptography, all of which are connected to quantum computing. That was the origin of the centre.
I first became interested in quantum computing as an undergraduate student, more than 25 years ago. This was not long after the discovery of Shor's factoring algorithm in the mid-1990s. I was studying at Caltech, which had an active group working on quantum computing. It was one of the most exciting places to work in the field at the time. I became involved there and have been working in quantum computing ever since.
My interest in the subject comes largely from my background. I was originally a physics student as an undergraduate, and my PhD is also in physics; however, I have always been interested in mathematics and related fields. At Caltech, researchers such as Jeff Kimble and others were already exploring experimental approaches to quantum information processing, and that environment had a strong influence on me.
As technology advances, will the use cases increase, or are there specific types of mathematical problems that are suited to quantum computing?
Childs: Looking ahead, I hope to see more applications of quantum computing. It is difficult to predict exactly, but one important point I emphasised in my talk is that quantum computers require structure to deliver significant speed-ups.
Results show that for certain types of problems, such as unstructured search, the best possible improvement is only quadratic. This is what Grover's algorithm achieves: a square-root speed-up, which is helpful but relatively modest. For such an advantage to be worthwhile in practice, a quantum computer would need to be extremely fast, reliable and able to handle very large problems.
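A minimal sketch of this square-root behaviour, assuming a plain NumPy statevector simulation with an arbitrarily chosen search-space size and marked item:

```python
import numpy as np

# Minimal statevector sketch of Grover search on N = 2**n items.
# The marked index below is arbitrary; it only illustrates the scaling.
n = 10                      # number of qubits
N = 2 ** n                  # size of the search space
marked = 137                # hypothetical marked item

state = np.full(N, 1 / np.sqrt(N))             # uniform superposition

optimal = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) * sqrt(N) iterations
for _ in range(optimal):
    state[marked] *= -1                        # oracle: flip the marked amplitude
    state = 2 * state.mean() - state           # diffusion: reflect about the mean

print(f"N = {N}, iterations = {optimal}, "
      f"success probability = {abs(state[marked])**2:.3f}")
# Roughly sqrt(N) iterations suffice, versus ~N/2 classical queries on average.
```

Doubling the number of qubits quadruples the search space but only doubles the number of Grover iterations, which is the quadratic advantage in action.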
Much of the field is therefore focused on problems where it may be possible to obtain exponential or, at the very least, super-polynomial speed-ups. These require a special structure, and while we are aware of a few examples where that structure exists and can be exploited, we lack a general understanding of what kinds of structure lead to a significant quantum advantage.
This makes it a broad and difficult question. As a result, quantum computing will likely remain useful mainly for specialised purposes, although it is too early to say definitively.
Another point I emphasised is that our understanding of the power of quantum computers primarily comes from what can be proven mathematically. Unlike in classical computing, we cannot simply create an algorithm, test it on large-scale instances, and observe whether it performs better in practice, because current quantum devices are still small and limited. They are improving, and we may soon enter a regime where they surpass classical capabilities, but we are not yet at the stage where large-scale experimentation is possible.
In classical computing, many techniques are adopted because they work in practice, not because they are backed by strong theoretical guarantees. Machine learning is a clear example: large models produce highly useful results even though there are no theorems proving their performance. If we had access to much larger quantum computers, we might well discover new use cases and behaviours that theory alone has not predicted. Whether this will be the case remains uncertain, but without the ability to test large systems directly, our knowledge is necessarily limited.
What I find most interesting about quantum computing is the abundance of mathematical challenges. It connects with many different areas of mathematics and provides a rich setting in which to explore a wide variety of questions. That is what I enjoy most.
It is not that I am indifferent to the prospect of building a quantum computer. On the contrary, that is an extraordinary and important challenge, and achieving it would have many fascinating consequences. However, I am equally satisfied working on the mathematical problems that arise independently of whether such a machine exists. Quantum computing provides a compelling framework for thinking about information processing, naturally generating a wealth of problems to explore.
What are your current areas of research?
Childs: I have several research threads. One area I am particularly interested in is simulating quantum mechanics with quantum computers. This was one of the earliest proposed applications of quantum computing, and perhaps remains the most natural: using these devices to simulate physics in a highly controlled way. There are many fascinating questions here, such as how to design algorithms that run as quickly as possible and what happens when they are applied to specific systems. This is a major focus of my work.
I am also interested in more fundamental questions about the power of quantum computers, particularly within the model of quantum query complexity. This model is appealing because it is very precise: it allows one to analyse problems in a setting where tight characterisations are possible. In the standard model, where an algorithm is given input data and asked to compute something, proving lower bounds is notoriously difficult. By contrast, in the query model one can determine how many queries are required to solve a problem, giving a clear measure of computational speed. I am pursuing a number of projects in this area.
Another line of research concerns the movement of quantum information under locality constraints. Consider a quantum processor in which some qubits are directly connected while others are far apart on the chip. If one wants to perform a gate between distant qubits, it is possible in principle to move the information, perform the gate, and move it back. The challenge is to determine the most efficient way to do this, given the geometry of the system and the allowed interactions. This raises a host of interesting questions about how best to transport and manipulate quantum information under such physical constraints.
If we think broadly about what can be computed efficiently, that is, within polynomial time, then the cost of moving qubits around only introduces relatively modest overheads. In this view, it does not make much difference whether the qubits are arranged on a line with only nearest neighbour interactions, or whether all qubits can interact directly. These variations only affect the running time by polynomial factors. However, if the goal is to optimise algorithms and make them as fast as possible, then such details become crucial. I tend to consider problems from both perspectives.
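A minimal sketch of the "move, interact, move back" idea, assuming three qubits on a line where only nearest-neighbour gates are allowed; the NumPy check below confirms that conjugating a nearest-neighbour CNOT with SWAPs reproduces the long-range gate:

```python
import numpy as np

# Three qubits on a line (0 - 1 - 2); only nearest-neighbour gates are allowed.
I = np.eye(2)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
CNOT = np.array([[1, 0, 0, 0],      # control on the first qubit of the pair
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

swap01 = np.kron(SWAP, I)           # SWAP on qubits 0 and 1
cnot12 = np.kron(I, CNOT)           # CNOT with control 1, target 2

# Route qubit 0 next to qubit 2, apply the gate, route back.
routed = swap01 @ cnot12 @ swap01

# Direct long-range CNOT with control 0 and target 2 (qubit 1 untouched).
cnot02 = np.zeros((8, 8))
for c in (0, 1):
    for m in (0, 1):
        for t in (0, 1):
            out = t ^ c                               # target flips iff control is 1
            cnot02[c * 4 + m * 2 + out, c * 4 + m * 2 + t] = 1

print(np.allclose(routed, cnot02))   # True: routing reproduces the long-range gate
```

On a line of n qubits the same trick costs O(n) SWAPs per long-range gate, a polynomial overhead that is irrelevant for complexity-theoretic questions but very relevant when optimising running time.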
What research milestones are you most excited about?
Childs: One particularly exciting development has been the creation of algorithms for simulating Hamiltonian dynamics with much better scaling in terms of error. The earliest methods had overheads that grew polynomially with 1/ε, where ε is the error. In contrast, newer approaches achieve running times that scale polynomially with log(1/ε), which is exponentially faster. This breakthrough led to an entire class of simulation algorithms that have since been generalised and developed into a new way of thinking about Hamiltonian simulation and related problems.
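A rough numerical illustration of why a truncated Taylor series gives this improved error scaling, assuming a randomly generated Hamiltonian as a stand-in (the quantum algorithms implement the series coherently; the classical snippet below only shows how quickly the truncation error falls):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Random Hermitian "Hamiltonian" as a stand-in for a system of interest.
dim, t = 16, 1.0
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
H /= np.linalg.norm(H, 2)           # normalise so that ||H t|| = 1

exact = expm(-1j * H * t)

# Truncate exp(-iHt) = sum_k (-iHt)^k / k! at order K and watch the error fall.
for K in (2, 4, 6, 8, 10):
    term = np.eye(dim, dtype=complex)
    approx = np.eye(dim, dtype=complex)
    for k in range(1, K + 1):
        term = term @ (-1j * H * t) / k
        approx += term
    err = np.linalg.norm(approx - exact, 2)
    print(f"K = {K:2d}   error ~ {err:.1e}")
# The error decays roughly like 1/(K+1)!, so K = O(log(1/eps)) terms reach accuracy eps.
```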
Is it an exciting time to be in quantum computing? It seems like there is a lot of new ground to cover?
Childs: Quantum computing is still a relatively young field. In some sense there is more low-hanging fruit to be found, although much of it has already been picked as the field matures. Over time, the research has become more technical, with papers growing longer and more detailed. Yet alongside this trend, there remain broad and open questions. It is still possible that a simple idea could lead to a dramatic new quantum speed-up and an entirely new class of algorithms. That possibility continues to make the field very exciting.
You are known for your work using fast quantum algorithms to simulate Hamiltonian dynamics. What are the biggest algorithmic bottlenecks when simulating complex quantum systems?
Childs: One of the most challenging aspects of Hamiltonian simulation, in terms of limiting the potential advantage from quantum computers, is the existence of highly effective classical algorithms for many problems of interest. In quantum chemistry, for example, although I am not a computational chemist myself, my understanding is that many of the problems practitioners care about can already be addressed successfully with classical methods. Density functional theory and other heuristic approaches may not work efficiently in every case, but they are remarkably effective for a wide range of problems of practical importance.
This makes it essential to identify simulation problems that are both scientifically valuable and resistant to classical techniques. The challenge lies in combining an understanding of what quantum computers can offer with expertise in the target domain, whether chemistry, materials science, nuclear physics, or another field. Only by integrating both perspectives can we pinpoint problems where quantum simulation might provide a genuine advantage.
At the same time, we are still trying to understand how quantum computers perform on specific problems. For any given system to be simulated, there are many possible methods: a variety of algorithms, numerous variants, and different approaches to simulating the dynamics. When fermionic systems are involved, for example, they must be mapped onto qubits, and the choice of encoding interacts with the choice of simulation algorithm. This, in turn, affects how efficiently the algorithm can be compiled into gates.
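To make the encoding step concrete, here is a minimal sketch assuming the standard Jordan-Wigner transformation, one common choice of fermion-to-qubit mapping; it builds the encoded operators for three modes and checks the fermionic anticommutation relations numerically:

```python
import numpy as np
from functools import reduce

# Jordan-Wigner encoding: one standard way to map fermionic modes onto qubits.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

def annihilation(j, n):
    """Encoded annihilation operator a_j on n modes: a Z-string, then (X + iY)/2."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I] * (n - j - 1)
    return kron_all(ops)

n = 3
a = [annihilation(j, n) for j in range(n)]

# Check the canonical anticommutation relations {a_i, a_j^dag} = delta_ij.
ok = True
for i in range(n):
    for j in range(n):
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        expected = np.eye(2 ** n) if i == j else np.zeros((2 ** n, 2 ** n))
        ok &= np.allclose(anti, expected)
print("anticommutation relations hold:", ok)
```

Other encodings, such as Bravyi-Kitaev, trade the length of these Z-strings against other costs, which is part of why the choice of encoding interacts with the choice of simulation algorithm.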
These interdependencies mean that even for a fixed system, determining the most efficient simulation strategy is far from straightforward. It is not uncommon to see papers estimating the gate counts required for a particular simulation, followed by later work that reduces the estimate by several orders of magnitude. This illustrates both the early stage of the field and the enormous scope for progress. It also shows that we still have significant work to do before achieving implementations robust and efficient enough for widespread use. At present, much of this progress is still being made in a pencil-and-paper fashion, which highlights the limitations of our current tools but also underlines the potential for major advances once larger devices become available.
Could you explain how quantum signal processing can help reduce the resource overhead for simulators?
Childs: Quantum signal processing is closely connected with the development of Hamiltonian simulation algorithms that achieve better scaling with error. The earliest approaches were based on product formulas, which approximate time evolution by breaking it into a sequence of evolutions according to elementary terms. The error can be reduced by simulating those terms for shorter and shorter intervals, but the scaling with error is polynomial in 1/ε.
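A minimal numerical sketch of a first-order product formula, assuming two randomly generated non-commuting terms as stand-ins for the pieces of a Hamiltonian; the error falls off roughly as t²/r as the number of time slices r grows:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(dim):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (A + A.conj().T) / 2

# Two non-commuting terms of a toy Hamiltonian H = A + B (stand-ins only).
dim, t = 8, 1.0
A, B = random_hermitian(dim), random_hermitian(dim)

exact = expm(-1j * (A + B) * t)

# First-order Lie-Trotter formula: (e^{-iAt/r} e^{-iBt/r})^r.
for r in (1, 10, 100, 1000):
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    approx = np.linalg.matrix_power(step, r)
    err = np.linalg.norm(approx - exact, 2)
    print(f"r = {r:5d}   error ~ {err:.1e}")
# The error shrinks like t^2/r, so reaching accuracy eps needs r = O(t^2 / eps) slices.
```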
Later, more advanced methods were developed that achieve scaling polynomial in log(1/ε). These methods work by directly implementing the Taylor series of the exponential function, providing a mathematically very different way of simulating dynamics. Building on this, the idea of quantum signal processing was introduced as a powerful and general framework. Roughly speaking, it enables transformations of operators that represent quantum dynamics by encoding spectral information into a qubit and then manipulating that spectrum to perform the desired evolution.
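The single-qubit core of quantum signal processing can be sketched directly. Assuming the commonly used "Wx" convention, interleaving a fixed rotation that encodes a signal x with chosen z-rotations produces a bounded polynomial of x in a matrix element; with trivial phases that polynomial is a Chebyshev polynomial, which the snippet below checks numerically:

```python
import numpy as np

# Single-qubit core of quantum signal processing (QSP), in the common "Wx" convention.
def signal(x):
    """Signal rotation W(x) = exp(i*arccos(x)*X): encodes x in a qubit rotation."""
    return np.array([[x, 1j * np.sqrt(1 - x ** 2)],
                     [1j * np.sqrt(1 - x ** 2), x]])

def qsp_response(phases, x):
    """<0| e^{i phi_0 Z} prod_k W(x) e^{i phi_k Z} |0> -- a degree-d polynomial in x."""
    def zrot(phi):
        return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    U = zrot(phases[0])
    for phi in phases[1:]:
        U = U @ signal(x) @ zrot(phi)
    return U[0, 0]

# With all phases zero, d applications of W(x) give the Chebyshev polynomial T_d(x).
d = 5
xs = np.linspace(-1, 1, 7)
qsp_vals = np.array([qsp_response(np.zeros(d + 1), x) for x in xs])
cheb_vals = np.cos(d * np.arccos(xs))
print(np.allclose(qsp_vals.real, cheb_vals))   # True
```

Choosing non-trivial phases selects other bounded polynomials of x, and applying the same construction to an operator whose spectral information has been encoded into a qubit is what allows the spectrum to be transformed into the desired evolution.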
Quantum signal processing has proved to be a remarkably versatile technique, not only for Hamiltonian simulation but also for a variety of other quantum algorithms. However, its practical value for simulation is still uncertain. Although it offers asymptotic improvements in error scaling compared with simpler methods, it introduces additional overhead. Whether this trade-off is beneficial in practice depends on the scale at which one aims to apply the method. This is part of what makes it difficult to determine the best approach to Hamiltonian simulation. One must weigh the advantages of product formulas, quantum signal processing, and their many variants in order to identify which methods yield the most efficient circuits in practice. The result is a complex and still-evolving landscape.