A quantum leap

Quantum technology is going through a period of rapid development, with several technologies driving the adoption of this emerging computing framework, finds Robert Roe

One of the biggest stumbling blocks in the development of quantum hardware is with the qubits themselves. This varies depending on the underlying technology used to create the qubits, but they are often error-prone and difficult to control, making quantum computers unstable and highly complex systems.

Advancing the technology requires larger quantum computers that can be scaled up and integrated with the cloud or existing classical computing systems. Scale is, therefore, of paramount importance in delivering real-world scientific insight.

There are many ways to build these systems, depending on the type of technology used. Universal Quantum, for example, is trying to develop the world’s first million-qubit quantum computer using a technology called ‘trapped ions’.

Dr Luuk Earl, quantum engineer at Universal Quantum, highlighted the path the company has taken since its inception in 2018. ‘It’s a company that spun out from a research group at the University of Sussex. Two senior scientists, Professor Winfried Hensinger and Dr Sebastian Weidt, decided that the research they were doing was promising for quantum computing. They formed this company with a bit of venture capital funding. 

‘The real aim is to make quantum computers that can solve real-world interesting problems,’ Earl continued. ‘That is the main difference between other quantum computing companies and us at the moment. We’re just focusing on that big-scale stuff. We’re not interested in making toy models that can do some interesting science but aren’t going to impact humanity. We’re focused on that point where quantum computing is useful to everyone, which is a big challenge.’ 

And that is why a million qubit system is such an important goal, as this is the point at which some scientists and researchers believe that quantum computing systems will start to impact science and engineering. Earl noted that scientists at Universal Quantum had done some modelling of the resources required to solve particular problems. 

‘One of them is synthesising a particular chemical for fertiliser. They’ve done some simulations of how many resources you need to do the quantum chemistry for that to make an efficient process. And that kind of comes out around a million qubits. 

‘Of course, there’s always more to the story,’ stressed Earl. ‘We trade off three things in a real-world application. One of them is the number of qubits, which is, of course, always important. There are also the error rates and coherence time. So if you have really good error rates, you can probably get by with fewer qubits; if you have long coherence times, you can do with a few fewer qubits. So it’s a bit of a rough number. But that’s the kind of order of magnitude where quantum computers become interesting.’
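The trade-off Earl describes can be made concrete with a toy back-of-the-envelope calculation. This is an illustration only, not Universal Quantum’s actual resource model: it assumes surface-code-style error correction, where a lower physical error rate permits a smaller code distance, which in turn means fewer physical qubits per logical qubit. The threshold, prefactor and qubit counts below are assumed round numbers.

```python
def physical_qubits_needed(logical_qubits, p_phys, p_target=1e-12, p_th=1e-2):
    """Rough surface-code-style estimate: pick the smallest odd code
    distance d whose per-qubit logical error rate falls below the
    target, then count roughly 2*d^2 physical qubits per logical qubit.
    All constants are illustrative assumptions."""
    d = 3
    # Logical error per qubit shrinks as ~0.1 * (p/p_th)^((d+1)/2)
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return logical_qubits * 2 * d * d

# A ten-fold improvement in physical error rate sharply reduces the
# total physical-qubit budget -- the trade-off Earl describes.
print(physical_qubits_needed(100, 1e-3))
print(physical_qubits_needed(100, 1e-4))
```

The point of the sketch is the direction of the effect, not the absolute numbers: better error rates and longer coherence shrink the overhead, so the ‘million qubits’ figure is an order-of-magnitude target rather than a hard threshold.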

Trapped ion quantum systems

Universal Quantum is developing its quantum computers based on trapped ion technology. A simple explanation would be that ions are trapped and precisely controlled using electromagnetic fields. Each ion levitates above the surface of a silicon microchip. The idea behind trapped ion systems is the ions are relatively easy to control, as they are all precisely the same shape and size.

‘A trapped ion system is what you’d initially think of if you think of a quantum computer,’ said Earl. ‘The qubits you use are naturally quantum systems; an atom is the closest thing we can come to a single quantum bit, whereas a superconducting qubit is more of an analogue of a qubit.

‘The big benefit is that every qubit is identical because every atom is identical to every other atom, the energy levels are very well defined, and especially with trapped ions, we can control the position and environment of those atoms very precisely,’ Earl continued. ‘By tuning electrode voltages, you can move the qubits around the surface of a chip very precisely; this means we have ultimate control over what these qubits are doing and how they’re interacting.’

Another important consideration that goes into developing these systems is the availability and price of the components used to control the qubits. For example, in the Universal Quantum system, the company uses lasers and microwaves to control the qubits. 

Earl noted it is important to focus on developing systems with technology that is available today. ‘From the work we’ve done at Sussex, and the preliminary work we’ve done at Universal Quantum already, we think we can build million-qubit machines using technology that already exists. We’re all very passionate about making real-world impactful systems as soon as possible. The quickest way to do that is using technology that already exists.’

Quantum accelerators

Founded in 2019, Quantum Brilliance has a very different take on the development of quantum computing. The venture-backed company develops quantum computers using a diamond substrate to help boost the reliability of the qubits and increase coherence time. The goal of Quantum Brilliance is to enable mass deployment of quantum technology to propel industries to harness edge and supercomputing applications.

The first generation of the company’s technology has already been installed at the Pawsey Supercomputing Centre, which is exploring how this technology might be used alongside high performance computing (HPC) systems in the future.

Mark Mattingley-Scott, managing director, EMEA for Quantum Brilliance, explains why the company opted for this radically different quantum technology. ‘What diamond does is give you coherence for free. So you get qubits if you make qubits in diamond the right way. They maintain quantum coherence, even at room temperature. What it means is all the stuff you have to do with other quantum computing technologies, like keep it cold, or ensure it’s under a really high vacuum, or use precise lasers to get photons aligned, all those things fall away.’

This is an important distinction from other quantum systems, as it means the Quantum Brilliance prototype systems can be smaller and more easily integrated with existing computing systems. They operate at room temperature and do not require complex systems or advanced cooling.

‘I was actually at the Pawsey Supercomputing Centre this afternoon and I saw our quantum computer,’ Mattingley-Scott said. ‘It’s a 6U device, so it’s a little bit higher than the standard 19-inch rack unit. That’s the first generation of machines. Inside this device is a small piece of diamond with the qubits on it, and some simple optoelectronics to interact with that – stuff you would find in a 5G antenna mast. We’re working on miniaturising that, and we’re pretty sure we can get down to a graphics-card-size accelerator within the next few years.

‘Once you’ve got something like a graphics accelerator, then you’re in the same world as a normal Graphics Processing Unit (GPU) or Tensor Processing Unit (TPU),’ Mattingley-Scott added. ‘You can start to put these things in large quantities in a standard compute environment.’ Another benefit to diamond-based quantum systems is they can be used for edge computing systems, such as in robotics and autonomous vehicles.

This is relevant to HPC and research centres in general because it allows quantum computing to be more easily integrated with classical computing architectures. This could help drive adoption of the technology and allow more scientists and researchers to get access to quantum technology.

But scaling these systems to the point of mass adoption is still some way off, and significant challenges face today’s quantum computing developers. One is getting to the point where there are enough qubits to provide a measurable performance improvement. The second is how to integrate a quantum computer with classical computing. While Quantum Brilliance wants to connect Quantum Processing Units (QPUs) to classical systems directly, some other organisations want to connect these systems via the cloud.

However, Mattingley-Scott thinks this is a mistake. ‘Most companies are using the cloud, so they’re looking at a cloud hybrid execution model. We believe – and I think history bears this out – the QPU needs to be physically as close to the other classical compute devices, like CPUs and GPUs, as possible. 

‘We envisage a future in which you’ll go into your computing centre and pull a blade out – maybe it’ll be a CPU blade, maybe it’ll be a GPU blade, or maybe it’ll be a hybrid, and it’ll hopefully be a Quantum Brilliance QPU sat next to AMD or Nvidia or Intel CPUs and GPUs.

‘Quantum computing must operate in a quantum-classical hybrid – it has to be the case,’ Mattingley-Scott continued. ‘If you talk to almost all the hardware vendors, there will not be isolated quantum computers churning away doing stuff, and then delivering their results, at least for the foreseeable future – the next few decades. It is all going to be hybrid.

‘In which case, bite the bullet and put your QPU actually in an accelerator card. Next to the GPU, next to the CPU. And then you’re not worried about data throughput, latency times and interaction times,’ Mattingley-Scott concluded.

Quantum in the cloud

Quantinuum, on the other hand, is a company that has embraced the use of the cloud to help facilitate access to its prototype quantum systems. Quantinuum’s H1 generation of quantum computers is already commercially available. The H1 generation, currently consisting of two computers, the H1-1 and the H1-2, is fully accessible over the cloud and compatible with a variety of software frameworks.

Tony Uttley, president and chief operating officer at Quantinuum, highlights the company’s position as both a hardware and a software provider. ‘Quantinuum is the combination of Cambridge Quantum with Honeywell Quantum Solutions. Honeywell Quantum Solutions did a lot of work directly with the products Cambridge Quantum Computing was making.

‘What we found as we were working together as separate companies was that most people who are developing hardware will abstract away from the metal layer,’ Uttley explained. ‘They will make a separation to protect IP, and you can’t get the full integrated benefit if you have that separation layer. We realised we could make fully integrated solutions, based upon the application layer on top of the middleware on top of our hardware.’

However, although the platform is based on the integration of two distinct companies, they also chose to make the software platform inclusive. ‘The applications, the operating system that we develop, is designed to work on everybody’s hardware,’ Uttley said. ‘And as a real practical example, we are one of the biggest users of IBM’s quantum computers in the world. IBM is also an investor in Quantinuum.’

While the hardware stack continues to mature, scientists and researchers are now getting access to software development tools to create quantum algorithms and quantum simulators, or emulators, that allow them to simulate how a quantum computer might work in a classical system. This allows researchers to develop expertise and test out how applications might benefit them in the future.

‘A lot of the algorithmic work is in imagining this future where you don’t have to worry about qubits and how they interact. Because all of that has been “taken care of” by universal fault tolerance,’ said Uttley. ‘That’s a decade away. So the key is, what do you do in the intervening time? How do you make progress? Can you do things with some of these earlier systems? And the answer is, yes you can.

‘However, this requires users to begin to think about their problems differently,’ stresses Uttley. ‘What I mean by that is, don’t think about what the problem is and how you abstract that into the system. You need to think about what these systems are good at, and how do I use that for these kinds of problems?’

Not all qubits are created equally

One critical aspect of the development of these quantum systems, particularly in the early days when coherence and error rates make these systems relatively unstable, is that different hardware architectures are suited to different problems. A simple example would be a large number of qubits with a short coherence time versus a much smaller number of qubits with a long coherence time. Another factor is the amount of communication required between qubits.

‘If you’re trying to simulate a molecule, then ostensibly it depends on how that molecule is shaped, believe it or not,’ Uttley said. ‘This is because superconducting quantum computers and semiconducting ones are similar, in that they have an architectural property called “nearest neighbour”, which arises because the qubits are physically manufactured in fixed positions on a piece of silicon.’

Uttley gave an example of a grid of qubits in which each qubit can easily communicate with its nearest neighbours, but where communications from one side of the grid to the other take much longer and add additional error to the system.

‘There are molecules where the shape of the molecule itself is kind of a nearest neighbour interaction,’ Uttley continued. ‘A nearest neighbour molecule running on a nearest neighbour quantum computer actually can work pretty effectively. But if you have complex molecules, where you need these qubits to talk to every other qubit arbitrarily, then our trapped ion hardware works well. This is because we can physically transport our qubits so that any one can talk to any other one without introducing any additional error. 

‘It’s that kind of deep knowledge about both the problem and the way these systems work that allows us to know which hardware or platform will be most suitable for a given problem,’ Uttley concluded.
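The connectivity contrast Uttley describes can be sketched with a toy routing count. This is an illustration only, not any vendor’s actual compiler: on a nearest-neighbour grid, making two distant qubits interact requires SWAP operations proportional to the distance between them, each adding its own gate error, whereas an architecture with all-to-all connectivity, such as trapped ions that can be physically shuttled, needs none.

```python
def swap_overhead_grid(q1, q2):
    """SWAPs needed on a square nearest-neighbour grid to bring the
    qubits at coordinates q1 and q2 adjacent: one per step of
    Manhattan distance beyond the first."""
    (r1, c1), (r2, c2) = q1, q2
    dist = abs(r1 - r2) + abs(c1 - c2)
    return max(dist - 1, 0)

def swap_overhead_all_to_all(q1, q2):
    """All-to-all connectivity (e.g. trapped-ion shuttling): any
    qubit can interact with any other directly, so no routing
    SWAPs are needed."""
    return 0

# Opposite corners of a 10x10 grid: 17 SWAPs on the grid, none with
# all-to-all connectivity -- each SWAP adds its own gate error.
print(swap_overhead_grid((0, 0), (9, 9)))        # 17
print(swap_overhead_all_to_all((0, 0), (9, 9)))  # 0
```

This is why, as Uttley notes, a ‘nearest neighbour’ molecule maps well onto a nearest-neighbour chip, while molecules needing arbitrary qubit-to-qubit interactions favour hardware where that routing overhead vanishes.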