3 min read
Nathan Korda : Mar 24, 2022 9:47:36 AM
(A version of this piece was originally published in techUK as part of #QuantumWeek2021.)
Why are quantum computing devices powerful?
Quantum computing (QC) devices offer an entirely new way to perform computations, driven by quantum mechanics. If we can make them work at a large enough scale, they could herald new ages in drug discovery and materials design - and they could crack many of today's most widely used encryption methods. But what makes these devices so powerful?
All classical (normal) computing devices perform computations by flicking switches off and on. Each switch represents the smallest quantity of information possible - a bit - the answer to a single yes/no question. It is either off, representing "no", or on, representing "yes". So at any point in a computation, a classical 10-bit device holds a single set of answers to 10 yes/no questions.
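As a toy illustration (the register value below is arbitrary), a classical register holds exactly one definite bit pattern at any instant:

```python
# A classical 10-bit register: 2**10 = 1024 possible configurations,
# but at any instant it holds exactly one of them.
n_bits = 10
register = 0b1011001110  # one specific set of answers to 10 yes/no questions
print(f"{2**n_bits} possible states; the register holds just one: {register:010b}")
```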
You’ve probably heard how parallelising large computations can make them faster. This works by breaking the large computation down into smaller computations which can all be done at the same time. If you want to parallelise a computation in a classical computer, you need to have more bits (switches) to do the different, smaller computations simultaneously - it takes a larger computer, but you can do the whole problem faster. At the heart of the promise of QC is a super-charged form of parallelisation, called quantum parallelism, which doesn’t require the use of more hardware.
Quantum bits, or qubits, are like switches, but they can be to some degree off and to some degree on at the same time - a single qubit can hold both possible answers to one yes/no question at once. This is called a quantum superposition. Things get really interesting when you have more than one qubit. In theory, 10 qubits can represent, in some way, every possible set of answers to 10 yes/no questions, all at once. Remember, a classical computer with 10 bits can represent only one set of yes/no answers. Just 300 qubits should be able to represent more possible sets of answers than there are atoms in the observable universe: 2^300 is roughly 10^90, against an estimated 10^80 atoms. If we are clever enough about it, this property of QC allows parallel computations without the need for extra qubits. The advantage is big: QC devices can be exponentially more efficient than classical computing devices, and can do computations that are just too big to perform on any practical classical computer.
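To make the counting concrete, here is a minimal state-vector sketch in Python. It is an illustration only: a classical simulation must store all 2^n amplitudes explicitly, which is exactly why simulating more than a few dozen qubits classically becomes impossible.

```python
import numpy as np

# n qubits are described by 2**n complex amplitudes. Applying a Hadamard
# gate to every qubit puts the register into an equal superposition over
# all 2**n bit strings at once.
n_qubits = 10
state = np.zeros(2**n_qubits, dtype=complex)
state[0] = 1.0  # start in the all-zeros state |00...0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate
for q in range(n_qubits):
    # Reshape so the middle axis is the target qubit, apply H, restore shape.
    state = state.reshape(2**q, 2, 2**(n_qubits - q - 1))
    state = np.einsum('ij,ajb->aib', H, state).reshape(-1)

print(len(state))      # 1024 amplitudes - one per possible 10-bit answer set
print(abs(state[0]))   # each amplitude is 1/sqrt(1024) ~ 0.03125
```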
How do quantum computers work, and what is the calibration problem?
To create quantum superpositions that are usable for computation, QC devices need to robustly and precisely control a quantum property of a quantum object: for example, the spins of individual electrons. To realise truly transformational quantum parallelism, they must be able to control superposition states that are fully entangled across hundreds of qubits.
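For a concrete picture of what "fully entangled" means, one canonical example (our choice, not one named in the original piece) is the n-qubit GHZ state, a superposition in which all the qubits are perfectly correlated:

```latex
% An n-qubit GHZ state: entangled across all n qubits at once.
\[
  \lvert \mathrm{GHZ}_n \rangle = \frac{1}{\sqrt{2}}
  \left( \lvert 0 0 \cdots 0 \rangle + \lvert 1 1 \cdots 1 \rangle \right)
\]
```

Measuring any single qubit of this state fixes the outcomes of all the others, which is part of what makes such states both powerful for computation and extremely fragile to maintain.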
So, QC devices require immense precision to operate. They are also difficult to manufacture without significant variation. Even worse, the control they exert over the quantum objects is nearly impossible to isolate from external environmental influences, leading to qubit decoherence. QC devices must be regularly tuned, and their ability to operate reliably must be regularly benchmarked and verified.
This is what we refer to as the calibration problem for quantum devices.
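As a hedged sketch of one small piece of this problem (the data below are synthetic, and a real lab would use a proper fitting routine): a standard way to benchmark a qubit's relaxation time T1 is to prepare its excited state, wait a variable delay, measure, and fit the observed survival probability to an exponential decay exp(-t/T1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the benchmark: prepare the excited state, wait, measure.
true_T1 = 50e-6                          # assumed relaxation time: 50 us
delays = np.linspace(5e-6, 200e-6, 20)   # wait times in seconds
shots = 1000                             # measurements per delay setting
survival = rng.binomial(shots, np.exp(-delays / true_T1)) / shots

# Fit ln(p) = -t / T1 with a least-squares line; the slope estimates -1/T1.
mask = survival > 0                      # drop settings where no shots survived
slope, _ = np.polyfit(delays[mask], np.log(survival[mask]), 1)
print(f"estimated T1 = {-1.0 / slope * 1e6:.1f} us")
```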
AI for qubit calibration
In practice, calibration is a massive blocker to the development of quantum hardware capable of entangling hundreds of qubits. It is also a major blocker to the future effective operation of, for example, cloud quantum computation services. Labs building or running quantum computers spend hours of specialist lab technicians' time calibrating their machines every day. Even Google's Sycamore device, which recently demonstrated quantum supremacy, took 24 hours to calibrate before the experiment could be run. And because calibrations drift over time, that calibration sequence has to be repeated every time the device is used.
Recent work has shown that AI can solve this problem. Whether the calibration task is minimising state preparation error, measuring qubit decoherence times, or designing the most reliable qubit gates by shaping microwave pulses, AI can learn the physical expertise of lab technicians from judiciously gathered data and apply it in automated calibration routines that run orders of magnitude faster than manual processes. If this success can be generalised and systematised across QC hardware types, we could see commercially relevant QC devices years earlier than currently projected, and put them to work solving the world's most important problems.
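The work cited above does not publish its algorithms here, so the following is an illustrative sketch only: a model-based optimiser can automate a single calibration knob, such as a microwave pulse amplitude, by fitting a Gaussian process to noisy fidelity measurements and choosing the next measurement where improvement looks most likely. The fidelity function below is synthetic, and the scikit-learn API is just one of many ways to do this.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def measured_fidelity(amplitude):
    # Stand-in for a real experiment: gate fidelity peaks at an unknown
    # optimal pulse amplitude and is observed through shot noise.
    return np.exp(-((amplitude - 0.62) / 0.15) ** 2) + rng.normal(0, 0.01)

# Seed the model with a few exploratory measurements.
X = rng.uniform(0, 1, 5).reshape(-1, 1)
y = np.array([measured_fidelity(a) for a in X.ravel()])

grid = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    # Upper-confidence-bound acquisition: measure where the model is either
    # promising (high mean) or uncertain (high std).
    next_amp = grid[np.argmax(mean + 2.0 * std)]
    X = np.vstack([X, [next_amp]])
    y = np.append(y, measured_fidelity(next_amp[0]))

print(f"best amplitude found: {X[np.argmax(y), 0]:.3f}")
```

In practice the search space is many pulse parameters at once, but the loop structure - model, acquire, measure, refit - stays the same.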
A cautionary word: a cross-disciplinary approach is required
We should remember that whenever we build AI systems to solve a problem of this magnitude, we need to take an interdisciplinary approach. We need to be close to the problem and to the experts who understand it better than anyone. Such expertise has tended to be concentrated in Big Tech companies such as Google and IBM, but the United Kingdom has all the expertise it needs to solve this problem.
A recent InnovateUK-funded project will bring this expertise together, joining leading QC and machine learning software companies, Riverlane and Mind Foundry, with cutting-edge hardware device manufacturers, Oxford Ionics and SeeQC, as well as world-leading RTOs in QC, NPL and the University of Edinburgh. You can read the press release here.