Machine Learning Gets a Quantum Speedup

“Until a few years ago, I would think that physicists and computer scientists were living in parallel worlds,” said Eleni Diamanti, a quantum communication expert at Sorbonne University in Paris who was not involved in either study. Now here they were, working together. “It’s a real change of paradigm.”

A Natural Marriage

Much of AI, and machine learning in particular, comes down to automating, and improving on, tedious tasks. “Machine learning is about getting computers to do useful things without explicit programming,” said Vedran Dunjko, a quantum information researcher at Leiden University and a co-author of Saggio’s study. A computer can learn from photos labeled “cat” or “dog,” then quickly sort new pictures into the correct species; other algorithms find subtle patterns that help doctors diagnose cancers on medical scans.

In the past decade, researchers began to theorize how quantum computers might influence machine learning. One unique advantage of quantum computers is a phenomenon called superposition. Where classical bits each toggle between 0 and 1, “qubits” can be a complex combination of both. Quantum algorithms can use superpositions to cut down on the number of computational steps needed to arrive at a correct answer.
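The idea of a qubit being "a complex combination of both" can be made concrete with a few lines of arithmetic. The sketch below is illustrative and not drawn from either study: a single qubit's state is just a pair of complex amplitudes, and the squared magnitude of each amplitude gives the probability of measuring 0 or 1.

```python
import math

# A minimal numeric sketch of one qubit in superposition: two complex
# amplitudes, one for the |0> outcome and one for the |1> outcome.
amp0 = complex(1 / math.sqrt(2), 0)  # amplitude on |0>
amp1 = complex(0, 1 / math.sqrt(2))  # amplitude on |1>

# Squared magnitudes give measurement probabilities, which must sum to 1.
p0, p1 = abs(amp0) ** 2, abs(amp1) ** 2
print(round(p0, 2), round(p1, 2))  # each outcome is equally likely: 0.5 0.5
```

A classical bit would force `p0` or `p1` to be exactly 1; the qubit's freedom to spread amplitude across both outcomes is what quantum algorithms exploit to cut down computational steps.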

Some machine learning tasks, it turns out, are uniquely suited for this kind of work. In 2013, two studies showed how quantum computers could speed up some “unsupervised” learning tasks, where the algorithm must discover patterns on its own. The approach was promising, but theoretical, and impossible to carry out with the tech of the time. “Lots of these machine learning protocols require technology that is getting there, but is not available yet,” said Diamanti.

Researchers think of quantum computing not as a tool that completely replaces classical computing, but as one that complements it. Each type of computer has its strengths, and researchers expect to gain an edge if they can find the particular areas where quantum computers excel. The goal now is to find algorithms that use quantum physics to solve problems in a different way — a better way — than a classical computer can. And getting quantum computers to outlearn traditional machines means finding AI problems that boil down to mathematical operations naturally suited to quantum physics.

“Rather than forcefully trying to take on your biggest problem,” said Kristan Temme, a physicist with IBM, researchers should find opportunities that “end up being more in the subtle details.” Finding those natural marriages between the math of AI and the physics of quantum computing is the key to real-life quantum machine learning.

Kernel Trickery

Temme speaks from experience. In 2019, his team at IBM found what they considered a prime example of a problem-solving method compatible with quantum physics — a sort of trick used in statistics, involving something called kernels.

A kernel is a measure of how related two data points are with respect to a particular feature. Think of a simple data set containing three items: BLUE, RED and ORANGE. If you examine them as colors, RED and ORANGE are neighbors. But if you look at the number of characters, BLUE sits between RED and ORANGE. Kernels are like lenses that allow an algorithm to classify data in different ways to find patterns that help distinguish future inputs. Implementing them is a trick to recast information in a new light, Temme said, allowing you to zero in on strong relationships otherwise hidden in data.
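The BLUE/RED/ORANGE example can be sketched directly in code. This is a toy illustration, not the team's algorithm: the hue values and the similarity function below are made up for the example, but they show how the same three items compare differently depending on which "lens" the kernel looks through.

```python
words = ["BLUE", "RED", "ORANGE"]

# Lens 1: position on a color wheel, in degrees (rough, illustrative values).
hue = {"BLUE": 240, "RED": 0, "ORANGE": 30}

# Lens 2: the number of characters in each word.
length = {w: len(w) for w in words}

def kernel(feature, a, b):
    """A toy similarity score that decays as the feature values move apart."""
    return 1 / (1 + abs(feature[a] - feature[b]))

# Under the color lens, RED and ORANGE are close neighbors...
print(kernel(hue, "RED", "ORANGE") > kernel(hue, "RED", "BLUE"))  # True

# ...but under the length lens, BLUE (4 letters) sits between
# RED (3) and ORANGE (6), so BLUE and RED are the closer pair.
print(kernel(length, "BLUE", "RED") > kernel(length, "RED", "ORANGE"))  # True
```

Swapping the feature passed to `kernel` is the "recasting in a new light" Temme describes: the data never changes, only the measure of relatedness does.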

Kernels have no inherent connection to quantum physics. But quantum computers manipulate data in a similar way, so Temme suspected that his team could design a quantum algorithm for kernels. And for supervised learning problems in particular — where the system learns from a set of labeled data — the combo could excel at learning and applying patterns.

Temme, along with his IBM colleague Srinivasan Arunachalam and Yunchao Liu, an intern from the University of California, Berkeley, set out to prove that a quantum kernel algorithm could eclipse a classical one. In the summer of 2020 they went back and forth over Zoom, drawing diagrams and speculating about how to use the kernel trick to prove that quantum computers can boost supervised learning. “Those were really heated debates,” Temme said. “We’re all looking at each other in those little blue boxes.” Finally, they landed on a way to make the kernels shine.

Cryptographers sometimes use one-way math operations — ones that easily output a number but can't feasibly be reverse-engineered to reveal the inputs. For example, the “discrete logarithm” problem depends on a particular operation that takes two numbers — we’ll call them a and x — and returns results that bounce around unpredictably as a and x change. (The operation raises a to the xth power, divides it by some other number n, and keeps the remainder — a calculation known as modular exponentiation.) Classical computers have no known efficient way to work backward from the string of outputs to find x.
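The forward direction of this operation is easy to compute; it is only the reverse that is believed to be hard. The sketch below uses small illustrative numbers (a = 5, n = 23, not values from the article) to show how the outputs bounce around as x increases, even though each one takes just a single call to Python's built-in three-argument `pow`.

```python
# Modular exponentiation: a**x mod n, the one-way operation behind
# the discrete logarithm. Going forward is cheap; recovering x from
# an output is believed to be classically hard for large numbers.
a, n = 5, 23  # small illustrative base and (prime) modulus

outputs = [pow(a, x, n) for x in range(1, 11)]
print(outputs)  # successive powers of 5 mod 23 jump around unpredictably
```

Running this prints `[5, 2, 10, 4, 20, 8, 17, 16, 11, 9]` — there is no obvious order, yet each value is fully determined by x. Finding the pattern hidden in sequences like this is exactly the task Temme's team handed to their quantum kernel.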

Temme and his team showed how, by using quantum kernels, one can learn to find the pattern hiding in the seemingly random output produced by the discrete log problem. The technique uses kernels and superpositions to both reinterpret the data points and quickly estimate how they compare to one another. Initially the data appears random, but the quantum approach finds the right “lens” to reveal its pattern. Data points that share some key trait no longer appear randomly distributed, but come together as neighbors. By making these connections, the quantum kernels help the system learn how to classify the data.
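A purely classical analogy (not the quantum algorithm itself) can show what "finding the right lens" means. In the toy data below, no single threshold on the raw values separates the two classes — but under the feature map x → x², the classes split cleanly, just as the quantum kernel reorganizes seemingly random discrete-log outputs into neighborhoods. The data, labels, and threshold are all invented for illustration.

```python
# Toy 1-D data: class "A" points sit on the outside, class "B" inside,
# so no single cutoff on x separates them.
points = [-3, -2, -1, 1, 2, 3]
labels = ["A", "B", "B", "B", "B", "A"]

# Apply the "lens" phi(x) = x**2. In this new space the classes
# untangle, and a simple threshold at 5 classifies every point.
phi = [x ** 2 for x in points]
predicted = ["A" if v > 5 else "B" for v in phi]

print(predicted == labels)  # the lens makes the hidden pattern separable: True
```

Kernel methods do this without ever computing the new coordinates explicitly — they only need the pairwise similarities — and the quantum version uses superposition to estimate those similarities quickly.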