Quantum Computers Cross Critical Error Threshold

In a first, researchers have shown that adding more “qubits” to a quantum computer can make it more resilient. It’s an essential step on the long road to practical applications.

Introduction

How do you construct a perfect machine out of imperfect parts? That’s the central challenge for researchers building quantum computers. The trouble is that their elementary building blocks, called qubits, are exceedingly sensitive to disturbance from the outside world. Today’s prototype quantum computers are too error-prone to do anything useful.

In the 1990s, researchers worked out the theoretical foundations for a way to overcome these errors, called quantum error correction. The key idea was to coax a cluster of physical qubits to work together as a single high-quality “logical qubit.” The computer would then use many such logical qubits to perform calculations. They’d make that perfect machine by transmuting many faulty components into fewer reliable ones.

“That’s really the only path that we know of toward building a large-scale quantum computer,” said Michael Newman, an error-correction researcher at Google Quantum AI.

This computational alchemy has its limits. If the physical qubits are too failure-prone, error correction is counterproductive — adding more physical qubits will make the logical qubits worse, not better. But if the error rate goes below a specific threshold, the balance tips: The more physical qubits you add, the more resilient each logical qubit becomes.

Now, in a paper published today in Nature, Newman and his colleagues at Google Quantum AI have finally crossed the threshold. They transformed a group of physical qubits into a single logical qubit, then showed that as they added more physical qubits to the group, the logical qubit’s error rate dropped sharply.

A Google Quantum AI researcher works on Google’s superconducting quantum computer.

“The whole story hinges on that kind of scaling,” said David Hayes, a physicist at the quantum computing company Quantinuum. “It’s really exciting to see that become a reality.”

Majority Rules

The simplest version of error correction works on ordinary “classical” computers, which represent information as a string of bits, or 0s and 1s. Any random glitch that flips the value of a bit will cause an error.

You can guard against errors by spreading information across multiple bits. The most basic approach is to rewrite each 0 as 000 and each 1 as 111. Any time the three bits in a group don’t all have the same value, you’ll know an error has occurred, and a majority vote will fix the faulty bit.
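For readers who like to see it concretely, here is a minimal sketch (in Python, not from the article) of the three-bit repetition code: the encoder copies a bit three times, and a majority vote repairs any single flip.

def encode(bit):
    # Rewrite 0 as 000 and 1 as 111.
    return [bit, bit, bit]

def decode(block):
    # Majority vote: whichever value appears at least twice wins.
    return 1 if sum(block) >= 2 else 0

codeword = encode(1)     # [1, 1, 1]
codeword[0] ^= 1         # a random glitch flips the first bit: [0, 1, 1]
print(decode(codeword))  # prints 1: the single error is corrected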

But the procedure doesn’t always work. If two bits in any triplet simultaneously suffer errors, the majority vote will return the wrong answer.

To avoid this, you could increase the number of bits in each group. A five-bit version of this “repetition code,” for example, can tolerate two errors per group. But while this larger code can handle more errors, you’ve also introduced more ways things can go wrong. The net effect is only beneficial if each individual bit’s error rate is below a specific threshold. If it’s not, then adding more bits only makes your error problem worse.
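To see why there is a threshold at all, consider a rough calculation (an illustrative sketch, assuming each bit flips independently with probability p, which the article does not spell out): a block fails when more than half of its bits flip, and the five-bit code only beats the three-bit code when p is small enough.

from math import comb

def block_failure(p, n):
    # Probability that a majority vote over n copies returns the wrong value,
    # i.e. that more than half of the n bits are flipped.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for p in (0.01, 0.10, 0.40, 0.60):
    print(f"p = {p:.2f}   3 bits: {block_failure(p, 3):.4f}   5 bits: {block_failure(p, 5):.4f}")
# Below p = 0.5, the five-bit code fails less often than the three-bit code;
# above that threshold, adding bits only makes the error problem worse.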

As usual in the quantum world, the situation is trickier. Qubits are prone to more kinds of errors than their classical cousins. They are also much harder to manipulate. Every step in a quantum computation is another source of error, as is the error-correction procedure itself. What’s more, there’s no way to measure the state of a qubit without irreversibly disturbing it — you must somehow diagnose errors without ever directly observing them. All of this means that quantum information must be handled with extreme care.

“It’s intrinsically more delicate,” said John Preskill, a quantum physicist at the California Institute of Technology. “You have to worry about everything that can go wrong.”

At first, many researchers thought quantum error correction would be impossible. They were proved wrong in the mid-1990s, when researchers devised simple examples of quantum error-correcting codes. But that only changed the prognosis from hopeless to daunting.

When researchers worked out the details, they realized they’d have to get the error rate for every operation on physical qubits below 0.01% — only one in 10,000 could go wrong. And that would just get them to the threshold. They would actually need to go well beyond that — otherwise, the logical qubits’ error rates would decrease excruciatingly slowly as more physical qubits were added, and error correction would never work in practice.

Nobody knew how to make a qubit anywhere near good enough. But as it turned out, those early codes only scratched the surface of what’s possible.

The Surface Code

In 1995, the Russian physicist Alexei Kitaev heard reports of a major theoretical breakthrough in quantum computing. The year before, the American applied mathematician Peter Shor had devised a quantum algorithm for breaking large numbers into their prime factors. Kitaev couldn’t get his hands on a copy of Shor’s paper, so he worked out his own version of the algorithm from scratch — one that turned out to be more versatile than Shor’s. Preskill was excited by the result and invited Kitaev to visit his group at Caltech.

“Alexei is really a genius,” Preskill said. “I’ve known very few people with that level of brilliance.”

That brief visit, in the spring of 1997, was extraordinarily productive. Kitaev told Preskill about two new ideas he’d been pursuing: a “topological” approach to quantum computing that wouldn’t need active error correction at all, and a quantum error-correcting code based on similar mathematics. At first, he didn’t think that code would be useful for quantum computations. Preskill was more bullish and convinced Kitaev that a slight variation of his original idea was worth pursuing.

That variation, called the surface code, is based on two overlapping grids of physical qubits. The ones in the first grid are “data” qubits. These collectively encode a single logical qubit. Those in the second are “measurement” qubits. These allow researchers to snoop for errors indirectly, without disturbing the computation.

The surface code uses a lot of qubits, but it has other advantages. Its error-checking scheme is much simpler than those of competing quantum codes. It also only involves interactions between neighboring qubits — the feature that Preskill found so appealing.
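The trick of spotting errors without reading the data has a classical analogy (a sketch only, not the quantum procedure itself): checking the parity of neighboring bits in the repetition code reveals whether they disagree, and the pattern of checks that fire pinpoints the flipped bit, without ever revealing whether the encoded value was 0 or 1.

data = [1, 1, 1]   # an encoded bit, using the repetition code from earlier
data[1] ^= 1       # an error flips the middle bit
checks = [data[0] ^ data[1], data[1] ^ data[2]]   # parity of each neighboring pair
print(checks)      # [1, 1]: both checks touching the middle bit fire,
                   # pinpointing it; the parities themselves never reveal
                   # whether the encoded value was 0 or 1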

In the years that followed, Kitaev, Preskill and a handful of colleagues fleshed out the details of the surface code. In 2006, two researchers showed that an optimized version of the code had an error threshold around 1%, 100 times higher than the thresholds of earlier quantum codes. These error rates were still out of reach for the rudimentary qubits of the mid-2000s, but they no longer seemed so unattainable.

Despite these advances, interest in the surface code remained confined to a small community of theorists — people who weren’t working with qubits in the lab. Their papers used an abstract mathematical framework foreign to the experimentalists who were.

“It was just really hard to understand what’s going on,” recalled John Martinis, a physicist at the University of California, Santa Barbara, who is one such experimentalist. “It was like me reading a string theory paper.”

In 2008, a theorist named Austin Fowler set out to change that by promoting the advantages of the surface code to experimentalists throughout the United States. After four years, he found a receptive audience in the Santa Barbara group led by Martinis. Fowler, Martinis and two other researchers wrote a 50-page paper that outlined a practical implementation of the surface code. They estimated that with enough clever engineering, they’d eventually be able to reduce the error rates of their physical qubits to 0.1%, far below the surface-code threshold. Then in principle they could scale up the size of the grid to reduce the error rate of the logical qubits to an arbitrarily low level. It was a blueprint for a full-scale quantum computer.

John Martinis (left) and Austin Fowler developed a blueprint for a quantum computer based on the surface code.

Of course, building one wouldn’t be easy. Cursory estimates suggested that a practical application of Shor’s factoring algorithm would require trillions of operations. An uncorrected error in any one would spoil the whole thing. Because of this constraint, they needed to reduce the error rate of each logical qubit to well below one in a trillion. For that they’d need a huge grid of physical qubits. The Santa Barbara group’s early estimates suggested that each logical qubit might require thousands of physical qubits.
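A back-of-the-envelope version of that reasoning (all numbers below are illustrative assumptions, not figures from the paper): if each two-step increase in code distance divides the logical error rate by a constant factor, then reaching one error in a trillion from a modest starting point requires a code distance in the tens, and hence thousands of physical qubits per logical qubit.

# Illustrative assumptions: a distance-3 logical error rate of 1% and a
# suppression factor of 4 per distance step (d -> d + 2). Neither number is
# from the article; they just show how the counts reach the thousands.
start_error = 1e-2
suppression = 4.0
target = 1e-12

d, error = 3, start_error
while error > target:
    d += 2
    error /= suppression

physical_qubits = 2 * d * d - 1   # data qubits plus measurement qubits
print(f"distance {d}: ~{error:.0e} logical error rate, {physical_qubits} physical qubits")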

“That just scared everyone,” Martinis said. “It kind of scares me too.”

But Martinis and his colleagues pressed on regardless, publishing a proof-of-principle experiment using five qubits in 2014. The result caught the eye of an executive at Google, who soon recruited Martinis to lead an in-house quantum computing research group. Before trying to wrangle thousands of qubits at once, they’d have to get the surface code working on a smaller scale. It would take a decade of painstaking experimental work to get there.

Crossing the Threshold

When you put the theory of quantum computing into practice, the first step is perhaps the most consequential: What hardware do you use? Many different physical systems can serve as qubits, and each has different strengths and weaknesses. Martinis and his colleagues specialized in so-called superconducting qubits, which are tiny electrical circuits made of superconducting metal on silicon chips. A single chip can host many qubits arranged in a grid — precisely the layout the surface code demands.

The Google Quantum AI team spent years improving their qubit design and fabrication procedures, scaling up from a handful of qubits to dozens, and honing their ability to manipulate many qubits at once. In 2021, they were finally ready to try error correction with the surface code for the first time. They knew they could build individual physical qubits with error rates below the surface-code threshold. But they had to see if those qubits could work together to make a logical qubit that was better than the sum of its parts. Specifically, they needed to show that as they scaled up the code — by using a larger patch of the physical-qubit grid to encode the logical qubit — the error rate would get lower.

They started with the smallest possible surface code, called a “distance-3” code, which uses a 3-by-3 grid of physical qubits to encode one logical qubit (plus another eight qubits for measurement, for a total of 17). Then they took one step up, to a distance-5 surface code, which has 49 total qubits. (Only odd code distances are useful.)
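Those totals follow from the surface code’s layout: a distance-d patch uses d × d data qubits plus d × d − 1 measurement qubits. A quick sketch reproduces the numbers quoted here and below.

def surface_code_qubits(d):
    # d*d data qubits plus d*d - 1 measurement qubits, as described above.
    return d * d + (d * d - 1)

for d in (3, 5, 7):
    print(f"distance {d}: {surface_code_qubits(d)} qubits in total")
# distance 3: 17, distance 5: 49, distance 7: 97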


In a 2023 paper, the team reported that the error rate of the distance-5 code was ever so slightly lower than that of the distance-3 code. It was an encouraging result, but inconclusive — they couldn’t declare victory just yet. And on a practical level, if each step up only reduces the error rate by a smidgen, scaling won’t be feasible. To make progress, they would need better qubits.

The team devoted the rest of 2023 to another round of hardware improvements. At the beginning of 2024, they had a brand-new 72-qubit chip, code-named Willow, to test out. They spent a few weeks setting up all the equipment needed to measure and manipulate qubits. Then in February, they started collecting data. A dozen researchers crowded into a conference room to watch the first results come in.

“No one was sure what was going to happen,” said Kevin Satzinger, a physicist at Google Quantum AI who co-led the effort with Newman. “There are a lot of details in getting these experiments to work.”

Then a graph popped up on the screen. The error rate for the distance-5 code wasn’t marginally lower than that of the distance-3 code. It was down by 40%. Over the following months, the team improved that number to 50%: One step up in code distance cut the logical qubit’s error rate in half.

“That was an extremely exciting time,” Satzinger said. “There was kind of an electric atmosphere in the lab.”

The team also wanted to see what would happen when they continued to scale up. But a distance-7 code would need 97 total qubits, more than the total number on their chip. In August, a new batch of 105-qubit Willow chips came out, but by then the team was approaching a hard deadline — the testing cycle for the next round of design improvements was about to begin. Satzinger began to make peace with the idea that they wouldn’t have time to run those final experiments.

“I was sort of mentally letting go of distance-7,” he said. Then, the night before the deadline, two new team members, Gabrielle Roberts and Alec Eickbusch, stayed up until 3 a.m. to get everything working well enough to collect data. When the group returned the following morning, they saw that going from a distance-5 to a distance-7 code had once again cut the logical qubit’s error rate in half. This kind of exponential scaling — where the error rate drops by the same factor with each step up in code distance — is precisely what the theory predicts. It was an unambiguous sign that they’d reduced the physical qubits’ error rates well below the surface-code threshold.
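In other words, each jump in distance divides the logical error rate by a roughly constant factor, about 2 in these experiments, so the rate falls exponentially as the code grows. A rough extrapolation sketch follows; the distance-3 starting rate is an illustrative placeholder, not a reported figure.

SUPPRESSION = 2.0    # each step d -> d + 2 roughly halved the error rate
error_d3 = 1e-2      # assumed starting point, for illustration only

for step, d in enumerate(range(3, 16, 2)):
    print(f"distance {d:2d}: projected logical error rate ~ {error_d3 / SUPPRESSION**step:.1e}")
# With a constant suppression factor, every extra step buys the same
# multiplicative improvement -- the exponential scaling described above.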

“There’s a difference between believing in something and seeing it work,” Newman said. “That was the first time where I was like, ‘Oh, this is really going to work.’”

The Long Road Ahead

The result has also thrilled other quantum computing researchers.

“I think it’s amazing,” said Barbara Terhal, a theoretical physicist at the Delft University of Technology. “I didn’t actually expect that they would fly through the threshold like this.”

At the same time, researchers recognize that they still have a long way to go. The Google Quantum AI team only demonstrated error correction using a single logical qubit. Adding interactions between multiple logical qubits will introduce new experimental challenges.

Then there’s the matter of scaling up. To get the error rates low enough to do useful quantum computations, researchers will need to further improve their physical qubits. They’ll also need to make logical qubits out of something much larger than a distance-7 code. Finally, they’ll need to combine thousands of these logical qubits — more than a million physical qubits.

Meanwhile, other researchers have made impressive advances using different qubit technologies, though they haven’t yet shown that they can reduce error rates by scaling up. These alternative technologies may have an easier time implementing new error-correcting codes that demand fewer physical qubits. Quantum computing is still in its infancy. It’s too early to say which approach will win out.

Martinis, who left Google Quantum AI in 2020, remains optimistic despite the many challenges. “I lived through going from a handful of transistors to billions,” he said. “Given enough time, if we’re clever enough, we could do that.”
