Quantum Computers in 2026 — What We Really Know, What We Don't Know, and Why It Matters
Science and Technology · March 31, 2026 · 21 min read

What this article is about — and why it's worth reading

In December 2024, Google announced that its Willow quantum processor had performed in five minutes a calculation that, by Google's estimate, the fastest classical supercomputer would need billions of years to complete[1]. Headlines went around the world. Politicians talked about a "quantum race". The stock market reacted. But what really happened, and what do the headlines leave out?

This article is not another celebration of the "quantum revolution". It's also not a manifesto of skepticism. It's an attempt to tell you faithfully what's really going on — based on peer-reviewed scientific publications, official manufacturer roadmaps, and verified data. Without oversimplification, but in language understandable to anyone willing to spend half an hour understanding one of the most important technologies of the 21st century.

To understand where we really are with quantum computers, you have to take a step back. Not to the headlines, but to physics. Because a quantum computer is not a faster computer — it's a fundamentally different way of processing information. And this difference is simultaneously the source of enormous potential and enormous difficulty.

Bit versus qubit — the difference that changes everything

A classical computer operates on bits — each bit is either 0 or 1. Eight bits make a byte, billions of bytes make up your phone's memory. All the power of modern computing — from artificial intelligence to video streaming — is based on manipulating enormous strings of zeros and ones, one bit at a time.

A quantum computer uses qubits, which thanks to the phenomenon of superposition can exist in both states simultaneously. That's not the same as "being both zero and one at the same time" — that oversimplification is misleading. Superposition means that a qubit has a certain probability of being zero and a certain probability of being one, and we learn the result only at the moment of measurement. It's a bit like a fair coin in the air — until it lands on the table, it's neither heads nor tails, but has a 50% chance of each.
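This measurement rule can be sketched in a few lines of Python. The sketch below is a toy model built on the coin analogy, not a real quantum library: a qubit is represented as nothing more than a pair of amplitudes, and "measuring" it samples 0 or 1 from their squared magnitudes.

```python
import random

# Toy model of a single qubit: a pair of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. Measurement returns 0 with probability
# |alpha|^2 and 1 with probability |beta|^2 -- the "coin landing on the table".

def measure(alpha: complex, beta: complex) -> int:
    """Sample one measurement outcome according to the Born rule."""
    p_zero = abs(alpha) ** 2
    return 0 if random.random() < p_zero else 1

# The "fair coin" state: equal amplitudes of 1/sqrt(2) for 0 and 1.
amp = 2 ** -0.5
shots = [measure(amp, amp) for _ in range(100_000)]
print(sum(shots) / len(shots))  # hovers around 0.5
```

A qubit prepared as pure zero (amplitudes 1 and 0) always measures 0; only superpositions produce randomness.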

But that's just the beginning. Qubits can be entangled — which means that the state of one qubit is instantly correlated with the state of another, regardless of the distance between them. Measure one — and instantly you know something about the other, even if they're separated by light-years.

Einstein in 1935 called this "spooky action at a distance". Together with physicists Boris Podolsky and Nathan Rosen, he published the famous EPR paper in Physical Review[8], in which he argued that entanglement proves the incompleteness of quantum mechanics — that there must be hidden variables that explain these correlations without "spookiness". He died in 1955, never accepting the Copenhagen interpretation — and experimental confirmation of entanglement's reality (Bell inequality tests) didn't come until the 1970s and 80s. In 2022, Alain Aspect, John Clauser, and Anton Zeilinger received the Nobel Prize in Physics for these experiments.

These two properties — superposition and entanglement — allow a quantum computer to explore an astronomical number of possibilities simultaneously. Imagine a maze with a billion paths. A classical computer checks them one by one. A quantum computer — thanks to an appropriately designed algorithm — can manipulate probabilities so that paths leading nowhere "cancel each other out" (a phenomenon called quantum interference), and the path to the exit becomes increasingly probable. It's not magic — it's physics. But the difference in efficiency can be colossal.
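The "canceling out" is ordinary arithmetic, but performed on amplitudes rather than probabilities. A minimal sketch (the textbook Hadamard-gate example, not something from the article's sources): applying the gate H once puts a zero-state qubit into an equal superposition; applying it again makes the two paths leading to "1" subtract to nothing, so the qubit returns to "0" with certainty.

```python
# The Hadamard gate as a 2x2 matrix acting on a pair of amplitudes.
# The minus sign is what lets paths cancel (destructive interference).
H = [[2 ** -0.5, 2 ** -0.5],
     [2 ** -0.5, -2 ** -0.5]]

def apply(gate, state):
    """Multiply a 2x2 gate matrix by a 2-element amplitude vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

zero = [1.0, 0.0]              # the state |0>
superpos = apply(H, zero)      # ~[0.707, 0.707]: a 50/50 measurement outcome
back = apply(H, superpos)      # ~[1.0, 0.0]: the two |1> paths cancelled exactly
print(superpos, back)
```

Quantum algorithms like Shor's or Grover's are, at heart, elaborate choreographies of exactly this kind of cancellation across many qubits at once.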

This sounds promising. The problem is that qubits are unimaginably delicate.

Decoherence — why qubits are so fragile

A qubit is not an abstraction in a PowerPoint presentation; it's a real physical object. It might be a superconducting electrical circuit cooled to about 15–50 millikelvin, far colder than outer space (which sits at about 2.7 kelvin, or −270°C). It might be a trapped ion held in vacuum by electromagnetic fields. Or a single photon guided through an optical fiber. Each of these implementations shares the same fundamental problem: decoherence.

Decoherence is the loss of quantum information when a qubit interacts with its environment. Any contact with the outside world — vibration from a neighboring atom, stray electromagnetic fields, accidental breakage of a Cooper pair in a superconductor, even cosmic radiation penetrating lab walls — causes the qubit to "forget" its quantum state and become an ordinary, classical bit. This process is irreversible and inevitable.

To get a sense of the scale of the problem: according to measurements published in npj Quantum Information[2], a typical superconducting transmon qubit — the basic element of Google and IBM processors — maintains coherence (T1 time) for about 49 microseconds, and the dephasing time (T2*) is about 95 microseconds. That's less than the blink of an eye. And in this time window, the qubit must manage to perform thousands of logical operations, each of which takes about 20–50 nanoseconds. The margin for error is minimal.

What's worse, these values are not constant — they fluctuate over time, requiring continuous recalibration of the hardware[2]. It's like a precision musical instrument going out of tune every few seconds.
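A back-of-the-envelope sketch of this time budget, using the rough figures above. The 30 ns gate time is an assumed midpoint of the quoted 20–50 ns range, and the bare exp(−t/T1) decay model ignores many real error channels; this is illustration, not benchmarking.

```python
import math

T1_US = 49.0    # energy-relaxation time from [2], in microseconds
GATE_NS = 30.0  # ASSUMED single-qubit gate duration (midpoint of 20-50 ns)

# How many back-to-back gates fit inside one coherence time?
gates_within_T1 = T1_US * 1000 / GATE_NS
print(f"gates that fit in one T1: ~{gates_within_T1:.0f}")

# Probability the qubit has NOT decayed after n gates, in a bare exp(-t/T1) model.
for n in (100, 1000, 5000):
    t_us = n * GATE_NS / 1000
    print(f"after {n:>4} gates: survival ~ {math.exp(-t_us / T1_US):.3f}")
```

Roughly 1,600 gates fit inside one T1, and after a few thousand gates the odds of an undisturbed qubit collapse toward zero; this is why error correction is unavoidable.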

There are qubits with longer coherence. Qubits based on trapped ions — used by IonQ and Quantinuum — maintain quantum state for minutes, or even longer. But their logical operations take microseconds instead of nanoseconds — meaning they're about a thousand times slower. It's like choosing between a sprinter who can only run for 10 seconds and a marathoner who moves at a snail's pace. Each approach has its price, and neither gives us today what we need to build a machine capable of solving real problems.

Quantum error correction — a problem that can't be bypassed

Since qubits are so unstable, we need a way to fix errors on the fly — just as classical computers have been fixing errors in RAM or during data transmission for decades. In the classical world, it's simple: you copy a bit three times and check using "majority voting". If two out of three bits say "1", then the original was a one.
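The classical scheme fits in a few lines. This is the textbook three-fold repetition code, shown here only as the baseline that, as the next paragraph explains, cannot be transplanted directly to qubits:

```python
# Classical repetition code: store each bit three times, decode by majority vote.
# One flipped copy is corrected; two simultaneous flips defeat the code.

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def decode(copies: list[int]) -> int:
    return 1 if sum(copies) >= 2 else 0

word = encode(1)
word[0] ^= 1               # a single-bit error hits one copy
print(decode(word))        # majority vote still recovers the original 1
```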

But in quantum mechanics, you cannot copy a qubit. This is forbidden by the so-called no-cloning theorem, proved by Wootters and Żurek in 1982[9]. It's not an engineering limitation that we'll overcome someday — it's a fundamental law of physics, a consequence of the linearity of quantum mechanics. You can't get around it any more than you can travel faster than light.

A solution exists, but it's costly: instead of copying a qubit, we encode one logical qubit (the one we want to perform calculations on) into many physical qubits (the ones that actually exist in the processor). Quantum information is spread across multiple qubits in such a way that an error on a single physical qubit can be detected and corrected without destroying the logical state.

The most popular scheme — surface code — requires from a dozen to several hundred physical qubits per logical qubit, depending on the required level of protection. The larger the "code distance" (parameter d), the better the protection — but the more physical qubits you need. For a code with distance 7, you need 72 physical qubits for one logical qubit. For distance 17, which would be needed for serious calculations — hundreds.

But there's a necessary condition for any of this to work: the physical error rate must be below a certain threshold. If physical qubits make too many errors, adding more qubits to the code doesn't help; it actually makes things worse, because each additional qubit is an additional source of noise. Getting below this threshold, so that enlarging the code genuinely reduces logical errors, is one of the key milestones in the history of quantum computing.
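The threshold effect shows up even in the simplest possible model. The sketch below uses the classical repetition code rather than a surface code (whose real threshold is around 1%, not the 50% of this toy), but the qualitative lesson is the same: below threshold, a bigger code suppresses errors; above it, a bigger code amplifies them.

```python
from math import comb

def logical_error(p: float, n: int) -> float:
    """Probability that a majority of n independent bits flip, each with prob p."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

for p in (0.01, 0.60):          # one rate below threshold, one above
    rates = [logical_error(p, n) for n in (3, 5, 7)]
    trend = "shrinks" if rates[0] > rates[2] else "GROWS"
    print(f"p={p}: " + ", ".join(f"{r:.2e}" for r in rates) + f"  -> {trend}")
```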

And here we come to one of the most important achievements of recent years.

December 2024: Google Willow and the breakthrough in error correction

The Willow processor, designed by the Google Quantum AI team in Santa Barbara, contains 105 superconducting qubits. In December 2024, the team built surface codes with distances 3, 5, and 7 on it (increasing levels of protection) and demonstrated something the physics community had been waiting years for[1].

First: the error correction threshold was crossed. Increasing the number of physical qubits in the code (moving from distance 3 to 5, then to 7) actually reduced the logical error rate instead of increasing it. Each increase of the code distance by 2 suppressed the logical error rate by a factor of 2.14[1]. This is the first time the surface code has behaved according to theory on real hardware, not just in simulation.
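What a constant suppression factor buys can be projected in a few lines. In the sketch below, the factor 2.14 comes from the Willow result, but the starting logical error rate at distance 7 is an illustrative assumption, not a figure from the paper:

```python
LAMBDA = 2.14   # measured error suppression per step of d -> d + 2 [1]
err = 1e-3      # ASSUMED logical error rate at distance 7, illustration only

for d in range(7, 18, 2):
    print(f"d = {d:2}: ~{err:.1e} logical errors per cycle")
    err /= LAMBDA
```

With these assumptions, the rate at distance 17 comes out about 45 times lower than at distance 7; this exponential improvement with d is the whole point of being below threshold.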

Second: the distance-7 logical qubit (composed of 72 physical qubits and 29 ancilla qubits) survived 2.4 times longer than the best single physical qubit in the processor[1]. In other words, encoding information across many qubits not only didn't make things worse, it provided a real benefit. This is known as crossing the "breakeven" point.

Third: the system operated stably for over a million error correction cycles, with error decoding in real time[1].

The article describing these results was published in Nature on February 27, 2025 (vol. 638, pp. 920–926)[1].

This is a breakthrough — but a breakthrough that must be understood in context. It was shown that error correction works in principle. But a distance-7 code is just the beginning. Useful quantum calculations would require distance 17 or higher, meaning thousands of physical qubits for dozens of logical ones. From "works in the lab" to "solves real problems" is still a long way off. Nobody hides this — including Google itself.

Where we really are — the roadmap to 2033

It's worth looking at what the largest companies themselves say about their plans, because these forecasts are far more cautious than media headlines. Companies have every incentive to look good to investors, so if even their official roadmaps are cautious, that tells you a lot about the actual scale of the challenges.

IBM has the most detailed, publicly available roadmap[10]. For 2026 they plan the Kookaburra processor (1,386 physical qubits), combining a logical processing unit with quantum memory. The goal for 2026 is to demonstrate 12 logical qubits with 244 physical. Their more ambitious machine, Starling (200 logical qubits), is planned for 2028. Three Kookaburra modules connected by quantum links will give a system of 4,158 physical qubits. A fully fault-tolerant quantum computer capable of solving problems "impossible for classical machines" — that is IBM's vision for the 2033 horizon[10].

It's worth noting: IBM built the Condor processor in 2023 with 1,121 qubits — but it was an engineering demonstration (qubit packing density), not a production machine. The processors IBM actually offers customers for computing (Heron series) have 156 qubits[10].

Google, following Willow's success (105 qubits), has on its roadmap building a quantum computer with a million physical qubits — but without specifying a concrete date[11]. Their next goal is to demonstrate "useful quantum advantage" — a computation that has real application and that a classical computer cannot perform in reasonable time.

Microsoft presented the Majorana 1 chip in February 2025, which is supposed to contain 8 topological qubits — theoretically more resistant to decoherence than superconducting or trapped-ion qubits. But Microsoft's claims were met with serious skepticism from the scientific community. Physicists quoted in Nature[14] and Science[13] questioned whether the presented qubits actually function as topological qubits, and accusations of data manipulation have been raised regarding the key publication underlying this approach. This is the earliest stage of development among all approaches — and the most controversial.

Microsoft and Atom Computing (a separate project from Majorana) are jointly building the Magne machine, based on neutral atoms, with 50 logical qubits (about 1,200 physical), planned for early 2027[12]. This would be one of the first quantum computers with enough logical qubits for simple but real computations.

You can see a clear pattern here: companies are talking about dozens of logical qubits in the 2–3 year perspective and hundreds in the 5–8 year perspective. Not thousands, not millions. Anyone claiming that a quantum computer will "soon change the world" — either doesn't understand the scale of the problem, or is trying to sell you something.

What quantum computers can do today — and what they can't

What they can't do

They won't crack your password. They won't replace your laptop. They won't speed up browsing the internet, streaming movies, or any task that classical computers do well. This is the key misconception: a quantum computer is not a faster version of a classical computer. It's a machine designed for a completely different class of problems, ones whose mathematical structure allows a "quantum shortcut" through interference and entanglement.

For the vast majority of everyday tasks — text editing, databases, computer games, machine learning — classical computers are and will remain the better tool. A quantum computer won't replace the GPU in training neural networks. It won't speed up your Excel. Even if it stood on your desk (which it won't, since it requires a cryostat the size of a car), you'd have no use for it.

The "quantum supremacy" matter — and why the term is problematic

In 2019, Google announced "quantum supremacy" — their Sycamore processor (53 qubits) performed in 200 seconds a specially constructed task (random sampling of quantum circuits) which according to Google would take the fastest supercomputer 10,000 years. The article was published in Nature[15].

IBM immediately challenged this claim, arguing that the Summit supercomputer could handle this task in 2.5 days — which is indeed much slower than 200 seconds, but far from "10,000 years". And in 2023, a team from USTC (University of Science and Technology of China) completed this same task in 14 seconds — using 1,400 NVIDIA A100 graphics processors. Moreover, it was estimated that the Frontier supercomputer with full memory would do it in just 1.6 seconds[16].

Google's supremacy claim was undermined. This doesn't mean quantum advantage is a myth — but it shows that the boundary between "classically possible" and "classically impossible" is blurred and moves in both directions. Classical algorithms and hardware are also evolving. And the term "supremacy" provokes more controversy than clarity — which is why many scientists prefer the more neutral term "quantum advantage".

Where the first real applications are emerging

Molecular simulation and drug discovery. Here quantum computers have a natural advantage — because molecules themselves are quantum. Chemical bonds, electron interactions, energy states — all of this is described by quantum mechanics. Classical computers must approximate this (since exact simulation requires resources growing exponentially with molecule size). A quantum computer could simulate this natively.

A team from the University of Toronto and Insilico Medicine used a hybrid approach (quantum plus classical algorithms) to propose inhibitors of the KRAS protein, previously considered "undruggable" in cancer therapy. Fifteen compounds were synthesized in the lab; two showed biological activity. The results were published in Nature Biotechnology in 2024[5].

But precision matters here: in this work, the quantum component played a complementary role, refining local electronic descriptions in places where quantum effects are critical for molecular binding. The heavy lifting (searching chemical space, molecular docking, candidate ranking) was still done by classical algorithms. No unambiguous quantum advantage over the best classical methods has yet been demonstrated in drug discovery. Pharmaceutical companies (Boehringer Ingelheim, Roche, AstraZeneca) are running projects with Google Quantum AI and others, but at the research stage, not in production[5].

Optimization and logistics. Algorithms such as QAOA (Quantum Approximate Optimization Algorithm) theoretically can help with scheduling, routing, or investment portfolio optimization problems. In practice, today's quantum machines are too noisy to give better results than the best classical heuristics. D-Wave offers "quantum optimizers" (quantum annealers) with over 5,000 qubits — but that's a different class of machines than universal quantum computers, and their advantage over classical solvers is disputed.

The threat to cryptography — when to worry?

This is a question that genuinely stirs emotions, and rightly so. Shor's algorithm, published by mathematician Peter Shor in 1994, allows a quantum computer to factor large numbers into primes exponentially faster than any known classical algorithm. Factoring is the foundation of RSA security, the encryption system that protects your bank transactions, emails, medical data, and practically all internet security infrastructure.

RSA-2048 (a 2048-bit key, currently the standard) is based on the fact that a classical computer would need billions of years to factor such a large number. A quantum computer with Shor's algorithm could do it in hours — if it had enough reliable qubits.
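The structure of Shor's algorithm can be sketched classically on a toy number. Everything below except `find_period` is cheap classical pre- and post-processing; `find_period` is the step a quantum computer performs exponentially faster, and the brute-force loop here is exactly what becomes hopeless when n has 2,048 bits:

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n). This is the quantum subroutine's
    job in Shor's algorithm; brute force works only for toy-sized n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_toy(n: int, a: int) -> tuple[int, int]:
    """Factor n using the period of a modulo n (the classical skeleton of Shor)."""
    assert gcd(a, n) == 1, "a must be coprime to n (else gcd already factors n)"
    r = find_period(a, n)
    assert r % 2 == 0 and pow(a, r // 2, n) != n - 1, "unlucky choice of a; retry"
    p = gcd(pow(a, r // 2, n) - 1, n)
    return p, n // p

print(shor_toy(15, 7))   # (3, 5)
```

For some choices of a the method fails and one simply retries with another a; this retry loop is part of the real algorithm too.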

How many qubits are needed? Estimates have changed drastically:

  • 2015: about a billion qubits (estimate accounting for then-current error rates)
  • 2019: 20 million noisy qubits and 8 hours of computation[3]
  • May 2025: less than a million noisy qubits and a week of computation[4]

The decline is dramatic — but still far from current capabilities. Today's best universal quantum processors have 100–200 qubits (Google Willow: 105, IBM Heron: 156). IBM built the Condor demonstration chip with 1,121 qubits, but it's not a production machine[10]. The gap between hundreds and a million qubits — and that's much better qubits than today's — remains enormous.

When will RSA become threatened? Nobody knows exactly. But NIST (the U.S. National Institute of Standards and Technology) decided it's not worth waiting for the answer. In August 2024 it published three finalized post-quantum cryptography standards: ML-KEM (encryption), ML-DSA (digital signatures), and SLH-DSA (hash-based signatures)[6]. In March 2025, it selected a fifth algorithm, HQC, as a backup encryption mechanism[7]. Migration to the new standards has already begun: the IETF (Internet Engineering Task Force) is incorporating post-quantum algorithms into the TLS protocol, which secures HTTPS connections.

This is a sensible approach — and it's worth understanding why. Data encrypted today can be intercepted and stored, then decrypted a decade from now when quantum computers mature. This scenario — known as the "harvest now, decrypt later" attack — applies especially to data with long periods of sensitivity: state secrets, medical records, intellectual property. Organizations holding such data should treat migration to post-quantum cryptography not as a distant plan, but as an immediate task.

Four paths to a quantum computer

One of the less publicized, but fascinating aspects of this field is how different the approaches to building a quantum computer are. This is not a race on a single track — it's several parallel expeditions into the unknown, each with different strengths, limitations, and risk profiles.

Superconducting qubits (Google, IBM)

The fastest logical operations (20–50 nanoseconds), but short coherence (50–100 microseconds)[2] and the necessity of cooling to near absolute zero in a cryostat costing millions of dollars. They dominate today in terms of qubit count and maturity of the tool ecosystem. This is what Willow and Heron were built on. Their biggest weakness: no two qubits are identical, and each qubit's parameters change over time, requiring continuous recalibration[2].

Trapped ions (IonQ, Quantinuum)

The longest coherence (seconds to minutes) and the highest gate fidelities (over 99.9%), because atoms of a given element are identical by nature. But logical operations take microseconds (about 1,000x slower than superconducting qubits), and scaling beyond tens of qubits requires complex trap architectures. In 2025, traps holding over 200 ions and new techniques for executing gates in parallel were demonstrated, but hundreds of logical qubits remain a distant goal.

Neutral atoms (QuEra, Pasqal, Atom Computing)

A promising next-generation platform with natural scalability: atoms held by optical tweezers (laser beams) can be arranged in 2D and 3D arrays of hundreds or even thousands. IEEE Spectrum called 2026 the year of the "big leap" for this technology[12]. Atom Computing demonstrated a system with over 1,000 qubits as early as 2023, though gate quality remains lower than in trapped ions. This is the platform on which Microsoft and Atom Computing are building the Magne machine.

Topological qubits (Microsoft)

Theoretically the most error-resistant because quantum information is spread throughout the system's topology, not localized in a single object. The Majorana 1 chip (February 2025) was supposed to be the first step — but the scientific community questions whether working topological qubits were actually demonstrated. Physicists quoted in Nature[14] and Science[13] raised serious objections, and accusations of data manipulation have been made regarding the key paper. The most ambitious approach and the most uncertain of all — potentially groundbreaking if it works, but fundamental scientific questions remain open.

None of these approaches has won. It's possible the "winner" doesn't exist yet — or that the future belongs to hybrids combining different technologies at different stages of computation.

What follows from this — questions worth asking yourself

A quantum computer is not a faster processor. It's a new way of thinking about computation — inspired by physics at the simplest, deepest level of reality. And that's precisely why this technology is simultaneously so exciting and so difficult.

Today we are roughly where classical computers were in the 1950s: we know it works; we know it has potential; we don't yet know what we'll do with it. The transistor was invented in 1947. The Internet emerged four decades later. No one in 1947 predicted Amazon, Spotify, or that we'd be carrying in our pockets computers a million times more powerful than those that put humans on the Moon.

Maybe it's worth asking yourself not "when will a quantum computer crack my password", but "what problems — today considered unsolvable — will become solvable when this technology matures?". New materials that could halt climate change. Drugs designed atom by atom, tailored to a specific patient. Physical models that let us understand phenomena we can't even simulate today.

Or something no one has thought of yet — because that's always been the case when truly new technology emerges.

Quantum computers are not just around the corner. But they're not science fiction either. They're something far more interesting — an open question at the frontier of physics, mathematics, and engineering. And open questions, as the history of science teaches, have more potential than ready-made answers.

Sources

  1. Google Quantum AI & Collaborators, "Quantum error correction below the surface code threshold", Nature 638, 920–926 (2025). nature.com
  2. Schlör S. et al., "Decoherence benchmarking of superconducting qubits", npj Quantum Information 5, 54 (2019). nature.com
  3. Gidney C., Ekerå M., "How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits", arXiv:1905.09749 (2019). arxiv.org
  4. Gidney C., Ekerå M., "How to factor 2048 bit RSA integers with less than a million noisy qubits", arXiv:2505.15917 (2025). arxiv.org
  5. Liao H. et al., "Quantum-computing-enhanced algorithm unveils potential KRAS inhibitors", Nature Biotechnology (2024). nature.com
  6. NIST, "Post-Quantum Cryptography Standardization" — ML-KEM (FIPS 203), ML-DSA (FIPS 204), SLH-DSA (FIPS 205), August 2024. nist.gov
  7. NIST, "NIST Selects HQC as Fifth Algorithm for Post-Quantum Encryption", March 2025. nist.gov
  8. Einstein A., Podolsky B., Rosen N., "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?", Physical Review 47, 777 (1935).
  9. Wootters W. K., Żurek W. H., "A Single Quantum Cannot Be Cloned", Nature 299, 802–803 (1982).
  10. IBM Quantum, official roadmap (2025). ibm.com
  11. Google Quantum AI, roadmap. quantumai.google
  12. "Neutral Atom Quantum Computing: 2026's Big Leap", IEEE Spectrum (2026). ieee.org
  13. "Debate erupts around Microsoft's blockbuster quantum computing claims", Science (2025). science.org
  14. "Microsoft claims quantum-computing breakthrough — but some physicists are sceptical", Nature (2025). nature.com
  15. Arute F. et al., "Quantum supremacy using a programmable superconducting processor", Nature 574, 505–510 (2019). nature.com
  16. "Ordinary computers can beat Google's quantum computer after all", Science (2023). science.org