Author: Denis Avetisyan
A critical re-analysis of data from leading NISQ experiments casts doubt on recent assertions of achieving a quantum advantage.

This review examines inconsistencies in fidelity estimations and data analysis from random circuit sampling experiments on near-term quantum devices.
Claims of quantum supremacy rest on complex fidelity estimations, yet rigorous statistical validation remains surprisingly limited. This paper, ‘Further Statistical Study of NISQ Experiments’, revisits and extends analyses of Google's 2019 quantum supremacy experiment and other Noisy Intermediate-Scale Quantum (NISQ) devices, leveraging more detailed data to scrutinize predictions based on established error models. Our findings reveal inconsistencies in fidelity calculations and raise concerns about the robustness of current claims regarding quantum advantage. Can more refined statistical methodologies definitively establish a demonstrable and sustained quantum advantage in the NISQ era?
The Inherent Fragility of Quantum States
Quantum computation, while theoretically capable of solving problems intractable for classical computers, currently faces significant hurdles due to the prevalence of errors. These aren't simple glitches, but fundamental limitations arising from the delicate nature of quantum states – qubits are exceptionally susceptible to environmental noise and imperfections in the control systems manipulating them. Even minor disturbances can cause qubits to decohere, losing the quantum information they hold, or introduce errors during the execution of quantum gates – the building blocks of quantum algorithms. Consequently, the potential benefits of quantum computation remain largely unrealized, as error rates currently overshadow the computational gains, demanding substantial advancements in error correction and qubit stability before practical, large-scale quantum computers can become a reality. The pursuit of fault-tolerant quantum computation is therefore central to unlocking the technology's transformative potential.
Quantum computation, while holding transformative potential, is fundamentally challenged by errors stemming from various sources throughout the computational process. Imperfections in performing quantum gates – the basic building blocks of quantum circuits – introduce inaccuracies, as does the process of reading out the final state of the qubits. Surprisingly, even periods of qubit inactivity – idle time – contribute to error accumulation. A recent, detailed analysis of experiments claiming ‘quantum supremacy’ has uncovered significant discrepancies between predicted error rates – or fidelities – and those actually observed in retrieved data. These inconsistencies raise concerns about the reliability of reported results and underscore the critical need for more rigorous error characterization and mitigation strategies before scalable, fault-tolerant quantum computers can become a reality. The fidelity, a measure of how closely a quantum operation approximates its ideal behavior, is thus a key metric under intense scrutiny.
Achieving dependable quantum computation and a demonstrable advantage over classical computers hinges on accurately assessing and mitigating the inherent errors within quantum processors. A recent analysis reveals significant discrepancies between fidelity predictions – theoretical estimates of a quantum circuit's accuracy – and the actual values obtained from data retrieved from published experiments. These deviations cast doubt on claims of ‘quantum supremacy’, where a quantum computer purportedly solves a problem intractable for even the most powerful classical machines. The study indicates that reported fidelities may be overly optimistic, potentially stemming from flawed benchmarking procedures or incomplete error characterization. Consequently, a rigorous and standardized approach to error evaluation is essential for validating quantum computations and ensuring the reliability of results before substantial resources are invested in scaling these technologies.
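To make the notion of a fidelity prediction concrete, the sketch below shows the kind of multiplicative error model such estimates typically rest on: the circuit fidelity is approximated as a product of per-gate and per-readout success probabilities. The error rates and gate counts are illustrative placeholders, not published calibration values, and the actual formulas used by Google and USTC include additional terms.

```python
def predicted_fidelity(n_1q_gates, n_2q_gates, n_qubits,
                       e_1q=0.0016, e_2q=0.0062, e_ro=0.038):
    """Multiplicative ('digital') error model: approximate the full-circuit
    fidelity as the product of the success probabilities of every gate and
    every readout, assuming independent, uncorrelated errors. The default
    error rates are illustrative placeholders, not published calibration data."""
    return ((1 - e_1q) ** n_1q_gates
            * (1 - e_2q) ** n_2q_gates
            * (1 - e_ro) ** n_qubits)

# Rough example: a 53-qubit random circuit with assumed gate counts.
print(predicted_fidelity(n_1q_gates=1113, n_2q_gates=430, n_qubits=53))
```

Because the component terms compound multiplicatively, even small mismatches in any single rate translate into noticeable gaps between predicted and observed circuit fidelity, which is why the discrepancies flagged by the paper matter.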

Characterizing Performance: The Random Circuit Sampling Approach
Random Circuit Sampling (RCS) is currently a primary technique for characterizing the performance of noisy intermediate-scale quantum (NISQ) computers. This benchmarking method involves executing a series of randomly generated quantum circuits and analyzing the resulting output bitstrings. The premise is that an ideal quantum computer produces bitstrings whose probabilities follow a strongly non-uniform (Porter-Thomas) distribution, while noise washes out this structure and pushes the observed output toward the uniform distribution. By statistically analyzing how much of the ideal structure survives, researchers can estimate the fidelity of the quantum device and pinpoint potential sources of error, such as gate inaccuracies or decoherence. RCS is favored because it doesn't require assumptions about specific quantum algorithms or applications, making it a general-purpose diagnostic tool for assessing the capabilities of near-term quantum hardware.
Random Circuit Sampling (RCS) functions as a performance benchmark by executing a large number of randomly generated quantum circuits and analyzing the resulting output distributions. This process allows for the estimation of the probability of obtaining each output bitstring, which for an ideal quantum computer concentrates on a relatively small set of high-probability bitstrings rather than spreading uniformly. Noise erodes this concentration and drives the observed distribution toward uniformity, and the pattern of these deviations can be used to identify the dominant sources of error within the quantum hardware and control systems. By varying the circuit depth and complexity, RCS can assess performance across a range of operational regimes and provide insights into the scalability of the quantum system. The method is particularly valuable for near-term devices where fully error-corrected computation is not yet feasible, offering a means to characterize and improve device performance without requiring knowledge of specific error models.
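As a concrete illustration of how a fidelity estimate is extracted from RCS data, the sketch below implements the standard linear cross-entropy (XEB) estimator under the simplifying assumption that the ideal output probabilities are available from a classical simulation. The 3-qubit toy distribution and samples are invented for demonstration only.

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmarking (XEB) fidelity estimate.

    ideal_probs: dict mapping bitstrings to their noiseless output
                 probabilities (obtained from a classical simulation).
    samples:     bitstrings actually measured on the device.

    A noiseless device concentrates its samples on the high-probability
    bitstrings of the ideal distribution (estimate well above 0, approaching
    1 for larger systems); a fully depolarized device samples uniformly
    (estimate near 0).
    """
    dim = 2 ** n_qubits
    mean_p = np.mean([ideal_probs[s] for s in samples])
    return dim * mean_p - 1.0

# Toy usage: an assumed Porter-Thomas-like 3-qubit distribution and fake samples.
rng = np.random.default_rng(0)
probs = rng.exponential(size=8)
probs /= probs.sum()
ideal = {format(i, "03b"): p for i, p in enumerate(probs)}
samples = rng.choice(list(ideal), size=2000, p=list(ideal.values()))
print(linear_xeb_fidelity(ideal, samples, n_qubits=3))
```

The per-circuit estimate is noisy and can even fall outside [0, 1], which is precisely why the statistical treatment of aggregated XEB values is central to the paper's critique.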
The XEB (linear cross-entropy benchmarking) fidelity metric, calculated from Random Circuit Sampling (RCS) data, serves as a quantitative benchmark for assessing quantum computer performance by measuring how strongly the sampled bitstrings are weighted toward the high-probability outputs of the ideal circuit distribution. Analysis of XEB fidelity measurements reveals discrepancies between different calculation models; specifically, a refined model developed by USTC demonstrates a fidelity ratio of 0.5-0.7, while Google's Formula (77) yields a significantly higher ratio of 1.2-3.7. This inconsistency suggests that the observed XEB fidelity is sensitive to the specific error mitigation techniques and assumptions incorporated into the calculation, and that a standardized approach to XEB analysis is needed for reliable comparison of quantum devices.
![Analysis of Quantinuum's H2 processor reveals that maximum likelihood estimation (MLE) fidelity, while restricted to the interval [0, 1], correlates with circuit size and provides comparable results to XEB fidelity when calculated with all values, winsorized values, or only values within [0, 1].](https://arxiv.org/html/2512.10722v1/x3.png)
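The comparison shown in the figure can be reproduced in spirit with simple estimators. Below is one plausible reading, assumed rather than taken from the paper: an MLE fidelity computed under a depolarizing mixture model with the fidelity constrained to [0, 1], alongside a winsorized mean for taming heavy-tailed per-circuit XEB estimates.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mle_fidelity(ideal_probs, samples, n_qubits):
    """Maximum-likelihood fidelity under a simple depolarizing mixture model,
    p_F(x) = F * p_ideal(x) + (1 - F) / 2**n, with F constrained to [0, 1].
    This is an assumed form; the paper's exact estimator may differ."""
    dim = 2 ** n_qubits
    p = np.array([ideal_probs[s] for s in samples])
    neg_log_lik = lambda F: -np.sum(np.log(F * p + (1 - F) / dim))
    return minimize_scalar(neg_log_lik, bounds=(0.0, 1.0), method="bounded").x

def winsorized_mean(per_circuit_estimates, limit=0.05):
    """Clamp the most extreme per-circuit fidelity estimates to the given
    quantiles before averaging; one way to reduce the influence of
    heavy-tailed XEB fluctuations."""
    lo, hi = np.quantile(per_circuit_estimates, [limit, 1 - limit])
    return float(np.mean(np.clip(per_circuit_estimates, lo, hi)))
```

Because individual XEB estimates can fall outside [0, 1], restricting, winsorizing, or bounding them via an MLE, as the figure does, is a natural robustness check before averaging across circuits.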
Platforms at the Forefront of Quantum Development
Superconducting qubits are the foundation of quantum processors developed by Google and the University of Science and Technology of China (USTC). Google's Sycamore and USTC's Zuchongzhi processors both leverage these qubits to perform Random Circuit Sampling (RCS). RCS involves executing randomly generated quantum circuits and verifying the output distribution, serving as a benchmark for quantum computational complexity and as the basis for claims of quantum supremacy. These processors fabricate qubits from superconducting materials, typically aluminum, patterned onto silicon substrates and cooled to temperatures near absolute zero to minimize thermal noise and maintain quantum coherence. The performance of these systems is characterized by metrics such as qubit count, coherence times, and gate fidelity, all of which contribute to the ability to perform complex quantum computations and demonstrate RCS.
Quantinuum's trapped ion quantum processors demonstrate leading performance in Random Circuit Sampling (RCS) benchmarks. These systems utilize individual ions, held and controlled by electromagnetic fields, as qubits, achieving high-fidelity gate operations, currently exceeding 99.9%, and long coherence times. This combination of characteristics allows for complex quantum computations with reduced error rates. Beyond RCS, the high-quality qubits enable applications requiring verifiable quantum randomness, such as the generation of cryptographic keys and unbiased sampling for Monte Carlo simulations; Quantinuum provides a commercially available Certified Random Number Generator (CRNG) based on this technology.
The Harvard/QuEra/MIT collaboration is investigating quantum error correction using the Surface Code, a leading candidate for achieving fault-tolerant quantum computation. This approach aims to protect quantum information from decoherence and gate errors by encoding logical qubits into multiple physical qubits and performing error detection and correction cycles. Concurrent analysis of the Google Sycamore processor indicates a measured increase in the proportion of $|1\rangle$ states of $5.4 \times 10^{-5}$ per gate cycle, representing a significant source of error that the Surface Code seeks to mitigate. This error rate, arising from imperfections in gate operations and qubit coherence, underscores the necessity of robust error correction schemes for scaling quantum processors.
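For a sense of scale, the back-of-the-envelope sketch below accumulates the reported per-cycle drift over an assumed circuit depth and qubit count; the linear, independent-accumulation assumption is a simplification, and the depth and qubit count are illustrative rather than taken from a specific experiment.

```python
# Back-of-the-envelope accumulation of the reported |1>-state drift.
# Assumes (as a simplification) that the increase is independent across
# qubits and adds linearly over circuit depth.
rate_per_cycle = 5.4e-5    # increase in |1> population per gate cycle (from the text)
depth = 20                 # assumed number of gate cycles
n_qubits = 53              # assumed processor size

per_qubit_shift = rate_per_cycle * depth                # ~1.1e-3 per qubit
expected_extra_ones = per_qubit_shift * n_qubits        # ~0.06 extra |1>s per shot
print(per_qubit_shift, expected_extra_ones)
```

A drift this small is invisible in any single shot, but it becomes statistically detectable once millions of bitstrings are aggregated, which is why it matters for fidelity estimation and for error-correction overheads.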

The Trajectory Toward Reliable Quantum Computation
Quantum information is notoriously fragile, susceptible to environmental noise that causes decoherence and errors. Topological qubits offer a fundamentally different approach to preserving quantum states by encoding information not in individual particles, but in the topology of exotic states of matter. Specifically, these qubits leverage Majorana Zero Modes – quasiparticles that are their own antiparticles – which arise in certain superconducting materials. Because information is encoded in the braiding patterns of these modes, rather than the state of a single particle, it becomes remarkably resilient to local disturbances. Unlike conventional qubits where a single perturbation can flip a bit, altering the topology requires a global, coordinated disruption, providing inherent protection against errors and promising significantly improved qubit stability. This topological protection is a crucial step toward building practical, fault-tolerant quantum computers capable of tackling complex problems beyond the reach of classical machines.
The pursuit of stable quantum computation hinges on minimizing the debilitating effects of decoherence and errors. Conventional qubits are notoriously susceptible to environmental noise, rapidly losing the delicate quantum information they encode. However, a new generation of qubits promises a radical departure from this limitation, potentially unlocking the full computational power quantum mechanics offers. These advanced qubits are designed with inherent resilience, drastically reducing error rates and extending the time quantum information can be reliably maintained. This improved stability isn’t merely incremental; it represents a pathway toward building quantum computers capable of performing complex calculations beyond the reach of even the most powerful classical machines, opening doors to breakthroughs in materials science, drug discovery, and artificial intelligence. The promise lies not just in faster computation, but in computation that is demonstrably reliable.
The pursuit of fault-tolerant quantum computation receives a considerable boost from advancements in topological qubits, and recent work from the University of Science and Technology of China (USTC) highlights a promising trajectory. While conventional qubits are notoriously susceptible to environmental noise leading to computational errors, topological qubits encode information in a way that inherently protects it from local disturbances. The USTC error model, applied to its own devices, yields a fidelity ratio of 0.8-1 – a result that aligns with, and in some cases surpasses, predictions from leading research groups such as Google. This improved accuracy in modeling quantum behavior is not merely a theoretical exercise; it represents a crucial step towards building stable, scalable quantum systems capable of tackling complex problems currently beyond the reach of classical computers, potentially revolutionizing fields like materials science, drug discovery, and artificial intelligence.

The pursuit of demonstrable quantum supremacy, as dissected within this study, reveals a landscape fraught with the subtle decay of initial claims. The analysis of fidelity estimations and experimental data from NISQ devices exposes inconsistencies, highlighting how easily assumptions can erode under scrutiny. This echoes Richard Feynman's sentiment: ‘The first principle is that you must not fool yourself – and you are the easiest person to fool.’ The paper meticulously demonstrates how imperfections in error mitigation and the complexities of random circuit sampling can distort results, suggesting that even seemingly robust demonstrations are susceptible to the passage of time and the inevitable accumulation of discrepancies. Like any architectural endeavor, claims of supremacy require a firm foundation in rigorous validation; without it, they risk becoming fragile and ephemeral.
The Horizon of Imperfection
The pursuit of quantum supremacy, as illuminated by this analysis, resembles less a race to a defined finish line and more an extended mapping of the territory between potential and practical limitation. Claims predicated on exceeding classical computational capacity rest upon estimations of fidelity, a metric itself susceptible to the inherent decay of any complex system. The discrepancies identified here do not invalidate the endeavor, but rather highlight the necessity of acknowledging that every simplification introduced into experimental design accrues a future cost, a debt paid in nuanced error.
Further research will inevitably focus on increasingly elaborate error mitigation strategies. However, the underlying challenge remains: the accumulation of technical debt within the quantum system itself. Each corrective measure, while improving immediate results, adds to the complexity, and therefore the eventual fragility, of the apparatus. The field may benefit less from striving for absolute fidelity and more from developing a robust understanding of how systems degrade, and how to extract meaningful results even as that degradation occurs.
The true metric of progress may not be achieving a fleeting moment of computational dominance, but the ability to chart a course through the inevitable imperfections: to navigate the landscape of error with increasing precision and, perhaps, a touch of graceful acceptance.
Original article: https://arxiv.org/pdf/2512.10722.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/