Squeezing Every Drop: Comparing Quantum State Measurement Techniques

Author: Denis Avetisyan


A new analysis reveals subtle but significant differences in the efficiency of homodyne and heterodyne tomography for characterizing quantum light.

Homodyne and heterodyne detection represent fundamentally different approaches to signal recovery: the former compares the signal to a local oscillator at the same frequency, effectively measuring a single field quadrature, while the latter mixes the signal with a slightly offset local oscillator to produce an intermediate frequency $f_{IF} = |f_{signal} - f_{local}|$, from which phase and amplitude information can be extracted through downconversion.
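
As a concrete illustration (not drawn from the paper), the following Python sketch mixes a toy signal with a frequency-offset local oscillator and recovers the signal's amplitude and phase by demodulating at the intermediate frequency; the frequencies, amplitude, and phase are arbitrary example values.

```python
import numpy as np

# Illustrative heterodyne downconversion (example values, not from the paper).
f_signal, f_local = 1.000e6, 1.010e6          # Hz; local oscillator offset from the signal
f_if = abs(f_signal - f_local)                # intermediate frequency = 10 kHz
A, phi = 0.7, 0.4                             # "unknown" amplitude and phase to recover

fs = 50 * f_local                             # sampling rate well above both tones
t = np.arange(0, 100 / f_if, 1 / fs)          # roughly an integer number of IF periods
signal = A * np.cos(2 * np.pi * f_signal * t + phi)
lo = np.cos(2 * np.pi * f_local * t)

mixed = signal * lo                           # contains f_if and f_signal + f_local components
# Demodulating at f_if averages away the fast sum-frequency term and isolates
# the slow beat note, whose amplitude and phase carry the signal information.
i_comp = 4 * np.mean(mixed * np.cos(2 * np.pi * f_if * t))
q_comp = 4 * np.mean(mixed * np.sin(2 * np.pi * f_if * t))

print(f"recovered amplitude: {np.hypot(i_comp, q_comp):.3f} (true {A})")
print(f"recovered phase    : {np.arctan2(q_comp, i_comp):.3f} (true {phi})")
```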

This review compares the performance of homodyne and heterodyne detection in quantum state tomography, highlighting the deviations from asymptotic theory that emerge with heterodyne detection when the number of measurements is limited.

Reconstructing the full quantum state of light is crucial for photonic quantum information processing, yet choosing the optimal measurement strategy remains a significant challenge. This work, ‘Comparing Homodyne and Heterodyne Tomography of Quantum States of Light’, theoretically and numerically analyzes the relative efficiency of homodyne versus heterodyne detection for characterizing non-Gaussian states. Our findings demonstrate that homodyne tomography consistently outperforms heterodyne measurements across all tested states, though the degree of separation is less pronounced than predicted by asymptotic limits. How might these results inform the development of more efficient and practical continuous-variable quantum systems, particularly when dealing with limited measurement resources?


Navigating Quantum States: A Foundation for Rational Inquiry

Quantum information processing demands a departure from the classical methods of describing physical systems. Unlike classical bits representing definite 0 or 1 states, quantum bits, or qubits, exist in a superposition, embodying a probability distribution across multiple states simultaneously. This necessitates new mathematical tools to fully characterize these quantum states, moving beyond simple lists of properties to encompass the probabilistic nature inherent in quantum mechanics. Describing a quantum state requires specifying the amplitudes for all possible outcomes of a measurement, represented mathematically by a complex-valued wave function, $ \Psi $. This wave function contains all accessible information about the system, yet extracting specific properties requires calculating probabilities using this function, fundamentally differing from the deterministic nature of classical descriptions and posing unique challenges for information processing.

The Wigner function offers a unique approach to representing quantum states by mapping them onto a phase space, akin to classical mechanics, and presenting a quasiprobability distribution. This allows for a visual and intuitive understanding of what would otherwise be abstract quantum descriptions. However, it’s crucial to recognize this isn’t a true probability distribution; the Wigner function can yield negative values, violating the fundamental rules of classical probability. Consequently, its utility is most prominent when analyzing quantum states that closely resemble classical behavior, or when investigating specific quantum phenomena where these negative regions provide valuable insight. While immensely helpful, the Wigner function isn’t universally applicable and encounters limitations when characterizing highly non-classical states, demanding alternative methods for a complete quantum description.
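
For readers who want to see this negativity directly, the sketch below evaluates the well-known closed form of the Fock-state Wigner function, $W_n(x,p) = \frac{(-1)^n}{\pi} e^{-(x^2+p^2)} L_n\!\left(2(x^2+p^2)\right)$, under the convention where the vacuum quadrature variance is $1/2$; the grid and the chosen states are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import eval_laguerre

def fock_wigner(n, x, p):
    """Wigner function of the Fock state |n> (convention: vacuum variance 1/2, hbar = 1)."""
    r2 = x**2 + p**2
    return ((-1) ** n / np.pi) * np.exp(-r2) * eval_laguerre(n, 2 * r2)

xs = ps = np.linspace(-4, 4, 201)
X, P = np.meshgrid(xs, ps)

for n in (0, 1, 2):
    W = fock_wigner(n, X, P)
    print(f"|{n}>: W(0,0) = {fock_wigner(n, 0.0, 0.0):+.4f},  minimum on grid = {W.min():+.4f}")
# The vacuum (n = 0) is a positive Gaussian, while |1> and |2> dip below zero:
# the negativity that marks the Wigner function as a quasiprobability distribution.
```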

Gaussian states represent a cornerstone of quantum information science due to their relative simplicity and prevalence in natural quantum systems. Characterized by a Gaussian distribution in phase space, as visualized through the Wigner function, these states possess mathematical properties that allow for analytical treatment, simplifying complex calculations and simulations. This tractability makes them ideal for modeling various physical phenomena, including thermal states arising from interactions with heat baths and coherent states produced by lasers. Furthermore, Gaussian states serve as fundamental building blocks for more complex quantum states and are frequently employed in continuous-variable quantum computing, where information is encoded in the continuous degrees of freedom of quantum fields, like the amplitude and phase of light. The mathematical elegance and physical relevance of Gaussian states ensure their continued importance in both theoretical exploration and practical applications of quantum technologies, offering a pathway to understanding and harnessing the power of quantum mechanics.
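
To make this tractability concrete, here is a minimal sketch (illustrative conventions: $\hbar = 1$, vacuum quadrature variance $1/2$; arbitrary squeezing, thermal occupation, and displacement values) that writes the Wigner functions of a few common Gaussian states directly as two-dimensional Gaussians built from a mean vector and a covariance matrix.

```python
import numpy as np

def gaussian_wigner(mean, cov, x, p):
    """Wigner function of a Gaussian state: a normalized 2D Gaussian over (x, p)."""
    v = np.stack([x - mean[0], p - mean[1]], axis=-1)
    quad = np.einsum("...i,ij,...j->...", v, np.linalg.inv(cov), v)
    return np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))

# Assumed convention: hbar = 1, vacuum quadrature variances equal to 1/2.
r, nbar, alpha = 0.8, 1.5, 1.2 + 0.5j   # illustrative squeezing, thermal occupation, displacement
states = {
    "vacuum":          (np.zeros(2), 0.5 * np.eye(2)),
    "coherent":        (np.sqrt(2) * np.array([alpha.real, alpha.imag]), 0.5 * np.eye(2)),
    "thermal":         (np.zeros(2), (nbar + 0.5) * np.eye(2)),
    "squeezed vacuum": (np.zeros(2), 0.5 * np.diag([np.exp(-2 * r), np.exp(2 * r)])),
}

xs = ps = np.linspace(-5, 5, 101)
X, P = np.meshgrid(xs, ps)
for name, (mu, cov) in states.items():
    W = gaussian_wigner(mu, cov, X, P)
    print(f"{name:15s} var(x) = {cov[0, 0]:.3f}, var(p) = {cov[1, 1]:.3f}, min W = {W.min():.4f}")
# Every minimum is non-negative: Gaussian Wigner functions are true probability
# densities, which is what makes these states analytically and numerically tractable.
```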

Simulations across common optical states with an 11-dimensional Hilbert space demonstrate estimation errors for ten experiments, validating the state definitions detailed in Section III.3.

Beyond Classical Limits: The Demand for Full State Characterization

The realization of quantum advantages in computational and communication protocols frequently necessitates the use of non-Gaussian quantum states. While Gaussian states are efficiently representable and manipulable, many quantum algorithms, such as those employing cluster states or Gottesman-Kitaev-Preskill (GKP) states, fundamentally rely on non-Gaussian resources to achieve speedups or enhanced security. Specifically, non-Gaussian states enable functionalities like heralded entanglement generation, which is crucial for fault-tolerant quantum computation, and the encoding of quantum information in degrees of freedom that are inaccessible to classical systems. Furthermore, certain quantum communication protocols, including quantum key distribution schemes beyond the BB84 protocol, depend on the higher dimensionality and complex correlations present in non-Gaussian states to enhance key rates and improve security against eavesdropping attacks.

Characterizing non-Gaussian quantum states presents a substantial analytical challenge due to the high dimensionality of their parameter spaces. Unlike Gaussian states, which are fully defined by first- and second-order moments, non-Gaussian states require the estimation of many more parameters for a complete description. This necessitates measurement techniques capable of extracting sufficient statistical information from the quantum system. Traditional methods often prove inadequate, demanding advanced techniques such as quantum state tomography, homodyne detection with higher-order statistics, or measurement settings with optimized local-oscillator phases, all requiring careful calibration and analysis to achieve accurate parameter estimation. The complexity grows rapidly in higher-dimensional Hilbert spaces, demanding correspondingly more measurement resources to maintain precision.

Maximum Likelihood Estimation (MLE) is a statistical method used to determine the values of parameters within a probability distribution that best explain a set of observed data. In the context of quantum state characterization, MLE aims to find the parameters that maximize the likelihood of obtaining the measured outcomes. This process fundamentally relies on the Classical Fisher Information (CFI), which quantifies the amount of information that an observable measurement provides about an unknown parameter. The CFI serves as a key metric within MLE, defining the precision with which parameters can be estimated; higher CFI values indicate greater precision. The likelihood function, constructed from the measurement probabilities, is typically maximized numerically to obtain the MLE estimates, and the inverse of the CFI provides a lower bound on the variance of these estimates.
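
A toy example may help fix ideas. In the sketch below (an illustrative model, not the paper's), each measurement is a Gaussian sample with unknown mean $\theta$ and known variance $\sigma^2$, a crude stand-in for homodyne quadrature data; the MLE is obtained by numerically minimizing the negative log-likelihood, and the per-sample CFI is simply $1/\sigma^2$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Illustrative model (not the paper's): each measurement is Gaussian with unknown
# mean theta and known variance sigma^2, a crude stand-in for homodyne samples.
theta_true, sigma, n_samples = 0.7, np.sqrt(0.5), 1000
data = rng.normal(theta_true, sigma, size=n_samples)

def neg_log_likelihood(theta):
    # Negative log-likelihood up to a theta-independent constant.
    return np.sum((data - theta) ** 2) / (2 * sigma**2)

mle = minimize_scalar(neg_log_likelihood, bounds=(-5, 5), method="bounded").x

cfi_per_sample = 1 / sigma**2                 # classical Fisher information of one sample
crlb = 1 / (n_samples * cfi_per_sample)       # Cramer-Rao bound on the estimator variance

print(f"MLE estimate : {mle:.4f}   (true value {theta_true})")
print(f"CRLB variance: {crlb:.2e}  (std >= {np.sqrt(crlb):.4f})")
```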

The accuracy of parameter estimation using Maximum Likelihood Estimation (MLE) is fundamentally constrained by the Cramér-Rao Lower Bound (CRLB). This bound establishes a minimum variance for any unbiased estimator, defining a theoretical limit on the precision achievable in determining state parameters. Consequently, efficient measurement strategies are required to approach this limit and maximize the information gained from experimental data. Simulations investigating this relationship were performed with Hilbert space dimensions ranging up to $d = 11$. These simulations demonstrated that performance in parameter estimation, as measured by the variance of the estimator, is significantly affected by increasing dimensionality, highlighting the need for tailored measurement schemes to mitigate the effects of higher-dimensional state spaces and achieve optimal estimation accuracy.
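
Continuing the toy model above, a quick Monte Carlo check (again illustrative, not from the paper) shows the empirical variance of the estimator tracking the CRLB $1/(NF)$, where $F$ is the per-sample Fisher information, as the number of samples $N$ grows, which is exactly the kind of convergence the tomography simulations are benchmarked against.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma = 0.7, np.sqrt(0.5)
cfi = 1 / sigma**2                                    # per-sample classical Fisher information

for n in (10, 100, 1000, 10000):
    # For this Gaussian toy model the MLE of the mean is the sample mean,
    # so 2000 repeated "experiments" can be simulated in one shot.
    estimates = rng.normal(theta_true, sigma, size=(2000, n)).mean(axis=1)
    print(f"N = {n:>5d}   empirical variance = {estimates.var():.2e}   CRLB = {1 / (n * cfi):.2e}")
# The two columns track each other closely: the estimator saturates the bound,
# the behaviour against which the tomography simulations are benchmarked.
```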

Simulation results across Hilbert space dimensions of 2 to 6 demonstrate that the estimation errors of the reconstructed Wigner functions (circles represent individual trials, solid lines show the mean error over ten trials) approach the Cramér-Rao lower bound (CRLB, dashed lines).

Extracting Quantum Information: Advanced Measurement Techniques

Homodyne detection is a quantum measurement technique that projects a quantum state onto a single quadrature of the electromagnetic field. This measurement yields a classical voltage proportional to that quadrature’s amplitude. State reconstruction from homodyne data is typically accomplished via the Wigner function, which represents the quasi-probability distribution of the quantum state in phase space. The Wigner function allows for the estimation of the state’s parameters by effectively converting the quantum state estimation problem into a classical parameter estimation problem. Data acquired via homodyne detection provides information about one observable, necessitating careful data processing and statistical inference to reconstruct the full quantum state.
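
A rough picture of what homodyne data looks like: for a coherent state $|\alpha\rangle$ the quadrature measured at local-oscillator phase $\theta$ is Gaussian with mean $\sqrt{2}\,|\alpha|\cos(\arg\alpha - \theta)$ and vacuum variance $1/2$ (an assumed convention). The sketch below, not taken from the paper, simulates such phase-resolved samples, the raw material a tomography routine would feed into MLE.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.0 + 0.5j                                     # illustrative coherent-state amplitude
phases = np.linspace(0, np.pi, 12, endpoint=False)     # local-oscillator phases to cycle through
samples_per_phase = 500

records = []
for theta in phases:
    # Quadrature statistics of |alpha> at LO phase theta (vacuum variance 1/2 assumed).
    mean = np.sqrt(2) * np.abs(alpha) * np.cos(np.angle(alpha) - theta)
    x = rng.normal(mean, np.sqrt(0.5), size=samples_per_phase)
    records.append((theta, x))

# Each record is (LO phase, quadrature samples); a tomography routine would feed
# these phase-resolved marginals into maximum likelihood estimation.
for theta, x in records[:3]:
    print(f"theta = {theta:.2f}   sample mean = {x.mean():+.3f}   sample variance = {x.var():.3f}")
```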

Heterodyne detection provides a complete characterization of a quantum state by simultaneously measuring both quadrature amplitudes of the electromagnetic field. Unlike homodyne detection, which measures a single quadrature, heterodyne detection captures the full information needed to reconstruct the Wigner function and, more commonly, to directly determine the Husimi $Q$ function. The Husimi $Q$ function is a quasi-probability distribution offering a positive definite representation of the quantum state, simplifying state estimation and analysis. This simultaneous measurement approach theoretically allows for a more complete state reconstruction, though practical implementations can be susceptible to increased noise and calibration challenges compared to homodyne detection, as demonstrated in simulation results showing slower convergence to the Cramér-Rao Lower Bound.
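
For a coherent state the heterodyne statistics are easy to simulate, since the outcomes are distributed according to the Husimi $Q$ function $Q(\beta) = \frac{1}{\pi} e^{-|\beta - \alpha|^2}$. The sketch below (illustrative values, same quadrature convention as above) shows the characteristic factor-of-two noise penalty relative to homodyne detection.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, n_shots = 1.0 + 0.5j, 5000                     # illustrative coherent-state amplitude

# Heterodyne outcomes for |alpha> follow the Husimi Q function
# Q(beta) = (1/pi) exp(-|beta - alpha|^2): the true amplitude plus complex
# Gaussian noise with variance 1/2 in each of Re(beta) and Im(beta).
beta = alpha + rng.normal(0, np.sqrt(0.5), n_shots) + 1j * rng.normal(0, np.sqrt(0.5), n_shots)

x_from_het = np.sqrt(2) * beta.real                   # quadrature inferred from heterodyne data
print("mean outcome                 :", np.round(beta.mean(), 3))        # ~ alpha
print("inferred quadrature variance :", np.round(x_from_het.var(), 3))   # ~ 1.0
# Homodyne detection of the same state gives quadrature variance 1/2 in this
# convention; the factor of two is the noise cost of measuring both quadratures at once.
```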

Generalized Gell-Mann matrices provide a parameterization scheme for quantum states that streamlines the maximum likelihood estimation (MLE) process and facilitates robust state tomography. Direct estimation of the density matrix involves $N^2$ complex entries for an $N$-dimensional state, whereas expanding the state in the Gell-Mann basis uses $N^2 - 1$ real coefficients and automatically enforces Hermiticity and unit trace, leveraging the Lie algebraic structure of quantum states. This well-conditioned parameterization improves the efficiency of MLE algorithms, decreasing computational cost and accelerating convergence, and leads to more stable and accurate state reconstruction, particularly in the presence of experimental noise or incomplete data. The approach is especially valuable for high-dimensional systems, where naive parameterizations become unwieldy.
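
A minimal construction of this basis is straightforward; the sketch below (a generic implementation, not the paper's code) builds the $d^2 - 1$ generalized Gell-Mann matrices and assembles a Hermitian, unit-trace density matrix from real coefficients.

```python
import numpy as np

def gell_mann(d):
    """Generalized Gell-Mann matrices for dimension d: traceless, Hermitian, Tr(G_a G_b) = 2*delta_ab."""
    mats = []
    for j in range(d):
        for k in range(j + 1, d):
            sym = np.zeros((d, d), complex); sym[j, k] = sym[k, j] = 1           # symmetric pair
            anti = np.zeros((d, d), complex); anti[j, k] = -1j; anti[k, j] = 1j  # antisymmetric pair
            mats += [sym, anti]
    for l in range(1, d):                                                        # diagonal members
        diag = np.zeros((d, d), complex)
        diag[:l, :l] = np.eye(l)
        diag[l, l] = -l
        mats.append(np.sqrt(2 / (l * (l + 1))) * diag)
    return mats                                                                  # d^2 - 1 matrices

d = 3
basis = gell_mann(d)
theta = 0.05 * np.random.default_rng(4).standard_normal(len(basis))   # d^2 - 1 real parameters
rho = np.eye(d) / d + sum(t * G for t, G in zip(theta, basis))

print("free parameters:", len(basis))                                 # 8 for d = 3
print("Hermitian:", np.allclose(rho, rho.conj().T), "  trace:", np.trace(rho).real.round(6))
# Hermiticity and unit trace come for free; positivity still has to be enforced
# by the MLE routine itself (e.g. via projection or a constrained optimizer).
```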

Characterization of non-classical states, such as squeezed vacuum states exhibiting reduced noise in one quadrature, relies heavily on advanced measurement techniques. Simulations comparing homodyne and heterodyne detection demonstrate a significant performance disparity; homodyne measurements consistently yield estimation errors that converge to the Cramér-Rao Lower Bound (CRLB), representing the theoretical minimum achievable error. Conversely, heterodyne measurements exhibit substantially larger estimation errors that fail to converge to the CRLB, even with data acquisition from up to $10^9$ copies of the quantum state. This indicates a fundamental limitation in the ability of heterodyne detection to accurately estimate parameters for these states under the conditions tested, highlighting the superior performance of homodyne detection for this specific application.

Simulation results across Hilbert space dimensions of 7 to 11 demonstrate that the estimation error, measured by the Frobenius norm and averaged over ten trials (solid lines), consistently approaches the Cramér-Rao lower bound (dashed lines) for random non-Gaussian states, as evidenced by the close match between estimated Wigner functions and ground truth (a) and their corresponding errors (b).

Beyond Qubits: The Potential of Continuous Variable Quantum Optics

Continuous Variable Quantum Optics (CVQO) diverges from traditional qubit-based quantum information processing by leveraging the infinite and nuanced possibilities within the amplitude and phase of electromagnetic fields, such as light. Instead of discrete, binary states, CVQO encodes quantum information onto continuous variables, allowing for a richer and more flexible representation. This approach treats optical fields not as composed of individual photons, but as possessing continuously varying properties, akin to a classical wave, but governed by the laws of quantum mechanics. By manipulating these continuous degrees of freedom, researchers can perform quantum computations and communication tasks, opening pathways to advanced technologies like highly sensitive sensors and secure communication protocols. The inherent nature of these variables allows for the potential to describe many-body quantum systems with greater efficiency, making CVQO a powerful and increasingly relevant field within quantum information science.

At the heart of Continuous Variable Quantum Optics (CVQO) lies the concept of the Qmode, a foundational element that distinguishes it from qubit-based quantum information processing. Unlike a qubit, which is confined to a two-dimensional state space spanned by $0$ and $1$, a Qmode represents a single degree of freedom of the electromagnetic field, described by continuous amplitude and phase, and can occupy superpositions of arbitrarily many photon numbers. This allows for the encoding of quantum information in continuous variables, effectively treating light not as individual particles, but as a wave with fluctuating properties. The Qmode is not limited to a single photon; it can accommodate any number, enabling the exploration of quantum phenomena with complex states and opening avenues for manipulating quantum information in ways not easily accessible with discrete systems. This inherent flexibility makes the Qmode a versatile building block for various CVQO applications, from secure communication to quantum computation.

Quantum Key Distribution (QKD) represents a pivotal application of Continuous Variable Quantum Optics (CVQO), offering communication security rooted in the laws of physics. Unlike classical cryptography, which relies on computational complexity, QKD leverages the inherent uncertainty of quantum mechanics to guarantee secure key exchange. In CVQO-based QKD protocols, information is encoded onto the continuous degrees of freedom of light – typically its amplitude and phase – and transmitted between parties. Any attempt by an eavesdropper to intercept and measure this quantum information inevitably disturbs the signal, alerting the legitimate communicators to the intrusion. This disturbance arises because measuring a quantum state alters it, a fundamental principle preventing undetected eavesdropping. Consequently, CVQO facilitates the creation of cryptographic keys with guaranteed security, establishing a robust foundation for secure communication networks and protecting sensitive data in ways that classical encryption methods cannot match.

Gottesman-Kitaev-Preskill (GKP) encoding represents a pivotal advancement in continuous-variable quantum optics, constructing highly non-classical grid states whose structure enables robust error correction against small displacements, crucial for maintaining quantum information integrity. These states, possessing a unique resilience to photon loss, are foundational for building fault-tolerant quantum computers. Complementing this error-mitigation strategy, techniques like Gaussian Boson Sampling showcase the potential for achieving quantum advantage, demonstrating computational tasks intractable for classical computers. Recent investigations further illuminate the nuances of quantum state estimation; specifically, results indicate that homodyne detection consistently provides a greater Classical Fisher Information than heterodyne detection, signifying a more precise determination of the quantum state parameters and enhancing the efficiency of quantum information processing. This improved precision is vital for optimizing quantum protocols and realizing the full potential of continuous-variable quantum systems.

The pursuit of accurate quantum state tomography, as detailed in this comparison of homodyne and heterodyne techniques, reveals a familiar tension. Data, in the form of measurement statistics, doesn’t inherently reveal truth – it merely constrains possibilities. This study demonstrates that while homodyne detection aligns with established asymptotic theory, heterodyne results deviate, especially with limited measurements. As Albert Einstein observed, “The most incomprehensible thing about the world is that it is comprehensible.” The discrepancy isn’t necessarily a failure of the heterodyne method, but rather a signal that the model, the assumed relationship between measurements and the underlying quantum state, requires further refinement. The observed deviation highlights the importance of rigorous testing and iterative model improvement, acknowledging that even beautiful correlations require contextual validation before definitive conclusions can be drawn.

Where Do We Go From Here?

The persistent outperformance of homodyne tomography, while unsurprising given the established theoretical framework, raises a less optimistic question: are the discrepancies observed with heterodyne detection simply a consequence of insufficient optimization, or do they point to a fundamental mismatch between the asymptotic assumptions underpinning much of continuous-variable quantum optics and the realities of experimental implementation? The data suggests the latter, though one must acknowledge that chasing perfect agreement with a limit designed for infinite samples feels… inefficient. It is tempting to refine the heterodyne protocols, but perhaps the effort would be better spent on developing more robust metrics for quantifying the actual information gained, rather than attempting to force results into a pre-defined mold.

Further investigation should move beyond simple state reconstruction and consider the impact of these detection differences on downstream quantum information processing tasks. Does the “noise” inherent in a less-efficient heterodyne measurement ultimately limit achievable fidelities, or can it be cleverly exploited? The pursuit of “optimal” detection schemes often overlooks the practical constraints of real-world experiments. A more fruitful avenue might involve developing methods for reliably estimating the information content of a finite set of measurements, regardless of the detection technique employed.

Ultimately, the field appears poised for a shift. Not necessarily away from heterodyne detection, but towards a more nuanced understanding of its limitations. If everything fits perfectly, one should always suspect a poorly defined question. The goal isn’t merely to build better tomographs, but to construct a more honest appraisal of what can be reliably known.


Original article: https://arxiv.org/pdf/2512.17031.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2025-12-23 05:02