General estimation theory

We have shown that it is possible to beat the shot noise in optical interferometry by using states with specific quantum features, such as states with a well-defined number of photons or squeezed states. In these examples, the estimation was obtained by measuring the difference of photon numbers in the outgoing arms of the interferometer. It is not clear whether these are the best possible measurements, or whether better bounds can be obtained with other incoming states. One may ask whether it is possible to find general bounds, and strategies for reaching them, which could be applied to many different systems and could help us identify the best states and the best measurements for achieving the best possible precision. This is the aim of this series of lectures: to develop, and apply to examples, a general estimation theory, capable of treating not only unitary evolutions of closed systems, like the optical interferometer described here, but also open (noisy) systems.
General estimation theory

1. What are the best possible measurements?
2. What are the best incoming states, in order to get better precision?
3. Is it possible to find general bounds, and strategies for reaching them, which could be applied to many different systems?
Parameter estimation in classical and quantum physics

1. Prepare probe in suitable initial state
2. Send probe through process to be investigated
3. Choose suitable measurement
4. Associate each experimental result $j$ with an estimation $X_{\rm est}(j)$

Merit quantifier: $\delta X^2 \equiv \big\langle [X_{\rm est}(j) - X]^2 \big\rangle\big|_{X = X_{\rm true}}$

Unbiased estimator: $\langle X_{\rm est}\rangle = X_{\rm true}$, $\; d\langle X_{\rm est}\rangle/dX\,\big|_{X = X_{\rm true}} = 1$

Then $\delta X^2 = \Delta^2 X_{\rm est} = \big\langle [X_{\rm est} - \langle X_{\rm est}\rangle]^2 \big\rangle$ is the variance of $X_{\rm est}$ (the average is taken over all experimental results). The estimator depends only on the experimental data.
Classical parameter estimation (H. Cramér, C. R. Rao, R. A. Fisher)

Cramér-Rao bound for unbiased estimators:

$$\Delta X \ge \frac{1}{\sqrt{N F(X)}}\,, \qquad F(X) \equiv \sum_j P(j|X) \left[\frac{d \ln P(j|X)}{dX}\right]^2 \Bigg|_{X = X_{\rm true}}$$

where $N$ is the number of repetitions of the experiment, $F(X)$ is the Fisher information, and $P(j|X)$ is the probability of getting an experimental result $j$.

Or yet, for continuous measurements:

$$F(X) \equiv \int d\xi\, p(\xi|X) \left[\frac{\partial \ln p(\xi|X)}{\partial X}\right]^2$$

where $\xi$ are the measurement results (average over all experimental results).
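As a minimal numerical sketch (not from the lecture; the Gaussian model is an assumed example), one can check that the sample mean of $N$ Gaussian draws saturates the Cramér-Rao bound $1/[N F(X)]$, where $F(X) = 1/\sigma^2$ for a Gaussian of known width $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
X_true, sigma, N, trials = 2.0, 0.5, 100, 20000

# For p(xi|X) = Gaussian with mean X and width sigma:
# d ln p / dX = (xi - X)/sigma^2, so F(X) = 1/sigma^2.
F = 1.0 / sigma**2

# The sample mean is an unbiased estimator of X; its variance
# should reach the Cramér-Rao bound 1/(N F) = sigma^2/N.
estimates = rng.normal(X_true, sigma, size=(trials, N)).mean(axis=1)
var_est = estimates.var()
bound = 1.0 / (N * F)
print(var_est, bound)  # the two numbers agree closely
```

For this Gaussian model the bound is tight; in general the Cramér-Rao inequality only guarantees $\Delta X \ge 1/\sqrt{N F(X)}$.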
Derivation of the Cramér-Rao relation: see lectures by L. Davidovich at Collège de France, 2016: http://www.if.ufrj.br/~ldavid/eng/show_arquivos.php?Id=5

Exercises

1. Show that

$$F(X) \equiv \int d\xi\, p(\xi|X)\left[\frac{\partial \ln p(\xi|X)}{\partial X}\right]^2 = \int d\xi\, \frac{1}{p(\xi|X)}\left[\frac{\partial p(\xi|X)}{\partial X}\right]^2 = 4\int d\xi \left[\frac{\partial \sqrt{p(\xi|X)}}{\partial X}\right]^2 = -\left\langle \frac{\partial^2}{\partial X^2}\ln p(\xi|X)\right\rangle$$

with similar expressions for a discrete set of measurements. For instance,

$$F(X) = 4\sum_k \left[\frac{d\sqrt{P_k(X)}}{dX}\right]^2$$

2. Consider several identical and independent measurements, so that the probability distribution is $p(\vec\xi\,|X) = p(\xi_1|X)\cdots p(\xi_N|X)$. Show that $F^{(N)}(X) = N F(X)$.
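The equivalence of these expressions can be checked numerically. A quick sketch (not part of the exercises; the two-outcome distribution $P_0 = \cos^2 X$, $P_1 = \sin^2 X$ is an assumed example, for which the exact value is $F(X) = 4$):

```python
import numpy as np

def P(X):
    # assumed example distribution: P_0 = cos^2(X), P_1 = sin^2(X)
    return np.array([np.cos(X)**2, np.sin(X)**2])

X, h = 0.3, 1e-6
# central finite differences for dP_k/dX and d sqrt(P_k)/dX
dP = (P(X + h) - P(X - h)) / (2 * h)
dsqrtP = (np.sqrt(P(X + h)) - np.sqrt(P(X - h))) / (2 * h)

F1 = np.sum(dP**2 / P(X))    # sum_k (1/P_k)(dP_k/dX)^2
F2 = 4 * np.sum(dsqrtP**2)   # 4 sum_k (d sqrt(P_k)/dX)^2
print(F1, F2)                # both close to the exact value 4
```

Both forms agree with the analytic result $F = 4$, independent of $X$, for this distribution.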
Understanding the Fisher information (1)

Márcio Mendes Taddei, Ph.D. thesis, Federal University of Rio de Janeiro, available at arXiv:1407.4343v1 [quant-ph]

The gravitational field is measured by undergraduate students, via an inclined-plane experiment, in two labs, situated at Huascarán (Peruvian Andes) and the Arctic Sea, so $g_{\rm true}$ is different in the two cases. Their precision is one decimal place. The same measurement is made by higher-precision satellites, with one additional decimal place. $g_{\rm true} = 9.76392\ {\rm m/s}^2$ (Huascarán); $g_{\rm true} = 9.83366\ {\rm m/s}^2$ (Arctic Sea).
Understanding the Fisher information (2)

The higher precision of the satellite experiments implies that it is easier to distinguish the true values of $g$ from the outcome distributions $P_k$ of these measurements.

Important question: how much does the outcome distribution change under a change of the underlying true value of the parameter? I show now that the Fisher information is a measure of this change.

The distance between two probability distributions $\{P_k\}$, for a given set $\{k\}$ of outcomes, which differ because they belong to two different values $x$ and $x'$ of the parameter, can be defined by the Hellinger expression $D_H$:

$$D_H(x,x') = \sqrt{\frac{1}{2}\sum_k\left[\sqrt{P_k(x)} - \sqrt{P_k(x')}\right]^2}$$

Then

$$\frac{D_H^2(x, x+dx)}{dx^2} = \frac{1}{2}\sum_k\left[\frac{\sqrt{P_k(x+dx)} - \sqrt{P_k(x)}}{dx}\right]^2 = \frac{F(x)}{8}$$

so $ds^2 \equiv D_H^2(x, x+dx) = \frac{F(x)}{8}\,dx^2$, and $F(x)$ appears as a measure of change of the probability distribution!
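A numerical sketch of this relation (not from the lecture; the distribution $P_0 = \cos^2 x$, $P_1 = \sin^2 x$, with $F(x) = 4$, is an assumed example):

```python
import numpy as np

def P(x):
    # assumed example distribution with Fisher information F(x) = 4
    return np.array([np.cos(x)**2, np.sin(x)**2])

def D_H2(x, xp):
    # squared Hellinger distance: (1/2) sum_k [sqrt(P_k(x)) - sqrt(P_k(xp))]^2
    return 0.5 * np.sum((np.sqrt(P(x)) - np.sqrt(P(xp)))**2)

x, dx = 0.3, 1e-3
val = D_H2(x, x + dx) / dx**2
print(val)  # close to F/8 = 0.5
```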
Understanding the Fisher information (3)

The expression for the Hellinger distance can be written in terms of the fidelity between the two distributions:

$$D_H(x,x') = \sqrt{\frac{1}{2}\sum_k\left[\sqrt{P_k(x)} - \sqrt{P_k(x')}\right]^2} = \sqrt{1 - \sqrt{\Phi_H(x,x')}}$$

where

$$\Phi_H(x,x') = \left[\sum_k \sqrt{P_k(x)\,P_k(x')}\right]^2 \qquad (= 1 \text{ for } x = x')$$

Therefore:

$$\Phi_H(x, x+dx) = 1 - \frac{F(x)}{4}\,dx^2\,, \qquad \frac{\sqrt{F(x)}}{2} \to \text{speed of change}$$
I.2 - Quantum parameter estimation
Quantum parameter estimation

The general idea is the same as before: one sends a probe through a parameter-dependent dynamical process and one measures the final state to determine the parameter. The precision in the determination of the parameter depends now on the distinguishability between quantum states corresponding to nearby values of the parameter.
Example: Optical interferometry

For coherent states,

$$|\langle\alpha|\alpha e^{i\delta\theta}\rangle|^2 = \exp\!\left(-|\alpha|^2\,|1 - e^{i\delta\theta}|^2\right) \approx \exp\!\left(-\langle n\rangle\,\delta\theta^2\right) \;\Rightarrow\; \delta\theta \approx 1/\sqrt{\langle n\rangle}$$

— the standard limit (shot noise).

Possible method to increase precision for the same average number of photons: use NOON states [J. J. Bollinger et al., PRA 54, R4649 (1996); J. P. Dowling, PRA 57, 4736 (1998)]:

$$|\psi_N\rangle = \frac{|N,0\rangle + |0,N\rangle}{\sqrt{2}} \;\to\; |\psi_{N,\theta}\rangle = \frac{|N,0\rangle + e^{iN\theta}|0,N\rangle}{\sqrt{2}}\,, \qquad \langle n\rangle = N/2$$

$$|\langle\psi_N|\psi_{N,\delta\theta}\rangle|^2 = \cos^2(N\delta\theta/2)\,; \qquad \cos^2(N\delta\theta/2) = 0 \;\Rightarrow\; \delta\theta = \pi/N \;\Rightarrow\; \delta\theta \approx 1/N$$

HEISENBERG LIMIT — precision is better, for the same amount of resources (average number of photons)!
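The $N$-fold phase enhancement of the NOON state can be checked directly. A small sketch (not from the slide): in the two-dimensional subspace spanned by $\{|N,0\rangle, |0,N\rangle\}$, the state is $(1,1)/\sqrt{2}$ and the phase shift acts as ${\rm diag}(1, e^{iN\theta})$:

```python
import numpy as np

N, theta = 10, 0.05
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)                 # (|N,0> + |0,N>)/sqrt(2)
psi_theta = np.array([1.0, np.exp(1j * N * theta)]) / np.sqrt(2)

overlap2 = abs(np.vdot(psi0, psi_theta))**2              # |<psi_N|psi_{N,theta}>|^2
print(overlap2, np.cos(N * theta / 2)**2)                # equal: cos^2(N theta / 2)
```

The overlap first vanishes at $\theta = \pi/N$, i.e. the state becomes fully distinguishable after a phase shift $N$ times smaller than for a single photon.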
Quantum Fisher Information (Helstrom, Holevo, Braunstein and Caves)

$$p(\xi|X) = {\rm Tr}\!\left[\hat\rho(X)\,\hat E_\xi\right]\,, \qquad F(X;\{\hat E_\xi\}) \equiv \int d\xi\, p(\xi|X)\left[\frac{d\ln p(\xi|X)}{dX}\right]^2\,, \qquad \int d\xi\,\hat E_\xi = \hat 1 \;\;\text{(POVM)}$$

This corresponds to a given quantum measurement. The ultimate lower bound for $\langle(\Delta X_{\rm est})^2\rangle$ is obtained by optimizing over all quantum measurements, so that

$$F_Q(X) = \max_{\{\hat E_\xi\}} F(X;\{\hat E_\xi\}) \qquad \text{(Quantum Fisher Information)}$$
Quantum Fisher information for pure states (see notes for derivation)

Initial state of the probe: $|\psi(0)\rangle$. Final $X$-dependent state: $|\psi(X)\rangle = \hat U(X)|\psi(0)\rangle$, with $\hat U(X)$ a unitary operator. Then (Helstrom 1976):

$$F_Q(X) = 4\,\langle(\Delta\hat H)^2\rangle_0\,, \qquad \langle(\Delta\hat H)^2\rangle_0 \equiv \langle\psi(0)|\left[\hat H(X) - \langle\hat H(X)\rangle_0\right]^2|\psi(0)\rangle$$

where

$$\hat H(X) \equiv i\,\frac{d\hat U^\dagger(X)}{dX}\,\hat U(X)$$

If $\hat U(X) = \exp(i\hat O X)$, with $\hat O$ independent of $X$, then $\hat H = \hat O$.

$$\delta X \ge \frac{1}{2\sqrt{\nu}\,\Delta\hat H} \;\Rightarrow\; \text{one should maximize the variance to get better precision!}$$
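A minimal numerical sketch of these formulas (not from the lecture; the qubit generator $\hat O = \sigma_z/2$ and the probe state $(|0\rangle + |1\rangle)/\sqrt{2}$ are assumed examples). It checks that $\hat H(X) = i\,d\hat U^\dagger/dX\,\hat U$ reduces to $\hat O$, and that $F_Q = 4\langle(\Delta\hat H)^2\rangle_0 = 1$ for this state:

```python
import numpy as np

O = np.diag([0.5, -0.5])                      # sigma_z / 2

def U(X):
    # exp(i O X); O is diagonal, so exponentiate elementwise
    return np.diag(np.exp(1j * np.diag(O) * X))

X, h = 0.7, 1e-6
# H(X) = i dU^dagger/dX U, with the derivative by central differences
dUdag = (U(X + h).conj().T - U(X - h).conj().T) / (2 * h)
H = 1j * dUdag @ U(X)                          # should equal O

psi0 = np.array([1.0, 1.0]) / np.sqrt(2)       # probe state (|0> + |1>)/sqrt(2)
meanH = np.real(psi0.conj() @ H @ psi0)
varH = np.real(psi0.conj() @ (H @ H) @ psi0) - meanH**2
FQ = 4 * varH
print(FQ)  # = 4 * Var(sigma_z/2) = 4 * (1/4) = 1
```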
Another expression for the quantum Fisher information

From

$$F_Q(X) = 4\,\langle(\Delta\hat H)^2\rangle_0\,, \qquad \langle(\Delta\hat H)^2\rangle_0 \equiv \langle\psi(0)|\left[\hat H(X) - \langle\hat H(X)\rangle_0\right]^2|\psi(0)\rangle\,, \qquad \hat H(X) \equiv i\,\frac{d\hat U^\dagger(X)}{dX}\,\hat U(X)$$

it follows that

$$F_Q(X) = 4\left[\left\langle\frac{d\psi(X)}{dX}\bigg|\frac{d\psi(X)}{dX}\right\rangle - \left|\left\langle\psi(X)\bigg|\frac{d\psi(X)}{dX}\right\rangle\right|^2\right]$$

Exercise: Show this!
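This derivative form is easy to evaluate numerically. A sketch (not from the lecture; the state $|\psi(X)\rangle = (|0\rangle + e^{iX}|1\rangle)/\sqrt{2}$, for which $F_Q = 1$, is an assumed example):

```python
import numpy as np

def psi(X):
    # assumed example state: (|0> + e^{iX}|1>)/sqrt(2), generated by |1><1|
    return np.array([1.0, np.exp(1j * X)]) / np.sqrt(2)

X, h = 0.4, 1e-6
dpsi = (psi(X + h) - psi(X - h)) / (2 * h)     # d|psi>/dX by central differences

# F_Q = 4 [ <dpsi|dpsi> - |<psi|dpsi>|^2 ]
FQ = 4 * (np.vdot(dpsi, dpsi) - abs(np.vdot(psi(X), dpsi))**2).real
print(FQ)  # close to 1
```

The second term subtracts the component of $d|\psi\rangle/dX$ along $|\psi\rangle$ itself, which is a pure phase change and carries no information about $X$.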
Geometrical interpretation of the quantum Fisher information

Remember that, for classical probability distributions, one had

$$\Phi_H(x,x') = \left[\sum_k\sqrt{P_k(x)\,P_k(x')}\right]^2\,, \qquad \Phi_H(x,x+dx) = 1 - \frac{F(x)}{4}\,dx^2$$

Using the expressions of the probabilities in terms of $\hat E_k$, the Bures fidelity between two density operators $\hat\rho$ and $\hat\sigma$ is defined as

$$\Phi_B(\hat\rho,\hat\sigma) = \min_{\{\hat E_k\}}\left[\sum_k\sqrt{{\rm Tr}(\hat\rho\hat E_k)\,{\rm Tr}(\hat\sigma\hat E_k)}\right]^2 = \min_{\{\hat E_k\}}\left[\sum_k\sqrt{P_k(\hat\rho)\,P_k(\hat\sigma)}\right]^2$$

This can be shown to be equal to:

$$\Phi_B(\hat\rho_1,\hat\rho_2) \equiv \left[{\rm Tr}\sqrt{\hat\rho_1^{1/2}\,\hat\rho_2\,\hat\rho_1^{1/2}}\right]^2$$

For pure states, $\hat\rho_i = |\psi_i\rangle\langle\psi_i|$, this reduces to $\Phi_B = |\langle\psi_1|\psi_2\rangle|^2$.

Minimization of $\Phi_H$ leads to maximization of $F(x)$, thus yielding the quantum Fisher information ($\sqrt{F_Q}/2 \to$ speed):

$$\Phi_B\!\left[\hat\rho(X),\hat\rho(X+\delta X)\right] = 1 - F_Q\!\left[\hat\rho(X)\right]\delta X^2/4 + O(\delta X^4)$$
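For pure states this expansion can be verified directly. A sketch (not from the lecture; the state $|\psi(X)\rangle = (|0\rangle + e^{iX}|1\rangle)/\sqrt{2}$, with $F_Q = 1$, is an assumed example):

```python
import numpy as np

def psi(X):
    # assumed example state with quantum Fisher information F_Q = 1
    return np.array([1.0, np.exp(1j * X)]) / np.sqrt(2)

X, dX, FQ = 0.4, 1e-3, 1.0
# pure-state Bures fidelity: Phi_B = |<psi(X)|psi(X+dX)>|^2
Phi_B = abs(np.vdot(psi(X), psi(X + dX)))**2
ratio = (1 - Phi_B) / dX**2
print(ratio)  # close to F_Q/4 = 0.25
```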