
OPTIMAL QUANTUM LEARNING AND MULTIROUND REFERENCE FRAME ALIGNMENT



  1. OPTIMAL QUANTUM LEARNING AND MULTIROUND REFERENCE FRAME ALIGNMENT
  Giulio Chiribella
  Joint work with G. M. D'Ariano, P. Perinotti, A. Bisio, and S. Facchini
  Quantum Information Theory Group, Pavia University
  Work supported by the EC project CORNER
  DEX-SMI Workshop on Quantum Statistical Inference, National Institute of Informatics, Tokyo, 2-4 March 2009

  2. OUTLINE
  • Optimal quantum learning of a unitary transformation from finite examples (arXiv:0903.0543v1)
  • Optimal correction of an unknown rotation (a little variation on the theme of quantum learning)
  • Multi-round and adaptive alignment of reference frames: equivalence of backward communication with forward communication of charge-conjugate particles

  3. OPTIMAL QUANTUM LEARNING: WHAT IS IT ABOUT?

  4-9. LEARNING AN UNKNOWN FUNCTION
  Problem: a black box computes an unknown function y = f(x).
  We can evaluate f on a finite set of points x_1, ..., x_N, getting the outcomes y_1, ..., y_N.
  [Diagram: the black box f maps each query x_i to the outcome y_i.]
  Subsequently, we are asked to compute f on a new point x, without using the black box: f(x) = ?
  In classical computer science, statistical learning provides several efficient solutions for this problem.
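To see the classical problem in the simplest possible terms, here is a minimal Python sketch (not from the slides; the cubic black box and the polynomial model are illustrative assumptions): query the box on N points, take the box away, then answer at a new point with a fitted model f̂.

```python
import numpy as np

# Hypothetical black box computing the unknown function f
# (a cubic, chosen only for illustration).
def black_box(x):
    return 0.5 * x**3 - x + 2.0

# Phase 1: evaluate f on a finite set of points x_1, ..., x_N.
rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, size=20)
ys = black_box(xs)

# Phase 2: the box is no longer available; fit a model to the
# (x_i, y_i) pairs and use it as the guess f-hat at a new point x.
f_hat = np.poly1d(np.polyfit(xs, ys, deg=3))

x_new = 1.3
print(f"f_hat(x) = {f_hat(x_new):.4f}, true f(x) = {black_box(x_new):.4f}")
```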

  10-12. CLASSICAL NETWORKS FOR LEARNING
  Comparing x with f(x) for N times is not the only possibility: this just corresponds to the parallel configuration.
  [Diagram: N copies of the black box f queried in parallel, mapping x_1, ..., x_N to y_1, ..., y_N.]
  To learn better, one could use a sequential network, where g_1, g_2, ..., g_N are known functions:
  [Diagram: a chain alternating the black box f with the known functions g_1, g_2, g_3, ...]
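The difference between the two configurations can be made concrete with a toy Python sketch (illustrative only; the black box f and the update rule g are invented for the example). In the parallel strategy every query point is fixed in advance; in the sequential network each known function g_i computes the next query from the answers collected so far.

```python
def parallel(f, xs):
    """Parallel configuration: evaluate f on preselected points x_1, ..., x_N."""
    return [(x, f(x)) for x in xs]

def sequential(f, x0, gs):
    """Sequential network f, g_1, f, g_2, ...: each known function g_i maps
    the history of (x, y) pairs seen so far to the next query point."""
    history, x = [], x0
    for g in gs:
        history.append((x, f(x)))
        x = g(history)
    history.append((x, f(x)))
    return history

# Example: homing in on a root of f with adaptive queries, something a
# fixed parallel grid cannot do as efficiently.
f = lambda x: x * x - 2.0
g = lambda h: h[-1][0] - 0.25 * h[-1][1]   # damped correction toward f = 0
print(sequential(f, 2.0, [g] * 6)[-1][0])  # ≈ 1.414, i.e. sqrt(2)
```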

  13. OPTIMIZATION PROBLEM
  Find the optimal strategy to learn an unknown function f ∈ F_0. This means:
  • find the best network F = g_N ∘ f ∘ ··· ∘ g_2 ∘ f ∘ g_1 ∘ f
  • find the best input X, producing the outcome Y = F(X)
  • for outcome Y, find the optimal guess f̂ (a rule Y → f̂)
  • Difference with estimation of the function f: estimation corresponds to the special case f̂ ∈ F_0. In general, the optimal guess does not have to be in F_0.

  14. FROM CLASSICAL TO QUANTUM LEARNING
  • Unknown function f → unknown quantum channel E
  • Classical network → quantum network
  • Input X → quantum state ρ_in
  • Output Y → quantum state ρ_out
  [Diagram: quantum circuit in which ρ_in is fed through the uses of E, interleaved with fixed channels C_1, ..., C_{N−1}.]

  15-16. GUESSING A CHANNEL FROM A STATE
  • Classical guess Y → f̂   becomes a quantum “guess” ρ_out → Ê
  Physical implementation of the quantum guess: a retrieving channel R. It retrieves the unknown transformation from the output state ρ_out and performs it on a new state ρ:
  Ê(ρ) = R(ρ ⊗ ρ_out)
  [Diagram: ρ and ρ_out enter the box R, which outputs Ê(ρ).]
  Target: implementing the unknown channel with maximum fidelity.
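A small numpy sketch of the defining equation Ê(ρ) = R(ρ ⊗ ρ_out): the retrieving channel R is specified by (possibly rectangular) Kraus operators acting on the joint system. The particular R below, which discards ρ_out and applies a fixed X gate, is a made-up toy used only to exercise the formula, not an optimal retriever.

```python
import numpy as np

def apply_channel(kraus, rho):
    """Apply a channel given by Kraus operators (rectangular ones allowed)."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def retrieve(kraus_R, rho, rho_out):
    """Quantum guess: E-hat(rho) = R(rho ⊗ rho_out)."""
    return apply_channel(kraus_R, np.kron(rho, rho_out))

# Toy retrieving channel: trace out rho_out and apply X to rho.  The Kraus
# operators K_i = X ⊗ <i| map the joint system down to the target system
# and satisfy the completeness relation sum_i K_i† K_i = I.
d = 2
X = np.array([[0, 1], [1, 0]], dtype=complex)
kraus_R = [np.kron(X, np.eye(d)[i].reshape(1, d)) for i in range(d)]
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
rho_out = np.eye(d, dtype=complex) / d            # maximally mixed
print(retrieve(kraus_R, rho, rho_out))            # X|0><0|X = |1><1|
```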

  17. OPTIMAL QUANTUM LEARNING
  Find the optimal strategy to learn an unknown channel E ∈ E_0. This means:
  • find the best network N = C_N ∘ E ∘ ··· ∘ C_2 ∘ E ∘ C_1 ∘ E
  • find the best input ρ_in, producing the output ρ_out = N(ρ_in)
  • find the optimal retrieving channel R, with Ê(ρ) = R(ρ ⊗ ρ_out)
  Figure of merit: the input-output fidelity
  F(E, Ê) = ∫ dφ F(E(φ), Ê(φ)),   where F(ρ, σ) = [Tr(ρ^{1/2} σ ρ^{1/2})^{1/2}]²
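The fidelity entering the figure of merit is the standard Uhlmann fidelity; a minimal numpy/scipy sketch:

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = [Tr (rho^{1/2} sigma rho^{1/2})^{1/2}]^2."""
    sr = sqrtm(rho)
    return np.real(np.trace(sqrtm(sr @ sigma @ sr))) ** 2

# Sanity checks: F = 1 on identical states, F = 0 on orthogonal pure states.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
sigma = np.array([[0, 0], [0, 1]], dtype=complex)
print(fidelity(rho, rho), fidelity(rho, sigma))   # ~1.0  ~0.0
```

For pure states this reduces to |⟨φ|ψ⟩|², which is the form used when averaging over input states φ.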

  18. “MEASURE-AND-PREPARE” SCHEMES
  • A particular scheme to retrieve the unknown transformation: perform a measurement on the output state and, for outcome Y, perform the channel Ê_Y. In this case, the retrieving channel is
  R_meas(ρ ⊗ ρ_out) = Σ_Y Tr[P_Y ρ_out] Ê_Y(ρ)
  • A particular measure-and-prepare scheme: estimation of the channel E ∈ E_0. In this case, one has Ê_Y ∈ E_0.
  Estimation ∈ {measure-and-prepare schemes} ⊂ {retrieving channels}
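The formula for R_meas transcribes directly into numpy. In the sketch below the POVM {P_Y} and the guessed channels Ê_Y are arbitrary toy choices, not the optimal estimation scheme:

```python
import numpy as np

def measure_and_prepare(rho, rho_out, povm, channels):
    """R_meas(rho ⊗ rho_out) = sum_Y Tr[P_Y rho_out] E_Y(rho), where each
    guessed channel E_Y is given as a list of Kraus operators."""
    out = np.zeros_like(rho)
    for P, kraus in zip(povm, channels):
        p = np.real(np.trace(P @ rho_out))                   # outcome probability
        out += p * sum(K @ rho @ K.conj().T for K in kraus)  # prepare E_Y(rho)
    return out

# Toy example: measure a qubit rho_out in the computational basis; on outcome
# 0 guess the identity channel, on outcome 1 guess the bit flip X.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
povm = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
channels = [[I2], [X]]
rho = np.array([[1, 0], [0, 0]], dtype=complex)
rho_out = np.diag([0.25, 0.75]).astype(complex)
print(measure_and_prepare(rho, rho_out, povm, channels))  # diag(0.25, 0.75)
```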

  19. LEARNING AN UNKNOWN UNITARY
  Consider the case where the set of channels E_0 is a group of unitary transformations U.
  [Circuit: ρ_in is fed through the N uses of U, interleaved with the channels C_1, ..., C_N; the retrieving channel R then implements the guess C_U.]
  Assuming a uniform prior for the unknown unitaries, we have the average fidelity
  F̄ = ∫ dU F(U, C_U)
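The average fidelity can be estimated numerically by Monte Carlo over Haar-random unitaries and Haar-random input states. The sketch below (illustrative, not from the talk) evaluates the trivial strategy C_U = identity for a qubit, recovering the known baseline F̄ = 1/2 that any genuine learning scheme must beat.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    """Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix the phases of r

def haar_state(d):
    """Haar-random pure state."""
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    return psi / np.linalg.norm(psi)

# Monte Carlo estimate of F-bar = ∫dU ∫dφ F(U|φ>, C_U(|φ>)) with C_U = identity.
d, samples = 2, 20000
total = 0.0
for _ in range(samples):
    U, phi = haar_unitary(d), haar_state(d)
    total += abs(np.vdot(U @ phi, phi)) ** 2   # pure-state fidelity |<φ|U|φ>|²
print(total / samples)                         # ≈ 0.5 for qubits
```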

  20. HOW TO OPTIMIZE A QUANTUM NETWORK: QUANTUM COMBS

  21-24. CHOI-JAMIOLKOWSKI OPERATORS
  Convenient representation of linear maps: the Choi-Jamiolkowski-Belavkin-Staszewski (CJBS) operator
  C = (C ⊗ I)(|I⟩⟩⟨⟨I|),   |I⟩⟩ = Σ_n |n⟩|n⟩
  [Diagram: the Choi operator is obtained by letting the channel C act on one half of the maximally entangled vector |I⟩⟩.]
  For a unitary channel: (U ⊗ I)(|I⟩⟩⟨⟨I|) = |U⟩⟩⟨⟨U|,   |U⟩⟩ = (U ⊗ I)|I⟩⟩
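A numpy sketch of the CJBS representation (assuming a d-dimensional channel with square Kraus operators, and the ordering output ⊗ input): build |I⟩⟩, form the Choi operator, and check that a unitary channel yields the rank-one projector |U⟩⟩⟨⟨U|.

```python
import numpy as np

def choi(kraus):
    """Choi operator C = (C ⊗ I)(|I>><<I|) of the channel C(rho) = sum_k K rho K†."""
    d = kraus[0].shape[0]
    I_ket = np.eye(d).reshape(d * d)          # |I>> = sum_n |n>|n>
    C = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus:
        v = np.kron(K, np.eye(d)) @ I_ket     # (K ⊗ I)|I>>
        C += np.outer(v, v.conj())
    return C

# For a unitary channel the Choi operator is |U>><<U| with |U>> = (U ⊗ I)|I>>.
d = 2
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
U_ket = np.kron(U, np.eye(d)) @ np.eye(d).reshape(d * d)
print(np.allclose(choi([U]), np.outer(U_ket, U_ket.conj())))  # True
```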

  25-29. LINK PRODUCT
  Convenient representation of the composition of linear maps: the link product
  F ∘ E   ⟺   F_cb ∗ E_ba := Tr_b[(F_cb ⊗ I_a)(I_c ⊗ E_ba^{τ_b})]
  where τ_b denotes partial transposition on the Hilbert space b.
  [Diagram: the output wire b of E: a → b is plugged into the input wire of F: b → c, yielding a single map from a to c.]
  The link product is commutative up to a permutation of Hilbert spaces: F_cb ∗ E_ba = E_ba ∗ F_cb.
  GC, G. M. D'Ariano, and P. Perinotti, Phys. Rev. Lett. 101, 060401 (2008)
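A numpy sketch of the link product under the conventions above (Choi operators ordered as output ⊗ input; the index bookkeeping is my reconstruction, so treat it as illustrative): partially transpose E_ba on the connecting space b, multiply on c ⊗ b ⊗ a, and trace out b. The final check confirms that linking the Choi operators of V and U reproduces the Choi operator of the composition V ∘ U.

```python
import numpy as np

def choi(kraus):
    """Choi operator (output ⊗ input) of a channel with the given Kraus operators."""
    d = kraus[0].shape[0]
    I_ket = np.eye(d).reshape(d * d)
    return sum(np.outer(v, v.conj())
               for v in [np.kron(K, np.eye(d)) @ I_ket for K in kraus])

def link(F_cb, E_ba, dc, db, da):
    """F_cb * E_ba = Tr_b[(F_cb ⊗ I_a)(I_c ⊗ E_ba^{τ_b})], an operator on c ⊗ a."""
    # partial transpose of E_ba on its first tensor factor (space b)
    E_tb = E_ba.reshape(db, da, db, da).transpose(2, 1, 0, 3).reshape(db * da, db * da)
    M = np.kron(F_cb, np.eye(da)) @ np.kron(np.eye(dc), E_tb)   # acts on c ⊗ b ⊗ a
    # partial trace over the middle space b
    M6 = M.reshape(dc, db, da, dc, db, da)
    return np.einsum('ibjkbl->ijkl', M6).reshape(dc * da, dc * da)

def haar_unitary(d, rng):
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Check on unitary channels: Choi(V) * Choi(U) = Choi(V ∘ U).
rng = np.random.default_rng(2)
d = 2
U, V = haar_unitary(d, rng), haar_unitary(d, rng)
print(np.allclose(link(choi([V]), choi([U]), d, d, d), choi([V @ U])))  # True
```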
