Relative entropy optimization in quantum information
Omar Fawzi, ICMP 2018, Montréal


  1. Relative entropy optimization in quantum information. Omar Fawzi, ICMP 2018, Montréal.


  4. Quantum relative entropies

  For classical states (i.e., probability distributions) P and Q on X:
    D(P‖Q) := Σ_{x∈X} P(x) log [P(x)/Q(x)]

  For quantum states ρ and σ on C^d, multiple choices:
    1. Matrix logarithm [Umegaki, 1962]:
       D(ρ‖σ) := tr[ρ log ρ] − tr[ρ log σ]
    2. Matrix logarithm in a different way [Belavkin, Staszewski, 1982]:
       D_BS(ρ‖σ) := tr[ρ log(ρ^{1/2} σ^{−1} ρ^{1/2})]
    3. Optimize over all measurements [Donald, 1986]:
       D_M(ρ‖σ) := sup_{{M_x}_{x∈X} PSD, Σ_x M_x = id} Σ_{x∈X} tr[M_x ρ] log (tr[M_x ρ] / tr[M_x σ])

  Most common is Umegaki's: hypothesis testing interpretation [Hiai, Petz, 1991, "The Proper Formula for Relative Entropy and its Asymptotics in Quantum Probability"] ... but others can be useful too.
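As a sanity check on the definitions above, the two matrix-logarithm divergences can be evaluated numerically. A minimal sketch, assuming two fixed full-rank qubit states of my own choosing (values not from the talk) and natural logarithms:

```python
import numpy as np
from scipy.linalg import logm, sqrtm

# Two illustrative full-rank qubit states (arbitrary values, not from the talk).
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
sigma = np.array([[0.5, 0.1], [0.1, 0.5]], dtype=complex)

def D_umegaki(rho, sigma):
    """Umegaki: tr[rho log rho] - tr[rho log sigma] (natural log, i.e. nats)."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

def D_bs(rho, sigma):
    """Belavkin-Staszewski: tr[rho log(rho^{1/2} sigma^{-1} rho^{1/2})]."""
    r = sqrtm(rho)
    return np.trace(rho @ logm(r @ np.linalg.inv(sigma) @ r)).real

print(D_umegaki(rho, sigma), D_bs(rho, sigma))
```

On this pair the printed values satisfy D ≤ D_BS, consistent with the ordering of the divergences stated on the next slide.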

  5. Quantum relative entropies

    D_M(ρ‖σ) ≤ D(ρ‖σ) ≤ D_BS(ρ‖σ)

  Most important property: joint convexity
    D((1−t)ρ_0 + tρ_1 ‖ (1−t)σ_0 + tσ_1) ≤ (1−t) D(ρ_0‖σ_0) + t D(ρ_1‖σ_1)

  Classical relative entropy D(P‖Q): simple application of the convexity of x ↦ x log x
  Quantum relative entropies:
    D: consequence of Lieb's concavity theorem [Lieb, 1973]
    D_BS: consequence of the concavity of the matrix geometric mean [Fujii, Kamei, 1989]
    D_M: follows easily from the classical case as a sup of convex functions

  Operational consequence: data processing inequality, for N a completely positive trace-preserving map:
    D(N(ρ) ‖ N(σ)) ≤ D(ρ‖σ)

  Another appealing consequence: for S convex, min_{(ρ,σ)∈S} D(ρ‖σ) is a convex problem
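The data processing inequality above can be checked numerically for the simplest CPTP map, a partial trace. A hedged sketch; the random-state construction and seed are arbitrary choices of mine, not from the talk:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)

def random_state(d):
    """Random full-rank density matrix (Wishart plus a small shift for conditioning)."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    w = g @ g.conj().T + 0.1 * np.eye(d)
    return w / np.trace(w)

def D(rho, sigma):
    """Umegaki relative entropy in nats."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

def ptrace_B(X, dA, dB):
    """Partial trace over the second factor of C^dA tensor C^dB."""
    return np.trace(X.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

rho, sigma = random_state(4), random_state(4)
full = D(rho, sigma)
reduced = D(ptrace_B(rho, 2, 2), ptrace_B(sigma, 2, 2))
print(reduced, full)  # reduced <= full, by data processing for the CPTP map tr_B
```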

  6. Quantities based on relative entropy optimization

  1. Relative entropy of entanglement:
       E_R(ρ_AB) = min_{σ_AB ∈ Sep_AB} D(ρ_AB ‖ σ_AB)
     More generally, the relative entropy of resource E(ρ) = min_{σ∈F} D(ρ‖σ) in a resource theory with free states F; quantifies the amount of resource in the state ρ.
  2. Quantum channel capacities, e.g., the entanglement-assisted capacity of N(ρ) = tr_E(U ρ U*), with U an isometry A → B ⊗ E:
       C_ea(N) = max −D(σ_BE ‖ id_B ⊗ σ_E) − D(σ_B ‖ id_B)
                 s.t. σ_BE = U ρ_A U*, ρ_A ∈ D(A)
  3. D of recovery of ρ_ABC, quantifying how well C can be locally recovered:
       min_{R: L(B)→L(BC) CPTP} D(ρ_ABC ‖ (I_A ⊗ R)(ρ_AB))

  Running example: recoverability
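For the relative entropy of entanglement, one case where the minimizer is known in closed form is the two-qubit Bell state: a closest separable state is the equal mixture of |00⟩⟨00| and |11⟩⟨11|, giving E_R = log 2 (a standard fact due to Vedral and Plenio). A small sketch verifying that value; the support-aware matrix logarithm is an implementation detail of mine, not from the talk:

```python
import numpy as np

def D(rho, sigma, tol=1e-12):
    """Umegaki relative entropy in nats, with a support-aware matrix log."""
    def mlog(X):
        vals, vecs = np.linalg.eigh(X)
        logvals = np.where(vals > tol, np.log(np.maximum(vals, tol)), 0.0)
        return vecs @ np.diag(logvals) @ vecs.conj().T
    return np.trace(rho @ (mlog(rho) - mlog(sigma))).real

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# A closest separable state to |Phi+>: (|00><00| + |11><11|)/2
sigma = np.zeros((4, 4))
sigma[0, 0] = sigma[3, 3] = 0.5

print(D(rho, sigma), np.log(2))  # both ~0.6931
```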


  8. Recoverability

    I(A:C|B)_ρ = D(ρ_ABC ‖ id_A ⊗ ρ_BC) − D(ρ_AB ‖ id_A ⊗ ρ_B)

  Motivation: operational properties of states ρ_ABC with I(A:C|B)_ρ ≤ ε
    near-saturation of the data processing inequality for D
    "approximate quantum Markov chains"

  Surprisingly, there is a state ρ_ABC with I(A:C|B)_ρ ≤ 1/d that is 1/4-far from exact Markov states [Ibinson, Linden, Winter, 2006] and [Christandl, Schuch, Winter, 2012]

  But the state ρ_ABC is approximately recoverable [Fawzi, Renner, 2014], building on [Li, Winter, 2012], ..., [Berta, Seshadreesan, Wilde, 2014]:
    min_{R: L(B)→L(BC) CPTP} D(ρ_ABC ‖ (I_A ⊗ R)(ρ_AB)) ≤ ε
  for D = −2 log F (a.k.a. the sandwiched Rényi divergence of order 1/2)
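The relative-entropy expression for I(A:C|B) above can be cross-checked against the entropy formula S(AB) + S(BC) − S(B) − S(ABC) on a random tripartite state; nonnegativity is strong subadditivity. A sketch, with a random-state construction that is my own arbitrary choice:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(1)
dims = [2, 2, 2]  # subsystems A, B, C

g = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
w = g @ g.conj().T + 0.1 * np.eye(8)  # shift keeps the state well conditioned
rho_ABC = w / np.trace(w)

def ptrace(X, dims, keep):
    """Partial trace keeping the subsystems listed in `keep` (in order)."""
    n = len(dims)
    X = X.reshape(dims + dims)
    for ax in sorted(set(range(n)) - set(keep), reverse=True):
        X = np.trace(X, axis1=ax, axis2=ax + X.ndim // 2)
    d = int(np.prod([dims[k] for k in keep]))
    return X.reshape(d, d)

def S(rho):
    return -np.trace(rho @ logm(rho)).real

def D(rho, sigma):
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

rho_AB = ptrace(rho_ABC, dims, [0, 1])
rho_BC = ptrace(rho_ABC, dims, [1, 2])
rho_B = ptrace(rho_ABC, dims, [1])

# Relative-entropy form of I(A:C|B) vs the entropy form.
I_rel = D(rho_ABC, np.kron(np.eye(2), rho_BC)) - D(rho_AB, np.kron(np.eye(2), rho_B))
I_ent = S(rho_AB) + S(rho_BC) - S(rho_B) - S(rho_ABC)
print(I_rel, I_ent)  # equal, and >= 0 by strong subadditivity
```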


  11. Recoverability

  Let D_rec(ρ_ABC) = min_{R: L(B)→L(BC) CPTP} D(ρ_ABC ‖ (I_A ⊗ R)(ρ_AB))

  We saw that D_rec(ρ_ABC) ≤ I(A:C|B)_ρ for D = −2 log F [Fawzi, Renner, 2014]
  Note that −2 log F ≤ D_M ≤ D ≤ D_BS
  The inequality is true with D = D classically.
  Can it be improved in the quantum case to D = D_M, D, or D_BS?
    YES for D_M, as shown in [Brandao, Harrow, Oppenheim, Strelchuk, 2014]
    NO for D, as shown in [Fawzi, Fawzi, 2017]
  Why does D_M behave better here? → an additivity property of D_rec^M under tensor products, not satisfied by D_rec


  15. Additivity of optimized relative entropies

  Consider D_opt(ρ) := min_{σ∈C} D(ρ‖σ), where C is a convex set of states

  Both D = D and D = D_M are super-additive on tensor-product states:
    D(ρ_1 ⊗ ρ_2 ‖ σ_1 ⊗ σ_2) ≥ D(ρ_1‖σ_1) + D(ρ_2‖σ_2)
  Does this property transfer to D_opt?

  Super-additivity of D_opt on tensor-product states:
    D_opt(ρ_1 ⊗ ρ_2) = min_{σ_12∈C_12} D(ρ_1 ⊗ ρ_2 ‖ σ_12)
      ?≥ min_{σ_1∈C_1} D(ρ_1‖σ_1) + min_{σ_2∈C_2} D(ρ_2‖σ_2) = D_opt(ρ_1) + D_opt(ρ_2)
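For Umegaki's D the super-additivity displayed above in fact holds with equality on tensor products (since log(ρ_1 ⊗ ρ_2) = log ρ_1 ⊗ id + id ⊗ log ρ_2), which is easy to confirm numerically. A sketch with random states; dimensions and seed are arbitrary choices, not from the talk:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)

def rand_state(d):
    """Random full-rank density matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    w = g @ g.conj().T + 0.1 * np.eye(d)
    return w / np.trace(w)

def D(rho, sigma):
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

r1, s1 = rand_state(2), rand_state(2)
r2, s2 = rand_state(3), rand_state(3)

lhs = D(np.kron(r1, r2), np.kron(s1, s2))
rhs = D(r1, s1) + D(r2, s2)
print(lhs, rhs)  # equal: Umegaki D is additive on tensor products
```

The open question on the slide is whether this survives the minimization over a joint constraint set C_12, which is where the variational formulas of the next slide come in.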


  18. Using variational formulas

  Idea: use variational formulas for D
    D(ρ‖σ) = sup_{ω>0} tr[ρ log ω] + 1 − tr exp(log σ + log ω)   [Petz, 1988]
    D_M(ρ‖σ) = sup_{ω>0} tr[ρ log ω] + 1 − tr[σω]   [Hiai, Petz '93; Berta, Fawzi, Tomamichel '15]

  Remarks:
    Golden-Thompson inequality → D_M ≤ D
    The formula for D_M → efficient computation of D_M

  Back to showing additivity: D(ρ‖σ) = sup_{ω>0} f(ρ, σ, ω)
  Using Sion's minimax theorem:
    D_opt(ρ) = min_{σ∈C} sup_{ω>0} f(ρ, σ, ω) = sup_{ω>0} min_{σ∈C} f(ρ, σ, ω)

  For D = D_M:  min_{σ∈C} f(ρ, σ, ω) = min_{σ∈C} tr[ρ log ω] + 1 − tr[σω]
  This is a semidefinite program (if C is nice) → use strong duality:
    min_{σ∈C} f(ρ, σ, ω) = max_{σ̄∈C̄} f̄(ρ, σ̄, ω)
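Petz's variational formula can be probed numerically: the choice ω = exp(log ρ − log σ) attains the supremum (plugging it in leaves D(ρ‖σ) + 1 − tr[ρ]), and Golden-Thompson makes the D_M objective pointwise smaller, which is exactly how D_M ≤ D drops out. A sketch; helper names and the random states are my own, not from the talk:

```python
import numpy as np
from scipy.linalg import logm, expm

rng = np.random.default_rng(3)

def rand_state(d):
    """Random well-conditioned density matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    w = g @ g.conj().T + 0.2 * np.eye(d)
    return w / np.trace(w)

rho, sigma = rand_state(3), rand_state(3)

def petz_obj(rho, sigma, omega):
    """Objective of Petz's variational formula for Umegaki's D."""
    return (np.trace(rho @ logm(omega)) + 1
            - np.trace(expm(logm(sigma) + logm(omega)))).real

def meas_obj(rho, sigma, omega):
    """Objective of the variational formula for D_M."""
    return (np.trace(rho @ logm(omega)) + 1 - np.trace(sigma @ omega)).real

D = np.trace(rho @ (logm(rho) - logm(sigma))).real
omega_star = expm(logm(rho) - logm(sigma))  # optimizer of the Petz formula
print(petz_obj(rho, sigma, omega_star), D)  # equal
```

Golden-Thompson gives tr exp(log σ + log ω) ≤ tr[σω], so meas_obj ≤ petz_obj at every ω > 0, and taking suprema yields D_M ≤ D.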
