Partial Trace Regression and Low-Rank Kraus Decomposition


  1. Partial Trace Regression and Low-Rank Kraus Decomposition
     Hachem Kadri 1, Stéphane Ayache 1, Riikka Huusari 2, Alain Rakotomamonjy 3,4, Liva Ralaivola 4
     1 Aix-Marseille University, CNRS, LIS, Marseille, France; 2 Helsinki Institute for Information Technology, Aalto University, Espoo, Finland; 3 Université Rouen Normandie, LITIS, Rouen, France; 4 Criteo AI Lab, Paris

  2. Trace Regression

     $y = \mathrm{tr}(B_*^\top X) + \epsilon$

     Generalization of linear regression to a matrix input $X$ → spatio-temporal data, covariance descriptors, ...
     The output $y$ is a real number.
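     Since $\mathrm{tr}(B^\top X) = \langle \mathrm{vec}(B), \mathrm{vec}(X)\rangle$, the unconstrained model is ordinary least squares on vectorized inputs. A minimal NumPy sketch on synthetic data (the dimensions, noise level, and ground-truth $B_*$ are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 200

# Hypothetical ground truth and synthetic inputs.
B_star = rng.standard_normal((p, p))
Xs = rng.standard_normal((n, p, p))

# y_i = tr(B_*^T X_i) + eps; tr(B^T X) = <vec(B), vec(X)>, so the model
# is linear in vec(X).
y = np.einsum('kl,ikl->i', B_star, Xs) + 0.01 * rng.standard_normal(n)

# Unconstrained estimate: ordinary least squares on vectorized inputs.
B_hat, *_ = np.linalg.lstsq(Xs.reshape(n, -1), y, rcond=None)
B_hat = B_hat.reshape(p, p)
print(np.abs(B_hat - B_star).max())   # small: close to the ground truth
```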

  3. Trace Regression: Low-Rank Estimation [Koltchinskii et al., 2011]

     $\hat{B} = \arg\min_{B} \sum_{i=1}^{l} \big(y_i - \mathrm{tr}(B^\top X_i)\big)^2 + \lambda \|B\|_1,$

     where $\|B\|_1$ is the nuclear (trace) norm, a convex penalty encouraging low rank.
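     One standard way to handle the nuclear-norm penalty is proximal gradient descent, whose proximal step is singular value thresholding. A generic sketch, not necessarily the algorithm of Koltchinskii et al.; the step size and iteration count are placeholders:

```python
import numpy as np

def svt(B, tau):
    """Singular value thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_trace_regression(Xs, y, lam, lr=1e-3, iters=500):
    """Proximal gradient on sum_i (y_i - tr(B^T X_i))^2 + lam * ||B||_*."""
    n, p, _ = Xs.shape
    B = np.zeros((p, p))
    for _ in range(iters):
        resid = np.einsum('kl,ikl->i', B, Xs) - y
        grad = 2 * np.einsum('i,ikl->kl', resid, Xs)
        B = svt(B - lr * grad, lr * lam)   # gradient step, then prox step
    return B
```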

  4. Trace Regression: PSD-Constrained Estimation [Slawski et al., 2015]

     $\hat{B} = \arg\min_{B \in S_p^+} \sum_{i=1}^{l} \big(y_i - \mathrm{tr}(B^\top X_i)\big)^2$
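     A simple way to enforce the PSD constraint is projected gradient descent, projecting onto the cone $S_p^+$ by clipping negative eigenvalues. Again a generic sketch, not necessarily the estimator used by Slawski et al.:

```python
import numpy as np

def project_psd(B):
    """Project a matrix onto the PSD cone: symmetrize, clip eigenvalues."""
    S = (B + B.T) / 2
    w, U = np.linalg.eigh(S)
    return U @ np.diag(np.maximum(w, 0.0)) @ U.T

def psd_trace_regression(Xs, y, lr=1e-3, iters=500):
    """Projected gradient on sum_i (y_i - tr(B^T X_i))^2 s.t. B is PSD."""
    n, p, _ = Xs.shape
    B = np.zeros((p, p))
    for _ in range(iters):
        resid = np.einsum('kl,ikl->i', B, Xs) - y
        grad = 2 * np.einsum('i,ikl->kl', resid, Xs)
        B = project_psd(B - lr * grad)
    return B
```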

  5. Trace Regression is relevant to:
     → Matrix completion
     → Phase retrieval
     → Quantum state tomography
     → ...

  6. Trace Regression: Matrix Completion

     $\arg\min_B \; \|P_\Omega(B_*) - P_\Omega(B)\|^2 \quad \text{s.t.}\quad \mathrm{rank}(B) = r$

  7. Trace Regression: Matrix Completion

     $\arg\min_B \sum_{(i,j) \in \Omega} \big(P_\Omega(B_*)_{ij} - \mathrm{tr}(B^\top E_{ij})\big)^2 \quad \text{s.t.}\quad \mathrm{rank}(B) = r$

     Since $\mathrm{tr}(B^\top E_{ij}) = B_{ij}$, observing entries of $B_*$ is exactly trace regression with matrix-unit inputs $X = E_{ij}$ (see the check below).
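     A quick numerical check of the identity $\mathrm{tr}(B^\top E_{ij}) = B_{ij}$ that underlies this reformulation:

```python
import numpy as np

p = 4
B = np.arange(p * p, dtype=float).reshape(p, p)

# The matrix unit E_ij selects one entry: tr(B^T E_ij) = B_ij, so entry
# observation is trace regression with inputs X = E_ij.
i, j = 1, 3
E = np.zeros((p, p)); E[i, j] = 1.0
print(np.trace(B.T @ E) == B[i, j])   # True
```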

  8. Partial Trace Regression

     Generalizes trace regression to the case where both inputs and outputs are matrices.
     [Figure: domain adaptation for EEG data, from [Liu et al., 2019]] [Figure: covariance matrix, from [Williamson et al., 2012]]
     Inspiration: the partial trace, completely positive (CP) maps, and the Kraus decomposition from quantum computing.

  9. Notational Conventions
     → $M_p := M_p(\mathbb{R})$, the space of all $p \times p$ real matrices
     → $M_p(M_q)$, the space of $p \times p$ block matrices whose $(i, j)$ entry is an element of $M_q$
     → $L(M_p, M_q)$, the space of linear maps from $M_p$ to $M_q$

  10. From Trace to Partial Trace

     Trace: a square matrix is mapped to a single scalar, the sum of its diagonal entries.

  11. From Trace to Partial Trace

     Trace of a block matrix: the block structure is ignored; the result is still a single scalar, the sum of all diagonal entries.

  12. From Trace to Partial Trace

     Partial trace: applied to the $m \times m$ blocks of a $qm \times qm$ matrix $M$, it replaces each block by its trace,

     $\big(\mathrm{tr}_m(M)\big)_{ij} = \mathrm{tr}(M_{ij}), \quad 1 \le i, j \le q,$

     where $M_{ij}$ is the $(i, j)$ block. The output is a $q \times q$ matrix.
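     In NumPy the partial trace is a reshape plus a traced einsum. A small sketch (the function name is ours):

```python
import numpy as np

def partial_trace(M, q, m):
    """tr_m: view the qm x qm matrix M as a q x q grid of m x m blocks
    and replace each block by its trace."""
    assert M.shape == (q * m, q * m)
    # Reshape to (q, m, q, m): axes 0/2 index the blocks, axes 1/3 the
    # positions inside a block; summing over the diagonal of axes 1 and 3
    # traces each block.
    return np.einsum('ikjk->ij', M.reshape(q, m, q, m))

M = np.arange(36.0).reshape(6, 6)
print(partial_trace(M, q=2, m=3))               # 2 x 2 matrix of block traces
print(partial_trace(M, q=1, m=6), np.trace(M))  # q = 1 recovers the usual trace
```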

  13. Partial Trace Regression

     $Y = \mathrm{tr}_m\big(A_* X B_*^\top\big) + \epsilon$

     → Matrix input $X \in M_p$ and matrix output $Y \in M_q$
     → $A_*, B_* \in M_{qm \times p}$ are the unknown parameters of the model
     → We recover the trace regression model when $q = 1$

     Learning the model parameters — our solution: the Kraus representation of completely positive maps.
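     A sketch of the forward model on synthetic data, reusing the reshape-and-einsum partial trace from above (all dimensions and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, m = 4, 3, 2

# Hypothetical ground-truth parameters of the model.
A_star = rng.standard_normal((q * m, p))
B_star = rng.standard_normal((q * m, p))

X = rng.standard_normal((p, p))                    # matrix input
M = A_star @ X @ B_star.T                          # qm x qm intermediate
Y = np.einsum('ikjk->ij', M.reshape(q, m, q, m))   # tr_m -> q x q output
print(Y.shape)                                     # (3, 3)
```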

  14. Positive and Completely Positive Maps [Bhatia, 2009]

     Positive maps: $\Phi \in L(M_p, M_q)$ is positive if for all $M \in S_p^+$, $\Phi(M) \in S_q^+$.

  15. Positive and Completely Positive Maps [Bhatia, 2009]

     $m$-positive maps: $\Phi \in L(M_p, M_q)$ is $m$-positive if $\Phi_m : M_m(M_p) \to M_m(M_q)$, defined blockwise by

     $\Phi_m \begin{pmatrix} A_{11} & \cdots & A_{1m} \\ \vdots & \ddots & \vdots \\ A_{m1} & \cdots & A_{mm} \end{pmatrix} := \begin{pmatrix} \Phi(A_{11}) & \cdots & \Phi(A_{1m}) \\ \vdots & \ddots & \vdots \\ \Phi(A_{m1}) & \cdots & \Phi(A_{mm}) \end{pmatrix},$

     is positive.

  16. Positive and Completely Positive Maps [Bhatia, 2009]

     Completely positive maps: $\Phi$ is completely positive if it is $m$-positive for every $m \ge 1$.

  17. A Positive But Not Completely Positive Map

     Example: the transpose map. Define $\Phi : M_2 \to M_2$ by $\Phi(A) = A^\top$. Then $\Phi_1 \ge 0$ but $\Phi_2 \not\ge 0$:

     $\Phi_2 \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

     The input is PSD (it is $vv^\top$ for $v = (1, 0, 0, 1)^\top$), but the output has eigenvalue $-1$, so $\Phi_2$ maps a positive matrix outside the PSD cone.
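     The same computation in NumPy: the blockwise transpose of the PSD matrix above acquires a negative eigenvalue (the helper name is ours):

```python
import numpy as np

# The PSD matrix from the slide: P = v v^T with v = (1, 0, 0, 1)^T.
P = np.array([[1., 0., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.],
              [1., 0., 0., 1.]])

def blockwise_transpose(M, m=2):
    """Apply Phi(A) = A^T to each m x m block of M (this is Phi_2 for m = 2)."""
    q = M.shape[0] // m
    B = M.reshape(q, m, q, m)
    return B.transpose(0, 3, 2, 1).reshape(q * m, q * m)  # swap within-block axes

print(np.linalg.eigvalsh(P))                        # [0, 0, 0, 2]: PSD
print(np.linalg.eigvalsh(blockwise_transpose(P)))   # contains -1: not PSD
```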

  18. A Completely Positive Map

     Example: let $V \in M_{q \times p}$ and define $\Phi : M_p \to M_q$ by $\Phi(A) = V A V^\top$. Then $\Phi$ is completely positive:

     $\Phi_2 \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} V A_{11} V^\top & V A_{12} V^\top \\ V A_{21} V^\top & V A_{22} V^\top \end{pmatrix} = (I_2 \otimes V) \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} (I_2 \otimes V^\top)$

     Conjugation by $I_m \otimes V$ preserves positive semidefiniteness, so $\Phi_m$ is positive for every $m$.
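     A numerical spot-check of this argument: conjugating a random PSD block matrix by $I_2 \otimes V$ keeps it PSD (all data random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 2
V = rng.standard_normal((q, p))

# Phi_2 is conjugation by I_2 (x) V, which preserves PSD-ness.
G = rng.standard_normal((2 * p, 2 * p))
M = G @ G.T                                  # random PSD matrix in M_2(M_p)
IV = np.kron(np.eye(2), V)                   # I_2 (x) V
out = IV @ M @ IV.T                          # = Phi_2(M)
print(np.linalg.eigvalsh(out).min() >= -1e-10)   # True: the output is PSD
```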

  19. Stinespring Representation

     Stinespring's Theorem (1955): Let $\Phi \in L(M_p, M_q)$. Then $\Phi$ writes as $\Phi(X) = \mathrm{tr}_m\big(A X A^\top\big)$ for some $A \in M_{qm \times p}$ if and only if $\Phi$ is completely positive.

     → Partial trace regression ↔ learning a completely positive map
     → A partial trace version of the PSD-constrained trace regression
     → Efficient optimization via the Kraus decomposition
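     The Stinespring and Kraus forms are easy to relate numerically: splitting $A \in M_{qm \times p}$ into $q$ row-blocks of size $m \times p$ and regrouping the $k$-th rows across blocks yields $m$ Kraus operators. A sketch of this equivalence (our own indexing convention, matching the blockwise $\mathrm{tr}_m$ above):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, m = 4, 3, 2
A = rng.standard_normal((q * m, p))
X = rng.standard_normal((p, p))

# Stinespring form: partial trace of the m x m blocks of A X A^T.
stine = np.einsum('ikjk->ij', (A @ X @ A.T).reshape(q, m, q, m))

# Kraus form: B[:, k, :] collects the k-th row of each m x p row-block of A.
B = A.reshape(q, m, p)
kraus = sum(B[:, k, :] @ X @ B[:, k, :].T for k in range(m))

print(np.allclose(stine, kraus))   # True: same map, Kraus rank <= m
```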

  20. Kraus Representation

     Choi's Theorem (1975), Kraus Decomposition (1971): Let $\Phi \in L(M_p, M_q)$ be a completely positive linear map. Then there exist $A_j \in M_{q \times p}$, $1 \le j \le r$, with $r \le pq$, such that

     $\forall X \in M_p, \quad \Phi(X) = \sum_{j=1}^{r} A_j X A_j^\top.$

     → Learning a completely positive map ↔ finding a Kraus decomposition
     → Small values of $r$ correspond to a low-rank Kraus representation
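     Applying a map in Kraus form is a few lines of NumPy; as a sanity check, a CP map sends PSD inputs to PSD outputs (random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r = 4, 3, 2
kraus_ops = [rng.standard_normal((q, p)) for _ in range(r)]

def kraus_apply(X, ops):
    """Phi(X) = sum_j A_j X A_j^T: a CP map with Kraus rank <= len(ops)."""
    return sum(A @ X @ A.T for A in ops)

G = rng.standard_normal((p, p))
X = G @ G.T                                       # PSD input
print(np.linalg.eigvalsh(kraus_apply(X, kraus_ops)).min() >= -1e-10)  # True
```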

  21. Back to Partial Trace Regression

     Low-Rank Kraus Estimation:

     $\arg\min_{A_j \in M_{q \times p}} \sum_{i=1}^{l} \ell\Big(Y_i, \sum_{j=1}^{r} A_j X_i A_j^\top\Big)$
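     One straightforward way to attack this nonconvex objective is gradient descent on the Kraus factors under the squared Frobenius loss, using $\partial_{A_j} \|R\|_F^2 = 2(R A_j X^\top + R^\top A_j X)$ with $R$ the residual. A hedged sketch, not necessarily the paper's optimizer; initialization, step size, and iteration budget are illustrative and may need tuning:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r, n = 4, 3, 2, 100

# Synthetic data from a hypothetical rank-r Kraus map, plus noise.
true_ops = [rng.standard_normal((q, p)) for _ in range(r)]
Xs, Ys = [], []
for _ in range(n):
    G = rng.standard_normal((p, p)); X = (G + G.T) / 2   # symmetric input
    Xs.append(X)
    Ys.append(sum(A @ X @ A.T for A in true_ops)
              + 0.01 * rng.standard_normal((q, q)))

# Gradient descent on sum_i ||Y_i - sum_j A_j X_i A_j^T||_F^2.
ops = [0.1 * rng.standard_normal((q, p)) for _ in range(r)]
lr = 1e-3
for _ in range(2000):
    grads = [np.zeros((q, p)) for _ in range(r)]
    for X, Y in zip(Xs, Ys):
        R = sum(A @ X @ A.T for A in ops) - Y            # residual
        for j, A in enumerate(ops):
            grads[j] += 2 * (R @ A @ X.T + R.T @ A @ X)  # d/dA_j of ||R||_F^2
    ops = [A - lr * g / n for A, g in zip(ops, grads)]
```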

  22. Back to Partial Trace Regression

     Generalization Bound. Let $F = \{\Phi : M_p \to M_q \;:\; \Phi \text{ is completely positive and its Kraus rank is equal to } r\}$. Under some assumptions on the loss $\ell$, for any $\delta > 0$, with probability at least $1 - \delta$, the following holds for all $h \in F$:

     $R(h) \le \hat{R}(h) + \gamma \sqrt{\frac{pqr \log\frac{8epq}{r} \log\frac{l}{\log pqr}}{l}} + \gamma \sqrt{\frac{\log\frac{1}{\delta}}{2l}}$

  23. Back to Matrix Completion

     (Block) PSD Matrix Completion. Let $\Phi : M_p \to M_q$ be a linear map. Then the following conditions are equivalent:
     1. $\Phi$ is completely positive.
     2. The block matrix $M \in M_p(M_q)$ defined by $M_{ij} = \Phi(E_{ij})$, $1 \le i, j \le p$, is positive, where the $E_{ij}$ are the matrix units of $M_p$.
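     This equivalence gives a direct numerical test for complete positivity: assemble the block (Choi) matrix $M_{ij} = \Phi(E_{ij})$ and check that it is PSD. A sketch for a map given in Kraus form (helper names and data are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r = 3, 2, 2
ops = [rng.standard_normal((q, p)) for _ in range(r)]
phi = lambda X: sum(A @ X @ A.T for A in ops)      # a CP map in Kraus form

def choi_matrix(phi, p, q):
    """Block matrix M in M_p(M_q) with M_ij = phi(E_ij), E_ij a matrix unit."""
    M = np.zeros((p * q, p * q))
    for i in range(p):
        for j in range(p):
            E = np.zeros((p, p)); E[i, j] = 1.0
            M[i * q:(i + 1) * q, j * q:(j + 1) * q] = phi(E)
    return M

print(np.linalg.eigvalsh(choi_matrix(phi, p, q)).min() >= -1e-10)  # True: CP
```

     Conversely, an eigendecomposition of a PSD block matrix $M$ yields Kraus operators, and the Kraus rank of $\Phi$ equals the rank of $M$; this is the link between low-rank Kraus maps and (block) PSD low-rank matrix completion.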
