Optimal comparison of weak and strong moments of random vectors with applications
(joint work with Rafał Latała)
Piotr Nayar, Institute of Mathematics, University of Warsaw
16/09/2019, Jena


  1. Optimal comparison of weak and strong moments of random vectors with applications (joint work with Rafał Latała). Piotr Nayar, Institute of Mathematics, University of Warsaw. 16/09/2019, Jena.

  2. Theorem (Latała, N., 2019). Let $X$ be a random vector in $\mathbb{R}^n$ and $\emptyset \ne T \subseteq \mathbb{R}^n$. Then for $p \ge 2$,
$$\Big( \mathbb{E} \sup_{t \in T} |\langle t, X\rangle|^p \Big)^{1/p} \le 2\sqrt{e}\, \sqrt{\frac{n+p}{p}}\, \sup_{t \in T} \big( \mathbb{E} |\langle t, X\rangle|^p \big)^{1/p}.$$
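The inequality on this slide can be sanity-checked numerically. Below is a minimal Monte Carlo sketch (our illustration, not from the talk), assuming $X$ is a standard Gaussian vector and $T = B_2^n$; for this $T$, $\sup_{t \in T} |\langle t, X\rangle| = |X|_2$ by Cauchy-Schwarz, and by rotation invariance the weak moment is attained by any single unit vector. The function name `moment_comparison` is ours.

```python
# Monte Carlo check of the weak/strong moment comparison for standard
# Gaussian X and T = B_2^n (an illustration under our own assumptions).
import numpy as np

def moment_comparison(n=5, p=4, samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((samples, n))
    # strong moment: (E sup_{t in B_2^n} |<t,X>|^p)^{1/p} = (E |X|_2^p)^{1/p}
    strong = np.mean(np.linalg.norm(X, axis=1) ** p) ** (1 / p)
    # weak moment: sup_t (E|<t,X>|^p)^{1/p} = (E|g|^p)^{1/p} for g ~ N(0,1)
    weak = np.mean(np.abs(X[:, 0]) ** p) ** (1 / p)
    bound = 2 * np.sqrt(np.e) * np.sqrt((n + p) / p) * weak
    return strong, bound

strong, bound = moment_comparison()
```

With these parameters the strong moment sits comfortably below the theorem's bound, as expected.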

  3. Theorem (Latała, N., 2019), restated. Remark. This result is optimal (up to a universal constant), and equality is achieved for any rotationally invariant random vector $X$ and $T = B_2^n$.

  4. Theorem and Remark restated. Proof inspired by the Welch bound proof of Datta, Stephen and Douglas (2012).

  5. Application I: the p-summing constant. Theorem (Latała, N., 2019) restated.

  6. Application I: the p-summing constant. Theorem (Latała, N., 2019; strong vs. weak moments). Let $X$ be a random vector in $(\mathbb{R}^n, \|\cdot\|)$. Then for $p \ge 2$,
$$\big( \mathbb{E} \|X\|^p \big)^{1/p} \le 2\sqrt{e}\, \sqrt{\frac{n+p}{p}}\, \sup_{\|t\|_* \le 1} \big( \mathbb{E} |\langle t, X\rangle|^p \big)^{1/p}.$$

  7. Application I: the p-summing constant. Corollary (Latała, N., 2019). Let $(F, \|\cdot\|)$ be a Banach space of dimension $n$. Then for any vectors $x_1, \dots, x_l \in F$ we have
$$\Big( \sum_{j=1}^{l} \|x_j\|^p \Big)^{1/p} \le 2\sqrt{e}\, \sqrt{\frac{n+p}{p}}\, \sup_{\|t\|_* \le 1} \Big( \sum_{j=1}^{l} |\langle t, x_j\rangle|^p \Big)^{1/p}.$$

  8. Application I: the p-summing constant. Corollary restated. The best constant $\pi_p(F)$ in this inequality is called the p-summing constant of $F$. We have $\pi_p(\ell_2^n) = (\mathbb{E}|U_1|^p)^{-1/p} \approx \sqrt{(n+p)/p}$. Therefore $\pi_p(F) \le c\, \pi_p(\ell_2^{\dim F})$.

  9. Application I: the p-summing constant. Corollary and discussion restated. Question. Is it true that $\pi_p(F) \le \pi_p(\ell_2^{\dim F})$?
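The approximation $\pi_p(\ell_2^n) \approx \sqrt{(n+p)/p}$ can be spot-checked by Monte Carlo. The sketch below assumes, as is standard for this formula, that $U = (U_1, \dots, U_n)$ is uniformly distributed on the unit sphere $S^{n-1}$ (the talk does not define $U$ explicitly); the helper name `p_summing_l2` is ours.

```python
# Monte Carlo estimate of pi_p(l_2^n) = (E|U_1|^p)^{-1/p}, assuming U is
# uniform on the unit sphere S^{n-1}; compared with sqrt((n+p)/p).
import numpy as np

def p_summing_l2(n, p, samples=200_000, seed=5):
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((samples, n))
    # normalizing a Gaussian vector gives a uniform point on S^{n-1}
    U1 = g[:, 0] / np.linalg.norm(g, axis=1)
    return np.mean(np.abs(U1) ** p) ** (-1 / p)

n, p = 10, 4
est = p_summing_l2(n, p)
target = np.sqrt((n + p) / p)
```

The estimate agrees with $\sqrt{(n+p)/p}$ up to the universal constant the slide's $\approx$ allows for.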

  10. Application II: concentration of measure. Theorem (Latała, N., 2019). Every centered log-concave probability measure on $\mathbb{R}^n$ satisfies the optimal concentration inequality in the sense of Latała and Wojtaszczyk with a constant $\sim n^{5/12}$.

  11. Application II: concentration of measure. Theorem restated. Remark. The previous best bound $\sim n^{1/2}$ was due to Latała.

  12. Basic linear algebra facts.

  13. Basic linear algebra facts. Lemma 1 (rank factorization). Suppose $A$ is a $k \times l$ matrix of rank at most $n$. Then $A$ can be written as a product $A = TX$, where $T$ is $k \times n$ and $X$ is $n \times l$:
$$A = TX = \begin{pmatrix} t_1 \\ \vdots \\ t_k \end{pmatrix} \begin{pmatrix} x_1 & \cdots & x_l \end{pmatrix} = \big( \langle t_i, x_j \rangle \big)_{i \le k,\, j \le l},$$
where $t_1, \dots, t_k \in \mathbb{R}^n$ are the rows of $T$ and $x_1, \dots, x_l \in \mathbb{R}^n$ are the columns of $X$.

  14. Lemma 1 restated. Proof. There exist vectors $v^{(1)}, \dots, v^{(n)}$ such that every column $a$ of $A$ can be written as $a = \sum_{s=1}^{n} \lambda_s v^{(s)}$.

  15. Lemma 1 restated. Proof. There exist vectors $v^{(1)}, \dots, v^{(n)}$ such that every column $a^{(j)}$ of $A$ can be written as $a^{(j)} = \sum_{s=1}^{n} \lambda_s^{(j)} v^{(s)}$.

  16. Lemma 1 restated. Proof. There exist vectors $v^{(1)}, \dots, v^{(n)}$ such that every column $a^{(j)}$ of $A$ can be written coordinatewise as $a_i^{(j)} = \sum_{s=1}^{n} v_i^{(s)} \lambda_s^{(j)}$. Hence $A = TX$ with $T = (v_i^{(s)})_{i \le k,\, s \le n}$ and $X = (\lambda_s^{(j)})_{s \le n,\, j \le l}$.
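Lemma 1 is easy to illustrate numerically. The sketch below (our illustration, not from the talk) produces such a factorization via the truncated SVD rather than the column-space basis $v^{(1)}, \dots, v^{(n)}$ used in the proof; the helper name `rank_factorization` is ours.

```python
# Rank factorization of a k x l matrix of rank <= n as A = T @ X,
# with T of size k x n and X of size n x l, via the SVD.
import numpy as np

def rank_factorization(A, n):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    T = U[:, :n] * s[:n]   # k x n: left singular vectors scaled by singular values
    X = Vt[:n, :]          # n x l: top right singular vectors
    return T, X

rng = np.random.default_rng(1)
k, l, n = 7, 9, 3
A = rng.standard_normal((k, n)) @ rng.standard_normal((n, l))  # rank <= n
T, X = rank_factorization(A, n)
err = np.max(np.abs(A - T @ X))  # zero up to floating-point error
```

Since $A$ has rank at most $n$, keeping the top $n$ singular triples reconstructs it exactly.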

  17. Basic linear algebra facts.

  18. Basic linear algebra facts. Lemma 2 (rank of the Hadamard product; Peng-Waldron, 2002). Suppose $A = (a_{ij})$ is a $k \times l$ matrix of rank at most $n$ and let $m$ be a positive integer. Then $A^{\circ m} := (a_{ij}^m)$ has rank at most $\binom{n+m-1}{m}$.

  19. Lemma 2 restated. Proof. There exist vectors $v^{(1)}, \dots, v^{(n)}$ such that every column $a = (a_1, \dots, a_k)$ of $A$ can be written as $a = \sum_{s=1}^{n} \lambda_s v^{(s)}$, that is, $a_i = \sum_{s=1}^{n} \lambda_s v_i^{(s)}$.

  20. Lemma 2 restated. Proof (continued). With $a_i = \sum_{s=1}^{n} \lambda_s v_i^{(s)}$ as before, raising each coordinate to the $m$-th power gives
$$a_i^m = \sum_{s_1, s_2, \dots, s_m = 1}^{n} v_i^{(s_1)} v_i^{(s_2)} \cdots v_i^{(s_m)}\, \lambda_{s_1} \lambda_{s_2} \cdots \lambda_{s_m}.$$

  21. Lemma 2 restated. Proof (continued). Expanding $a_i^m = \sum_{s_1, \dots, s_m} v_i^{(s_1)} \cdots v_i^{(s_m)} \lambda_{s_1} \cdots \lambda_{s_m}$, we conclude that
$$(a_i^m)_{i=1,\dots,k} \in \operatorname{span}\big\{ (v_i^{(s_1)} \cdots v_i^{(s_m)})_{i=1,\dots,k} : 1 \le s_1 \le \dots \le s_m \le n \big\},$$
and there are $\binom{n+m-1}{m}$ such index multisets, which bounds the rank of $A^{\circ m}$.
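A quick numerical check of Lemma 2 (our illustration, not from the talk): build a random matrix of rank at most $n$, take its entrywise $m$-th power, and compare the resulting rank with the binomial bound.

```python
# Rank of the Hadamard power: rank(A^{o m}) <= C(n+m-1, m) when rank(A) <= n.
import numpy as np
from math import comb

rng = np.random.default_rng(2)
k, l, n, m = 12, 15, 3, 2
A = rng.standard_normal((k, n)) @ rng.standard_normal((n, l))  # rank <= n
Am = A ** m                        # entrywise (Hadamard) m-th power
r = int(np.linalg.matrix_rank(Am))
bound = comb(n + m - 1, m)         # C(4, 2) = 6 here
```

For generic data the rank of $A^{\circ m}$ actually equals the bound, matching the lemma's optimality.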

  22. Lemma 3 (case $p = 2$). Let $X$ be a random vector in $\mathbb{R}^n$ and let us take $\emptyset \ne T \subseteq \mathbb{R}^n$. Then
$$\mathbb{E} \sup_{t \in T} |\langle t, X\rangle|^2 \le n\, \sup_{t \in T} \mathbb{E} |\langle t, X\rangle|^2.$$

  23. Lemma 3 restated. Proof. Let $C$ be the covariance matrix of a symmetric, bounded and non-degenerate $X$.

  24. Lemma 3 restated. Proof. Let $C$ be the covariance matrix of a symmetric, bounded and non-degenerate $X$. We can assume $\sup_{t \in T} \mathbb{E}|\langle t, X\rangle|^2 = 1$ and
$$T = \{ t \in \mathbb{R}^n : \mathbb{E}|\langle t, X\rangle|^2 \le 1 \} = \{ t \in \mathbb{R}^n : \langle Ct, t\rangle \le 1 \} = \{ t \in \mathbb{R}^n : |C^{1/2} t| \le 1 \}.$$

  25. Lemma 3 restated. Proof. Let $C$ be the covariance matrix of a symmetric, bounded and non-degenerate $X$. We can assume $\sup_{t \in T} \mathbb{E}|\langle t, X\rangle|^2 = 1$ and $T = \{ t \in \mathbb{R}^n : |C^{1/2} t| \le 1 \}$. Then we have
$$\mathbb{E} \sup_{t \in T} |\langle t, X\rangle|^2 = \mathbb{E} \sup_{|C^{1/2} t| \le 1} |\langle C^{1/2} t, C^{-1/2} X\rangle|^2 = \mathbb{E}\, |C^{-1/2} X|^2 = n.$$
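For a standard Gaussian $X$ (so $C = I_n$ and $T = B_2^n$) the computation above reduces to the equality $\mathbb{E} \sup_{t \in T} |\langle t, X\rangle|^2 = \mathbb{E}|X|^2 = n$, which a short Monte Carlo sketch (our illustration, not from the talk) confirms:

```python
# Equality case of Lemma 3 for standard Gaussian X: C = I_n, T = B_2^n,
# so E sup_{|t|<=1} <t,X>^2 = E|X|^2 = n while sup_t E<t,X>^2 = 1.
import numpy as np

n, samples = 6, 200_000
rng = np.random.default_rng(3)
X = rng.standard_normal((samples, n))
lhs = np.mean(np.linalg.norm(X, axis=1) ** 2)  # estimates E|X|^2
rhs = float(n)                                 # n * sup_t E<t,X>^2 = n * 1
```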

  26. Proposition (case $p = 2m$). Let $X$ be a random vector in $\mathbb{R}^n$, let us take $T \subseteq \mathbb{R}^n$, and suppose $m$ is a positive integer. Then
$$\mathbb{E} \sup_{t \in T} |\langle t, X\rangle|^{2m} \le \binom{n+m-1}{m} \sup_{t \in T} \mathbb{E} |\langle t, X\rangle|^{2m}.$$
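The proposition can be spot-checked for a finite set $T$ and a Gaussian $X$ (an illustration under our own assumptions, not the talk's proof, which goes through Lemmas 2 and 3):

```python
# Monte Carlo check of E sup_t <t,X>^{2m} <= C(n+m-1, m) sup_t E <t,X>^{2m}
# for a finite random T and standard Gaussian X.
import numpy as np
from math import comb

rng = np.random.default_rng(4)
n, m, samples = 4, 2, 100_000
T_dirs = rng.standard_normal((8, n))        # a finite set T of 8 directions
X = rng.standard_normal((samples, n))
dots = X @ T_dirs.T                         # <t, X> for every sample and t
lhs = np.mean(np.max(np.abs(dots) ** (2 * m), axis=1))
rhs = comb(n + m - 1, m) * np.max(np.mean(np.abs(dots) ** (2 * m), axis=0))
```

Here the strong moment of the maximum stays below the weak moment times $\binom{n+m-1}{m} = 10$.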
