For a matrix $A$ ($p \times p$) with real eigenvalues, define $F^A$, the empirical distribution function of the eigenvalues of $A$, to be
\[
F^A(x) \equiv \frac{1}{p}\,\bigl(\text{number of eigenvalues of } A \le x\bigr).
\]
For any p.d.f. $G$, the Stieltjes transform of $G$ is defined as
\[
m_G(z) \equiv \int \frac{1}{\lambda - z}\, dG(\lambda), \qquad z \in \mathbb{C}^+ \equiv \{z \in \mathbb{C} : \Im z > 0\}.
\]
Inversion formula:
\[
G\{[a,b]\} = \frac{1}{\pi} \lim_{\eta \to 0^+} \int_a^b \Im\, m_G(\xi + i\eta)\, d\xi
\]
($a$, $b$ continuity points of $G$). Notice
\[
m_{F^A}(z) = \frac{1}{p}\,\mathrm{tr}\,(A - zI)^{-1}.
\]
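The last identity is convenient for numerics. The following sketch (Python with NumPy; the function names are only illustrative) checks that the trace form of $m_{F^A}$ agrees with the Stieltjes transform of the empirical eigenvalue distribution.

```python
import numpy as np

def stieltjes_from_trace(A, z):
    """m_{F^A}(z) = (1/p) tr (A - zI)^{-1}."""
    p = A.shape[0]
    return np.trace(np.linalg.inv(A - z * np.eye(p))) / p

def stieltjes_from_eigenvalues(A, z):
    """m_{F^A}(z) = (1/p) sum_j 1/(lambda_j - z), the Stieltjes transform of the ESD."""
    lam = np.linalg.eigvalsh(A)
    return np.mean(1.0 / (lam - z))

rng = np.random.default_rng(0)
p = 200
X = rng.standard_normal((p, 2 * p))
A = X @ X.T / (2 * p)          # a sample covariance matrix
z = 1.5 + 0.1j                 # a point in C^+
print(stieltjes_from_trace(A, z), stieltjes_from_eigenvalues(A, z))  # agree up to rounding
```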
Theorem [S. (1995)]. Assume:

a) For $n = 1, 2, \ldots$, $X_n = (X^n_{ij})$, $n \times N$, $X^n_{ij} \in \mathbb{C}$, i.d. for all $n, i, j$, independent across $i, j$ for each $n$, $\mathrm{E}\,|X^1_{11} - \mathrm{E}X^1_{11}|^2 = 1$.

b) $N = N(n)$ with $n/N \to c > 0$ as $n \to \infty$.

c) $T_n$ is $n \times n$ random Hermitian nonnegative definite, with $F^{T_n}$ converging almost surely in distribution to a p.d.f. $H$ on $[0, \infty)$ as $n \to \infty$.

d) $X_n$ and $T_n$ are independent.

Let $T_n^{1/2}$ be the Hermitian nonnegative square root of $T_n$, and let $B_n = (1/N)\, T_n^{1/2} X_n X_n^* T_n^{1/2}$ (obviously $F^{B_n} = F^{(1/N) X_n X_n^* T_n}$).

Then, almost surely, $F^{B_n}$ converges in distribution, as $n \to \infty$, to a (nonrandom) p.d.f. $F$, whose Stieltjes transform $m(z)$ ($z \in \mathbb{C}^+$) satisfies
\[
(*)\qquad m = \int \frac{1}{t(1 - c - czm) - z}\, dH(t),
\]
in the sense that, for each $z \in \mathbb{C}^+$, $m = m(z)$ is the unique solution to $(*)$ in $\{m \in \mathbb{C} : -\frac{1-c}{z} + cm \in \mathbb{C}^+\}$.
We have
\[
F^{(1/N)X^*TX} = \Bigl(1 - \frac{n}{N}\Bigr) I_{[0,\infty)} + \frac{n}{N}\, F^{(1/N)XX^*T}
\ \xrightarrow{\ a.s.\ }\ (1-c)\, I_{[0,\infty)} + cF \equiv \underline{F}.
\]
Notice $m_{\underline F}$ and $m_F$ satisfy
\[
\frac{1-c}{cz} + \frac{1}{c}\, m_{\underline F}(z) \;=\; m_F(z) \;=\; \int \frac{1}{-z\, m_{\underline F}(z)\, t - z}\, dH(t).
\]
Therefore, $m = m_{\underline F}$ solves
\[
z = -\frac{1}{m} + c \int \frac{t}{1+tm}\, dH(t).
\]
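For a discrete $H$ these relations are easy to evaluate numerically. Below is a minimal sketch (Python/NumPy; the function names are made up for illustration) that solves $z = -1/m + c\int t/(1+tm)\,dH(t)$ for $m_{\underline F}(z)$ by plain fixed-point iteration and then recovers $m_F(z)$ from the linear relation above. The plain iteration typically converges for $z \in \mathbb{C}^+$, but no guarantee is claimed here, and damping may be needed close to the real axis.

```python
import numpy as np

def m_underline_F(z, c, t, w, iters=20000):
    """Solve z = -1/m + c * sum_k w_k t_k/(1 + t_k m) for m = m_{underline F}(z)
    by iterating the fixed-point form m <- 1/(-z + c * sum_k w_k t_k/(1 + t_k m)).
    H is discrete, with atoms t_k and weights w_k."""
    m = -1.0 / z                       # Stieltjes transform of a point mass at 0, as a start
    for _ in range(iters):
        m = 1.0 / (-z + c * np.sum(w * t / (1.0 + t * m)))
    return m

def m_F(z, c, t, w):
    """Recover m_F(z) via m_F(z) = (1-c)/(cz) + m_{underline F}(z)/c."""
    return (1.0 - c) / (c * z) + m_underline_F(z, c, t, w) / c

# Example: H places equal mass on 1, 3, 10 and c = 0.1.
t = np.array([1.0, 3.0, 10.0]); w = np.array([1/3, 1/3, 1/3])
z = 3.0 + 0.05j
print(m_F(z, 0.1, t, w))
# Letting the imaginary part of z shrink to 0 recovers the density of F near 3
# via (1/(c*pi)) * Im m_{underline F}(z); see the facts on the next slide.
```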
Facts on F:

1. The endpoints of the connected components (away from 0) of the support of $F$ are given by the extrema of
\[
f(m) = -\frac{1}{m} + c \int \frac{t}{1+tm}\, dH(t), \qquad m \in \mathbb{R}
\]
[Marčenko and Pastur (1967), S. and Choi (1995)]. (A numerical sketch follows this list.)

2. $F$ has a continuous density away from the origin, given by
\[
\frac{1}{c\pi}\, \Im\, m(x), \qquad 0 < x \in \text{support of } F,
\]
where $m(x) = \lim_{z \in \mathbb{C}^+ \to x} m_{\underline F}(z)$ solves
\[
x = -\frac{1}{m} + c \int \frac{t}{1+tm}\, dH(t)
\]
(S. and Choi 1995).

3. $F'$ is analytic inside its support and, when $H$ is discrete, has infinite slopes at the boundaries of its support [S. and Choi (1995)].

4. $c$ and $F$ uniquely determine $H$.

5. $F \xrightarrow{\ \mathcal D\ } H$ as $c \to 0$ (complements $B_n \xrightarrow{\ a.s.\ } T_n$ as $N \to \infty$, $n$ fixed).
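Fact 1 translates into a numerical recipe for locating the support when $H$ is discrete: scan $f(m)$ over real $m$ between its poles and record its values at the local extrema. A rough grid-based sketch (Python/NumPy; names illustrative, accuracy limited by the grid, and restricted to the case $c[1-H(0)] \le 1$ so that only $m < 0$ needs to be scanned):

```python
import numpy as np

def support_edges(c, t, w, grid_points=100000):
    """Approximate support edges of F for discrete H (atoms t_k, weights w_k):
    scan f(m) = -1/m + c * sum_k w_k t_k/(1 + t_k m) over real m < 0 between its
    poles at m = -1/t_k and m = 0, and record f at its local extrema.
    Assumes c*(1 - H(0)) <= 1; edges are found only to grid accuracy."""
    poles = np.sort(np.concatenate((-1.0 / t, [0.0])))
    breaks = np.concatenate(([poles[0] - 50.0], poles))   # also scan left of the left-most pole
    edges = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        m = np.linspace(lo, hi, grid_points)[1:-1]        # stay off the poles
        f = -1.0 / m + c * np.sum(w[:, None] * t[:, None] / (1.0 + t[:, None] * m), axis=0)
        df = np.diff(f)
        extrema = np.where(df[:-1] * df[1:] < 0)[0] + 1   # sign changes of the slope
        edges.extend(f[extrema])
    return np.sort(np.array(edges))

# Example: equal mass on 1, 3, 10 with c = 0.1; the three clusters separate,
# so six edges are expected.
print(support_edges(0.1, np.array([1.0, 3.0, 10.0]), np.array([1/3, 1/3, 1/3])))
```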
$T_n = I$ $\Longrightarrow$ $F = F_c$, where, for $0 < c \le 1$,
\[
F_c'(x) = f_c(x) =
\begin{cases}
\dfrac{1}{2\pi c x}\sqrt{(x-b_1)(b_2-x)}, & b_1 < x < b_2,\\[4pt]
0, & \text{otherwise},
\end{cases}
\]
where $b_1 = (1-\sqrt{c}\,)^2$, $b_2 = (1+\sqrt{c}\,)^2$, and for $1 < c < \infty$,
\[
F_c(x) = \bigl(1 - (1/c)\bigr) I_{[0,\infty)}(x) + \int_{b_1}^x f_c(t)\, dt.
\]
Marčenko and Pastur (1967), Grenander and S. (1977).

Multivariate $F$ matrix: $T_n = \bigl((1/N')X_n X_n^*\bigr)^{-1}$, $X_n$ $n \times N'$ containing i.i.d. standardized entries, $n/N' \to c' \in (0,1)$ $\Longrightarrow$ $F = F_{c,c'}$, where, for $0 < c \le 1$,
\[
F_{c,c'}'(x) = f_{c,c'}(x) = \frac{(1-c')\sqrt{(x-b_1)(b_2-x)}}{2\pi x\,(xc' + c)}, \qquad b_1 < x < b_2,
\]
where
\[
b_1 = \left(\frac{1 - \sqrt{1-(1-c)(1-c')}}{1-c'}\right)^{\!2}, \qquad
b_2 = \left(\frac{1 + \sqrt{1-(1-c)(1-c')}}{1-c'}\right)^{\!2},
\]
and for $1 < c < \infty$,
\[
F_{c,c'}(x) = \bigl(1 - (1/c)\bigr) I_{[0,\infty)}(x) + \int_{b_1}^x f_{c,c'}(t)\, dt.
\]
S. (1985).
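As a sanity check on the $T_n = I$ case, the following simulation sketch (Python/NumPy; the sizes are arbitrary) compares a histogram of the eigenvalues of $(1/N)XX^*$ with the density $f_c$.

```python
import numpy as np

def mp_density(x, c):
    """Marchenko-Pastur density f_c(x) for 0 < c <= 1 (the case T_n = I)."""
    b1, b2 = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    out = np.zeros_like(x)
    inside = (x > b1) & (x < b2)
    out[inside] = np.sqrt((x[inside] - b1) * (b2 - x[inside])) / (2 * np.pi * c * x[inside])
    return out

rng = np.random.default_rng(1)
n, N = 1000, 4000                       # c = n/N = 0.25
X = rng.standard_normal((n, N))
eig = np.linalg.eigvalsh(X @ X.T / N)   # eigenvalues of (1/N) X X*

hist, edges = np.histogram(eig, bins=40, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - mp_density(mid, n / N))))   # discrepancies shrink as n grows
```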
Let, for any $d > 0$ and d.f. $G$, $F_{d,G}$ denote the limiting spectral d.f. of $(1/N)X^*TX$ corresponding to limiting ratio $d$ and limiting $F^{T_n}$ equal to $G$.

Theorem [Bai and S. (1998)]. Assume:

a) $X_{ij}$, $i, j = 1, 2, \ldots$ are i.i.d. random variables in $\mathbb{C}$ with $\mathrm{E}X_{11} = 0$, $\mathrm{E}|X_{11}|^2 = 1$, and $\mathrm{E}|X_{11}|^4 < \infty$.

b) $N = N(n)$ with $c_n = n/N \to c > 0$ as $n \to \infty$.

c) For each $n$, $T_n$ is an $n \times n$ Hermitian nonnegative definite matrix satisfying $H_n \equiv F^{T_n} \xrightarrow{\ \mathcal D\ } H$, a p.d.f.

d) $\|T_n\|$, the spectral norm of $T_n$, is bounded in $n$.

e) $B_n = (1/N)\, T_n^{1/2} X_n X_n^* T_n^{1/2}$, with $T_n^{1/2}$ any Hermitian square root of $T_n$, and $\underline{B}_n = (1/N)\, X_n^* T_n X_n$, where $X_n = (X_{ij})$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, N$.

f) The interval $[a,b]$ with $a > 0$ lies in an open interval outside the support of $F_{c_n,H_n}$ for all large $n$.

Then
\[
P(\text{no eigenvalue of } B_n \text{ appears in } [a,b] \text{ for all large } n) = 1.
\]
Theorem [Bai and S. (1999)]. Assume (a)–(f) of the previous theorem.

1) If $c[1 - H(0)] > 1$, then $x_0$, the smallest value in the support of $F_{c,H}$, is positive, and with probability one $\lambda^{B_n}_N \to x_0$ as $n \to \infty$. The number $x_0$ is the maximum value of the function
\[
z(m) = -\frac{1}{m} + c \int \frac{t}{1+tm}\, dH(t)
\]
for $m \in \mathbb{R}^+$.

2) If $c[1 - H(0)] \le 1$, or $c[1 - H(0)] > 1$ but $[a,b]$ is not contained in $[0, x_0]$, then $m_{F_{c,H}}(b) < 0$. Let, for large $n$, the integer $i_n \ge 0$ be such that
\[
\lambda^{T_n}_{i_n} > -1/m_{F_{c,H}}(b) \qquad \text{and} \qquad \lambda^{T_n}_{i_n+1} < -1/m_{F_{c,H}}(a)
\]
(eigenvalues arranged in non-increasing order). Then
\[
P\bigl(\lambda^{B_n}_{i_n} > b \ \text{and}\ \lambda^{B_n}_{i_n+1} < a \ \text{for all large } n\bigr) = 1.
\]
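Exact separation can be watched in a simulation. The sketch below (Python/NumPy; the sizes, the choice of $T_n$ with eigenvalues 1 and 10, and the test point 3 are illustrative assumptions) checks that the number of sample eigenvalues above the spectral gap equals the multiplicity of the population eigenvalue 10; for these parameters the gap is roughly $(1.6, 4.8)$, which can be confirmed with the support-edge sketch given earlier.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 500, 2500                        # c_n = n/N = 0.2
t_pop = np.repeat([1.0, 10.0], n // 2)  # T_n: eigenvalues 1 and 10, multiplicity n/2 each
X = rng.standard_normal((n, N))
T_half_X = np.sqrt(t_pop)[:, None] * X
B = T_half_X @ T_half_X.T / N           # (1/N) T^{1/2} X X* T^{1/2}
eig = np.linalg.eigvalsh(B)

# 3 lies in the gap between the two clusters of the limiting support.
print(np.sum(eig > 3.0), n // 2)        # with high probability the counts agree (exact separation)
```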
From the work of X. Mestre (2008): For fixed $n$, $N$, and $H_n = F^{T_n}$, let $m = m(z) = m_{\underline F_{c_n,H_n}}(z)$. Then
\[
z = z(m) = -\frac{1}{m} + c_n \int \frac{t}{1+tm}\, dH_n(t)
= \frac{1}{m}(c_n - 1) - \frac{c_n}{m^2} \int \frac{1}{t + \frac{1}{m}}\, dH_n(t)
= \frac{1}{m}(c_n - 1) - \frac{c_n}{m^2}\, m_{H_n}\!\Bigl(-\frac{1}{m}\Bigr).
\]
Suppose $T_n$ has positive eigenvalue $t_1$ with multiplicity $n_1$. Then on any contour in $\mathbb{C}$, positively oriented, encircling only the eigenvalue $t_1$ of $T_n$, we have
\[
-\frac{n}{n_1}\,\frac{1}{2\pi i}\oint y\, m_{H_n}(y)\, dy
= -\frac{n}{n_1}\,\frac{1}{2\pi i}\oint\!\!\int \frac{y}{\lambda - y}\, dH_n(\lambda)\, dy
= \frac{n}{n_1}\,\frac{1}{2\pi i}\int\!\!\oint \frac{y}{y - \lambda}\, dy\, dH_n(\lambda)
= \frac{n}{n_1}\int_{\{t_1\}} \lambda\, dH_n(\lambda) = t_1.
\]
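The rewriting of $z(m)$ in terms of $m_{H_n}$ is easy to verify numerically for a discrete $H_n$ (a tiny check, with made-up atoms and an arbitrary complex $m$):

```python
import numpy as np

# Check z(m) = (c_n - 1)/m - (c_n/m^2) m_{H_n}(-1/m) for discrete H_n.
t = np.array([1.0, 3.0, 10.0]); w = np.array([1/3, 1/3, 1/3]); c_n = 0.1
m = -0.4 + 0.3j
z1 = -1 / m + c_n * np.sum(w * t / (1 + t * m))
mH = np.sum(w / (t - (-1 / m)))                      # m_{H_n}(-1/m)
z2 = (c_n - 1) / m - (c_n / m**2) * mH
print(abs(z1 - z2))                                  # ~ 0 up to rounding
```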
Substitute $m = -\frac{1}{y}$. Then
\[
t_1 = \frac{n}{n_1}\,\frac{1}{2\pi i}\oint \frac{1}{m}\, m_{H_n}\!\Bigl(-\frac{1}{m}\Bigr)\frac{1}{m^2}\, dm
= \frac{n}{n_1}\,\frac{1}{c_n}\,\frac{1}{2\pi i}\oint \frac{1}{m}\Bigl(\frac{1}{m}(c_n - 1) - z(m)\Bigr) dm
= -\frac{N}{n_1}\,\frac{1}{2\pi i}\oint \frac{z(m)}{m}\, dm,
\]
the contour contained in the negative real portion of $\mathbb{C}$, encircling $-\frac{1}{t_1}$ and no other $-\frac{1}{t_j}$, $t_j$ an eigenvalue of $T_n$.

Suppose exact separation occurs for the eigenvalues of $B_n$ for all $n$ large, associated with $t_1$. Then the contour can be chosen so that it intersects the real line at two places $m_a < m_b$ for which $x_a = z(m_a)$ and $x_b = z(m_b)$ are outside the support of $F_{c_n,H_n}$, and $[x_a, x_b]$ contains only the support of $F_{c_n,H_n}$ associated with $t_1$. Then, with the substitution $m = m(z)$, we have
\[
t_1 = -\frac{N}{n_1}\,\frac{1}{2\pi i}\oint_{\mathcal C} \frac{z\, m'(z)}{m(z)}\, dz,
\]
the contour, $\mathcal C$, only containing the support of $F_{c_n,H_n}$ associated with $t_1$.
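The contour identity in the $m$ variable can be checked numerically by parametrizing a small circle around $-1/t_1$ (a sketch with made-up atoms; the radius only has to exclude the other points $-1/t_j$):

```python
import numpy as np

# Check  t_1 = -(N/n_1) (1/(2 pi i)) \oint z(m)/m dm  on a circle around -1/t_1.
# Discrete H_n with atoms (1, 3, 10), equal weights, c_n = n/N = 0.1, so N/n_1 = 1/(c_n w_1).
t = np.array([1.0, 3.0, 10.0]); w = np.array([1/3, 1/3, 1/3]); c_n = 0.1
t1, w1 = t[0], w[0]

def z_of_m(m):
    return -1.0 / m + c_n * np.sum(w * t / (1.0 + t * m))

theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
radius = 0.1                                      # small enough to exclude -1/3 and -1/10
mpath = -1.0 / t1 + radius * np.exp(1j * theta)   # positively oriented circle around -1/t_1
dm = 1j * radius * np.exp(1j * theta)             # dm/dtheta
integrand = np.array([z_of_m(m) / m for m in mpath]) * dm
estimate = -(1.0 / (c_n * w1)) * np.sum(integrand) * (theta[1] - theta[0]) / (2j * np.pi)
print(estimate)                                   # close to t_1 = 1
```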
Let $m_n = m_{F^{(1/N)X_n^* T_n X_n}}$. We have, with probability 1,
\[
\sup_{z \in \mathcal C}\, \max\bigl(|m(z) - m_n(z)|,\ |m'(z) - m_n'(z)|\bigr) \to 0,
\]
as $n \to \infty$. Thus
\[
-\frac{N}{n_1}\,\frac{1}{2\pi i}\oint_{\mathcal C} \frac{z\, m_n'(z)}{m_n(z)}\, dz
\]
can be taken as an estimate of $t_1$. This quantity equals
\[
\frac{N}{n_1}\Biggl(\sum_{\lambda_j \in [x_a, x_b]} \lambda_j \;-\; \sum_{\mu_j \in [x_a, x_b]} \mu_j\Biggr),
\]
where the $\lambda_j$'s are the eigenvalues of $B_n$ and the $\mu_j$'s are the zeros of $m_n(z)$. We have
\[
m_n(z) = \frac{1}{N}\sum_{j=1}^n \frac{1}{\lambda_j - z} + \frac{N-n}{N}\cdot\frac{1}{-z} = 0
\iff \frac{1}{N}\sum_{j=1}^n \frac{\lambda_j}{\lambda_j - z} = 1.
\]
The solutions are the eigenvalues of the matrix $\mathrm{Diag}(\lambda_1, \ldots, \lambda_n) - N^{-1} s s^*$, where $s = (\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_n})^*$.
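Putting the pieces together gives an estimator that needs only the sample eigenvalues. A minimal sketch follows (Python/NumPy; function names are made up). It assumes $n \le N$, that exact separation holds, and that the cluster of $t_1$ is identified by a block of consecutive indices of the sorted eigenvalues, in which case summing $\lambda_j - \mu_j$ over that block matches the $[x_a, x_b]$ sums above.

```python
import numpy as np

def zeros_of_m_n(lam, N):
    """The zeros mu_j of m_n: eigenvalues of Diag(lambda) - (1/N) s s*, s = sqrt(lambda)."""
    s = np.sqrt(lam)
    return np.linalg.eigvalsh(np.diag(lam) - np.outer(s, s) / N)

def estimate_population_eigenvalue(lam, N, idx):
    """(N/n_1) * (sum of lambda_j - sum of mu_j) over the cluster of one population
    eigenvalue; `idx` is that cluster's index set after sorting in increasing order."""
    lam = np.sort(lam)
    mu = np.sort(zeros_of_m_n(lam, N))
    return (N / len(idx)) * (np.sum(lam[idx]) - np.sum(mu[idx]))
```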
Population eigenvalues    1         3         10
Estimates                 0.9946    2.9877    10.0365
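A small simulation in the same spirit as this table (not the original experiment; the dimensions and seed are made up) can be run with `estimate_population_eigenvalue` from the sketch above:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 300, 3000                              # c_n = 0.1; the three clusters separate
t_pop = np.repeat([1.0, 3.0, 10.0], n // 3)   # T_n with eigenvalues 1, 3, 10
X = rng.standard_normal((n, N))
T_half_X = np.sqrt(t_pop)[:, None] * X
B = T_half_X @ T_half_X.T / N
lam = np.sort(np.linalg.eigvalsh(B))

blocks = [np.arange(0, n // 3), np.arange(n // 3, 2 * n // 3), np.arange(2 * n // 3, n)]
print([estimate_population_eigenvalue(lam, N, idx) for idx in blocks])
# typically prints values close to 1, 3 and 10, as in the table above
```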
Theorem [Bai, S. (2009)]. Replace assumption a) in S. (1995) with: For $n = 1, 2, \ldots$, $X_n = (X^n_{ij})$, $n \times N$, $X^n_{ij} \in \mathbb{C}$, are independent with common mean and unit variance, such that for any $\eta > 0$
\[
\frac{1}{\eta^2 n N} \sum_{ij} \mathrm{E}\Bigl(|X^n_{ij}|^2\, I\bigl(|X^n_{ij}| \ge \eta \sqrt{n}\bigr)\Bigr) \to 0
\]
as $n \to \infty$. Then the conclusion of S. (1995) remains true.

Theorem [Couillet, S., Bai, Debbah (to appear in IEEE Transactions on Information Theory)]. Replace assumption a) in Bai and S. (1998) with:

1) $X_{ij}$, $i, j = 1, 2, \ldots$ are independent random variables in $\mathbb{C}$ with $\mathrm{E}X_{11} = 0$ and $\mathrm{E}|X_{11}|^2 = 1$.

2) There exist $K > 0$ and a random variable $X$ with finite fourth moment such that, for any $x > 0$,
\[
\frac{1}{n_1 n_2} \sum_{i \le n_1,\, j \le n_2} P(|X_{ij}| > x) \le K\, P(|X| > x)
\]
for any positive integers $n_1$, $n_2$.

3) There is a positive function $\psi(x) \uparrow \infty$ as $x \to \infty$, and $M > 0$, such that
\[
\max_{ij}\, \mathrm{E}\bigl[|X_{ij}|^2\, \psi(|X_{ij}|)\bigr] \le M.
\]
Then the conclusions of Bai and S. (1998, 1999) remain true.
Extension to power estimation of multiple signal sources in multi-antenna fading channels (Couillet, S., Bai, Debbah):

Consider $K$ entities transmitting data. Transmitter $k \in \{1, \ldots, K\}$ has (unknown) transmission power $P_k$ with $n_k$ antennas. They transmit data to $N$ sensing devices (the receiver). The multiple-antenna channel matrix between transmitter $k$ and the receiver is denoted by $H_k \in \mathbb{C}^{N \times n_k}$, where the entries of $\sqrt{N} H_k$ are i.i.d. standardized.

At time instant $m \in \{1, \ldots, M\}$, transmitter $k$ emits the signal $x_k^{(m)} \in \mathbb{C}^{n_k}$, entries independent and standardized, independent for different $m$'s. At the same time the received signal is impaired by additive noise $\sigma w^{(m)} \in \mathbb{C}^N$ ($\sigma > 0$), the entries of $w^{(m)}$ i.i.d. standardized (independent across $m$). Therefore at time $m$ the receiver senses the signal
\[
y^{(m)} = \sum_{k=1}^K \sqrt{P_k}\, H_k\, x_k^{(m)} + \sigma w^{(m)}.
\]
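The model itself is straightforward to simulate. The sketch below (Python/NumPy; the numbers of transmitters, antennas, samples, the powers and the noise level are made-up illustrations) stacks the $M$ received samples into a matrix and forms the sample covariance matrix, the object to which the preceding estimation machinery is applied; the power-estimation step itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
K, N, M, sigma = 3, 64, 1024, 0.5
P = np.array([1.0, 3.0, 10.0])                  # unknown transmit powers (illustrative)
n_k = np.array([4, 4, 4])                       # antennas per transmitter

# Channels: entries of sqrt(N) H_k are i.i.d. standardized (here complex Gaussian).
H = [(rng.standard_normal((N, nk)) + 1j * rng.standard_normal((N, nk))) / np.sqrt(2 * N)
     for nk in n_k]

# Received samples y^(m), m = 1, ..., M, stacked as the columns of Y.
Y = np.zeros((N, M), dtype=complex)
for k in range(K):
    Xk = (rng.standard_normal((n_k[k], M)) + 1j * rng.standard_normal((n_k[k], M))) / np.sqrt(2)
    Y += np.sqrt(P[k]) * H[k] @ Xk
W = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
Y += sigma * W

sample_cov = Y @ Y.conj().T / M                 # its eigenvalues feed the power estimates
print(np.sort(np.linalg.eigvalsh(sample_cov))[-int(np.sum(n_k)):])  # the larger, signal-bearing eigenvalues
```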