  1. The Structure of the Worst Noise in Gaussian Vector Broadcast Channels
     Wei Yu, University of Toronto
     March 19, 2003
     DIMACS Workshop on Network Information Theory

  2. Outline
     • Sum capacity of Gaussian vector broadcast channels.
     • Complete characterization of the worst noise.
     • Efficient numerical solution for the dual channel.
     • Does duality extend beyond the power-constrained channel?

  3. Gaussian Vector Broadcast Channel
     • Non-degraded broadcast channel: each message W_k ∈ {1, …, 2^{nR_k}} is encoded into a common input X^n, which passes through the channel matrix H; receiver k observes Y_k^n = H_k X^n + Z_k^n and decodes Ŵ_k(Y_k^n).
     • Capacity region is still unknown.
       – Sum capacity C = max { R_1 + · · · + R_K } was recently solved.

  4. Marton's Achievability Region
     • For a broadcast channel p(y_1, y_2 | x):
         R_1 ≤ I(U_1; Y_1)
         R_2 ≤ I(U_2; Y_2)
         R_1 + R_2 ≤ I(U_1; Y_1) + I(U_2; Y_2) − I(U_1; U_2)
       for some auxiliary random variables p(u_1, u_2) p(x | u_1, u_2).
     • For the Gaussian broadcast channel: I(U_2; Y_2) − I(U_1; U_2) is achieved with precoding.

  5. Writing on Dirty Paper
     • Gaussian channel: Y = X + Z with Z ∼ N(0, S_zz), capacity
         C = (1/2) log ( |S_xx + S_zz| / |S_zz| ).
     • Gaussian channel with transmitter side information: Y = X + S + Z, with interference S ∼ N(0, S_ss) and noise Z ∼ N(0, S_zz).
     • Capacities are the same if S is known non-causally at the transmitter:
         C = max_{p(u,x|s)} [ I(U; Y) − I(U; S) ] = max_{p(x)} I(X; Y | S)
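The dirty-paper capacity formula above is easy to evaluate numerically. A minimal sketch (the covariance matrices below are illustrative assumptions, not values from the talk):

```python
import numpy as np

# Hypothetical 2x2 covariances for illustration.
S_xx = np.array([[2.0, 0.5], [0.5, 1.0]])   # input covariance
S_zz = np.array([[1.0, 0.3], [0.3, 1.0]])   # noise covariance

def gaussian_capacity(S_xx, S_zz):
    """C = (1/2) log |S_xx + S_zz| / |S_zz|, in nats."""
    return 0.5 * np.log(np.linalg.det(S_xx + S_zz) / np.linalg.det(S_zz))

C = gaussian_capacity(S_xx, S_zz)
print(C)
```

By the dirty-paper result, the same value is attained whether or not a known Gaussian interference S is added at the channel input.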

  6. Precoding for the Broadcast Channel
     • Encode W_2 first as X_2^n(W_2); then encode W_1 as X_1^n(W_1, X_2^n), treating X_2^n as known interference. Receiver k decodes Ŵ_k(Y_k^n) from Y_k^n = H_k X^n + Z_k^n.
         R_1 = I(X_1; Y_1 | X_2) = (1/2) log ( |H_1 S_1 H_1^T + S_{z_1 z_1}| / |S_{z_1 z_1}| )
         R_2 = I(X_2; Y_2) = (1/2) log ( |H_2 S_2 H_2^T + H_2 S_1 H_2^T + S_{z_2 z_2}| / |H_2 S_1 H_2^T + S_{z_2 z_2}| )
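These two rate expressions can be sketched directly in code. All matrices below are assumed toy values (single-antenna receivers, a two-antenna transmitter), chosen only to illustrate the formulas:

```python
import numpy as np

H1 = np.array([[1.0, 0.2]])               # user 1 channel (1 x 2)
H2 = np.array([[0.3, 1.0]])               # user 2 channel (1 x 2)
S1 = np.array([[1.0, 0.0], [0.0, 0.5]])   # covariance of x1
S2 = np.array([[0.5, 0.0], [0.0, 1.0]])   # covariance of x2
Sz1 = np.array([[1.0]])                   # noise at receiver 1
Sz2 = np.array([[1.0]])                   # noise at receiver 2

def logdet(A):
    return np.linalg.slogdet(A)[1]

# R1 = I(X1; Y1 | X2): x2's interference is pre-subtracted by precoding.
R1 = 0.5 * (logdet(H1 @ S1 @ H1.T + Sz1) - logdet(Sz1))

# R2 = I(X2; Y2): user 2 sees x1 as additional Gaussian noise.
interference = H2 @ S1 @ H2.T
R2 = 0.5 * (logdet(H2 @ S2 @ H2.T + interference + Sz2)
            - logdet(interference + Sz2))

print(R1, R2)
```

Note the asymmetry: user 1's rate is interference-free thanks to dirty-paper coding, while user 2's denominator carries the residual interference term H_2 S_1 H_2^T.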

  7. Converse: Sato's Outer Bound
     • Broadcast capacity does not depend on noise correlation: Sato ('78).
       – Replacing the noises (z_1, z_2) with (z'_1, z'_2) leaves the capacity region unchanged if p(z_1) = p(z'_1) and p(z_2) = p(z'_2); the joint distributions p(z_1, z_2) and p(z'_1, z'_2) need not match.
     • Letting the receivers cooperate can only help, so the sum capacity satisfies C ≤ min_{S_zz} max_{S_xx} I(X; Y).

  8. Three Proofs of the Sum Capacity Result
     1. Decision-feedback equalization approach (Yu, Cioffi)
     2. Uplink-downlink duality approach (Viswanath, Tse)
     3. Convex duality approach (Jindal, Vishwanath, Goldsmith)

  9. DFE Approach
     • Receiver structure: feedforward filter Δ^{-1} G^{-T} H^T followed by a decision device, with feedback filter I − G.
     • Decision-feedback at the receiver is equivalent to transmitter precoding.
     • (Non-singular) worst noise ⟺ diagonal feedforward filter.
     • Fix S_xx; then min_{S_zz} I(X; Y) is achievable.

  10. Uplink-Downlink Duality Approach
     • Downlink: Y_1 = H X_1 + Z_1, Z_1 ∼ N(0, Q), with power constraint E[X_1^T X_1] ≤ P.
     • Uplink: Y_2 = H^T X_2 + Z_2, Z_2 ∼ N(0, I), with input constraint E[X_2^T Q X_2] ≤ P.
     • Uplink and downlink channels are duals.
     • The noise covariance and the input constraint are duals.
     • Worst noise gives an input constraint that decouples the inputs:
         C = max_{S_xx} min_{S_zz} I(X; Y)

  11. Convex Duality Approach
     • Broadcast channel: Y_k = H_k X + Z_k; dual multiple-access channel: Y' = H_1^T X'_1 + H_2^T X'_2 + Z, both under total power P.
     • Sato's bound: C ≤ min_{S_zz} max_{S_xx} I(X; Y).
     • Broadcast/multiple-access duality: C ≥ max_{S_{x'x'}} I(X'; Y').
     • Convex duality: max_{S_xx} min_{S_zz} I(X; Y) = max_{S_{x'x'}} I(X'; Y').

  12. Objective
     • Completely characterize the worst noise.
       – Duality through minimax.
       – Worst noise through duality.
     • Efficient numerical solution for the dual channel.
     • Does duality extend beyond the power-constrained channel?

  13. Minimax Capacity
     • Gaussian vector broadcast channel sum capacity is the solution of
         max_{S_xx} min_{S_zz}  (1/2) log ( |H S_xx H^T + S_zz| / |S_zz| )
         subject to  tr(S_xx) ≤ P
                     S_zz = [ I  ⋆ ; ⋆  I ]
                     S_xx, S_zz ≥ 0
     • The minimax problem is convex in S_zz, concave in S_xx.
       – How to solve this minimax problem?
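For intuition, the minimax problem can be approximated by brute force on a tiny instance. The sketch below restricts S_xx to diagonal covariances with tr(S_xx) = P and S_zz to unit-diagonal matrices with a single correlation parameter ρ; both restrictions, and all numerical values, are illustrative simplifications rather than the talk's algorithm:

```python
import numpy as np

H = np.array([[1.0, 0.4], [0.2, 0.8]])   # assumed 2x2 channel
P = 2.0                                   # assumed total power

def mutual_info(S_xx, S_zz):
    # I(X;Y) = (1/2) log |H Sxx H^T + Szz| / |Szz|
    ld_num = np.linalg.slogdet(H @ S_xx @ H.T + S_zz)[1]
    ld_den = np.linalg.slogdet(S_zz)[1]
    return 0.5 * (ld_num - ld_den)

best = -np.inf
for a in np.linspace(0.0, P, 21):         # diagonal power split, tr(Sxx) = P
    S_xx = np.diag([a, P - a])
    # inner minimization over the unknown off-diagonal correlation rho
    worst = min(mutual_info(S_xx, np.array([[1.0, r], [r, 1.0]]))
                for r in np.linspace(-0.99, 0.99, 199))
    best = max(best, worst)
print(best)
```

Because the objective is concave in S_xx and convex in S_zz, the saddle point is well defined; the duality developed on the next slides replaces this double grid search with a single convex maximization.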

  14. Duality through Minimax
     • Two KKT conditions must be satisfied simultaneously:
         H^T (H S_xx H^T + S_zz)^{-1} H = λI
         S_zz^{-1} − (H S_xx H^T + S_zz)^{-1} = [ Ψ_1  0 ; 0  Ψ_2 ]
     • For the moment, assume that H is invertible. With Ψ = diag(Ψ_1, Ψ_2), combining the two conditions gives
         H^T S_zz^{-1} H − λI = H^T Ψ H  ⟹  S_zz = H (H^T Ψ H + λI)^{-1} H^T
     • This is a "water-filling" condition for the dual channel.
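The final implication is a pure matrix identity and can be checked numerically. A minimal sketch, where H, Ψ, and λ are arbitrary assumed values (an invertible H and a positive diagonal Ψ), not an optimal KKT pair:

```python
import numpy as np

H = np.array([[1.0, 0.3], [0.2, 1.5]])   # invertible channel matrix
Psi = np.diag([0.7, 0.4])                # diagonal dual variable
lam = 0.5                                # dual variable for the power constraint

# S_zz defined from the dual variables, as on the slide.
M = H.T @ Psi @ H + lam * np.eye(2)
S_zz = H @ np.linalg.inv(M) @ H.T

# Check: H^T S_zz^{-1} H - lam*I should equal H^T Psi H.
lhs = H.T @ np.linalg.inv(S_zz) @ H - lam * np.eye(2)
rhs = H.T @ Psi @ H
print(np.allclose(lhs, rhs))   # True
```

The cancellation works because S_zz^{-1} = H^{-T} (H^T Ψ H + λI) H^{-1} when H is invertible, which is exactly why the slide's invertibility assumption matters.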

  15. Power Constraint in the Dual Channel
     • Interpretation of the dual variables: λ = ∂C/∂P, Ψ_i = −∂C/∂S_{z_i z_i}.
       – Thus, capacity is preserved if λ ΔP = Σ_i Ψ_i ΔS_{z_i z_i}.
     • Capacity C = min max (1/2) log ( |H S_xx H^T + S_zz| / |S_zz| ) is invariant to a common scaling of P and S_zz.
       – Since the diagonal of S_zz is fixed at 1, taking ΔP = εP and ΔS_{z_i z_i} = ε preserves capacity.
     • Therefore, Σ_i Ψ_i / λ = P.

  16. Construct the Dual Channel
     • KKT condition: H (H^T D H + I)^{-1} H^T = (1/λ) S_zz,
       where D = Ψ/λ is diagonal and trace(D) = Σ_i Ψ_i / λ = P.
     • Since S_zz = [ I  ⋆ ; ⋆  I ], the constraint on D is trace(D_1) + trace(D_2) ≤ P.
     • Dual multiple-access channel: Y' = H_1^T X'_1 + H_2^T X'_2 + Z, with E[X'_1 X'_1^T] = D_1, E[X'_2 X'_2^T] = D_2, and trace(D_1) + trace(D_2) ≤ P.

  17. Yet Another Derivation of Duality
     The duality between the broadcast channel and the multiple-access channel: the minimax (broadcast) problem
         max_{S_xx} min_{S_zz}  (1/2) log ( |H S_xx H^T + S_zz| / |S_zz| )
         s.t.  tr(S_xx) ≤ P,  S_zz = [ I  ⋆ ; ⋆  I ],  S_xx, S_zz ≥ 0
     equals the dual (multiple-access) problem
         max_D  (1/2) log ( |H^T D H + I| / |I| )
         s.t.  tr(D) ≤ P,  D is diagonal,  D ≥ 0
     KKT conditions for the minimax ⟹ KKT condition for the max.

  18. Worst-Noise Through Minimax
     • Solve the dual multiple-access channel problem with power constraint P to obtain (Ψ, λ). Then:
         S_zz = H (H^T Ψ H + λI)^{-1} H^T
         S_xx = (λI)^{-1} − (H^T Ψ H + λI)^{-1}
     • What if H is not invertible, or S_zz is singular?
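The reconstruction above can be verified against the first minimax KKT condition from slide 14: for square invertible H, the pair (S_xx, S_zz) built from any (Ψ, λ) satisfies H^T (H S_xx H^T + S_zz)^{-1} H = λI. A sketch with arbitrary assumed values of H, Ψ, and λ:

```python
import numpy as np

H = np.array([[1.0, 0.3], [0.2, 1.5]])   # invertible channel
Psi = np.diag([0.6, 0.9])                # assumed dual solution
lam = 0.4

M = H.T @ Psi @ H + lam * np.eye(2)
Minv = np.linalg.inv(M)
S_zz = H @ Minv @ H.T                    # candidate worst noise
S_xx = np.eye(2) / lam - Minv            # candidate input covariance

# H Sxx H^T + Szz collapses to (1/lam) H H^T, so the KKT condition holds.
total = H @ S_xx @ H.T + S_zz
kkt = H.T @ np.linalg.inv(total) @ H
print(np.allclose(kkt, lam * np.eye(2)))   # True
```

The question on the slide is precisely when this clean reconstruction breaks down: a rank-deficient H makes S_zz singular, which is handled on the next slides.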

  19. Decision-Feedback Equalization with Singular Noise
     • With non-singular noise: S_zz^{-1} − (H S_xx H^T + S_zz)^{-1} = [ Ψ_1  0 ; 0  Ψ_2 ].
     • If H is low-rank, S_zz can be singular.
     • For an m × n channel H with n > m, the received signal is m-dimensional, and the linear estimation/DFE structure is not unique if |S_zz| = 0.

  20. Necessary and Sufficient Condition for Diagonalization
     • Suppose that the worst noise is singular, |S_zz| = 0; write
         S_zz = U S_z̃z̃ U^T,
       where S_zz is n × n and S_z̃z̃ is m × m, m < n.
     • It is always possible to write H = U H̃.
     • There exists a DFE with a diagonal feedforward filter if and only if
         S_z̃z̃^{-1} − (H̃ S_xx H̃^T + S_z̃z̃)^{-1} = U^T [ Ψ_1  0 ; 0  Ψ_2 ] U

  21. Singular Worst-Noise
     • It can be verified that the diagonalization condition is satisfied by:
         S_zz^(0) = H (H^T Ψ H + λI)^{-1} H^T
         S_xx = (λI)^{-1} − (H^T Ψ H + λI)^{-1}
     • However, S_zz^(0) does not necessarily have 1's on the diagonal:
         S_zz^(0) = [ I  ⋆  ⋆ ; ⋆  I  ⋆ ; ⋆  ⋆  ⋆ ]

  22. Characterization of the Worst-Noise
     Theorem 1. The following steps solve for the worst noise in y = Hx + z:
     1. Find the optimal (Ψ, λ) in the dual multiple-access channel.
     2. Form S_zz^(0) = H (H^T Ψ H + λI)^{-1} H^T and S_xx = (λI)^{-1} − (H^T Ψ H + λI)^{-1}.
     3. If S_xx is not full rank, reduce the rank of H and repeat steps 1-2.
     4. The class of worst-noise is precisely S_zz^(0) + S'_zz:
         [ I ⋆ ⋆ ; ⋆ I ⋆ ; ⋆ ⋆ ⋆ ] + [ 0 0 0 ; 0 0 0 ; 0 0 ⋆ ] = [ I ⋆ ⋆ ; ⋆ I ⋆ ; ⋆ ⋆ I ]
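Step 4 can be illustrated with a small numerical sketch: when S_zz^(0) falls short of the unit-diagonal constraint on one block, a positive semidefinite correction S'_zz supported on that block restores it. The 3 × 3 matrix below is an assumed example, not the output of steps 1-3:

```python
import numpy as np

# Assumed S_zz^(0): unit diagonal on the first two coordinates,
# deficient (0.4 instead of 1) on the third.
S0 = np.array([[1.0, 0.2, 0.1],
               [0.2, 1.0, 0.3],
               [0.1, 0.3, 0.4]])

# Correction S'_zz: zero except on the deficient lower-right block.
Sp = np.zeros((3, 3))
Sp[2, 2] = 1.0 - S0[2, 2]

S_zz = S0 + Sp
print(np.allclose(np.diag(S_zz), 1.0))   # True: unit diagonal restored
```

Since S'_zz is PSD and vanishes on the coordinates where S_zz^(0) already meets the constraint, every member of the class S_zz^(0) + S'_zz remains a valid unit-diagonal noise covariance.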

  23. Worst-Noise is Not Unique
     • The same S_xx water-fills the entire class S_zz^(0) + S'_zz:
         S_zz^(0) + S'_zz = [U | U'] ( [ S_z̃z̃  0 ; 0  0 ] + [ S'_11  S'_12 ; S'_21  S'_22 ] ) [U | U']^T,
       where S'_11 − S'_12 S'_22^{-1} S'_21 = 0.
     • The entire class of worst noise is related by linear estimation:
         E[ z̃ + z'_1 | z'_2 ] = z̃.
     • The class of (S_xx, S_zz) that satisfies the KKT conditions is precisely (S_xx, S_zz^(0) + S'_zz).

  24. Outline
     • Complete characterization of the worst noise.
       – Duality through minimax.
       – Worst noise through duality.
     • Efficient numerical solution for the dual channel.
     • Does duality extend beyond the power-constrained channel?
