On the Scalar Gaussian Interference Channel


  1. On the Scalar Gaussian Interference Channel. Chandra Nair and David Ng, The Chinese University of Hong Kong. ITA 2018, 13 Feb 2018.

  2–3. Question: Does the Han–Kobayashi achievable region with Gaussian signaling exhaust the capacity region of the scalar Gaussian interference channel?
This talk: perhaps it may.
◮ We establish some evidence towards this end.
◮ We conjecture an information inequality which, if true, would establish the optimality for the Z-interference channel.

  4. Scalar Gaussian Interference Channel
[Figure: block diagram of the two-user channel. Message $M_1$ is encoded as $X_1^n$ and decoded from $Y_1^n$; message $M_2$ is encoded as $X_2^n$ and decoded from $Y_2^n$; the cross links have gains $a$ and $b$, and the receivers add noises $Z_1^n$, $Z_2^n$.]
(Some) known results about the capacity region:
◮ Capacity region determined for $a \ge 1$, $b \ge 1$ (Sato '79)
◮ Corner points (Sato '81, Costa '85, Sason '02, Polyanskiy-Wu '15)
◮ Maximum rate-sum known when $a(1 + b^2 P_2) + b(1 + a^2 P_1) \le 1$ (3 groups '09)
◮ Han–Kobayashi region is within 0.5 bits per dimension of capacity (Etkin, Tse, Wang '07)
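As a quick illustration of the rate-sum bullet, here is a minimal numerical sketch. It assumes the normalization suggested by the rate expressions later in the deck, namely $Y_1 = X_1 + bX_2 + Z_1$ and $Y_2 = aX_1 + X_2 + Z_2$ with unit-variance noise; the function names and the operating point are purely illustrative. It checks the slide's condition and evaluates the treat-interference-as-noise sum rate which, per the '09 results cited above, is then the maximum rate-sum.

```python
import numpy as np

def noisy_interference_condition(a, b, P1, P2):
    """The slide's very-weak-interference condition: a(1 + b^2 P2) + b(1 + a^2 P1) <= 1."""
    return a * (1 + b**2 * P2) + b * (1 + a**2 * P1) <= 1

def tin_sum_rate(a, b, P1, P2):
    """Sum rate (bits/use) of treating interference as noise, assuming
    Y1 = X1 + b*X2 + Z1 and Y2 = a*X1 + X2 + Z2 with unit-variance noise."""
    return (0.5 * np.log2(1 + P1 / (1 + b**2 * P2))
            + 0.5 * np.log2(1 + P2 / (1 + a**2 * P1)))

a, b, P1, P2 = 0.2, 0.2, 5.0, 5.0                    # illustrative operating point
print(noisy_interference_condition(a, b, P1, P2))    # True: the condition holds here
print(tin_sum_rate(a, b, P1, P2))                    # the corresponding maximum rate-sum
```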

  5. As a side note, investigations on this problem have led to
◮ Costa's discovery: concavity of entropy power
◮ Use of HWI to establish converses (Polyanskiy-Wu '15)
◮ Use of "genies": to establish converses/bounds (Kramer, Etkin-Tse-Wang, ...), and as a tool for proving sub-additivity/tensorization

  6–8. On the Han–Kobayashi achievable region
Background
◮ 1981: Han and Kobayashi proposed an achievable region (HK-IB) for memoryless interference channels.
◮ 2015: HK-IB was shown to be strictly sub-optimal for some channels (with Xia and Yazdanpanah). Result: a 2-letter extension of HK-IB outperformed HK-IB. Difficulty: evaluating HK-IB (1-letter and 2-letter). Channels: clean Z-interference channels.
Natural questions: what if one restricts to the special case of scalar Gaussian interference channels?
◮ Is HK-IB (with Gaussian signaling) optimal?
◮ Do k-letter extensions (with Gaussian signaling), i.e. correlated Gaussian input vectors, improve the region?
Remark: a 2016 paper claims such an improvement, but it ignores the role of "power control" (which has been known to improve on the naive region since 1985; see also Costa, ITA 2010).
◮ Main result: no improvement from going to correlated Gaussians.
Cheng and Verdú had such a result for $\alpha I(X_1^k;Y_1^k) + I(X_2^k;Y_2^k)$ (1993).
We had a similar result for the Z-interference channel ($b = 0$) last year.

  9. HK-IB with Gaussian signaling (k-letter)
Non-negative rate pairs $(R_1, R_2)$ satisfying
\begin{align*}
R_1 &\le \mathrm{E}_Q\!\left[\tfrac{1}{2k}\log\frac{\bigl|I + (K^Q_{U_1}+K^Q_{V_1}) + b^2 K^Q_{V_2}\bigr|}{\bigl|I + b^2 K^Q_{V_2}\bigr|}\right] \\
R_2 &\le \mathrm{E}_Q\!\left[\tfrac{1}{2k}\log\frac{\bigl|I + (K^Q_{U_2}+K^Q_{V_2}) + a^2 K^Q_{V_1}\bigr|}{\bigl|I + a^2 K^Q_{V_1}\bigr|}\right] \\
R_1+R_2 &\le \mathrm{E}_Q\!\left[\tfrac{1}{2k}\left(\log\frac{\bigl|I + (K^Q_{U_1}+K^Q_{V_1}) + b^2(K^Q_{U_2}+K^Q_{V_2})\bigr|}{\bigl|I + b^2 K^Q_{V_2}\bigr|} + \log\frac{\bigl|I + K^Q_{V_2} + a^2 K^Q_{V_1}\bigr|}{\bigl|I + a^2 K^Q_{V_1}\bigr|}\right)\right] \\
R_1+R_2 &\le \mathrm{E}_Q\!\left[\tfrac{1}{2k}\left(\log\frac{\bigl|I + (K^Q_{U_2}+K^Q_{V_2}) + a^2(K^Q_{U_1}+K^Q_{V_1})\bigr|}{\bigl|I + a^2 K^Q_{V_1}\bigr|} + \log\frac{\bigl|I + K^Q_{V_1} + b^2 K^Q_{V_2}\bigr|}{\bigl|I + b^2 K^Q_{V_2}\bigr|}\right)\right] \\
R_1+R_2 &\le \mathrm{E}_Q\!\left[\tfrac{1}{2k}\left(\log\frac{\bigl|I + K^Q_{V_1} + b^2(K^Q_{U_2}+K^Q_{V_2})\bigr|}{\bigl|I + b^2 K^Q_{V_2}\bigr|} + \log\frac{\bigl|I + K^Q_{V_2} + a^2(K^Q_{U_1}+K^Q_{V_1})\bigr|}{\bigl|I + a^2 K^Q_{V_1}\bigr|}\right)\right] \\
2R_1+R_2 &\le \mathrm{E}_Q\!\left[\tfrac{1}{2k}\left(\log\frac{\bigl|I + (K^Q_{U_1}+K^Q_{V_1}) + b^2(K^Q_{U_2}+K^Q_{V_2})\bigr|}{\bigl|I + b^2 K^Q_{V_2}\bigr|} + \log\frac{\bigl|I + K^Q_{V_1} + b^2 K^Q_{V_2}\bigr|}{\bigl|I + b^2 K^Q_{V_2}\bigr|} + \log\frac{\bigl|I + K^Q_{V_2} + a^2(K^Q_{U_1}+K^Q_{V_1})\bigr|}{\bigl|I + a^2 K^Q_{V_1}\bigr|}\right)\right] \\
R_1+2R_2 &\le \mathrm{E}_Q\!\left[\tfrac{1}{2k}\left(\log\frac{\bigl|I + (K^Q_{U_2}+K^Q_{V_2}) + a^2(K^Q_{U_1}+K^Q_{V_1})\bigr|}{\bigl|I + a^2 K^Q_{V_1}\bigr|} + \log\frac{\bigl|I + K^Q_{V_2} + a^2 K^Q_{V_1}\bigr|}{\bigl|I + a^2 K^Q_{V_1}\bigr|} + \log\frac{\bigl|I + K^Q_{V_1} + b^2(K^Q_{U_2}+K^Q_{V_2})\bigr|}{\bigl|I + b^2 K^Q_{V_2}\bigr|}\right)\right]
\end{align*}
for some $K^Q_{U_1}, K^Q_{V_1}, K^Q_{U_2}, K^Q_{V_2} \succeq 0$ satisfying $\mathrm{E}_Q\bigl[\operatorname{tr}\bigl(K^Q_{U_1}+K^Q_{V_1}\bigr)\bigr] \le kP_1$ and $\mathrm{E}_Q\bigl[\operatorname{tr}\bigl(K^Q_{U_2}+K^Q_{V_2}\bigr)\bigr] \le kP_2$, and some "time-sharing" variable $Q$.
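To make the region concrete, here is a small sketch that evaluates the seven bounds in the scalar case. Assumptions (mine, for illustration): $k = 1$, a degenerate time-sharing variable $Q$, and a power split of each user's power into common and private parts $U_i + V_i \le P_i$; the function name is hypothetical.

```python
import numpy as np

def hk_gaussian_bounds_1letter(a, b, U1, V1, U2, V2):
    """Scalar (k = 1, degenerate Q) specialization of the seven HK-IB bounds
    with Gaussian signaling; U_i, V_i are the powers of the common and
    private parts of user i (U_i + V_i <= P_i).  Rates in bits/channel use."""
    L = lambda num, den: 0.5 * np.log2(num / den)
    X1, X2 = U1 + V1, U2 + V2
    r1 = L(1 + X1 + b**2 * V2, 1 + b**2 * V2)
    r2 = L(1 + X2 + a**2 * V1, 1 + a**2 * V1)
    s1 = L(1 + X1 + b**2 * X2, 1 + b**2 * V2) + L(1 + V2 + a**2 * V1, 1 + a**2 * V1)
    s2 = L(1 + X2 + a**2 * X1, 1 + a**2 * V1) + L(1 + V1 + b**2 * V2, 1 + b**2 * V2)
    s3 = L(1 + V1 + b**2 * X2, 1 + b**2 * V2) + L(1 + V2 + a**2 * X1, 1 + a**2 * V1)
    t1 = (L(1 + X1 + b**2 * X2, 1 + b**2 * V2) + L(1 + V1 + b**2 * V2, 1 + b**2 * V2)
          + L(1 + V2 + a**2 * X1, 1 + a**2 * V1))
    t2 = (L(1 + X2 + a**2 * X1, 1 + a**2 * V1) + L(1 + V2 + a**2 * V1, 1 + a**2 * V1)
          + L(1 + V1 + b**2 * X2, 1 + b**2 * V2))
    return {"R1": r1, "R2": r2, "R1+R2": min(s1, s2, s3),
            "2R1+R2": t1, "R1+2R2": t2}

print(hk_gaussian_bounds_1letter(a=0.5, b=0.5, U1=3.0, V1=2.0, U2=3.0, V2=2.0))
```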

  10–11. Result: the k-letter region is identical to the 1-letter region.
Note: we are dealing with the optimizers of a non-convex optimization problem.
Proof: define
\begin{align*}
\hat{K}^q_{V_1} &:= \operatorname{diag}\bigl(\{\lambda_i(K^q_{V_1})\}\bigr) \\
\hat{K}^q_{U_1} &:= \operatorname{diag}\bigl(\{\lambda_i(K^q_{U_1}+K^q_{V_1}) - \lambda_i(K^q_{V_1})\}\bigr) \\
\hat{K}^q_{V_2} &:= \operatorname{diag}\bigl(\{\lambda_{k+1-i}(K^q_{V_2})\}\bigr) \\
\hat{K}^q_{U_2} &:= \operatorname{diag}\bigl(\{\lambda_{k+1-i}(K^q_{U_2}+K^q_{V_2}) - \lambda_{k+1-i}(K^q_{V_2})\}\bigr),
\end{align*}
where $\lambda_1(A) \le \cdots \le \lambda_k(A)$ denote the eigenvalues of a $k \times k$ Hermitian matrix $A$, and $\operatorname{diag}(\{a_i\})$ denotes the diagonal matrix with diagonal entries $a_1, \ldots, a_k$.
These choices dominate the inequalities term-by-term. This "observation", and the feasibility of these choices, rely on two well-known results.
Difficulty: making this guess (it came after a few months of failed other approaches); there were multiple solutions to the KKT conditions, for instance.
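A numerical sanity check of this construction (a sketch; the random test instance, function names, and tolerance are mine, not from the talk): draw random positive semidefinite covariances, form the diagonal "hat" replacements from the eigenvalue recipe above, and verify that none of the seven bounds from the previous slide decreases for a fixed realization of $Q$. The replacements also preserve the traces, hence the power constraint.

```python
import numpy as np

def rand_psd(k, rng):
    """Random k x k positive semidefinite matrix (test instance only)."""
    M = rng.standard_normal((k, k))
    return M @ M.T

def hat_covariances(KU1, KV1, KU2, KV2):
    """Diagonal replacements from the slide: eigenvalues in increasing
    order for user 1 and in decreasing order for user 2."""
    lam = lambda A: np.linalg.eigvalsh(A)            # lambda_1 <= ... <= lambda_k
    V1h = np.diag(lam(KV1))
    U1h = np.diag(lam(KU1 + KV1) - lam(KV1))         # >= 0 by the eigenvalue corollary
    V2h = np.diag(lam(KV2)[::-1])
    U2h = np.diag((lam(KU2 + KV2) - lam(KV2))[::-1])
    return U1h, V1h, U2h, V2h

def hk_bounds(KU1, KV1, KU2, KV2, a, b):
    """Right-hand sides of the seven inequalities (fixed Q), nats per letter."""
    k = KV1.shape[0]
    I = np.eye(k)
    ld = lambda M: np.linalg.slogdet(M)[1]
    T = lambda S, N: (ld(I + S) - ld(I + N)) / (2 * k)
    X1, X2 = KU1 + KV1, KU2 + KV2
    return np.array([
        T(X1 + b*b*KV2, b*b*KV2),
        T(X2 + a*a*KV1, a*a*KV1),
        T(X1 + b*b*X2, b*b*KV2) + T(KV2 + a*a*KV1, a*a*KV1),
        T(X2 + a*a*X1, a*a*KV1) + T(KV1 + b*b*KV2, b*b*KV2),
        T(KV1 + b*b*X2, b*b*KV2) + T(KV2 + a*a*X1, a*a*KV1),
        T(X1 + b*b*X2, b*b*KV2) + T(KV1 + b*b*KV2, b*b*KV2) + T(KV2 + a*a*X1, a*a*KV1),
        T(X2 + a*a*X1, a*a*KV1) + T(KV2 + a*a*KV1, a*a*KV1) + T(KV1 + b*b*X2, b*b*KV2),
    ])

rng = np.random.default_rng(0)
k, a, b = 4, 0.6, 0.4
Ks = [rand_psd(k, rng) for _ in range(4)]            # KU1, KV1, KU2, KV2
# Per the slide's term-by-term domination claim, all entries should print True.
print(hk_bounds(*hat_covariances(*Ks), a, b) >= hk_bounds(*Ks, a, b) - 1e-9)
```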

  12–13. Two results
Theorem (Courant–Fischer min-max theorem). Let $A$ be a $k \times k$ Hermitian matrix. Then
$$\lambda_i(A) \;=\; \inf_{\substack{V \subseteq \mathbb{R}^k \\ \dim V = i}} \; \sup_{\substack{x \in V \\ \|x\| = 1}} x^T A x \;=\; \sup_{\substack{V \subseteq \mathbb{R}^k \\ \dim V = k-i+1}} \; \inf_{\substack{x \in V \\ \|x\| = 1}} x^T A x,$$
where $V$ denotes subspaces of the indicated dimension.
Corollary. Let $A, B$ be $k \times k$ Hermitian matrices with $B \succeq 0$. Then $\lambda_i(A+B) \ge \lambda_i(A)$ for $i = 1, \ldots, k$.
Theorem (Fiedler '71). Let $A, B$ be $k \times k$ Hermitian matrices with $\lambda_k(A) + \lambda_k(B) \ge 0$. Then
$$\prod_{i=1}^{k} \bigl(\lambda_i(A) + \lambda_i(B)\bigr) \;\le\; |A+B| \;\le\; \prod_{i=1}^{k} \bigl(\lambda_i(A) + \lambda_{k+1-i}(B)\bigr).$$
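Both results are easy to check numerically. A small sketch with random positive semidefinite test matrices of my own choosing (which in particular satisfy the hypotheses):

```python
import numpy as np

def rand_psd(k, rng):
    M = rng.standard_normal((k, k))
    return M @ M.T                                   # positive semidefinite

rng = np.random.default_rng(1)
k = 5
A, B = rand_psd(k, rng), rand_psd(k, rng)
lam = lambda M: np.linalg.eigvalsh(M)                # eigenvalues in ascending order

# Corollary: adding the PSD matrix B cannot decrease any eigenvalue of A.
print(np.all(lam(A + B) >= lam(A) - 1e-9))

# Fiedler '71: prod_i (lam_i(A)+lam_i(B)) <= |A+B| <= prod_i (lam_i(A)+lam_{k+1-i}(B)).
lower = np.prod(lam(A) + lam(B))
upper = np.prod(lam(A) + lam(B)[::-1])
print(lower <= np.linalg.det(A + B) <= upper)
```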

  14. What next?
Obvious question: do Gaussian inputs optimize HK-IB?
Observations
◮ The time-sharing variable $Q$ is a cause of trouble.
◮ Without $Q$, there are $P_1, P_2$ for which non-Gaussian distributions outperform the Gaussian distribution, shown using perturbations based on Hermite polynomials (Abbe-Zhang '09).
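A minimal sketch of the kind of Hermite-polynomial perturbation referenced above (my own illustration, not the construction from the talk or from Abbe-Zhang): perturbing a standard Gaussian density along the fourth probabilists' Hermite polynomial keeps it a valid density with the same mean and variance (hence the same power), while making it non-Gaussian; one can then test whether such perturbations improve a given rate expression.

```python
import numpy as np

phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard Gaussian density
He4 = lambda x: x**4 - 6 * x**2 + 3                       # 4th probabilists' Hermite polynomial

eps = 0.05                      # |eps| <= 1/6 keeps the perturbed density nonnegative (min He4 = -6)
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
f = phi(x) * (1 + eps * He4(x))                           # perturbed density

print(np.sum(f) * dx)           # ~1.0: still integrates to 1 (He4 orthogonal to 1 under phi)
print(np.sum(x**2 * f) * dx)    # ~1.0: variance, i.e. power, unchanged (He4 orthogonal to x^2)
print(np.sum(x**4 * f) * dx)    # ~4.2: fourth moment shifts, so the input is no longer Gaussian
```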
