Gaussian Multiple and Random Access in the Finite Blocklength Regime
Recep Can Yavas, California Institute of Technology
Joint work with Victoria Kostina and Michelle Effros
ISIT 2020, June 21-26, 2020
This work was supported in part by the National Science Foundation (NSF) under grant CCF-1817241.
1 / 30
Talk Plan We present two achievability results for 1 Gaussian Multiple Access Channel (MAC) 2 Gaussian Random Access Channel (RAC) 2 / 30
Gaussian Multiple Access Channel (MAC)
Maximal power constraint on the codewords: ‖X_k^n‖² ≤ nP_k for k = 1, ..., K
Notation: [M] = {1, ..., M}, x_A = (x_a : a ∈ A)
3 / 30
MAC Code Definition
Definition (K-transmitter MAC): An (n, M_1, ..., M_K, ε, P_1, ..., P_K) code for the K-transmitter MAC consists of
- K encoding functions f_k : [M_k] → R^n, k ∈ [K]
- a decoding function g : R^n → [M_1] × ··· × [M_K]
with maximal power constraint ‖f_k(m_k)‖² ≤ nP_k for m_k ∈ [M_k], k ∈ [K], and average probability of error
(1 / ∏_{k=1}^K M_k) Σ_{m_[K] ∈ [M_1]×···×[M_K]} P[g(Y^n) ≠ m_[K] | X_k^n = f_k(m_k) ∀k ∈ [K]] ≤ ε
4 / 30
Prior art: Point-to-point (P2P) Gaussian Channel (K = 1)
M*(n, ε, P) ≜ max{M : an (n, M, ε, P) code exists}
log M*(n, ε, P) = nC(P) − √(nV(P)) Q⁻¹(ε) + (1/2) log n + O(1)
C(P) = (1/2) log(1 + P) (capacity), V(P) = P(P + 2) / (2(1 + P)²) (dispersion), (1/2) log n: third-order term
Achievability (≥): [Tan-Tomamichel 15']; Converse (≤): [Polyanskiy et al. 10']
5 / 30
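The normal approximation on this slide is easy to evaluate numerically. The sketch below is a hedged illustration (not the authors' code): it computes nC(P) − √(nV(P)) Q⁻¹(ε) + (1/2) log n in nats, dropping the O(1) term, using only the Python standard library.

```python
# Illustrative sketch of the P2P normal approximation (nats); the O(1) term is dropped.
import math
from statistics import NormalDist

def C(P):
    """Capacity of the P2P Gaussian channel: (1/2) log(1 + P)."""
    return 0.5 * math.log(1 + P)

def V(P):
    """Dispersion of the P2P Gaussian channel: P(P+2) / (2(1+P)^2)."""
    return P * (P + 2) / (2 * (1 + P) ** 2)

def Q_inv(eps):
    """Inverse of the Gaussian tail function Q."""
    return NormalDist().inv_cdf(1 - eps)

def log_M_approx(n, eps, P):
    return n * C(P) - math.sqrt(n * V(P)) * Q_inv(eps) + 0.5 * math.log(n)

# Example: blocklength 1000, error probability 1e-3, SNR P = 1
print(log_M_approx(1000, 1e-3, 1.0))  # close to, but visibly below, n*C(P) ≈ 346.6
```

Note how much of the capacity term the dispersion penalty √(nV(P)) Q⁻¹(ε) eats at short blocklengths.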
The Lesson from P2P Channel
We can achieve log M*(n, ε, P) = nC(P) − √(nV(P)) Q⁻¹(ε) + (1/2) log n + O(1) by using a spherical codebook and a maximum-likelihood decoder.
6 / 30
Motivation (MAC)
We are interested in refining the achievable third-order term for the Gaussian MAC in the finite blocklength regime.
For the point-to-point case, the third-order term +(1/2) log n is known to be optimal.
We want to show that +(1/2) log n · 1 is achievable for the Gaussian MAC.
7 / 30
Gaussian MAC - Main Result
Theorem: For any ε ∈ (0, 1) and any P_1, P_2 > 0, an (n, M_1, M_2, ε, P_1, P_2) code for the two-transmitter Gaussian MAC exists provided that
(log M_1, log M_2, log M_1 M_2) ∈ n C(P_1, P_2) − √n Q_inv(V(P_1, P_2), ε) + (1/2) log n · 1 + O(1) · 1
C(P_1, P_2) = (C(P_1), C(P_2), C(P_1 + P_2)): capacity vector
V(P_1, P_2): 3 × 3 positive-definite dispersion matrix
Q_inv(V, ε): multidimensional counterpart of the inverse Q-function,
Q_inv(V, ε) ≜ {z ∈ R^d : P[Z ≤ z] ≥ 1 − ε}, where Z ∼ N(0, V) and ≤ is component-wise
8 / 30
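The set Q_inv(V, ε) has no closed form for a general covariance V, but membership of a candidate point z can be estimated by Monte Carlo, i.e. by checking whether P[Z ≤ z] ≥ 1 − ε component-wise. The sketch below is a hypothetical illustration; the 3 × 3 matrix used is an arbitrary positive-definite example, not the paper's V(P_1, P_2).

```python
# Monte Carlo membership test for Q_inv(V, eps) = {z : P[Z <= z] >= 1 - eps},
# Z ~ N(0, V), with "<=" taken component-wise.
import numpy as np

def in_Qinv(z, V, eps, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(V)  # so that (standard normal) @ L.T has covariance V
    Z = rng.standard_normal((n_samples, V.shape[0])) @ L.T
    p = np.mean(np.all(Z <= z, axis=1))  # empirical P[Z <= z]
    return p >= 1 - eps

# Toy positive-definite "dispersion" matrix (illustrative only)
V = np.array([[1.0, 0.3, 0.5],
              [0.3, 1.0, 0.5],
              [0.5, 0.5, 1.5]])
print(in_Qinv(np.array([4.0, 4.0, 5.0]), V, eps=1e-3))  # deep in the set
print(in_Qinv(np.zeros(3), V, eps=1e-3))                # the origin is not
```

For d = 1 this reduces to the familiar half-line {x : x ≥ Q⁻¹(ε)} shown on the next slide.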
What does Q_inv(V, ε) look like?
Q_inv(V, ε) ≜ {z ∈ R^d : P[Z ≤ z] ≥ 1 − ε}
Scalar case: Q_inv(1, ε) = {x : x ≥ Q⁻¹(ε)}
[Figure: PDF of N(0, 1) with shaded area 0.95; boundary of the 2-D set where P[N(0, V) ≤ (z_1, z_2)] = 0.95]
9 / 30
Example
Achievable region for P_1 = 2, P_2 = 1 and ε = 10⁻³:
[Figure: achievable (R_1, R_2) region]
10 / 30
Comparison with the literature
Our third-order term improves!
n C(P_1, P_2) − √n Q_inv(V(P_1, P_2), ε) + (1/2) log n · 1 + O(1) · 1
> third-order term −O(n^{1/4}) · 1 [MolavianJazi-Laneman 15']
> third-order term −O(n^{1/4} log n) · 1 [Scarlett et al. 15']
Proof techniques:
Our bound: spherical codebook + maximum-likelihood decoder
[MolavianJazi-Laneman 15']: spherical codebook + threshold decoder
[Scarlett et al. 15']: constant-composition codes + quantization
11 / 30
Encoding and decoding
Encoding: independently generate M_k codewords for k = 1, 2 from a spherical codebook. [Shannon 49'] used a spherical codebook to bound the error exponent of the P2P Gaussian channel.
Decoding: mutual information density
ı_{1,2}(x_1^n, x_2^n; y^n) ≜ log [ P_{Y^n | X_1^n, X_2^n}(y^n | x_1^n, x_2^n) / P_{Y^n}(y^n) ]
Maximum-likelihood (ML) decoder: g(y^n) = argmax_{m_1, m_2} ı_{1,2}(f_1(m_1), f_2(m_2); y^n)
12 / 30
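For the Gaussian MAC the information density has a simple closed form once both conditional and output densities are Gaussian. The sketch below is an illustration, not the paper's construction: it uses i.i.d. Gaussian inputs (rather than the spherical codebook) and the Gaussian reference output N(0, (1 + P_1 + P_2) I_n) that appears later in the talk, and shows the per-symbol density concentrating near C(P_1 + P_2).

```python
# Illustrative evaluation of the (modified) mutual information density for the
# Gaussian MAC with Gaussian reference output Q_Y ~ N(0, (1+P1+P2) I_n); nats.
import numpy as np

def info_density_12(x1, x2, y, P1, P2):
    s = 1.0 + P1 + P2
    n = len(y)
    # log N(y; x1+x2, I_n) - log N(y; 0, s*I_n), summed over coordinates
    return (0.5 * n * np.log(s)
            - 0.5 * np.sum((y - x1 - x2) ** 2)
            + 0.5 * np.sum(y ** 2) / s)

rng = np.random.default_rng(1)
n, P1, P2 = 500, 2.0, 1.0
x1 = rng.standard_normal(n) * np.sqrt(P1)   # i.i.d. inputs: a simplifying assumption
x2 = rng.standard_normal(n) * np.sqrt(P2)
y = x1 + x2 + rng.standard_normal(n)        # unit-variance Gaussian noise
# By the law of large numbers this concentrates near C(P1+P2) = 0.5*log(4) ≈ 0.693
print(info_density_12(x1, x2, y, P1, P2) / n)
```

The ML decoder on the slide simply maximizes this quantity over all message pairs.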
Main Tool: Random-Coding Union (RCU) Bound
P2P case: proved in [Polyanskiy et al. 10']. Using the ML decoder, for a general MAC:
Theorem (New RCU bound for MAC): For arbitrary input distributions P_{X_1} and P_{X_2}, there exists an (M_1, M_2, ε)-MAC code such that
ε ≤ E[ min{ 1, (M_1 − 1) P[ı_1(X̄_1; Y | X_2) ≥ ı_1(X_1; Y | X_2) | X_1, X_2, Y]
+ (M_2 − 1) P[ı_2(X̄_2; Y | X_1) ≥ ı_2(X_2; Y | X_1) | X_1, X_2, Y]
+ (M_1 − 1)(M_2 − 1) P[ı_{1,2}(X̄_1, X̄_2; Y) ≥ ı_{1,2}(X_1, X_2; Y) | X_1, X_2, Y] } ],
where P_{X_1, X̄_1, X_2, X̄_2, Y}(x_1, x̄_1, x_2, x̄_2, y) = P_{X_1}(x_1) P_{X_1}(x̄_1) P_{X_2}(x_2) P_{X_2}(x̄_2) P_{Y | X_1 X_2}(y | x_1, x_2)
Crucial in refining the third-order term to (1/2) log n
13 / 30
Key Challenge
Modified mutual information density random vector:
ı̃ ≜ (ı̃_1(X_1^n; Y^n | X_2^n), ı̃_2(X_2^n; Y^n | X_1^n), ı̃_{1,2}(X_1^n, X_2^n; Y^n)),
where ı̃_{1,2}(x_1^n, x_2^n; y^n) ≜ log [ P_{Y^n | X_1^n, X_2^n}(y^n | x_1^n, x_2^n) / Q_{Y^n}(y^n) ] with Q_{Y^n} ∼ N(0, (1 + P_1 + P_2) I_n)
Lemma (New Berry-Esséen type bound): Let D ⊂ R³ be a convex, Borel-measurable set and Z ∼ N(0, V(P_1, P_2)). Then
| P[ (1/√n)(ı̃ − n C(P_1, P_2)) ∈ D ] − P[Z ∈ D] | ≤ C_0 / √n
[MolavianJazi-Laneman 15', Prop. 1] showed a weaker O(n^{-1/4}) upper bound using a CLT for functions ⟹ affects the third-order term
We use a different technique to prove this lemma.
14 / 30
Proof of Lemma
Problem: We cannot apply the Berry-Esséen theorem directly, since X_1^n and X_2^n are not i.i.d.
Solution: Conditioned on the inner product ⟨X_1^n, X_2^n⟩ = q, ı̃ is a sum of independent random vectors.
Apply the multidimensional Berry-Esséen theorem to that sum of independent vectors after conditioning on ⟨X_1^n, X_2^n⟩, then integrate the resulting probabilities over q.
15 / 30
Extension to K transmitters (P_k = P, M_k = M for all k ∈ [K])
Theorem: For any ε ∈ (0, 1) and P > 0, an (n, M, ε, P) code for the K-transmitter Gaussian MAC exists provided that
K log M ≤ nC(KP) − √(n(V(KP) + V_cr(K, P))) Q⁻¹(ε) + (1/2) log n + O(1)
V_cr(K, P) is the cross-dispersion term: V_cr(K, P) = K(K − 1)P² / (2(1 + KP)²)
16 / 30
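The symmetric K-transmitter bound is straightforward to evaluate. The sketch below (a hedged illustration, dropping the O(1) term) computes the per-transmitter log M in nats, including the cross-dispersion term V_cr(K, P) defined above.

```python
# Per-transmitter achievable log M for the symmetric K-transmitter Gaussian MAC
# (nats); the O(1) term in the theorem is dropped, so this is illustrative only.
import math
from statistics import NormalDist

def C(P):
    return 0.5 * math.log(1 + P)

def V(P):
    return P * (P + 2) / (2 * (1 + P) ** 2)

def V_cr(K, P):
    """Cross-dispersion term from the theorem."""
    return K * (K - 1) * P ** 2 / (2 * (1 + K * P) ** 2)

def log_M_per_tx(n, eps, K, P):
    Qinv = NormalDist().inv_cdf(1 - eps)
    total = (n * C(K * P)
             - math.sqrt(n * (V(K * P) + V_cr(K, P))) * Qinv
             + 0.5 * math.log(n))
    return total / K

print(log_M_per_tx(n=1000, eps=1e-3, K=3, P=1.0))
```

Setting K = 1 recovers the P2P normal approximation from earlier in the talk.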
Talk Plan We present two achievability results for 1 Gaussian Multiple Access Channel (MAC) 2 Gaussian Random Access Channel (RAC) 17 / 30
Random access
Standard random-access solutions such as ALOHA, treating interference as noise, or orthogonalization (TDMA/FDMA) perform poorly.
We want to design a random-access communication strategy that does not require knowledge of transmitter activity and still incurs no performance loss compared to the k-transmitter MAC.
18 / 30
Rateless Gaussian RAC Communication There are K transmitters in total. A subset of those with size k are active. Nobody knows the active transmitters. No probability of being active is assigned to transmitters. 19 / 30
Rateless Gaussian RAC Communication
Identical encoding and list decoding as in [Polyanskiy 17']
Average probability of error ≤ ε_k for k = 0, ..., K
New: Gaussian RAC with maximal power constraint: ‖f(m)^{n_k}‖² ≤ n_k P for all k and m
20 / 30
Rateless Gaussian RAC Communication Rateless coding scheme that we defined in the context of DMCs [Effros, Kostina, Yavas, “Random access channel coding in the finite blocklength regime", 18’] Predetermined decoding times: n 0 , . . . , n K 21 / 30
Communication Process 22 / 30
RAC Code Definition
Definition: An ({(n_k, ε_k)}_{k=0}^K, M, P)-RAC code consists of
- an encoding function f
- decoding functions {g_k}_{k=0}^K
such that the maximal power constraints are satisfied: ‖f(m)^{n_k}‖² ≤ n_k P for m ∈ [M], k ∈ {1, ..., K}, and
(1/M^k) Σ_{m_[k] ∈ [M]^k} P[ {g_k(Y^{n_k}) ≠ m_[k]} ∪ ⋃_{t<k} {g_t(Y^{n_t}) ≠ e} | X_[k]^{n_k} = f(m_[k])^{n_k} ] ≤ ε_k
(the average probability of error in decoding k messages at time n_k)
23 / 30
Gaussian RAC - Main Result
Theorem: For any K < ∞, ε_k ∈ (0, 1), and any P > 0, an (M, {(n_k, ε_k)}_{k=0}^K, P) code for the Gaussian RAC exists provided that
k log M ≤ n_k C(kP) − √(n_k (V(kP) + V_cr(k, P))) Q⁻¹(ε_k) + (1/2) log n_k + O(1) for all k ∈ [K]
The same first-, second-, and third-order terms as for the Gaussian MAC with a known number of transmitters!
24 / 30
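One way to read this theorem is as a schedule of decoding times: given a fixed message size M, the sufficient condition determines how long the decoder must wait when k transmitters are active. The sketch below is a hedged illustration (the O(1) term is dropped, so the numbers are indicative only): it searches for the smallest n_k satisfying the condition for each k.

```python
# Smallest decoding time n_k satisfying (approximately) the theorem's condition
# k*log M <= n_k*C(kP) - sqrt(n_k*(V(kP)+V_cr(k,P)))*Qinv(eps_k) + 0.5*log(n_k);
# the O(1) term is dropped, so this is illustrative only. All logs in nats.
import math
from statistics import NormalDist

def C(P): return 0.5 * math.log(1 + P)
def V(P): return P * (P + 2) / (2 * (1 + P) ** 2)
def V_cr(k, P): return k * (k - 1) * P ** 2 / (2 * (1 + k * P) ** 2)

def min_decoding_time(log_M, k, P, eps_k):
    Qinv = NormalDist().inv_cdf(1 - eps_k)
    n = 1
    while True:
        rhs = (n * C(k * P)
               - math.sqrt(n * (V(k * P) + V_cr(k, P))) * Qinv
               + 0.5 * math.log(n))
        if rhs >= k * log_M:
            return n
        n += 1

# More active transmitters => later decoding time (log M = 100 nats per message)
print([min_decoding_time(100.0, k, P=1.0, eps_k=1e-3) for k in (1, 2, 3)])
```

The increasing decoding times mirror the rateless structure of the scheme: the decoder simply waits longer when more transmitters turn out to be active.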
Gaussian RAC - Encoding
To satisfy the maximal power constraints for all decoding times simultaneously, we set the input distribution as:
[Figure: choice of input distribution]
25 / 30
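The input distribution itself is shown on the slide and not reproduced here. As a hypothetical sketch of the idea (not necessarily the paper's exact construction), one way to meet every prefix power constraint with equality is to draw each incremental sub-block uniformly on its own power sphere, so that every prefix f(m)^{n_k} uses exactly n_k P power.

```python
# Hypothetical rateless encoding sketch: draw each incremental sub-block of
# length n_k - n_{k-1} uniformly on a sphere of radius sqrt((n_k - n_{k-1})*P),
# so every prefix satisfies ||f(m)^{n_k}||^2 = n_k * P exactly.
import numpy as np

def rateless_codeword(decoding_times, P, rng):
    blocks, prev = [], 0
    for n_k in decoding_times:
        d = n_k - prev
        g = rng.standard_normal(d)
        blocks.append(g * np.sqrt(d * P) / np.linalg.norm(g))  # uniform on sphere
        prev = n_k
    return np.concatenate(blocks)

rng = np.random.default_rng(0)
times = [100, 220, 360]  # example decoding times n_1 < n_2 < n_3
x = rateless_codeword(times, P=1.0, rng=rng)
for n_k in times:
    print(n_k, np.linalg.norm(x[:n_k]) ** 2)  # each prefix has power n_k * P
```

Under this construction the power constraint on the next slide's feasible set is met with equality at every decoding time.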
Feasible codeword set for Gaussian RAC
n_1 = 2, n_2 = 3, P = 1:
[Figure: feasible codeword set]
Footnote: If we use this input distribution for the Gaussian MAC, we achieve the same first three order terms.
26 / 30