

1. Type-Based Distributed Estimation over Multiaccess Channels
Gökhan Mergen
Joint work with Prof. Lang Tong
School of Electrical and Computer Engineering, Cornell University, Ithaca, NY

2. Motivation: Sensor Networks
• Sensors observe a physical phenomenon and transmit it to a fusion center.
• Fusion centers estimate the field parameters and deliver the estimate.

3. Problem Formulation
[Figure: sensors 1, …, n with observations X_1, …, X_n report to a fusion center; the pmf p_θ(1), p_θ(2), p_θ(3) is sketched over x = 1, 2, 3.]
• Estimate θ ∈ R.
• Observations X_1, …, X_n are i.i.d. conditioned on θ.
Assumptions:
• X_i takes values in {1, …, k}.
• X_i ∼ p_θ, where p_θ is a probability mass function.

4. Problem Formulation Cont'd
[Figure: sensor i's waveforms s_{i,1}(t), s_{i,2}(t), s_{i,3}(t); sensors transmit through channel gains h_1, …, h_n to the fusion center.]
• Sensor i has a set of k channel waveforms s_{i,1}, …, s_{i,k}.
• Upon observing X_i, it transmits s_{i,X_i}.
• Received signal: z = Σ_{i=1}^n h_i s_{i,X_i} + w.
• Channel gains h_1, …, h_n ∈ R are i.i.d.
• The noise w is white N(0, σ²).
• Energy constraint: ||s_{i,j}||² ≤ E.

5. Problem Formulation Cont'd
Estimator:
• The parameter θ is estimated based on the received signal z.
• θ̂(z) is the estimate; θ̂ is the estimator.
Objective:
• Design the channel waveforms and the estimator to minimize the mean square error (MSE) E{(θ̂ − θ)²}.

6. Classical Approach
• Collision among users is "bad!"
• Solution: orthogonalize transmissions.
• Can be done by time/frequency/code division.
Advantages:
• Allows us to use the standard layered approach.
• Well understood; rather easy to implement.
Caveats:
• The bandwidth requirement is significant for large n.
• Neglects the dependency among sensor data.

7. Proposed Approach: TBMA
• Nodes transmit simultaneously with Pulse Position Modulation (TBMA).
• s_{i,X_i} = √E δ_{X_i}, where δ_1, …, δ_k are orthonormal pulses.
• When all h_i = 1, z = histogram + noise.
Advantages:
• Uses much less bandwidth/time than orthogonal approaches.
• The MSE with TBMA is asymptotically optimal as n → ∞.
Remark:
• Any set of orthonormal δ_1, …, δ_k can be used.
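The TBMA reception model above can be sketched numerically. This is a minimal simulation, not the authors' code: the received signal is represented by its projections onto the k orthonormal pulses, so `z` is a length-k vector, and with all h_i = 1 it equals √E times the data histogram plus white noise.

```python
import numpy as np

def tbma_receive(x, E=1.0, sigma=0.1, h=None, rng=None):
    """Simulate the TBMA received signal projected onto the k
    orthonormal pulses delta_1, ..., delta_k (z is a length-k vector).

    x : sensor observations, each in {0, ..., k-1}
    h : per-sensor channel gains (defaults to all ones)
    """
    rng = np.random.default_rng() if rng is None else rng
    k = int(x.max()) + 1
    h = np.ones(len(x)) if h is None else h
    z = np.zeros(k)
    for xi, hi in zip(x, h):
        z[xi] += hi * np.sqrt(E)          # sensor i adds sqrt(E)*delta_{x_i}
    z += rng.normal(0.0, sigma, size=k)   # white noise per pulse dimension
    return z

# With all h_i = 1 and E = 1, z is the histogram of the data plus noise.
rng = np.random.default_rng(0)
x = rng.choice(3, size=1000, p=[0.2, 0.3, 0.5])
z = tbma_receive(x, E=1.0, sigma=0.1, rng=rng)
print(z / 1000)   # approximately p_theta = (0.2, 0.3, 0.5)
```

Note the bandwidth point from the slide: the channel use is k pulse dimensions regardless of n, whereas an orthogonal scheme needs n slots.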

8. Outline
• Introduction
• Performance analysis
  ▸ Fundamental limits
  – TBMA with deterministic h_i
  – TBMA with random h_i
  – Orthogonal allocation
• Transmitter channel side information
• Conclusion

9. Fundamental Limits
Cramér-Rao Bound: Let θ̂ be an unbiased estimator based on X_1, …, X_n. Then

  E{(θ̂ − θ)²} ≥ 1/(nI(θ)),   (1)

where I(θ) = Σ_{i=1}^k (dp_θ(i)/dθ)² / p_θ(i) is the Fisher information in X_i.

Asymptotic Efficiency: There exists a class of estimators based on X_1, …, X_n satisfying

  θ̂ ≐ N(θ, 1/(nI(θ))) for large n.   (2)

Notation "≐" means θ̂ → θ in probability and √n(θ̂ − θ) → N(0, 1/I(θ)) in distribution as n → ∞.
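The Fisher information sum in (1) is easy to evaluate for a concrete family. The sketch below uses a Bernoulli(θ) family (an illustrative choice, matching the slides' numerical examples), for which I(θ) = 1/(θ(1−θ)).

```python
import numpy as np

def fisher_information(p, dp):
    """I(theta) = sum_i (dp_theta(i)/dtheta)^2 / p_theta(i)
    for a pmf p over {1, ..., k} with elementwise derivative dp."""
    p, dp = np.asarray(p, float), np.asarray(dp, float)
    return np.sum(dp ** 2 / p)

# X_i ~ Bernoulli(theta): p_theta = (1 - theta, theta), dp/dtheta = (-1, 1).
theta = 0.8
I = fisher_information([1 - theta, theta], [-1.0, 1.0])
print(I)               # 6.25 = 1/(theta*(1-theta))

n = 100
crb = 1 / (n * I)      # Cramer-Rao bound (1) on the MSE with n sensors
```

For θ = 0.8 this gives I(θ) = 6.25, the horizontal "Fisher information in X_i" level in the Bernoulli(0.8) plots later in the deck.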

10. Fundamental Limits Cont'd
Key observations:
• To achieve asymptotic efficiency, an estimator need not have access to all data X_1, …, X_n.
• Knowledge of a sufficient statistic is actually enough.

11. Outline
• Introduction
• Performance analysis
  – Fundamental limits
  ▸ TBMA with deterministic h_i
  – TBMA with random h_i
  – Orthogonal allocation
• Transmitter channel side information
• Conclusion

12. TBMA with Deterministic h_i
• A sufficient statistic is the empirical measure: p̃ := histogram / n.
• Scale the received signal:

  y := z/(√E n) = p̃ + N(0, σ²/(n²E)).   (∗)

Questions:
• How bad is (∗)?
• What estimator should be used?

13. TBMA with Deterministic h_i Cont'd
Answers:
(i) p̃ ≐ N(p_θ, (1/n)Σ) for large n, where Σ = Diag(p_θ) − p_θ p_θᵀ.
  ⇒ y = p̃ + N(0, σ²/(n²E)) ≐ N(p_θ, (1/n)Σ).
(ii) The maximum-likelihood estimator (MLE) based on y is prohibitive. Let y = N(p_θ, (1/n)Σ); then its pdf is

  f(y_1, …, y_k | θ) = exp( −(n/2) [ Σ_{i=1}^k (p_θ(i) − y_i)²/p_θ(i) + log Π_{i=1}^k p_θ(i) ] ) g(y).

Given y, minimize Σ_{i=1}^k (p_θ(i) − y_i)²/p_θ(i) for the asymptotic MLE.

14. TBMA with Deterministic h_i Cont'd
Theorem 1: The proposed estimator θ̂ minimizing

  M(θ) := Σ_{i=1}^k (p_θ(i) − y_i)²/p_θ(i)   (3)

with respect to θ ∈ R satisfies θ̂ ≐ N(θ, 1/(nI(θ))) for large n.
Remarks:
• The asymptotic performance of TBMA is as if the fusion center had direct access to the X_i's.
• No unbiased θ̂, even one with direct access to the X_i's, can do better than this.
• The theorem holds independent of the noise power σ².
• The σ² determines the speed of convergence to the asymptotic MSE.
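The estimator of Theorem 1 can be checked by Monte Carlo. The sketch below uses an assumed Bernoulli(θ) family and a plain grid search to minimize M(θ) (any 1-D optimizer would do); the empirical MSE should sit near the Cramér-Rao bound θ(1−θ)/n.

```python
import numpy as np

def estimate_theta(y, pmf, grid):
    """Minimize M(theta) = sum_i (p_theta(i) - y_i)^2 / p_theta(i)
    over a grid of candidate theta values."""
    M = [np.sum((pmf(t) - y) ** 2 / pmf(t)) for t in grid]
    return grid[int(np.argmin(M))]

# Bernoulli family: p_theta = (1 - theta, theta).
pmf = lambda t: np.array([1 - t, t])
rng = np.random.default_rng(1)
theta, n, E, sigma = 0.8, 200, 1.0, 0.1
grid = np.linspace(0.01, 0.99, 981)

errs = []
for _ in range(500):
    x = rng.random(n) < theta                        # sensor observations
    counts = np.array([n - x.sum(), x.sum()])        # noiseless histogram
    y = (np.sqrt(E) * counts + rng.normal(0, sigma, 2)) / (np.sqrt(E) * n)
    errs.append((estimate_theta(y, pmf, grid) - theta) ** 2)

mse = np.mean(errs)
crb = theta * (1 - theta) / n    # 1/(n I(theta)) for the Bernoulli family
print(mse, crb)                  # MSE close to the bound, as Theorem 1 predicts
```

Consistent with the remarks, making σ larger slows convergence but does not change the asymptotic MSE level.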

15. Outline
• Introduction
• Performance analysis
  – Fundamental limits
  – TBMA with deterministic h_i
  ▸ TBMA with random h_i
  – Orthogonal allocation
• Transmitter channel side information
• Conclusion

16. TBMA with Random h_i
• Assume h_i has non-zero mean h̄ := E(h_i) and variance σ_h² := Var(h_i).
• Define y = z/(√E h̄ n).
• Observe y ≐ N(p_θ, (1/n)Σ), where

  Σ = (1 + σ_h²/h̄²) Diag(p_θ) − p_θ p_θᵀ.

  f(y | θ) ∝ exp( −[ n(y − p_θ)ᵀ Σ⁻¹ (y − p_θ) + log|Σ| ] / 2 ).

17. TBMA with Random h_i
Theorem 2: The estimator θ̂ minimizing

  M(θ) = (y − p_θ)ᵀ Σ⁻¹ (y − p_θ)   (4)

with respect to θ ∈ R satisfies θ̂ ≐ N(θ, (1 + σ_h²/h̄²)/(nI(θ))) for large n.
Remarks:
• The performance loss due to channel randomness is ∝ (1 + σ_h²/h̄²).
• When h̄ ≈ 0, the loss is significant.
• In the extreme case h̄ = 0, the MSE does not go to zero even as n → ∞. This is true for all unbiased θ̂.
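The covariance inflation (1 + σ_h²/h̄²) behind Theorem 2 can be checked directly. This sketch uses assumed parameters (Gaussian h_i with h̄ = 1, σ_h = 1, a two-letter alphabet) and compares the empirical variance of one coordinate of y against Σ_{jj}/n = ((1 + σ_h²/h̄²) p_θ(j) − p_θ(j)²)/n.

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.2, 0.8])            # p_theta for some fixed theta
n, E, sigma_w = 200, 1.0, 0.1
hbar, sigma_h = 1.0, 1.0            # channel mean and standard deviation

ys = []
for _ in range(5000):
    x = rng.choice(2, size=n, p=p)
    h = rng.normal(hbar, sigma_h, size=n)
    # z_j collects sqrt(E) h_i from every sensor that observed symbol j
    z = np.array([np.sqrt(E) * h[x == j].sum() for j in range(2)])
    z += rng.normal(0, sigma_w, size=2)
    ys.append(z / (np.sqrt(E) * hbar * n))   # y = z/(sqrt(E) hbar n)

var_emp = np.var(np.array(ys)[:, 1])
var_thy = ((1 + (sigma_h / hbar) ** 2) * p[1] - p[1] ** 2) / n
print(var_emp, var_thy)   # empirical variance of y_2 matches Sigma_22 / n
```

With these numbers the factor is 1 + σ_h²/h̄² = 2, i.e. channel randomness doubles the asymptotic MSE relative to the deterministic case.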

18. Outline
• Introduction
• Performance analysis
  – Fundamental limits
  – TBMA with deterministic h_i
  – TBMA with random h_i
  ▸ Orthogonal allocation
• Transmitter channel side information
• Conclusion

19. Performance of Orthogonal Allocation
• Let s_1, …, s_k ∈ Cᵐ, ||s_i||² ≤ 1, be constellation points.
• Sensor i transmits the waveform corresponding to √E s_{X_i}.
• Signal received from the i'th sensor:

  z(i) = h_i √E s_{X_i} + v(i),   (5)

where v(i) ∼ N(0, σ²I).
• When h_i = 1, the z(1), …, z(n) are i.i.d. with Gaussian mixture density:

  z(i) ∼ Σ_{j=1}^k p_j N(√E s_j, σ²I).

20. Performance of Orthogonal Allocation Cont'd
Asymptotic Performance:
• For any unbiased θ̂,

  E{(θ̂ − θ)²} ≥ 1/(nJ(θ)),   (6)

where J(θ) = E_{z(i)}[ (d log f(z(i))/dθ)² ] is the Fisher information in z(i).
• The MLE based on z(1), …, z(n) satisfies θ̂ ≐ N(θ, 1/(nJ(θ))).
Remarks:
• For the best asymptotic performance, J(θ) should be maximized with respect to s_1, …, s_k.
• For h_i = 1 and k = 2, the antipodal constellation maximizes J(θ).
• In general, the optimal constellation depends on the family {p_θ : θ ∈ R}.
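J(θ) in (6) has no closed form for the Gaussian mixture, but it is straightforward to estimate by Monte Carlo. This sketch uses assumed parameters: a scalar antipodal (BPSK) constellation s = ±1, X_i ∼ Bernoulli(θ), and SNR = E/σ² = 0 dB. Since z(i) is a noisy function of X_i, J(θ) must fall below I(θ) = 1/(θ(1−θ)).

```python
import numpy as np

rng = np.random.default_rng(3)
theta, E, sigma = 0.8, 1.0, 1.0            # SNR = E/sigma^2 = 0 dB

def gauss(z, m, s):
    return np.exp(-(z - m) ** 2 / (2 * s * s)) / (np.sqrt(2 * np.pi) * s)

# Sample z(i) from the mixture (1-theta) N(-sqrt(E), sigma^2)
#                              + theta   N(+sqrt(E), sigma^2).
x = rng.random(200000) < theta
z = np.where(x, np.sqrt(E), -np.sqrt(E)) + rng.normal(0, sigma, x.size)

# Score: d/dtheta log f(z) = (N(+sqrt(E)) - N(-sqrt(E))) / f(z).
fp = gauss(z, np.sqrt(E), sigma)
fm = gauss(z, -np.sqrt(E), sigma)
f = theta * fp + (1 - theta) * fm
J = np.mean(((fp - fm) / f) ** 2)          # J(theta) = E[score^2]
I = 1 / (theta * (1 - theta))              # Fisher information in X_i
print(J, I)                                # J(theta) < I(theta) = 6.25
```

Sweeping `sigma` reproduces the qualitative shape of the J(θ)-vs-SNR curves on the next slide: J(θ) climbs toward I(θ) as the noise vanishes.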

21. Performance of Orthogonal Allocation Cont'd
[Figure: Fisher information J(θ) in z(i) vs. SNR (dB), for Bernoulli(0.8) (left) and Poisson(1) (right); curves for the simplex, orthogonal, and BPSK constellations are compared against I(θ), the Fisher information in X_i.]

22. A Numerical Example
[Figure: Mean squared error vs. number of nodes (1 to 64), identical channels (h_i = 1), Bernoulli(0.8); curves for TDMA and TBMA at SNR = −10 dB and 0 dB, direct access + ML, and the asymptotic performance.]

23. Numerical Example - 2
[Figure: Mean squared error vs. number of nodes (1 to 64) under Rician fading, E/σ² = 0 dB, Bernoulli(0.8); curves for TBMA and TDMA at K = 0.01 and K = 1, and direct access + ML.]
• The channel is Rician distributed, i.e.,

  h_i = √(K/(K+1)) + √(1/(K+1)) · CN(0,1),

where K > 0 is a deterministic number (K = 0 ⇒ Rayleigh).
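The Rician model above is easy to sanity-check numerically. A minimal sketch (not the authors' simulation code): draw h_i for K = 1, as in the example, and verify E[h_i] = √(K/(K+1)) and Var(h_i) = 1/(K+1), which through Theorem 2's factor 1 + Var(h)/|E h|² gives a loss of 1 + 1/K.

```python
import numpy as np

def rician(K, size, rng):
    """h_i = sqrt(K/(K+1)) + sqrt(1/(K+1)) * CN(0,1)."""
    los = np.sqrt(K / (K + 1))                    # deterministic LOS part
    cn = (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)
    return los + np.sqrt(1 / (K + 1)) * cn        # CN(0,1) scatter part

rng = np.random.default_rng(4)
h = rician(1.0, 100000, rng)      # K = 1, as in the example
hbar = np.abs(h.mean())           # -> sqrt(1/2) ~ 0.707
var = np.var(h)                   # -> 1/(K+1) = 0.5
print(hbar, var)
# Loss factor 1 + var/hbar^2 = 1 + 1/K = 2 at K = 1; as K -> 0 (Rayleigh)
# the mean vanishes and the loss blows up, matching the K = 0.01 curves.
```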

24. Outline
• Introduction
• Performance analysis
  – Fundamental limits
  – TBMA with deterministic h_i
  – TBMA with random h_i
  – Orthogonal allocation
▸ Transmitter channel side information
• Conclusion

25. Channel Side Information (CSI) at the Transmitter
• In certain cases, the transmitter nodes may be able to learn their channel states before the transmission.
• Transmitter CSI can be utilized to solve the problem of zero-mean h_i.
• Let h_i := r_i e^{jρ_i}.
• The i'th node transmits P(r_i) e^{−jρ_i} √E δ_{X_i} in TBMA, where P(·) is a power control rule satisfying E_{r_i}[P²(r_i)] ≤ 1.
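To illustrate why this fixes the zero-mean problem: after the pre-rotation e^{−jρ_i}, the effective gain seen by the fusion center is P(r_i) r_i, which is real and nonnegative, hence has nonzero mean. The slides do not specify P(·); the truncated channel-inversion rule below is a hypothetical choice, normalized empirically to meet E[P²(r_i)] ≤ 1.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000
# Rayleigh channels: CN(0,1), so E[h_i] = 0 and plain TBMA fails (Theorem 2).
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
r, rho = np.abs(h), np.angle(h)

def power_rule(r, r_min=0.3):
    """Hypothetical rule: invert the channel when r >= r_min, else stay
    silent; scaled so that the power constraint E[P^2] = 1 holds."""
    P = np.where(r >= r_min, 1.0 / np.maximum(r, r_min), 0.0)
    return P / np.sqrt(np.mean(P ** 2))

P = power_rule(r)
effective = P * np.exp(-1j * rho) * h   # gain seen by the fusion center
# effective = P(r_i) * r_i: real, nonnegative, and with nonzero mean,
# even though E[h_i] = 0 -- the zero-mean obstruction disappears.
print(np.mean(effective).real)
```

The cutoff r_min trades silence probability against transmit power spent on deeply faded channels; its value here is purely illustrative.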
