Exact Expressions in Source and Channel Coding Problems Using Integral Representations
Speaker: Igal Sason
Joint work with Neri Merhav
EE Department, Technion - Israel Institute of Technology
2020 IEEE International Symposium on Information Theory (ISIT 2020), Online Virtual Conference, June 21-26, 2020
Motivation

In information-theoretic analyses, one frequently needs to calculate:
- expectations or, more generally, $\rho$-th moments, for some $\rho > 0$;
- logarithmic expectations of sums of i.i.d. positive random variables.

Commonly Used Approaches
- Resorting to bounds (e.g., Jensen's inequality).
- A modern approach for logarithmic expectations is the replica method, a popular (but non-rigorous) tool borrowed from statistical physics, where it has been used with considerable success.

Purpose of this Work
To point out an alternative approach, based on integral representations, and to demonstrate its usefulness in information-theoretic analyses.
Useful Integral Representation for the Logarithm
\[
\ln z = \int_0^\infty \frac{e^{-u} - e^{-uz}}{u} \, \mathrm{d}u, \qquad \mathrm{Re}(z) \ge 0.
\]

Proof
\[
\begin{aligned}
\ln z &= (z-1) \int_0^1 \frac{\mathrm{d}v}{1 + v(z-1)} \\
&= (z-1) \int_0^1 \int_0^\infty e^{-u[1 + v(z-1)]} \, \mathrm{d}u \, \mathrm{d}v \\
&= (z-1) \int_0^\infty e^{-u} \int_0^1 e^{-uv(z-1)} \, \mathrm{d}v \, \mathrm{d}u \\
&= \int_0^\infty \frac{e^{-u} \bigl( 1 - e^{-u(z-1)} \bigr)}{u} \, \mathrm{d}u \\
&= \int_0^\infty \frac{e^{-u} - e^{-uz}}{u} \, \mathrm{d}u.
\end{aligned}
\]
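A quick numerical sanity check of this identity with off-the-shelf quadrature (the test point $z = 5$ is an arbitrary choice):

```python
# Verify  ln z = \int_0^infty (e^{-u} - e^{-u z}) / u du  numerically at an arbitrary test point.
import numpy as np
from scipy.integrate import quad

z = 5.0  # arbitrary test point with Re(z) > 0

def integrand(u, z):
    # the integrand is finite as u -> 0 (its limit there is z - 1)
    return z - 1.0 if u == 0.0 else (np.exp(-u) - np.exp(-u * z)) / u

value, _ = quad(integrand, 0.0, np.inf, args=(z,))
print(value, np.log(z))  # both ≈ 1.6094
```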
Logarithmic Expectation
\[
\mathbb{E}\{\ln X\} = \int_0^\infty \bigl( e^{-u} - M_X(-u) \bigr) \, \frac{\mathrm{d}u}{u},
\]
where $M_X(u) := \mathbb{E}\{e^{uX}\}$ is the moment-generating function (MGF).

Logarithmic Expectation of a Sum of i.i.d. Random Variables
Let $X_1, \ldots, X_n$ be i.i.d. random variables. Then
\[
\mathbb{E}\{\ln(X_1 + \ldots + X_n)\} = \int_0^\infty \bigl( e^{-u} - M_{X_1}^n(-u) \bigr) \, \frac{\mathrm{d}u}{u}.
\]
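A minimal sketch of the last formula, assuming i.i.d. Exp(1) variables (an illustrative choice: $M_{X_1}(-u) = 1/(1+u)$, the sum is Gamma($n$,1)-distributed, and $\mathbb{E}\{\ln(X_1+\ldots+X_n)\} = \psi(n)$, the digamma function, so the output can be checked in closed form):

```python
# Check  E{ln(X_1+...+X_n)} = \int_0^infty (e^{-u} - M_{X_1}^n(-u)) / u du  for X_i ~ Exp(1).
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

n = 7  # number of i.i.d. summands (arbitrary)

def integrand(u, n):
    # M_{X_1}(-u) = 1/(1+u) for Exp(1); the integrand tends to n - 1 as u -> 0
    return float(n - 1) if u == 0.0 else (np.exp(-u) - (1.0 + u) ** (-n)) / u

value, _ = quad(integrand, 0.0, np.inf, args=(n,))
print(value, digamma(n))  # X_1+...+X_n ~ Gamma(n,1), so both ≈ psi(7) ≈ 1.8728
```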
Example 1: Logarithms of Factorials
\[
\begin{aligned}
\ln(n!) &= \sum_{k=1}^{n} \ln k \\
&= \int_0^\infty \sum_{k=1}^{n} \bigl( e^{-u} - e^{-uk} \bigr) \, \frac{\mathrm{d}u}{u} \\
&= \int_0^\infty e^{-u} \left( n - \frac{1 - e^{-un}}{1 - e^{-u}} \right) \frac{\mathrm{d}u}{u}.
\end{aligned}
\]

Example 2: Entropy of a Poisson Random Variable $N \sim \mathrm{Poisson}(\lambda)$
\[
\begin{aligned}
H(N) &= \lambda - \mathbb{E}\{N\} \ln\lambda + \mathbb{E}\{\ln N!\} \\
&= \lambda \ln\frac{e}{\lambda} + \int_0^\infty e^{-u} \left( \lambda - \frac{1 - e^{-\lambda(1-e^{-u})}}{1 - e^{-u}} \right) \frac{\mathrm{d}u}{u}.
\end{aligned}
\]
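Both examples can be checked numerically; the sketch below uses the arbitrary test values $n = 10$ and $\lambda = 3$, comparing the first integral with $\ln(10!)$ and the entropy formula with a direct evaluation of $-\sum_k P(k)\ln P(k)$:

```python
# Check the integral formulas for ln(n!) and for the entropy of N ~ Poisson(lambda).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

n, lam = 10, 3.0  # arbitrary test values

def lnfact_integrand(u, n):
    if u == 0.0:
        return 0.5 * n * (n - 1)              # limit of the integrand as u -> 0
    s = -np.expm1(-u)                         # 1 - e^{-u}, computed without cancellation
    return np.exp(-u) * (n - (-np.expm1(-n * u)) / s) / u

def poisson_integrand(u, lam):
    if u == 0.0:
        return 0.5 * lam ** 2                 # limit of the integrand as u -> 0
    s = -np.expm1(-u)                         # 1 - e^{-u}
    return np.exp(-u) * (lam - (-np.expm1(-lam * s)) / s) / u

lnfact, _ = quad(lnfact_integrand, 0.0, np.inf, args=(n,))
print(lnfact, gammaln(n + 1))                 # both ≈ ln(10!) ≈ 15.1044

tail, _ = quad(poisson_integrand, 0.0, np.inf, args=(lam,))
H_integral = lam * np.log(np.e / lam) + tail  # H(N) via the integral representation (nats)

k = np.arange(200)                            # direct evaluation of -sum_k P(k) ln P(k)
logp = -lam + k * np.log(lam) - gammaln(k + 1)
H_direct = -np.sum(np.exp(logp) * logp)
print(H_integral, H_direct)                   # both ≈ 1.93 nats
```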
$\rho$-th Moment for all $\rho \in (0,1)$
\[
\mathbb{E}\{X^\rho\} = 1 + \frac{\rho}{\Gamma(1-\rho)} \int_0^\infty \frac{e^{-u} - M_X(-u)}{u^{1+\rho}} \, \mathrm{d}u,
\]
where $\Gamma(\cdot)$ denotes Euler's Gamma function:
\[
\Gamma(u) := \int_0^\infty t^{u-1} e^{-t} \, \mathrm{d}t, \qquad u > 0.
\]

$\rho$-th Moment of the Sum of i.i.d. RVs for all $\rho \in (0,1)$
If $\{X_i\}_{i=1}^n$ are i.i.d. nonnegative real-valued random variables, then
\[
\mathbb{E}\left\{ \left( \sum_{i=1}^{n} X_i \right)^{\!\rho} \right\} = 1 + \frac{\rho}{\Gamma(1-\rho)} \int_0^\infty \frac{e^{-u} - M_{X_1}^n(-u)}{u^{1+\rho}} \, \mathrm{d}u.
\]
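A minimal numerical check, assuming $X \sim \mathrm{Exp}(1)$ (so $M_{X_1}(-u) = 1/(1+u)$ and $\mathbb{E}\{X^\rho\} = \Gamma(1+\rho)$), and a sum of $n$ such variables, which is Gamma($n$,1)-distributed with $\rho$-th moment $\Gamma(n+\rho)/\Gamma(n)$; the values $\rho = 0.6$ and $n = 5$ are arbitrary:

```python
# Check the rho-th moment formula (0 < rho < 1) for Exp(1) and for a sum of n i.i.d. Exp(1) RVs.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

rho, n = 0.6, 5  # arbitrary test values

def integrand(u, m):
    # (e^{-u} - M_{X_1}^m(-u)) / u^{1+rho}, with M_{X_1}(-u) = 1/(1+u) for Exp(1)
    return (np.exp(-u) - (1.0 + u) ** (-m)) / u ** (1.0 + rho)

def moment_via_integral(m):
    # split at u = 1 so the integrable singularity at u = 0 sits on a finite interval
    head, _ = quad(integrand, 0.0, 1.0, args=(m,))
    tail, _ = quad(integrand, 1.0, np.inf, args=(m,))
    return 1.0 + rho / gamma(1.0 - rho) * (head + tail)

print(moment_via_integral(1), gamma(1.0 + rho))           # both ≈ Gamma(1.6) ≈ 0.8935
print(moment_via_integral(n), gamma(n + rho) / gamma(n))  # both ≈ Gamma(5.6)/Gamma(5) ≈ 2.56
```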
Passage to Logarithmic Expectations
Since
\[
\ln x = \lim_{\rho \to 0} \frac{x^\rho - 1}{\rho}, \qquad x > 0,
\]
swapping the limit and the expectation (justified by the Monotone Convergence Theorem) gives
\[
\mathbb{E}\{\ln X\} = \lim_{\rho \to 0^+} \frac{\mathbb{E}\{X^\rho\} - 1}{\rho}
= \int_0^\infty \bigl( e^{-u} - M_X(-u) \bigr) \, \frac{\mathrm{d}u}{u}.
\]
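A short numerical illustration of this limit, assuming $X \sim \mathrm{Exp}(1)$, for which $\mathbb{E}\{X^\rho\} = \Gamma(1+\rho)$ and $\mathbb{E}\{\ln X\} = -\gamma$ (minus the Euler-Mascheroni constant):

```python
# (E{X^rho} - 1)/rho -> E{ln X} as rho -> 0+, illustrated for X ~ Exp(1).
import math

for rho in (0.5, 0.1, 0.01, 0.001):
    print(rho, (math.gamma(1.0 + rho) - 1.0) / rho)
# the printed values approach -0.577216..., i.e., minus the Euler-Mascheroni constant
```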
Extension to Fractional $\rho$-th Moments with $\rho > 0$
\[
\mathbb{E}\{X^\rho\}
= \frac{1}{1+\rho} \sum_{\ell=0}^{\lfloor\rho\rfloor} \frac{\alpha_\ell}{B(\ell+1,\, \rho+1-\ell)}
+ \frac{\rho \sin(\pi\rho)\, \Gamma(\rho)}{\pi}
\int_0^\infty \left( e^{-u} \sum_{j=0}^{\lfloor\rho\rfloor} \frac{(-1)^j \alpha_j u^j}{j!} - M_X(-u) \right) \frac{\mathrm{d}u}{u^{\rho+1}},
\]
where, for all $j \in \{0, 1, \ldots\}$,
\[
\alpha_j := \mathbb{E}\bigl\{ (X-1)^j \bigr\}
= \frac{1}{j+1} \sum_{\ell=0}^{j} \frac{(-1)^{j-\ell} M_X^{(\ell)}(0)}{B(\ell+1,\, j-\ell+1)},
\]
and $B(\cdot,\cdot)$ denotes the Beta function:
\[
B(u,v) := \int_0^1 t^{u-1} (1-t)^{v-1} \, \mathrm{d}t, \qquad u, v > 0.
\]
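A numerical check of this formula, assuming $X \sim \mathrm{Exp}(1)$ and $\rho = 2.5$ (both arbitrary choices; then $\lfloor\rho\rfloor = 2$, $\alpha_0 = 1$, $\alpha_1 = \mathbb{E}\{X\} - 1 = 0$, $\alpha_2 = \mathrm{Var}\{X\} = 1$, $M_X(-u) = 1/(1+u)$, and the exact answer is $\Gamma(1+\rho)$). Because the bracketed integrand loses all precision to cancellation near $u = 0$, the sketch integrates the leading Taylor terms of the numerator analytically on $(0, \delta]$ and uses quadrature only on $[\delta, \infty)$:

```python
# Check the fractional-moment formula for rho = 2.5 with X ~ Exp(1), where E{X^rho} = Gamma(1+rho).
# Here floor(rho) = 2 and alpha = (1, 0, 1), so the integrand is
#   (e^{-u} (1 + u^2/2) - 1/(1+u)) / u^{3.5}.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, gamma

rho = 2.5
alpha = [1.0, 0.0, 1.0]

def integrand(u):
    return (np.exp(-u) * (1.0 + 0.5 * u ** 2) - 1.0 / (1.0 + u)) / u ** (rho + 1.0)

# Near u = 0 the numerator behaves like u^3/3 - (17/24) u^4 (its leading Taylor terms), which is
# lost to cancellation in double precision; integrate that part analytically up to delta.
delta = 1e-4
head = (2.0 / 3.0) * np.sqrt(delta) - (17.0 / 36.0) * delta ** 1.5
tail, _ = quad(integrand, delta, np.inf)

finite_sum = sum(alpha[l] / beta(l + 1, rho + 1 - l) for l in range(3)) / (1.0 + rho)
estimate = finite_sum + rho * np.sin(np.pi * rho) * gamma(rho) / np.pi * (head + tail)

print(estimate, gamma(1.0 + rho))  # both ≈ Gamma(3.5) ≈ 3.3234
```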
Moments of Estimation Errors
Let $X_1, \ldots, X_n$ be i.i.d. random variables with an unknown expectation $\theta$ to be estimated, and consider the simple estimator
\[
\hat{\theta}_n = \frac{1}{n} \sum_{i=1}^{n} X_i.
\]
Let
\[
D_n := \bigl( \hat{\theta}_n - \theta \bigr)^2 \qquad \text{and} \qquad \rho' := \frac{\rho}{2}.
\]
Then
\[
\mathbb{E}\bigl\{ \bigl| \hat{\theta}_n - \theta \bigr|^\rho \bigr\} = \mathbb{E}\bigl\{ D_n^{\rho'} \bigr\}.
\]
Moments of Estimation Errors (Cont.)
By our formula, if $\rho > 0$ is such that $\rho/2$ is not an integer, then
\[
\mathbb{E}\bigl\{ \bigl| \hat{\theta}_n - \theta \bigr|^\rho \bigr\}
= \frac{2}{2+\rho} \sum_{\ell=0}^{\lfloor\rho/2\rfloor} \frac{\alpha_\ell}{B\bigl(\ell+1,\, \rho/2+1-\ell\bigr)}
+ \frac{\rho \sin\bigl(\tfrac{\pi\rho}{2}\bigr) \Gamma\bigl(\tfrac{\rho}{2}\bigr)}{2\pi}
\int_0^\infty \left( e^{-u} \sum_{j=0}^{\lfloor\rho/2\rfloor} \frac{(-1)^j \alpha_j u^j}{j!} - M_{D_n}(-u) \right) \frac{\mathrm{d}u}{u^{\rho/2+1}},
\]
where
\[
\alpha_j = \frac{1}{j+1} \sum_{\ell=0}^{j} \frac{(-1)^{j-\ell} M_{D_n}^{(\ell)}(0)}{B(\ell+1,\, j-\ell+1)}, \qquad j \in \{0, 1, \ldots\},
\]
\[
M_{D_n}(-u) = \frac{1}{2\sqrt{\pi u}} \int_{-\infty}^{\infty} e^{-j\omega\theta}\, \phi_{X_1}^n\Bigl(\frac{\omega}{n}\Bigr)\, e^{-\omega^2/(4u)} \, \mathrm{d}\omega, \qquad \forall\, u > 0,
\]
and $\phi_{X_1}(\omega) := \mathbb{E}\{e^{j\omega X_1}\}$ ($\omega \in \mathbb{R}$) is the characteristic function of $X_1$.
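The characteristic-function representation of $M_{D_n}(-u)$ can be sanity-checked numerically in a case where the left-hand side has a closed form. The sketch below assumes $X_i \sim \mathcal{N}(\theta, \sigma^2)$ (an illustrative choice, not the Bernoulli example that follows), for which $\hat{\theta}_n - \theta \sim \mathcal{N}(0, \sigma^2/n)$ and hence $M_{D_n}(-u) = 1/\sqrt{1 + 2u\sigma^2/n}$:

```python
# Check the characteristic-function representation of M_{D_n}(-u) for Gaussian samples,
# where M_{D_n}(-u) = E{exp(-u (hat(theta)_n - theta)^2)} = 1/sqrt(1 + 2 u sigma^2 / n).
import numpy as np
from scipy.integrate import quad

theta, sigma, n, u = 0.7, 1.3, 20, 2.0  # arbitrary test values

def phi_X1(omega):
    # characteristic function of N(theta, sigma^2)
    return np.exp(1j * omega * theta - 0.5 * sigma ** 2 * omega ** 2)

def integrand(omega):
    # real part of e^{-j omega theta} phi_{X_1}(omega/n)^n e^{-omega^2/(4u)};
    # the imaginary part integrates to zero by symmetry
    val = np.exp(-1j * omega * theta) * phi_X1(omega / n) ** n * np.exp(-omega ** 2 / (4.0 * u))
    return val.real

I, _ = quad(integrand, -np.inf, np.inf)
mgf_via_cf = I / (2.0 * np.sqrt(np.pi * u))
print(mgf_via_cf, 1.0 / np.sqrt(1.0 + 2.0 * u * sigma ** 2 / n))  # both ≈ 0.8645
```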
Moments of Estimation Errors: Example
Consider the case where $\{X_i\}_{i=1}^n$ are i.i.d. Bernoulli random variables with
\[
\mathbb{P}\{X_1 = 1\} = \theta, \qquad \mathbb{P}\{X_1 = 0\} = 1 - \theta,
\]
whose characteristic function is given by
\[
\phi_X(u) := \mathbb{E}\bigl\{ e^{juX} \bigr\} = 1 + \theta \bigl( e^{ju} - 1 \bigr), \qquad u \in \mathbb{R}.
\]

An Upper Bound via a Concentration Inequality
\[
\mathbb{E}\bigl\{ \bigl| \hat{\theta}_n - \theta \bigr|^\rho \bigr\} \le K(\rho, \theta) \cdot n^{-\rho/2},
\]
which holds for all $n \in \mathbb{N}$, $\rho > 0$ and $\theta \in [0,1]$, with
\[
K(\rho, \theta) := \rho\, \Gamma\Bigl(\frac{\rho}{2}\Bigr) \bigl( 2\theta(1-\theta) \bigr)^{\rho/2}.
\]
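A short companion sketch for this example: it computes the exact moment by enumerating the Binomial($n$, $\theta$) distribution of $\sum_i X_i$ and compares it with the bound above (using $K(\rho, \theta)$ in the form shown on this slide); the values $\rho = 1$, $n = 1000$ and $\theta = 1/4$ are chosen to match the plots that follow.

```python
# Exact E{|hat(theta)_n - theta|^rho} for Bernoulli(theta) samples, via the Binomial(n, theta)
# distribution of the sum, compared with the concentration-type bound K(rho, theta) * n^{-rho/2}.
import numpy as np
from scipy.stats import binom
from scipy.special import gamma

rho, n, theta = 1.0, 1000, 0.25  # matches the setting of the second plot below

k = np.arange(n + 1)
pmf = binom.pmf(k, n, theta)
exact = np.sum(pmf * np.abs(k / n - theta) ** rho)

K = rho * gamma(rho / 2.0) * (2.0 * theta * (1.0 - theta)) ** (rho / 2.0)
bound = K * n ** (-rho / 2.0)

print(exact, bound)  # ≈ 0.0109 versus ≈ 0.0343
```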
Moments of Estimation Errors: Plots
Figure: $\mathbb{E}\{|\hat{\theta}_n - \theta|\}$ versus its upper bound, as functions of $\theta$, with $n = 1000$.
Figure: $\mathbb{E}\{|\hat{\theta}_n - \theta|\}$ versus its upper bound, as functions of $n$, with $\theta = \frac{1}{4}$.