
Linear Prediction Analysis of Speech Sounds Berlin Chen 2003 - PowerPoint PPT Presentation



  1. Linear Prediction Analysis of Speech Sounds, Berlin Chen, 2003. References: 1. X. Huang et al., Spoken Language Processing, Chapters 5-6. 2. J. R. Deller et al., Discrete-Time Processing of Speech Signals, Chapters 4-6. 3. J. W. Picone, "Signal modeling techniques in speech recognition," Proceedings of the IEEE, September 1993, pp. 1215-1247.

  2. Linear Predictive Coefficients (LPC)
  • An all-pole filter with a sufficient number of poles is a good approximation to the vocal tract (filter) for speech signals:
    H(z) = X(z)/E(z) = 1/A(z) = 1 / (1 - Σ_{k=1}^{p} a_k z^{-k})
    where the vocal-tract parameters are a_1, a_2, ..., a_p. Equivalently,
    x[n] = Σ_{k=1}^{p} a_k x[n-k] + e[n]
    and the predicted sample is
    x̃[n] = Σ_{k=1}^{p} a_k x[n-k]
  – It predicts the current sample as a linear combination of its p past samples
  • Also known as linear predictive coding, LPC analysis, or auto-regressive (AR) modeling
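The predictor x̃[n] = Σ_k a_k x[n-k] can be sketched directly. This is an illustrative helper (the name lp_predict and the test signal are assumptions, not from the slides), given coefficients a_1..a_p that are already known:

```python
import numpy as np

def lp_predict(x, a):
    """Linear prediction x~[n] = sum_{k=1}^{p} a[k-1] * x[n-k]
    (past samples before the start of the signal are taken as zero)."""
    p = len(a)
    x_hat = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        for k in range(1, p + 1):
            if n - k >= 0:
                x_hat[n] += a[k - 1] * x[n - k]
    return x_hat

# A noiseless AR(1) signal x[n] = 0.9 x[n-1] is predicted exactly
# (zero error) once a past sample is available; only e[0] is nonzero.
x = 0.9 ** np.arange(8)
e = x - lp_predict(x, [0.9])
```

On real speech the residual e[n] is of course nonzero; it is exactly the excitation term in the model above.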

  3. Short-Term Analysis: Algebra Approach
  • Estimate the corresponding LPC coefficients as those that minimize the total short-term prediction error (minimum mean squared error) for a specific frame m (after framing/windowing):
    E_m = Σ_n e_m[n]² = Σ_n (x_m[n] - x̃_m[n])², 0 ≤ n ≤ N-1
        = Σ_n (x_m[n] - Σ_{j=1}^{p} a_j x_m[n-j])²
  • Take the derivative and set it to zero:
    ∂E_m/∂a_i = -2 Σ_n (x_m[n] - Σ_{j=1}^{p} a_j x_m[n-j]) x_m[n-i] = 0, for all 1 ≤ i ≤ p
    ⇒ Σ_n (x_m[n] - Σ_{j=1}^{p} a_j x_m[n-j]) x_m[n-i] = 0, for all 1 ≤ i ≤ p
  • Equivalently, Σ_n e_m[n] x_m[n-i] = 0 for all 1 ≤ i ≤ p: the error vector is orthogonal to the past vectors. This property will be used later on!

  4. Short-Term Analysis: Algebra Approach
  • From ∂E_m/∂a_i = 0:
    Σ_n (x_m[n] - Σ_{j=1}^{p} a_j x_m[n-j]) x_m[n-i] = 0, for all 1 ≤ i ≤ p
    ⇒ Σ_{j=1}^{p} a_j Σ_n x_m[n-i] x_m[n-j] = Σ_n x_m[n-i] x_m[n], for all 1 ≤ i ≤ p
  • Define the correlation coefficients φ_m[i, j] = Σ_n x_m[n-i] x_m[n-j]. Then
    Σ_{j=1}^{p} a_j φ_m[i, j] = φ_m[i, 0], for all 1 ≤ i ≤ p
    ⇒ Φ a = ψ
    (to be used on the next page!)

  5. Short-Term Analysis: Algebra Approach
  • The minimum error for the optimal a_j, 1 ≤ j ≤ p:
    E_m = Σ_n e_m[n]² = Σ_n (x_m[n] - x̃_m[n])² = Σ_n (x_m[n] - Σ_{j=1}^{p} a_j x_m[n-j])²
        = Σ_n x_m[n]² - 2 Σ_n x_m[n] Σ_{j=1}^{p} a_j x_m[n-j] + Σ_n (Σ_{j=1}^{p} a_j x_m[n-j]) (Σ_{k=1}^{p} a_k x_m[n-k])
  • The quadratic term Σ_n (Σ_j a_j x_m[n-j]) (Σ_k a_k x_m[n-k]) = Σ_{j=1}^{p} Σ_{k=1}^{p} a_j a_k Σ_n x_m[n-j] x_m[n-k] equals Σ_{j=1}^{p} a_j Σ_n x_m[n-j] x_m[n] by the orthogonality property derived on the previous page. Therefore the total prediction error is
    E_m = Σ_n x_m[n]² - Σ_{j=1}^{p} a_j Σ_n x_m[n] x_m[n-j] = φ_m[0, 0] - Σ_{j=1}^{p} a_j φ_m[0, j]
  • The error can be monitored to help establish p
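The closed form E_m = φ_m[0,0] - Σ_j a_j φ_m[0,j] can be checked numerically. A small sketch (the random test frame and order p = 3 are illustrative assumptions) that compares the direct error sum against the closed form, in the zero-padded autocorrelation setting where φ_m[i, j] = R[|i-j|]:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
p = 3

# R[k] = sum_n x[n] x[n+k]; with zero-padding, phi[i, j] = R[|i-j|].
R = np.array([np.dot(x[:64 - k], x[k:]) for k in range(p + 1)])
T = R[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]  # Toeplitz Phi
a = np.linalg.solve(T, R[1:])                                  # Phi a = psi

# Direct error sum over every n where e[n] can be nonzero,
# with the frame zero-padded on both sides.
xp = np.concatenate([np.zeros(p), x, np.zeros(p)])
E_direct = 0.0
for n in range(p, len(xp)):
    pred = sum(a[j - 1] * xp[n - j] for j in range(1, p + 1))
    E_direct += (xp[n] - pred) ** 2

E_closed = R[0] - np.dot(a, R[1:])   # phi[0,0] - sum_j a_j phi[0,j]
```

The two values agree, and E_closed < R[0] shows the predictor removes energy from the frame, which is what makes E_m useful for choosing p.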

  6. Short-Term Analysis: Geometric Approach
  • Vector representation of error and speech signals:
    x_m[n] = Σ_{k=1}^{p} a_k x_m[n-k] + e_m[n], 0 ≤ n ≤ N-1
  • In matrix form, X a + e_m = x_m, where the past vectors are the columns of X:
    X = ( x_m^(1)  x_m^(2)  ...  x_m^(p) ),  x_m^(i) = ( x_m[-i], x_m[1-i], ..., x_m[N-1-i] )^T
    x_m = ( x_m[0], x_m[1], ..., x_m[N-1] )^T,  e_m = ( e_m[0], e_m[1], ..., e_m[N-1] )^T
  • e_m is minimal if X^T e_m = 0, i.e. (e_m, x_m^(i)) = 0 for all 1 ≤ i ≤ p: the prediction-error vector must be orthogonal to the past vectors (this property has been shown previously). Then
    X^T (x_m - X a) = 0 ⇒ X^T X a = X^T x_m ⇒ a = (X^T X)^{-1} X^T x_m
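A minimal numerical sketch of this least-squares view, assuming samples before the frame are zero (the function name lpc_least_squares and the AR(1) test signal are illustrative, not from the slides):

```python
import numpy as np

def lpc_least_squares(x, p):
    """Solve a = (X^T X)^{-1} X^T x via least squares."""
    N = len(x)
    # Row n of X holds the past samples [x[n-1], ..., x[n-p]]
    # (zeros before the start of the frame).
    X = np.zeros((N, p))
    for i in range(1, p + 1):
        X[i:, i - 1] = x[:N - i]
    a, *_ = np.linalg.lstsq(X, x, rcond=None)
    e = x - X @ a                    # prediction-error vector
    return a, e, X

# For x[n] = 0.8 x[n-1] the solution recovers a_1 = 0.8,
# and X^T e = 0: the error is orthogonal to the past vectors.
x = 0.8 ** np.arange(32)
a, e, X = lpc_least_squares(x, 1)
```

np.linalg.lstsq solves the same normal equations without forming (X^T X)^{-1} explicitly, which is the numerically preferred route.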

  7. Short-Term Analysis: Autocorrelation Method
  • x_m[n] is identically zero outside 0 ≤ n ≤ N-1
  • The mean-squared error is calculated within n = 0 ~ N-1+p
  • Framing/Windowing: first shift, x̃_m[n] = x[n + mL], then apply the window w[n]:
    x_m[n] = x[n + mL] w[n], 0 ≤ n ≤ N-1;  x_m[n] = 0 otherwise
    where L is the frame period, the length of time between successive frames
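The shift-then-window step can be sketched as follows (the helper name frame and the Hamming window choice are assumptions for illustration; the slides leave w[n] unspecified):

```python
import numpy as np

def frame(x, m, N, L, w=None):
    """Extract frame m: x_m[n] = x[n + m*L] * w[n] for 0 <= n <= N-1,
    zero-padding when the frame runs past the end of the signal."""
    if w is None:
        w = np.hamming(N)            # a common window choice (assumption)
    seg = np.zeros(N)
    start = m * L
    end = min(start + N, len(x))
    seg[:end - start] = x[start:end] # shift: x~_m[n] = x[n + m*L]
    return seg * w                   # window: x_m[n] = x~_m[n] * w[n]

x = np.arange(100, dtype=float)
x_m = frame(x, m=2, N=32, L=16)      # frame starting at sample 2*16 = 32
```

With L < N, successive frames overlap, which is the usual short-term analysis setup.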

  8. Short-Term Analysis: Autocorrelation Method
  • The mean-squared error will be (why? x_m[n] is nonzero only on 0 ≤ n ≤ N-1, so e_m[n] can be nonzero only on 0 ≤ n ≤ N-1+p):
    E_m = Σ_{n=0}^{N-1+p} e_m[n]² = Σ_{n=0}^{N-1+p} (x_m[n] - x̃_m[n])²
  • Take the derivative ∂E_m/∂a_i = 0:
    ⇒ Σ_{j=1}^{p} a_j φ_m[i, j] = φ_m[i, 0], for all 1 ≤ i ≤ p
    where, since x_m[n-i] is nonzero only on i ≤ n ≤ N-1+i and x_m[n-j] only on j ≤ n ≤ N-1+j,
    φ_m[i, j] = Σ_{n=0}^{N-1+p} x_m[n-i] x_m[n-j] = Σ_{n=i}^{N-1+j} x_m[n-i] x_m[n-j] = Σ_{n=0}^{N-1-(i-j)} x_m[n] x_m[n+(i-j)]

  9. Short-Term Analysis: Autocorrelation Method
  • Alternatively, φ_m[i, j] = R_m[i-j], where R_m[k] is the autocorrelation function of x_m[n]:
    R_m[k] = Σ_{n=0}^{N-1-k} x_m[n] x_m[n+k],  and R_m[k] = R_m[-k] (why?)
  • Therefore:
    Σ_{j=1}^{p} a_j φ_m[i, j] = φ_m[i, 0], for all 1 ≤ i ≤ p
    ⇒ Σ_{j=1}^{p} a_j R_m[|i-j|] = R_m[i], for all 1 ≤ i ≤ p
  • In matrix form this is a Toeplitz system (symmetric, and all elements along each diagonal are equal):

    | R_m[0]    R_m[1]    ...  R_m[p-1] | | a_1 |   | R_m[1] |
    | R_m[1]    R_m[0]    ...  R_m[p-2] | | a_2 |   | R_m[2] |
    | ...       ...       ...  ...      | | ... | = | ...    |
    | R_m[p-1]  R_m[p-2]  ...  R_m[0]   | | a_p |   | R_m[p] |
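A numpy sketch of the autocorrelation method, solving the Toeplitz system by a generic linear solve (the function name autocorr_lpc and the synthetic AR(2) coefficients 1.3 and -0.4 are illustrative assumptions):

```python
import numpy as np

def autocorr_lpc(x, p):
    """Autocorrelation-method LPC: build R[k], form the symmetric
    Toeplitz matrix, and solve sum_j a_j R[|i-j|] = R[i], 1 <= i <= p."""
    N = len(x)
    R = np.array([np.dot(x[:N - k], x[k:]) for k in range(p + 1)])
    T = R[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]  # Toeplitz
    return np.linalg.solve(T, R[1:p + 1])

# Recover the coefficients of a synthetic AR(2) process
# x[n] = 1.3 x[n-1] - 0.4 x[n-2] + e[n] from a long realization.
rng = np.random.default_rng(0)
e = rng.standard_normal(4096)
x = np.zeros(4096)
for n in range(2, 4096):
    x[n] = 1.3 * x[n - 1] - 0.4 * x[n - 2] + e[n]
a = autocorr_lpc(x, 2)
```

The generic solve costs O(p³); the next slide's Levinson-Durbin recursion exploits the Toeplitz structure to bring this down to O(p²).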

  10. Short-Term Analysis: Autocorrelation Method
  • Levinson-Durbin Recursion
    1. Initialization: E^(0) = R_m[0]
    2. Iteration: for i = 1, ..., p do the following recursion:
       k_i = ( R_m[i] - Σ_{j=1}^{i-1} a_j^(i-1) R_m[i-j] ) / E^(i-1)
       a_i^(i) = k_i  (a new, higher-order coefficient is produced at each iteration i)
       a_j^(i) = a_j^(i-1) - k_i a_{i-j}^(i-1), for 1 ≤ j ≤ i-1
       E^(i) = (1 - k_i²) E^(i-1), where -1 ≤ k_i ≤ 1
    3. Final solution: a_j = a_j^(p), for 1 ≤ j ≤ p
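The recursion above can be transcribed almost line for line (the geometric autocorrelation R[k] = 0.9^k used in the usage example is an illustrative assumption):

```python
import numpy as np

def levinson_durbin(R, p):
    """Solve the Toeplitz normal equations by the Levinson-Durbin
    recursion, given autocorrelation values R[0..p]."""
    a = np.zeros(p + 1)              # a[1..i] hold the order-i coefficients
    E = R[0]                         # initialization: E^(0) = R[0]
    for i in range(1, p + 1):
        # reflection coefficient k_i = (R[i] - sum_j a_j R[i-j]) / E^(i-1)
        k = (R[i] - np.dot(a[1:i], R[i - 1:0:-1])) / E
        a_prev = a.copy()
        a[i] = k                     # new, higher-order coefficient
        for j in range(1, i):
            a[j] = a_prev[j] - k * a_prev[i - j]
        E *= 1.0 - k * k             # E^(i) = (1 - k_i^2) E^(i-1)
    return a[1:], E

# For a geometric autocorrelation R[k] = 0.9^k, the order-2 solution
# is a = [0.9, 0] with residual energy E = 0.19.
a, E = levinson_durbin(np.array([1.0, 0.9, 0.81]), 2)
```

Note that E^(i) shrinks monotonically since |k_i| ≤ 1, so the residual energies also give the order-selection diagnostic mentioned on slide 5.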

  11. Short-Term Analysis: Covariance Method
  • x_m[n] is not identically zero outside 0 ≤ n ≤ N-1
    – The window function is not applied
  • The mean-squared error is calculated within n = 0 ~ N-1
  • Framing (shift only, no window): x_m[n] = x[n + mL]
  • The mean-squared error will be:
    E_m = Σ_{n=0}^{N-1} e_m[n]² = Σ_{n=0}^{N-1} (x_m[n] - x̃_m[n])²
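A self-contained sketch of the covariance method. One detail is an assumption: the slide sums the error over n = 0..N-1 using true history from before the frame, while this sketch starts the sum at n = p so every needed past sample lies inside the given segment (a common convention when no earlier samples are available). The function name covariance_lpc and the AR(2) test values are likewise illustrative:

```python
import numpy as np

def covariance_lpc(x, p):
    """Covariance-method LPC: no window; error summed over n = p..N-1
    so all past samples lie inside the segment (convention assumed)."""
    N = len(x)
    n = np.arange(p, N)
    # phi[i, j] = sum_n x[n-i] x[n-j], 0 <= i, j <= p
    phi = np.array([[np.dot(x[n - i], x[n - j]) for j in range(p + 1)]
                    for i in range(p + 1)])
    return np.linalg.solve(phi[1:, 1:], phi[1:, 0])

# On a noiseless AR(2) segment the covariance method recovers the
# coefficients exactly: no window and no zero-padding edge effects.
x = np.zeros(64)
x[0], x[1] = 1.0, 1.3
for n in range(2, 64):
    x[n] = 1.3 * x[n - 1] - 0.4 * x[n - 2]
a = covariance_lpc(x, 2)
```

Unlike the autocorrelation method, φ here is symmetric but not Toeplitz, so Levinson-Durbin does not apply and stability of the resulting all-pole filter is not guaranteed.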
