Linear Prediction Analysis of Speech Sounds


  1. Linear Prediction Analysis of Speech Sounds
     Berlin Chen, 2004
     References:
     1. X. Huang et al., Spoken Language Processing, Chapters 5, 6
     2. J. R. Deller et al., Discrete-Time Processing of Speech Signals, Chapters 4-6
     3. J. W. Picone, "Signal modeling techniques in speech recognition," Proceedings of the IEEE, September 1993, pp. 1215-1247

  2. Linear Predictive Coefficients (LPC)
     • An all-pole filter with a sufficient number of poles is a good approximation to the vocal tract (filter) for speech signals:
       $$H(z) = \frac{X(z)}{E(z)} = \frac{1}{A(z)} = \frac{1}{1 - \sum_{k=1}^{p} a_k z^{-k}}$$
       with vocal tract parameters $a_1, a_2, \ldots, a_p$.
       $$\therefore\; x[n] = \sum_{k=1}^{p} a_k x[n-k] + e[n], \qquad \tilde{x}[n] = \sum_{k=1}^{p} a_k x[n-k]$$
     – The model predicts the current sample as a linear combination of its $p$ past samples.
     • Also known as linear predictive coding, LPC analysis, or auto-regressive modeling.
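To make the predictor concrete, here is a minimal NumPy sketch of $\tilde{x}[n]$; the function name `lpc_predict` and the sample-by-sample loop are illustrative choices, not from the slides:

```python
import numpy as np

def lpc_predict(x, a):
    """Predict each sample as a weighted sum of its p past samples:
    x~[n] = sum_{k=1}^{p} a_k * x[n-k]."""
    p = len(a)
    x_hat = np.zeros_like(x, dtype=float)
    for n in range(p, len(x)):
        # most recent sample first: x[n-1], x[n-2], ..., x[n-p]
        x_hat[n] = np.dot(a, x[n - p : n][::-1])
    return x_hat

# The residual e[n] = x[n] - x~[n] is the prediction error:
#   e = x - lpc_predict(x, a)
```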

  3. Short-Term Analysis: Algebra Approach
     • Estimate the corresponding LPC coefficients as those that minimize the total short-term prediction error (minimum mean squared error) for a specific frame m, obtained by framing/windowing:
       $$E_m = \sum_n e_m^2[n] = \sum_n \left( x_m[n] - \tilde{x}_m[n] \right)^2 = \sum_n \left( x_m[n] - \sum_{j=1}^{p} a_j x_m[n-j] \right)^2, \quad 0 \le n \le N-1$$
     • Take the derivative with respect to each coefficient and set it to zero:
       $$\frac{\partial E_m}{\partial a_i} = 0 \;\Rightarrow\; \sum_n \left( x_m[n] - \sum_{j=1}^{p} a_j x_m[n-j] \right) x_m[n-i] = 0, \quad \forall\, 1 \le i \le p$$
     • Equivalently, $\sum_n e_m[n]\, x_m[n-i] = 0$ for all $1 \le i \le p$: the error vector is orthogonal to the past vectors. This property will be used later on!
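The orthogonality property is easy to confirm numerically. A small self-contained check, assuming nothing beyond NumPy (the toy signal and the order p = 4 are arbitrary):

```python
import numpy as np

# Toy check of the orthogonality property: once a minimizes E_m,
# sum_n e_m[n] * x_m[n-i] ~= 0 for every 1 <= i <= p.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
p = 4
# Columns are the signal delayed by i = 1..p samples.
X = np.column_stack([x[p - i : len(x) - i] for i in range(1, p + 1)])
a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
e = x[p:] - X @ a                       # prediction error vector
print(X.T @ e)                          # entries are ~0 (up to round-off)
```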

  4. Short-Term Analysis: Algebra Approach (cont.)
     • Rearranging the zero-derivative condition:
       $$\sum_{j=1}^{p} a_j \sum_n x_m[n-i]\, x_m[n-j] = \sum_n x_m[n-i]\, x_m[n], \quad \forall\, 1 \le i \le p$$
     • Define the correlation coefficients $\phi_m[i,j] = \sum_n x_m[n-i]\, x_m[n-j]$, so that
       $$\sum_{j=1}^{p} a_j\, \phi_m[i,j] = \phi_m[i,0], \quad \forall\, 1 \le i \le p \qquad \text{(to be used on the next slide)}$$
     • In matrix form:
       $$\begin{bmatrix} \phi_m[1,1] & \phi_m[1,2] & \cdots & \phi_m[1,p] \\ \phi_m[2,1] & \phi_m[2,2] & \cdots & \phi_m[2,p] \\ \vdots & \vdots & \ddots & \vdots \\ \phi_m[p,1] & \phi_m[p,2] & \cdots & \phi_m[p,p] \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} \phi_m[1,0] \\ \phi_m[2,0] \\ \vdots \\ \phi_m[p,0] \end{bmatrix} \;\Rightarrow\; \boldsymbol{\Phi}\mathbf{a} = \boldsymbol{\psi}$$
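A direct translation of the normal equations into code might look like the following sketch. The helper name `lpc_covariance` is made up here; it implements the covariance-style solution of $\boldsymbol{\Phi}\mathbf{a} = \boldsymbol{\psi}$, assuming the frame carries $p$ samples of history so the error sum can run over $n = p, \ldots, N-1$:

```python
import numpy as np

def lpc_covariance(x_m, p):
    """Solve the normal equations Phi a = psi for one frame.

    Sketch: x_m is assumed to include p samples of history before the
    frame proper, so every delayed sample x_m[n-j] is available when
    the sum runs over n = p .. N-1.
    """
    N = len(x_m)
    # phi[i, j] = sum_{n=p}^{N-1} x_m[n-i] * x_m[n-j], for i, j = 0..p
    phi = np.empty((p + 1, p + 1))
    for i in range(p + 1):
        for j in range(p + 1):
            phi[i, j] = np.dot(x_m[p - i : N - i], x_m[p - j : N - j])
    Phi = phi[1:, 1:]   # p x p matrix of phi[i, j] with i, j >= 1
    psi = phi[1:, 0]    # right-hand side: phi[i, 0]
    return np.linalg.solve(Phi, psi)
```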

  5. Short-Term Analysis: Algebra Approach (cont.)
     • The minimum error for the optimal $a_j$, $1 \le j \le p$:
       $$E_m = \sum_n e_m^2[n] = \sum_n \left( x_m[n] - \tilde{x}_m[n] \right)^2 = \sum_n \left( x_m[n] - \sum_{j=1}^{p} a_j x_m[n-j] \right)^2$$
       $$= \sum_n x_m^2[n] - 2 \sum_n x_m[n] \sum_{j=1}^{p} a_j x_m[n-j] + \sum_n \left( \sum_{j=1}^{p} a_j x_m[n-j] \right) \left( \sum_{k=1}^{p} a_k x_m[n-k] \right)$$
     • Using the orthogonality property derived on the previous slide, the last term reduces:
       $$\sum_n \left( \sum_{j=1}^{p} a_j x_m[n-j] \right) \left( \sum_{k=1}^{p} a_k x_m[n-k] \right) = \sum_{j=1}^{p} \sum_{k=1}^{p} a_j a_k \sum_n x_m[n-j]\, x_m[n-k] = \sum_{j=1}^{p} a_j \sum_n x_m[n-j]\, x_m[n]$$
     • Therefore the total prediction error is
       $$E_m = \sum_n x_m^2[n] - \sum_{j=1}^{p} a_j \sum_n x_m[n]\, x_m[n-j] = \phi_m[0,0] - \sum_{j=1}^{p} a_j\, \phi_m[0,j]$$
     • The error can be monitored to help establish the model order $p$.
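The closed-form error is cheap to evaluate, which is what makes monitoring it practical. A sketch, reusing the hypothetical frame conventions of the `lpc_covariance` helper above:

```python
import numpy as np

def prediction_error(x_m, a, p):
    """E_m = phi[0,0] - sum_j a_j * phi[0,j], summed over n = p..N-1
    (same frame conventions as the lpc_covariance sketch above)."""
    N = len(x_m)
    phi00 = np.dot(x_m[p:N], x_m[p:N])
    phi0j = np.array([np.dot(x_m[p:N], x_m[p - j : N - j])
                      for j in range(1, p + 1)])
    return phi00 - np.dot(a, phi0j)

# Monitoring E_m as p grows helps establish the model order, e.g.:
#   for p in range(2, 17):
#       print(p, prediction_error(frame, lpc_covariance(frame, p), p))
```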

  6. Short-Term Analysis: Geometric Approach
     • Vector representations of error and speech signals:
       $$x_m[n] = \sum_{k=1}^{p} a_k x_m[n-k] + e_m[n], \quad 0 \le n \le N-1$$
     • Stacking the $N$ equations, with the past vectors as column vectors of $\mathbf{X}$:
       $$\begin{bmatrix} x_m[-1] & x_m[-2] & \cdots & x_m[-p] \\ x_m[0] & x_m[-1] & \cdots & x_m[1-p] \\ \vdots & \vdots & \ddots & \vdots \\ x_m[N-2] & x_m[N-3] & \cdots & x_m[N-1-p] \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} + \begin{bmatrix} e_m[0] \\ e_m[1] \\ \vdots \\ e_m[N-1] \end{bmatrix} = \begin{bmatrix} x_m[0] \\ x_m[1] \\ \vdots \\ x_m[N-1] \end{bmatrix}$$
       $$\mathbf{X} = \left( \mathbf{x}^{(1)}\; \mathbf{x}^{(2)} \cdots \mathbf{x}^{(p)} \right), \qquad \mathbf{X}\mathbf{a} + \mathbf{e}_m = \mathbf{x}_m$$
       where $\mathbf{e}_m = (e_m[0], e_m[1], \ldots, e_m[N-1])^T$ and $\mathbf{x}^{(i)} = (x_m[-i], x_m[1-i], \ldots, x_m[N-1-i])^T$.
     • As shown previously (slide 3), $\mathbf{e}_m$ is minimal if $\mathbf{X}^T \mathbf{e}_m = \mathbf{0}$, i.e. $\langle \mathbf{e}_m, \mathbf{x}^{(i)} \rangle = 0$ for all $1 \le i \le p$: the prediction error vector must be orthogonal to the past vectors. Therefore
       $$\mathbf{X}^T \left( \mathbf{x}_m - \mathbf{X}\mathbf{a} \right) = \mathbf{0} \;\Rightarrow\; \mathbf{X}^T \mathbf{X}\, \mathbf{a} = \mathbf{X}^T \mathbf{x}_m \;\Rightarrow\; \mathbf{a} = \left( \mathbf{X}^T \mathbf{X} \right)^{-1} \mathbf{X}^T \mathbf{x}_m$$
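In code, the geometric view is just a linear least-squares problem. A sketch (again with a made-up helper name, and using NumPy's `lstsq` rather than forming $(\mathbf{X}^T\mathbf{X})^{-1}$ explicitly, which is numerically safer):

```python
import numpy as np

def lpc_geometric(x_m, p):
    """Least-squares solution of X a + e = x (sketch).

    x_m is assumed to carry p samples of history, so x_m[p:] holds the
    N frame samples x_m[0..N-1] of the slide's notation.
    """
    x = x_m[p:]                      # target vector (x_m[0], ..., x_m[N-1])^T
    N = len(x)
    # Column i holds the frame delayed by i samples: the past vector x^(i).
    X = np.column_stack([x_m[p - i : p - i + N] for i in range(1, p + 1)])
    # a = (X^T X)^{-1} X^T x, computed via lstsq for numerical stability
    a, *_ = np.linalg.lstsq(X, x, rcond=None)
    return a
```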

  7. Short-Term Analysis: Autocorrelation Method
     • $x_m[n]$ is identically zero outside $0 \le n \le N-1$:
       $$x_m[n] = \begin{cases} x[n + mL]\, w[n], & 0 \le n \le N-1 \\ 0, & \text{otherwise} \end{cases}$$
       where $L$ is the frame period, the length of time between successive frames. The frame is first shifted, $\tilde{x}_m[n] = x[n + mL]$, and then windowed, $x_m[n] = \tilde{x}_m[n]\, w[n]$ (framing/windowing).
     • The mean-squared error is calculated within $0 \le n \le N-1+p$.
     [Figure: x[n] is excerpted over mL ≤ n ≤ mL+N-1, shifted to 0 ≤ n ≤ N-1, then windowed.]
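A minimal framing/windowing sketch follows; the Hamming window is a common choice but only an assumption here, since the slide leaves $w[n]$ generic:

```python
import numpy as np

def frame_signal(x, N, L, window=np.hamming):
    """Framing/windowing: x_m[n] = x[n + mL] * w[n] for 0 <= n <= N-1.

    N is the frame length, L the frame period (hop between frames).
    Returns one windowed frame per row.
    """
    w = window(N)
    n_frames = 1 + (len(x) - N) // L
    return np.stack([x[m * L : m * L + N] * w for m in range(n_frames)])
```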

  8. Short-Term Analysis: Autocorrelation Method (cont.)
     • Why does the error run over $0 \le n \le N-1+p$? Because $x_m[n]$ is zero outside $0 \le n \le N-1$ and the predictor looks back $p$ samples, $e_m[n]$ can be nonzero only for $0 \le n \le N-1+p$. The mean-squared error is therefore
       $$E_m = \sum_{n=0}^{N-1+p} e_m^2[n] = \sum_{n=0}^{N-1+p} \left( x_m[n] - \tilde{x}_m[n] \right)^2$$
     [Figure: $e_m[n]$ is supported on $0 \le n \le N+p-1$ and $x_m[n]$ on $0 \le n \le N-1$; the delayed frames $x_m[n-i]$ and $x_m[n-j]$ are supported on $i \le n \le N-1+i$ and $j \le n \le N-1+j$.]
     • Taking the derivative $\partial E_m / \partial a_i = 0$ again yields
       $$\sum_{j=1}^{p} a_j\, \phi_m[i,j] = \phi_m[i,0], \quad \forall\, 1 \le i \le p$$
       where, for $i \ge j$ (the two shifted frames overlap only on $i \le n \le N-1+j$),
       $$\phi_m[i,j] = \sum_{n=0}^{N-1+p} x_m[n-i]\, x_m[n-j] = \sum_{n=i}^{N-1+j} x_m[n-i]\, x_m[n-j] = \sum_{n=0}^{N-1-(i-j)} x_m[n]\, x_m[n+i-j]$$

  9. Short-Term Analysis: Autocorrelation Method (cont.)
     • Alternatively, $\phi_m[i,j] = R_m[i-j]$, where $R_m[k]$ is the autocorrelation function of $x_m[n]$:
       $$R_m[k] = \sum_{n=0}^{N-1-k} x_m[n]\, x_m[n+k]$$
     • Why is $R_m[-k] = R_m[k]$? The autocorrelation is even: substituting $n' = n - k$ shows that $R_m[-k]$ sums exactly the same products $x_m[n']\, x_m[n'+k]$.
     • Therefore:
       $$\sum_{j=1}^{p} a_j\, \phi_m[i,j] = \phi_m[i,0] \;\Rightarrow\; \sum_{j=1}^{p} a_j\, R_m[i-j] = R_m[i], \quad \forall\, 1 \le i \le p$$
     • In matrix form this is a Toeplitz system: symmetric, with all elements along each diagonal equal.
       $$\begin{bmatrix} R_m[0] & R_m[1] & \cdots & R_m[p-1] \\ R_m[1] & R_m[0] & \cdots & R_m[p-2] \\ \vdots & \vdots & \ddots & \vdots \\ R_m[p-1] & R_m[p-2] & \cdots & R_m[0] \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} R_m[1] \\ R_m[2] \\ \vdots \\ R_m[p] \end{bmatrix}$$
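Because the matrix is Toeplitz, the system can be solved in $O(p^2)$ with the Levinson-Durbin recursion rather than a general $O(p^3)$ solver. A sketch using SciPy's `solve_toeplitz`, which implements that recursion (the function name `lpc_autocorrelation` is illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_autocorrelation(x_m, p):
    """Autocorrelation-method LPC for one zero-padded frame (sketch).

    Solves the Toeplitz system via SciPy's Levinson-Durbin-based
    solver instead of a generic linear solve.
    """
    N = len(x_m)
    # R_m[k] = sum_{n=0}^{N-1-k} x_m[n] * x_m[n+k], for k = 0..p
    R = np.array([np.dot(x_m[: N - k], x_m[k:]) for k in range(p + 1)])
    # First column of the Toeplitz matrix is (R[0], ..., R[p-1]);
    # the right-hand side is (R[1], ..., R[p]).
    return solve_toeplitz(R[:p], R[1 : p + 1])
```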
