

1. Speech Signal Representations Part 2: Speech Signal Processing
Hsin-min Wang
References:
[1] X. Huang et al., Spoken Language Processing, Chapters 5-6
[2] J. R. Deller et al., Discrete-Time Processing of Speech Signals, Chapters 4-6
[3] J. W. Picone, "Signal modeling techniques in speech recognition," Proceedings of the IEEE, September 1993, pp. 1215-1247

2. Speech Recognition - Acoustic Processing
(Figure: the speech waveform is framed and passed through signal processing to give the feature vector sequence $O = o_1 o_2 \ldots o_t$, which is modeled by a hidden Markov model with states $s = 1, 2, 3$, self-loop probabilities $a_{11}, a_{22}, a_{33}$, transitions $a_{12}, a_{23}$, and state output distributions $b_1(o), b_2(o), b_3(o)$; $S^* = s_1 s_2 \ldots s_t$ is the best state sequence.)
- Transition probability: $a_{ij} = P(s_t = j \mid s_{t-1} = i)$
- Observation probability (a Gaussian mixture per state): $b_i(o_t) = P(o_t \mid s_t = i) = \sum_{k=1}^{M} c_{ik}\, N(o_t; \mu_{ik}, \Sigma_{ik})$
- Decoding: $S^* = \arg\max_S P(O \mid S)$ and $W^* = \arg\max_W P(O \mid W)$
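The per-state observation probability b_i(o_t) above is a Gaussian mixture. As a minimal illustrative sketch (the mixture weights, means, and covariances below are made up, not taken from any trained model), it could be evaluated with numpy/scipy as:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_likelihood(o_t, weights, means, covs):
    """b_i(o_t) = sum_k c_ik * N(o_t; mu_ik, Sigma_ik) for one HMM state i."""
    return sum(c * multivariate_normal.pdf(o_t, mean=mu, cov=cov)
               for c, mu, cov in zip(weights, means, covs))

# Toy 2-component mixture over 3-dimensional feature vectors (illustrative values only)
weights = [0.6, 0.4]
means   = [np.zeros(3), np.ones(3)]
covs    = [np.eye(3), 2.0 * np.eye(3)]
o_t = np.array([0.5, -0.2, 1.1])
print(gmm_likelihood(o_t, weights, means, covs))
```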

3. Source-Filter Model
- Source-filter model: decomposition of speech signals
  - A source passed through a linear time-varying filter
  - Source (excitation): the air flow at the vocal cords
  - Filter: the resonances of the vocal tract, which change over time
  - Once the filter has been estimated, the source can be obtained by passing the speech signal through the inverse filter
(Figure: the excitation e[n] passed through the filter h[n] produces the speech signal x[n].)
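As a minimal sketch of the inverse-filtering idea, using a known, hand-picked all-pole filter in place of an estimated vocal-tract filter and white noise as a stand-in excitation:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
e = rng.standard_normal(16000)       # toy excitation e[n] (white noise)

# All-pole "vocal tract" filter H(z) = 1 / A(z); coefficients are illustrative only
a = [1.0, -1.3, 0.8]                 # A(z) = 1 - 1.3 z^-1 + 0.8 z^-2 (stable poles)
x = lfilter([1.0], a, e)             # x[n] = e[n] * h[n]

# Inverse filtering: pass x[n] through the FIR filter A(z) to recover the excitation
e_hat = lfilter(a, [1.0], x)
print(np.allclose(e, e_hat))         # True: the source is recovered exactly here
```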

4. Source-Filter Model (cont.)
- Phoneme classification depends mostly on the characteristics of the filter
  - Speech recognizers estimate the filter characteristics and ignore the source
    - Speech production model: linear predictive coding and cepstral analysis
    - Speech perception model: mel-frequency cepstrum
  - Speech synthesis techniques use a source-filter model because it allows flexibility in altering the pitch and the filter
  - Speech coders use a source-filter model because it allows a low bit rate

5. Characteristics of the Source-Filter Model
- The characteristics of the vocal tract define the uttered phoneme
  - Such characteristics are evidenced in the frequency domain by the location of the formants, i.e., the peaks given by the resonances of the vocal tract

6. Main Considerations in Feature Extraction
- Perceptually meaningful
  - Parameters represent salient aspects of the speech signal
  - Parameters are analogous to those used by the human auditory system
- Robust parameters
  - Parameters are robust to variations in environments such as the channels, speakers, and transducers
- Time-dynamic parameters
  - Parameters can capture spectral dynamics, i.e., changes of the spectrum with time (temporal correlation)

7. Typical Procedures for Feature Extraction
(Block diagram; a condensed sketch of this chain follows below.)
- Spectral shaping: speech signal -> A/D conversion -> pre-emphasis -> framing and windowing -> conditioned speech signal
- Spectral analysis: Fourier transform and filter bank processing, or linear prediction (LP)
- Parametric transform: cepstral or LP parameters -> measurements
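Read as a processing chain, the diagram might be condensed into the following numpy sketch of the spectral-shaping stage, with a plain FFT magnitude standing in for the full spectral-analysis and parametric-transform stages; the sampling rate, pre-emphasis coefficient, and frame sizes are assumed typical values, not prescribed by the slide:

```python
import numpy as np

def extract_features(x, fs=16000, a_pre=0.97, frame_ms=25, shift_ms=10):
    """Pre-emphasis -> framing -> Hamming window -> FFT magnitude per frame."""
    x = np.append(x[0], x[1:] - a_pre * x[:-1])     # pre-emphasis
    N = int(fs * frame_ms / 1000)                   # frame duration in samples
    L = int(fs * shift_ms / 1000)                   # frame period (shift) in samples
    n_frames = 1 + (len(x) - N) // L
    win = np.hamming(N)
    frames = np.stack([x[m * L : m * L + N] * win for m in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))      # magnitude spectrum per frame

feats = extract_features(np.random.randn(16000))
print(feats.shape)    # (n_frames, N//2 + 1)
```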

8. Spectral Shaping
- A/D conversion
  - Conversion of the signal from a sound pressure wave to a digital signal
  - Sampling
- Digital filtering (pre-emphasis)
  - Emphasizing important frequency components in the signal
- Framing and windowing
  - Short-time processing

9. A/D Conversion
- Undesired side effects of A/D conversion
  - Line frequency noise (50/60 Hz hum)
  - Loss of low- and high-frequency information
  - Nonlinear input-output distortion
  - Example: the frequency response of a typical telephone-grade A/D converter; its sharp attenuation of the low- and high-frequency response causes problems for subsequent parametric spectral analysis algorithms
- The most popular sampling frequencies
  - Telecommunication: 8 kHz
  - Non-telecommunication: 10~16 kHz

10. Sampling Frequency vs. Recognition Accuracy

11. Pre-emphasis

12. Pre-emphasis
- The pre-emphasis filter
  - General form: $H_{pre}(z) = \sum_{k=0}^{N_{pre}} a_{pre}(k)\, z^{-k}$, an FIR high-pass filter
  - A first-order finite impulse response filter is widely used: $H_{pre}(z) = 1 - a_{pre} z^{-1}$
    - Values of $a_{pre}$ close to 1.0 that can be efficiently implemented in fixed-point hardware, such as $1$ or $1 - 1/16$, are most common
    - Boosts the signal spectrum by approximately 20 dB per decade
(Figure: the speech signal x[n] is filtered as x'[n] = x[n] - a x[n-1], i.e., H(z) = 1 - a z^{-1} with 0 < a <= 1; the magnitude response rises about 20 dB per decade with frequency.)
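A minimal sketch of the first-order pre-emphasis filter, assuming a coefficient of 1 - 1/16 = 0.9375; the freqz call simply confirms the high-pass boost:

```python
import numpy as np
from scipy.signal import lfilter, freqz

a_pre = 1.0 - 1.0 / 16           # 0.9375, close to 1.0 and cheap in fixed point
b = [1.0, -a_pre]                # H(z) = 1 - a_pre * z^-1

x = np.random.randn(16000)       # stand-in for a speech signal
y = lfilter(b, [1.0], x)         # y[n] = x[n] - a_pre * x[n-1]

# Frequency response: low frequencies are attenuated, high frequencies boosted
w, H = freqz(b, [1.0], worN=8)
print(np.round(20 * np.log10(np.abs(H)), 1))
```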

13. Why Pre-emphasis?
- Reason 1: eliminate the glottal formants
  - The glottal component of the signal can be modeled by a simple two-real-pole filter whose poles are near z = 1
  - The lip radiation characteristic, with its zero near z = 1, tends to cancel the spectral effect of one of the glottal poles
  - By introducing a second zero near z = 1 (pre-emphasis), we can effectively eliminate the spectral contributions of the larynx and lips ==> the analysis can then be asserted to be seeking the parameters corresponding to the vocal tract only
(Figure: the glottal excitation $u_G[n]$ passes through the glottal model $G(z) = \frac{1}{(1 - b_1 z^{-1})(1 - b_2 z^{-1})}$ to give $u[n]$, then through the vocal tract filter $H(z)$ and the lip radiation characteristic $1 - c z^{-1}$ to produce the speech signal $x[n]$.)
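The cancellation argument can be checked numerically. In the sketch below the pole and zero locations are illustrative guesses (two poles at 0.98 and 0.95 for the glottal model, zeros at 1.0 and 0.9375 for lip radiation and pre-emphasis), not fitted values; the point is only that adding the pre-emphasis zero roughly flattens the remaining low-frequency tilt:

```python
import numpy as np
from scipy.signal import freqz

b_glottal = [1.0]
a_glottal = np.convolve([1, -0.98], [1, -0.95])   # two real poles near z = 1
b_lip     = [1, -1.0]                             # lip-radiation zero near z = 1
b_pre     = [1, -0.9375]                          # pre-emphasis zero near z = 1

w = np.linspace(0.01, np.pi, 512)                 # frequency grid (rad/sample)
_, G = freqz(b_glottal, a_glottal, worN=w)        # glottal spectral tilt
_, R = freqz(b_lip, [1.0], worN=w)
_, P = freqz(b_pre, [1.0], worN=w)

tilt_before = 20 * np.log10(np.abs(G * R))        # one glottal pole left uncancelled
tilt_after  = 20 * np.log10(np.abs(G * R * P))    # pre-emphasis cancels the other
print(tilt_before[0] - tilt_before[-1],           # large low-to-high drop
      tilt_after[0] - tilt_after[-1])             # much flatter overall
```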

14. Why Pre-emphasis? (cont.)
- Reason 2: prevent numerical instability
  - If the speech signal is dominated by low frequencies, it is highly predictable, and a large LP model will result in an ill-conditioned autocorrelation matrix
- Reason 3:
  - Voiced sections of the speech signal naturally have a negative spectral slope (attenuation) of approximately 20 dB per decade due to physiological characteristics of the speech production system
  - High-frequency formants have small amplitudes with respect to low-frequency formants; pre-emphasis of the high frequencies is therefore required to obtain similar amplitudes for all formants

15. Why Pre-emphasis? (cont.)
- Reason 4:
  - Hearing is more sensitive above the 1 kHz region of the spectrum
  - The pre-emphasis filter amplifies this perceptually most important area of the spectrum

16. Framing and Windowing

17. Short-Time Fourier Analysis
- Spectral analysis
- Spectrogram representation
  - A spectrogram of a time signal is a two-dimensional representation that displays time on its horizontal axis and frequency on its vertical axis
  - A gray scale is typically used to indicate the energy at each point (t, f): "white" for low energy, "black" for high energy
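A minimal scipy sketch of computing such a spectrogram; the 25 ms Hamming window and 10 ms shift are typical but assumed values, and a pure tone stands in for a speech signal:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)                 # stand-in for a speech signal

# 25 ms Hamming window with 10 ms shift -> time on one axis, frequency on the other
f, tt, Sxx = spectrogram(x, fs=fs, window='hamming',
                         nperseg=int(0.025 * fs),
                         noverlap=int(0.015 * fs))
print(Sxx.shape)        # (n_frequencies, n_frames); large values = high energy
```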

18. Framing and Windowing
- Short-time analysis by framing: decompose the speech signal into a series of overlapping frames
  - Traditional methods for spectral evaluation are reliable only for a stationary signal (i.e., a signal whose statistical characteristics are invariant with respect to time)
    - The frame therefore has to be short enough that the behavior of the signal (its periodicity or noise-like appearance) can be assumed approximately constant, i.e., stationary, within it
- Terminology (illustrated numerically in the sketch below)
  - Frame duration (N): the length of time over which a set of parameters is valid, typically on the order of 20~30 ms
  - Frame period (L): the length of time between successive parameter calculations (target rate)
  - Frame rate: the number of frames computed per second
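A small numeric sketch of the terminology, assuming a 16 kHz sampling rate, a 25 ms frame duration, and a 10 ms frame period:

```python
fs = 16000                     # sampling frequency (assumed)
N = int(0.025 * fs)            # frame duration: 25 ms -> 400 samples
L = int(0.010 * fs)            # frame period:   10 ms -> 160 samples
frame_rate = fs / L            # frames per second -> 100

n_samples = 3 * fs             # e.g. a 3-second utterance
n_frames = 1 + (n_samples - N) // L
print(N, L, frame_rate, n_frames)   # 400 160 100.0 298
```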

19. Framing and Windowing (cont.)
- Given a speech signal x[n], we define the short-time signal x_m[n] of frame m as the product of x[n] and a window function w_m[n]:
  $x_m[n] = x[n]\, w_m[n]$, where $w_m[n] = w[m - n]$ and $w[n] = 0$ for $|n| > N/2$
  - In practice, the window length N is on the order of 20 to 30 ms
- The short-time Fourier representation for frame m is defined as
  $X_m(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x_m[n]\, e^{-j\omega n} = \sum_{n=-\infty}^{\infty} w[m - n]\, x[n]\, e^{-j\omega n}$
(Figure: successive frames m and m+1 of length N, shifted by the frame period L.)
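A minimal numpy sketch of the short-time spectrum of one frame, with an assumed Hamming window and the DTFT evaluated on the FFT grid (the frame is indexed by its starting sample rather than being centered, which is a common implementation convention):

```python
import numpy as np

fs, N, L = 16000, 400, 160                     # assumed rate, window length, shift
x = np.random.randn(fs)                        # stand-in for a speech signal
w = np.hamming(N)

m = 10                                         # frame index
x_m = x[m * L : m * L + N] * w                 # short-time signal of frame m
X_m = np.fft.rfft(x_m)                         # samples of X_m(e^{jw}) on the FFT grid
log_mag = 20 * np.log10(np.abs(X_m) + 1e-10)   # short-time log-magnitude spectrum
print(X_m.shape, log_mag.max())
```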

20. Framing and Windowing (cont.)
- Rectangular window: w[n] = 1 for 0 <= n <= N-1
  - Simply extracts the frame of the signal without further processing
  - Its frequency response has high side lobes
    - Main lobe: spreads the narrow-band power of the signal over a wider frequency range, and thus reduces the local frequency resolution
    - Side lobes: swap energy from different and distant frequencies of x_m[n], which is called spectral leakage
(Figure: window frequency responses; annotations mark a width of 2π/16 and note a main lobe twice as wide as the rectangular window's.)

21. Framing and Windowing (cont.)
- Windowing a periodic impulse train $x[n] = \sum_{k=-\infty}^{\infty} \delta[n - kP]$, whose spectrum is the line spectrum $X(e^{j\omega}) = \frac{2\pi}{P} \sum_{k} \delta(\omega - 2\pi k/P)$
(Figure: the impulse train is windowed by a Hamming window of length N; with a main-lobe width of 4π/N, the spectral lines at spacing 2π/P remain resolvable when N ≥ 2P.)

22. Framing and Windowing (cont.)
(Figure: log-magnitude frequency responses of several windows, with first side lobes roughly 17 dB, 31 dB, and 44 dB below their main lobes and main-lobe widths of about 2π/N, 4π/N, and 4π/N, respectively.)
- The rectangular window provides better time resolution than the Hamming window
- The Hamming window offers less spectral leakage than the rectangular window
- Rectangular windows are rarely used for speech analysis despite their better time resolution
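The main-lobe/side-lobe comparison can be reproduced approximately with a small numpy sketch; the window length and FFT size are arbitrary choices, and the side-lobe levels it reports (around -13 dB for the rectangular window and -43 dB for the Hamming window, relative to the main lobe) may differ somewhat from the figures quoted above depending on the exact window definition:

```python
import numpy as np

N, nfft = 64, 8192
for name, w in {'rectangular': np.ones(N), 'hamming': np.hamming(N)}.items():
    W = np.abs(np.fft.rfft(w, nfft))
    W_db = 20 * np.log10(W / W.max() + 1e-12)
    # the first local minimum marks the edge of the main lobe
    null = next(k for k in range(1, len(W_db) - 1)
                if W_db[k] < W_db[k - 1] and W_db[k] < W_db[k + 1])
    main_lobe_width = 2 * null * 2 * np.pi / nfft   # two-sided width in rad/sample
    peak_side_lobe = W_db[null:].max()              # dB relative to the main-lobe peak
    print(f"{name:11s} main-lobe width {main_lobe_width:.3f} rad, "
          f"peak side lobe {peak_side_lobe:.1f} dB")
```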

23. Framing and Windowing (cont.)
- We want to select a window that satisfies two goals:
  - The main lobe is as narrow as possible
  - The side lobes are as low as possible in magnitude
  - However, this is a trade-off!
- In practice, window lengths are on the order of 20 to 30 ms
  - This choice is a compromise between the stationarity assumption and the frequency resolution
