  1. An Efferent-inspired Auditory Model Front-end for Speech Recognition
     Chia-ying Lee, James Glass and Oded Ghitza*
     MIT Computer Science and Artificial Intelligence Lab, Cambridge, MA, USA
     *Boston University Hearing Research Lab, Boston, MA, USA

  2. Motivation
     • Human vs. automatic speech recognizers (ASRs)
       - Humans are particularly good at dealing with previously unseen or dynamic noise.
     • Mounting evidence of the role of efferent feedback in mammalian auditory systems
       - The operating point of the cochlea is regulated by background noise
       - Results in stable internal representations
     • Explore the potential use of a feedback mechanism for ASR
       - Use a MOC efferent-inspired auditory model as an ASR front-end

  3. An Efferent-inspired Auditory Model
     • Messing et al., 2009
     [Block diagram: Middle Ear → Cochlea → Inner Hair Cell → Dynamic Range Window, with feedback gain G]

  4. Model of Ascending Pathway
     [Block diagram: Middle Ear → Cochlea → Inner Hair Cell → Dynamic Range Window]
     • Middle Ear
       - Modeled by a high-pass filter (sketched below)
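
A minimal sketch of this stage, assuming a Butterworth design; the slide says only "high-pass filter", so the cutoff frequency and filter order below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, lfilter

def middle_ear(x, fs, cutoff_hz=700.0, order=2):
    """Middle-ear stage sketched as a high-pass filter.

    The cutoff frequency and filter order are assumed values for
    illustration; the slide does not specify them.
    """
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="high")
    return lfilter(b, a, x)
```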

  5. Model of Ascending Pathway
     [Block diagram: Middle Ear → Non-linear Cochlea → Inner Hair Cell → Dynamic Range Window]
     • J. Goldstein, 1990
     • Multi-Band Path Non-Linear model (MBPNL)

  6. MBPNL Model
     • Models cochlear nonlinearity
     • Example for center frequency = 1820 Hz
       - Filter characteristics change instantaneously as a function of input signal strength
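
The MBPNL model itself (interacting filter paths with a compressive nonlinearity) is too involved for a short sketch, but the idea the slide highlights, tuning that depends on input strength, can be caricatured as below. This is not the MBPNL: it simply blends a sharply tuned and a broadly tuned band-pass filter around the 1820 Hz channel according to short-time level, and every bandwidth and constant here is an assumption.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def level_dependent_channel(x, fs, cf=1820.0, frame=160):
    """Toy caricature of level-dependent cochlear filtering (NOT the
    MBPNL model): per frame, blend a sharply tuned filter (dominant
    at low levels) with a broadly tuned one (dominant at high levels).
    The MBPNL changes its characteristics instantaneously; the
    frame-wise blend here is a further simplification.

    x is assumed to be a 1-D numpy array.
    """
    nyq = fs / 2.0
    sharp = butter(2, [0.9 * cf / nyq, 1.1 * cf / nyq], btype="band", output="sos")
    broad = butter(2, [0.6 * cf / nyq, 1.4 * cf / nyq], btype="band", output="sos")
    y_sharp, y_broad = sosfilt(sharp, x), sosfilt(broad, x)
    y = np.empty(len(x))
    for i in range(0, len(x), frame):
        seg = slice(i, min(i + frame, len(x)))
        rms = np.sqrt(np.mean(x[seg] ** 2) + 1e-12)
        # Map frame level (dB) to a 0..1 blend weight: 0 = quiet, 1 = loud.
        w = np.clip((20.0 * np.log10(rms) + 60.0) / 60.0, 0.0, 1.0)
        y[seg] = (1.0 - w) * y_sharp[seg] + w * y_broad[seg]
    return y
```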

  7. Model of Ascending Pathway
     [Block diagram: Middle Ear → Cochlea → Inner Hair Cell → Dynamic Range Window]
     • Inner Hair Cell
       - Generic MIT model
       - A half-wave rectifier followed by a low-pass filter (sketched below)
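
A minimal sketch of the rectify-then-smooth structure named on the slide; the 1 kHz cutoff and second-order filter are assumptions, not the generic MIT model's actual parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

def inner_hair_cell(x, fs, cutoff_hz=1000.0):
    """Inner-hair-cell stage as described on the slide: half-wave
    rectification followed by a low-pass filter. The cutoff is an
    assumed, illustrative value.
    """
    rectified = np.maximum(x, 0.0)  # half-wave rectifier
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    return lfilter(b, a, rectified)
```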

  8. Model of Ascending Pathway
     [Block diagram: Middle Ear → Cochlea → Inner Hair Cell → Dynamic Range Window]
     • Dynamic Range Window (DRW)
       - A hard limiter with upper and lower bounds, representing the dynamic range of auditory-nerve firing

  9. Dynamic Range Window
     [Plot: DRW input-output curve, flat below the lower bound and above the upper bound]
     • No firing for signals below the lower bound
     • Saturation in firing rate for signals above the upper bound
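
Because the DRW is a hard limiter, it reduces to a clip operation; the bound values below are placeholders rather than the paper's settings.

```python
import numpy as np

def dynamic_range_window(x, lower=1e-4, upper=1e-1):
    """Dynamic Range Window: a hard limiter.

    Outputs below `lower` are pinned to the lower bound (no firing)
    and outputs above `upper` saturate, mimicking the limited dynamic
    range of auditory-nerve firing. Bound values are placeholders.
    """
    return np.clip(x, lower, upper)
```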

  10. An Efferent-inspired Auditory Model
      [Block diagram: n(t) → Middle Ear → Cochlea → Inner Hair Cell → Dynamic Range Window, with feedback gain G]
      • G is adjusted based on the background noise such that the output of the DRW is at the “epsilon level” (a sketch of this adaptation follows).
        - G impacts the filter response in the MBPNL cochlear model.
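
A minimal sketch of the feedback idea, under strong assumptions: `ascending_path` is a stand-in for the open-loop model, G is treated as a simple scalar gain, and the update rule and target level are invented for illustration. In the actual model, G alters the MBPNL filter response rather than merely scaling the input.

```python
import numpy as np

def adapt_gain(noise, ascending_path, target_eps=1.1e-4,
               g_init=1.0, steps=50, rate=0.2):
    """Tune the feedback gain G on a noise-only segment so that the
    mean DRW output settles at a small "epsilon level" just above the
    DRW lower bound. Update rule and constants are assumptions.

    `ascending_path(x, g)` should run the open-loop model
    (middle ear -> cochlea -> IHC -> DRW) with gain g applied.
    """
    g = g_init
    for _ in range(steps):
        out = ascending_path(noise, g)  # open-loop model output
        level = float(np.mean(out))
        # Multiplicative update pushing the output toward the target.
        g *= (target_eps / (level + 1e-12)) ** rate
    return g
```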

  11. An Efferent-inspired Auditory Model
      [Block diagram: s(t) + n(t) → Middle Ear → Cochlea → Inner Hair Cell → Dynamic Range Window, with feedback gain G]
      • The noisy speech signal is processed by the tuned auditory model.

  12. Definitions
      • Open-loop model
        - The model of the ascending pathway
      [Block diagram: Middle Ear → Non-linear Cochlea → Inner Hair Cell → Dynamic Range Window]

  13. Definitions
      • Closed-loop model
        - The ascending-pathway model with the efferent-inspired feedback
      [Block diagram: Middle Ear → Non-linear Cochlea → Inner Hair Cell → Dynamic Range Window, with feedback gain G]

  14. Visual Illustration
      • Rows represent speech in different types of noise at 10 dB SNR
      [Figure: spectrogram-style panels comparing the short-time Fourier transform with the closed-loop model output]

  15. A Closed-loop Front-end for ASR
      [Block diagram: s(t) + n(t) → Middle Ear → Cochlea → Inner Hair Cell → Dynamic Range Window, with feedback gain G]
      • Need to extract features that can be processed by speech recognizers

  16. A Closed-loop Front-end for ASR
      [Block diagram: the closed-loop model (with gain G) produces R(n), which is passed through framing, log, and DCT stages, with DC-offset handling]
      • The feature generation method follows the standard MFCC extraction process (sketched below).
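
A sketch of that feature path, under assumptions about shapes and sizes: given the closed-loop model's per-channel outputs, frame them, take logs, and apply a DCT across channels, analogous to the filterbank-to-MFCC step. Frame length, hop, and cepstral order below are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.fftpack import dct

def closed_loop_features(channels, frame_len=200, hop=80, n_ceps=13):
    """MFCC-style features from closed-loop model outputs.

    `channels` is assumed to be an (n_channels, n_samples) array of
    DRW outputs, one row per cochlear channel; that name and shape
    are assumptions made for this sketch.
    """
    n_ch, n_samp = channels.shape
    feats = []
    for start in range(0, n_samp - frame_len + 1, hop):
        frame = channels[:, start:start + frame_len]
        # Log of per-channel frame energy (DRW outputs are positive).
        log_energy = np.log(np.mean(frame, axis=1) + 1e-12)
        # DCT across channels, keeping the first n_ceps coefficients.
        feats.append(dct(log_energy, type=2, norm="ortho")[:n_ceps])
    return np.array(feats)
```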

  17. Experimental Setup
      • Corpus creation (noisy speech data synthesis)
      • Feature extraction methods
      • Recognizer training and testing
      • Experimental results

  18. Corpus Creation
      • Noise signals
        - Stationary noise: speech-shaped, white, pink
        - Non-stationary Aurora2 noise: train, subway
      • Speech signals
        - Aurora2 digits (TIDigits)
      • Noisy speech synthesis (sketched below)
        - Noise signals are fixed at 70 dB SPL
        - Speech signals are adjusted to create 5 to 20 dB SNRs
        - 300 ms of noise precedes each speech signal as an adaptation period
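
A sketch of the mixing recipe, with absolute SPL calibration simplified away (the paper fixes noise at 70 dB SPL; here the noise is left at its given level and only the speech is scaled, which preserves the SNR but not the absolute level).

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db, fs=8000, adapt_ms=300):
    """Synthesize a noisy utterance: scale the speech to the target
    SNR against a fixed-level noise, and prepend ~300 ms of noise
    before the speech onset as an adaptation period.

    Assumes `noise` is at least `adapt_ms` longer than `speech`.
    """
    n_adapt = int(fs * adapt_ms / 1000)
    noise = noise[: n_adapt + len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Choose a speech scale so that 10*log10(P_speech / P_noise) = snr_db.
    scale = np.sqrt(p_noise * 10 ** (snr_db / 10.0) / (p_speech + 1e-12))
    mixed = noise.copy().astype(float)
    mixed[n_adapt:] += scale * speech
    return mixed
```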

  19. Feature Extraction Methods
      • Three feature extraction methods
        - MFCC baseline with a conventional normalization method
        - The open-loop auditory model (in the paper)
        - The closed-loop auditory model

  20. Recognizer Training and Testing
      • The standard Aurora2 HMM-based recognizer was used
      • Jackknifing experiments with mismatched training and test conditions
        - Five noise types (N1-N5): train on data from four noise types, test on the held-out noise type
        - Training and test data each span 20, 15, 10, and 5 dB SNR
        - 6672 training utterances, 4004 test utterances

  21. Experimental Results

          Accuracy (%)    MFCC baseline    Closed-loop model
          Average         86               92
          STD             8.6              4.7

      • The closed-loop model reduced the word error rate by 43% relative to the MFCC baseline, and reduced the variation across mismatched conditions by 45%.
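
Those relative figures follow directly from the table: 86% vs. 92% accuracy corresponds to 14% vs. 8% error, a relative error reduction of (14 - 8) / 14 ≈ 43%; and the standard deviation drops from 8.6 to 4.7, a reduction of (8.6 - 4.7) / 8.6 ≈ 45%.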

  22. Experimental Results

          MFCC baseline, Acc (%)
          dB SNR   speech-shaped   White   Train   Pink   Subway
          20       95              92      91      88     94
          15       94              90      89      84     93
          10       91              85      85      76     92
          5        81              73      76      62     84
          Avg      90              85      85      77     91

          Closed-loop model, Acc (%)
          dB SNR   speech-shaped   White   Train   Pink   Subway
          20       96              94      95      93     96
          15       96              93      96      92     95
          10       94              91      95      89     93
          5        83              83      91      78     84
          Avg      92              90      94      88     92

      • The closed-loop model performed better than the baseline across all mismatched training and test conditions.

  23. Conclusions
      • Key ideas
        - Efferent-inspired feedback regulates the operating point of the front-end
        - Results in a stable representation, a desired property for ASR
      • Experimental validation
        - Digit recognition in noise under mismatched conditions, with multiple noise types and SNRs
        - The closed-loop model outperformed the baseline across all mismatched training and test conditions
        - The results indicate that incorporating feedback in the front-end shows promise for generating robust speech features
