Combining a priori knowledge and non-acoustic signals for single-channel auditory speech enhancement
Hearing-impaired people usually need a 5-15 dB better signal-to-noise ratio than normal-hearing listeners to achieve the same speech understanding. This apparent reduction in signal quality is caused by reduced resolution in auditory signal analysis and information transmission, and it remains even when hearing-aid amplification makes both speech and noise fully audible. Single-channel acoustic speech enhancement methods can substantially reduce subjective noise loudness and improve listening comfort, but they have not yet demonstrated improved intelligibility for speech in speech-like background noise. The objective of this project is to improve intelligibility by allowing a noise reduction algorithm to also use synchronous non-acoustic input to control the acoustic signal processing.
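To make the class of methods concrete: a minimal sketch of one classical single-channel enhancement step, magnitude-domain spectral subtraction on a single frame, is given below. This is an illustration of the general technique only, not the project's algorithm; the function name, the scalar noise estimate, and the spectral-floor parameter are all assumptions made for the example.

```python
import numpy as np

def spectral_subtraction(noisy_frame, noise_mag_est, floor=0.05):
    """Illustrative single-frame spectral subtraction.

    Subtracts a noise magnitude estimate from the noisy magnitude
    spectrum and resynthesizes with the noisy phase. The spectral
    floor prevents negative magnitudes (a common cause of musical
    noise artifacts).
    """
    spec = np.fft.rfft(noisy_frame)
    mag = np.abs(spec)
    # Keep at least `floor` of the original magnitude in each bin.
    clean_mag = np.maximum(mag - noise_mag_est, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)),
                        n=len(noisy_frame))

# Hypothetical usage on one 256-sample frame of a noisy sinusoid:
rng = np.random.default_rng(0)
t = np.arange(256)
frame = np.sin(2 * np.pi * t / 32) + 0.1 * rng.standard_normal(256)
enhanced = spectral_subtraction(frame, noise_mag_est=5.0)
```

In practice such processing runs frame by frame with overlap-add, and the noise estimate is tracked adaptively rather than fixed; model-based approaches such as the codebook and HMM methods in the publications below replace the crude subtraction rule with statistical priors on speech and noise.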
Srinivasan, S., Samuelsson, J., Kleijn, W.B. (2007): "Codebook-based Bayesian Speech Enhancement for Nonstationary Environments", IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 2, pp. 441-452.
Zhao, D.Y., Kleijn, W.B. (2007): "HMM-based gain modelling for enhancement of speech in noise", IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 3, pp. 882-892.
Zhao, D.Y. (2007): Model Based Speech Enhancement and Coding. Ph.D. thesis, KTH Sound and Image Processing, Stockholm.