
Learning to recognise speech in noisy environments

Investigator: Jonathan Laidler
Supervisors: Martin Cooke and Neil Lawrence

Speech communication rarely takes place in 'clean' acoustic conditions. Additional sound energy reaches our ears from other sources or from reverberation, and the signal is frequently distorted by the communication channel itself. Listeners with normal hearing are remarkably adept at extracting meaning in such environments, but noise creates huge difficulties both for people with moderate hearing impairment and for automatic speech recognition technology. Research on noise-robustness at Sheffield and elsewhere has focussed on processes and algorithms that both listeners and computational devices might use to extract and identify the target speech from a noisy mixture. However, an important aspect of the process has hitherto been ignored: how do listeners manage to develop representations of speech at all, given that virtually their entire exposure to such signals takes place against everyday noise backgrounds?

Recently, a new model of speech perception has been developed which argues that listeners recognise speech by exploiting 'glimpses': regions of the signal in which the target speech remains relatively uncorrupted by noise. The model successfully explains many aspects of perception in noise. The aim of the current proposal is to apply the glimpsing notion to the problem of how infants construct a model of speech in the presence of noise. The work will draw on techniques from the statistical learning community and on previous work at Sheffield on missing-data methods for robust speech recognition. The research will shed light on the origin of robustness in the auditory system, and will be evaluated in the context of applications in both hearing prostheses and automatic speech recognition, using internationally recognised corpora.
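
To make the glimpsing idea concrete, the short Python sketch below computes a binary "glimpse" mask over a time-frequency representation: cells where the local speech-to-noise ratio exceeds a threshold are treated as reliable, and the rest as missing, in the spirit of missing-data recognition. This is only an illustration under assumed settings (synthetic stand-in signals, a 3 dB local SNR criterion, arbitrary STFT parameters); it is not the specific model or parameterisation used in the proposal.

import numpy as np
from scipy.signal import stft

fs = 16000                       # sample rate (Hz)
rng = np.random.default_rng(0)

# Stand-in signals: a slowly modulated tone as "speech" and white noise as the masker.
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
noise = 0.3 * rng.standard_normal(fs)
mixture = speech + noise

# Short-time Fourier transforms of the premixed components and the mixture.
_, _, S = stft(speech, fs=fs, nperseg=400, noverlap=240)
_, _, N = stft(noise, fs=fs, nperseg=400, noverlap=240)
_, _, M = stft(mixture, fs=fs, nperseg=400, noverlap=240)

# Local SNR in each time-frequency cell; cells above the threshold count as glimpses.
local_snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
glimpse_mask = local_snr_db > 3.0    # assumed 3 dB local SNR criterion

# In a missing-data recogniser, only the glimpsed cells of the noisy mixture
# would be trusted by the acoustic model; the rest are marked as missing.
reliable_energy = np.where(glimpse_mask, np.abs(M) ** 2, np.nan)

print(f"Proportion of glimpsed cells: {glimpse_mask.mean():.2f}")

In practice the premixed speech and noise are of course unavailable, so the mask must be estimated from the mixture itself; the sketch simply shows what a glimpse mask represents once it has been obtained.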