Incorporating Binaural Cues in a Computational Model of Auditory Scene Analysis
Funded by the European Commission's Training and Mobility of Researchers (TMR) Programme, Research Training Grant No. ERBFMBICT950311, from 1st July 1996 for one year.
Auditory Scene Analysis (ASA) promises to provide the front-end needed for robust automatic speech recognition devices. Current computational models are largely monaural; a more powerful approach would be to include binaural cues. Listeners are able to use cues such as timing and intensity differences between the two ears to locate sounds in space, and to group sounds that originate from the same spatial location. This project aims to model this process in a physiologically plausible manner.
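The two binaural cues mentioned above can be sketched computationally. The following is an illustrative sketch only (the function names, parameters, and synthetic data are our own, not part of any project code): the interaural time difference (ITD) is estimated as the lag that maximises the cross-correlation between the two ear signals, and the interaural level difference (ILD) as an energy ratio in decibels.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (ITD) in seconds as the
    lag that maximises the cross-correlation of the two ear signals.
    Positive ITD means the sound reached the left ear first."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return -lag / fs

def estimate_ild(left, right):
    """Interaural level difference (ILD) in dB (positive: left is louder)."""
    return 10 * np.log10(np.mean(left ** 2) / np.mean(right ** 2))

# Synthetic demo: a noise source whose wavefront reaches the right ear
# 10 samples (0.625 ms at 16 kHz) after, and quieter than, the left ear.
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(2048)
left = src
right = 0.8 * np.concatenate([np.zeros(10), src[:-10]])

print(estimate_itd(left, right, fs))  # 0.000625 s: the left ear leads
print(estimate_ild(left, right))      # positive: the left ear is louder
```

A physiologically plausible model would replace the direct cross-correlation with a bank of frequency channels (a cochlear filterbank) and compute such cues within each channel, but the underlying lag-and-level comparison is the same.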
Recently, the first database intended primarily for studying computational ASA was collected, analysed and released at Sheffield (see the ShATR home page). This corpus will provide an ideal data resource for the project.