About the project

Rationale

Disciplines involving the production and perception of sound can be brought to life through the use of demonstrations. The field is full of so-called illusions, just-noticeable differences, thresholds and other effects. A small subset of these is routinely demonstrated on university courses in many subject areas, ranging from engineering to neurophysiology. Traditionally, most of these illustrations required some combination of expensive facilities, specialised software and dedicated tutors. More recently, CDs such as the Acoustical Society of America's Auditory Demonstrations CD and Al Bregman's Auditory Scene Analysis CD have eased the task. Now, through the use of relatively cheap computing resources and platform-independent software, it is practical to demonstrate virtually all published phenomena in speech and hearing via interactive tools. The benefits of this approach are:
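To give a flavour of the raw material behind many classic threshold and illusion demonstrations, the sketch below synthesizes a pure tone at a chosen frequency and level and writes it to a WAV file. It is written in Python rather than Matlab purely for illustration, is not part of the MAD toolkit, and the file name and parameter choices are arbitrary.

```python
import math
import struct
import wave

def pure_tone(freq_hz, dur_s, level_db=-20.0, rate_hz=16000):
    """Return float samples of a sine tone at the given level (dB re full scale)."""
    amp = 10.0 ** (level_db / 20.0)          # -20 dB -> peak amplitude 0.1
    n = int(dur_s * rate_hz)
    return [amp * math.sin(2 * math.pi * freq_hz * t / rate_hz) for t in range(n)]

def write_wav(path, samples, rate_hz=16000):
    """Write mono 16-bit PCM samples (floats in [-1, 1]) to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate_hz)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# Half a second of a 440 Hz tone at -20 dB: the kind of stimulus from which
# pitch, loudness and masking demonstrations are typically built up.
tone = pure_tone(440.0, 0.5)
write_wav("tone_440hz.wav", tone)
```

An interactive demonstration would wrap such a generator in a user interface with sliders for frequency and level, so that students can explore thresholds and just-noticeable differences for themselves.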

Aims

The Matlab Auditory Demonstrations project, which got underway in late 1997, aims to exploit the significant benefits afforded by interactivity to create user-centred demonstrations and associated worksheets in the following areas:

Within each area, the scope for demonstrations is very wide. Our initial focus has been on the areas in which we have most experience and on what we need locally for teaching: most of the effort to date has gone into auditory scene analysis, robust ASR and basic speech processing demonstrations. As the project gathers steam, student projects will contribute to the pool of demos; at present, several such projects are underway in the areas of binaural and pitch effects.

The perpetrators

This enterprise is currently being carried out largely by members of the Speech and Hearing Group in the Department of Computer Science at the University of Sheffield. If you would like to get involved (e.g. contribute ideas or code) please contact us.

As of August 1999, demos have been contributed by:

The paymasters

The MAD project is largely a spare-time activity. However, we gratefully acknowledge some financial support from the ELSNET Language Engineering Training Showcase for the production of 9 demonstrations in speech signal processing (contract 98/02).