Auditory Display

Investigator: Jon Barker
Supervisor: Martin Cooke

There is growing interest in the use of sound to "display" non-acoustic data. The basic premise of so-called "auditory display" is that the sophisticated temporal pattern processing faculties of human listeners might provide a means of extracting salient cues from multi-dimensional data sets. It is suggested that effective auditory display requires consideration of listeners' propensity to group and to segregate sound components. Some general principles for mapping multi-dimensional data sets onto sound are being investigated, taking into account knowledge of auditory scene analysis. As a demonstration of these ideas, a software simulation of traffic flow in an arbitrarily complex network is being developed, and auditory display is being employed to present listeners with an aural image of this domain. It is hoped that these techniques will provide an indication of emergent properties of the system, such as traffic congestion.
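
As a rough illustration of the mapping principle described above, the Python sketch below assigns each dimension of a hypothetical traffic observation to one acoustic parameter of a short tone. The specific data format (flow, occupancy, queue length), the chosen parameter ranges, and the flow-to-pitch / occupancy-to-loudness / queue-to-duration mapping are assumptions made purely for demonstration, not the mapping used in the project.

    """
    Minimal sonification sketch (illustrative only): each dimension of a
    hypothetical traffic observation is mapped to one acoustic parameter
    of a short sine tone, and the resulting tones are written to a WAV file.
    """
    import math
    import struct
    import wave

    SAMPLE_RATE = 44100

    def tone(freq_hz, amplitude, duration_s):
        """Synthesise one sine tone as a list of 16-bit samples."""
        n = int(SAMPLE_RATE * duration_s)
        return [int(amplitude * 32767 *
                    math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
                for i in range(n)]

    def sonify(observation):
        """Map one (flow, occupancy, queue) observation, each in 0..1, to a tone.

        flow      -> pitch between 220 Hz and 880 Hz (assumed mapping)
        occupancy -> loudness                         (assumed mapping)
        queue     -> duration between 0.1 s and 0.5 s (assumed mapping)
        """
        flow, occupancy, queue = observation
        freq = 220.0 + flow * (880.0 - 220.0)
        amp = 0.2 + 0.8 * occupancy
        dur = 0.1 + 0.4 * queue
        return tone(freq, amp, dur)

    def write_wav(samples, path="sonification.wav"):
        """Write mono 16-bit PCM samples to a WAV file."""
        with wave.open(path, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)           # 16-bit samples
            f.setframerate(SAMPLE_RATE)
            f.writeframes(struct.pack("<%dh" % len(samples), *samples))

    if __name__ == "__main__":
        # Three hypothetical observations from successive road segments:
        # light traffic, moderate traffic, congested traffic.
        data = [(0.8, 0.2, 0.1), (0.5, 0.5, 0.4), (0.1, 0.9, 0.9)]
        samples = []
        for obs in data:
            samples.extend(sonify(obs))
        write_wav(samples)

Played in sequence, the tones give a simple aural trace of the network: under this assumed mapping, congestion would be heard as low-pitched, loud, lengthening tones.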