(N.B. The PASCAL challenge is now officially closed and the results are available here. Instructions and data are being left online for the benefit of groups wishing to compare their algorithms with those that have been submitted.)

Evaluation

The evaluation tools are now available for download.

Documentation is included with the download but can also be viewed online here.

The tools include scripts that will allow you to:

  • train a baseline recognition system using the challenge training data,
  • transcribe utterances in the development test set — either before or after enhancement/separation — using a trained recognition system,
  • score a set of recognition transcriptions using a standardised scoring script.
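The scoring step above reports keyword recognition accuracy. The following is a hypothetical sketch of that idea, not the challenge's standardised scoring script: each reference utterance carries a list of keywords, and a keyword is counted correct when it appears in the corresponding hypothesis transcription.

```python
# Hypothetical keyword-accuracy scorer (illustration only; the official
# challenge scoring script may differ in alignment and tie-breaking).
def keyword_accuracy(references, hypotheses):
    """references: list of keyword lists, one per utterance.
    hypotheses: list of recognised word lists, one per utterance.
    Returns percentage of reference keywords found in the hypotheses."""
    correct = 0
    total = 0
    for ref_keywords, hyp_words in zip(references, hypotheses):
        hyp_set = set(hyp_words)
        for kw in ref_keywords:
            total += 1
            if kw in hyp_set:
                correct += 1
    return 100.0 * correct / total if total else 0.0

# Invented example utterances, two keywords each:
refs = [["white", "four"], ["blue", "nine"]]
hyps = [["place", "white", "at", "g", "four", "now"],
        ["lay", "green", "at", "b", "nine", "again"]]
print(round(keyword_accuracy(refs, hyps), 2))  # 75.0
```

Real scoring tools (e.g. NIST sclite) align reference and hypothesis word sequences before counting; the set-membership test here is a simplification.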

The baseline recogniser employs Mel-frequency cepstral coefficient (MFCC) features with cepstral mean normalisation (CMN). You should obtain precisely the following keyword recognition accuracies (%) for the CHiME data sets:


SNR (dB)          -6     -3      0      3      6      9
Development set  31.08  36.75  49.08  64.00  73.83  83.08
Final test set   30.33  35.42  49.50  62.92  75.00  82.42

These are the baseline 'do nothing' results.
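The cepstral mean normalisation used by the baseline front end simply subtracts the per-utterance mean of each cepstral coefficient, which removes stationary convolutional channel effects. A minimal sketch, assuming an MFCC matrix has already been extracted (the feature values below are invented):

```python
import numpy as np

def cepstral_mean_normalise(mfcc):
    """Subtract the per-utterance mean of each cepstral coefficient.
    mfcc: array of shape (num_frames, num_coefficients)."""
    return mfcc - mfcc.mean(axis=0, keepdims=True)

# Fake MFCC matrix standing in for a real utterance's features:
rng = np.random.default_rng(0)
feats = rng.normal(loc=3.0, scale=1.0, size=(100, 13))
normed = cepstral_mean_normalise(feats)
print(np.allclose(normed.mean(axis=0), 0.0))  # True
```

After normalisation every coefficient has zero mean over the utterance; the baseline recogniser's exact feature configuration (frame rate, number of coefficients, deltas) is documented with the download.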

While extensive testing has been performed, we would appreciate rapid feedback on any problems you find. Please report problems to Ning Ma (n.ma@dcs.shef.ac.uk).