Instructions

In order to reach a broad audience, we have tried to avoid setting rules that might artificially disadvantage one research community over another. However, to keep the task as close to an application scenario as possible, and to allow systems to be broadly comparable, there are some guidelines that we expect participants to follow.

Which information can I use?

You are allowed to use the fact that the four classes of acoustic environments (BUS, CAF, PED, STR) are shared across datasets.

You are also allowed to use the environment and speaker labels in the training data, and the speaker labels in the development and test data.

You are encouraged to use the embedded training and development data and the corresponding noise-only recordings in any way that may help, e.g., to learn models of the acoustic environments and use them to recognize the test environment and/or to enhance the signal. The embedded test data may also be used, but only within the immediate acoustic context of each test utterance, that is, the 5 s preceding the utterance. Note that these 5 s may also contain speech, which is not always annotated.
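
For illustration, here is a minimal Python sketch of how the 5 s of context preceding a test utterance could be extracted from an embedded recording. It is not part of the baseline; the file path is hypothetical, and the utterance start time is assumed to be available from the annotations.

    # Minimal sketch (not part of the baseline): extract the 5 s of acoustic
    # context preceding a test utterance from an embedded recording.
    import soundfile as sf

    def extract_context(embedded_wav, utt_start, context_dur=5.0):
        """Return the context_dur seconds preceding utt_start (in seconds).
        The context may be shorter at the very beginning of the recording."""
        signal, fs = sf.read(embedded_wav)
        begin = max(0, int((utt_start - context_dur) * fs))
        end = int(utt_start * fs)
        return signal[begin:end], fs

    # Hypothetical usage: the path and start time are illustrative only.
    context, fs = extract_context("embedded/test_recording.wav", utt_start=132.7)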

Which information shall I not use?

The systems should not exploit the following information in order to transcribe a given test utterance:

- the environment labels of the development and test data,
- the embedded test data beyond the 5 s of immediate acoustic context defined above.

Automatic identification of the environment of the test utterance and the immediate acoustic context is allowed, though. The rationale is that a commercial ASR system to be deployed on a tablet should work in any environment just after the tablet has been switched on.

Similarly, manual refinement of the speech start and end times or manual annotation of the unannotated speech data are not allowed, but automatic refinement and automatic detection of the speech data in the 5 s context are allowed.
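
For example, an automatic detector can be as simple as the following energy-based sketch. It is not part of the baseline, the frame sizes and threshold are arbitrary assumptions, and any proper voice activity detector may be substituted.

    import numpy as np

    def detect_speech(context, fs, frame_dur=0.025, hop_dur=0.010,
                      threshold_db=-40.0):
        """Flag a frame as speech when its log energy exceeds the maximum
        frame energy plus threshold_db (by default, within 40 dB of the
        loudest frame). Returns one boolean per frame."""
        frame, hop = int(frame_dur * fs), int(hop_dur * fs)
        energies = [10.0 * np.log10(np.sum(context[i:i + frame] ** 2) + 1e-12)
                    for i in range(0, len(context) - frame + 1, hop)]
        energies = np.array(energies)
        if energies.size == 0:
            return np.zeros(0, dtype=bool)
        return energies > energies.max() + threshold_db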

All parameters should be tuned on the training set or the development set. The system should not use different tuning parameters for different noisy environments or data types (real or simulated). For example, the baseline script tunes the system with a single language model weight, optimized on the average WER over all recognition results in the development set, including all noisy environments and data types.
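
As an illustration, a single language model weight could be tuned along the following lines, where score_fn is a hypothetical stand-in for the rescoring and scoring step of the baseline; it must return error and word counts pooled over all environments and both data types.

    def tune_lm_weight(score_fn, weights=range(7, 21)):
        """Return the LM weight minimising the pooled development-set WER.
        score_fn(w) -> (errors, words), pooled over ALL environments and
        both data types (real and simulated), as required above."""
        best_weight, best_wer = None, float("inf")
        for w in weights:
            errors, words = score_fn(w)
            wer = 100.0 * errors / words
            if wer < best_wer:
                best_weight, best_wer = w, wer
        return best_weight, best_wer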

Which results should I report?

For every tested system, you should report 4 WERs (%), namely the average WER over the four environments on:

- the real development set,
- the simulated development set,
- the real test set,
- the simulated test set.

For instance, here are the WERs (%) achieved by the baseline GMM and DNN models (the WERs on test data will be available later). All these results were obtained by training on noisy multicondition data (channel 5) and testing on data enhanced by BeamformIt, for a single run on a single machine. If you run the baseline yourself, you will probably obtain slightly different results due to random initialisation and machine-specific issues.

    Track   Model       Development set
                        Real      Simulated
    1ch     GMM         22.16     24.48
            DNN+sMBR    14.67     15.67
            DNN+RNNLM   11.57     12.98
    2ch     GMM         16.22     19.15
            DNN+sMBR    10.90     12.36
            DNN+RNNLM    8.23      9.50
    6ch     GMM         13.03     14.30
            DNN+sMBR     8.14      9.07
            DNN+RNNLM    5.76      6.77

Such results will make it possible to assess whether simulated data are a reliable predictor of ASR performance on real data, for development and/or for test. This currently appears to be approximately the case. You are encouraged to improve the simulation baseline so that the match becomes even closer.
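
As a minimal check (not required by the challenge), the real and simulated WERs of the nine baseline configurations above can be correlated:

    import numpy as np

    # Baseline development-set WERs from the table above, in row order
    # (1ch/2ch/6ch x GMM/DNN+sMBR/DNN+RNNLM).
    real = np.array([22.16, 14.67, 11.57, 16.22, 10.90, 8.23,
                     13.03, 8.14, 5.76])
    sim = np.array([24.48, 15.67, 12.98, 19.15, 12.36, 9.50,
                    14.30, 9.07, 6.77])

    print(f"Pearson correlation: {np.corrcoef(real, sim)[0, 1]:.3f}")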

In the end, only the results of the best system on the real test set will be taken into account in the final WER ranking of all systems. The best system is taken to be the one that performs best on the real development set.

For that system, you should report 16 WERs: one per environment for each of the four sets (real and simulated, development and test); a sketch of this aggregation appears after the table below. The participants should also provide the recognized transcriptions for all the sets, with time alignment information when applicable (if the format of the transcriptions is not standard, it must be described).

For instance, here are the WERs (%) achieved by the baseline DNN+RNNLM system.

    Track   Environment   Development set
                          Real      Simulated
    1ch     BUS           15.13     11.90
            CAF           11.81     15.90
            PED            7.42      9.94
            STR           11.90     14.19
    2ch     BUS           10.90      8.19
            CAF            7.96     12.15
            PED            5.22      7.12
            STR            8.82     10.55
    6ch     BUS            7.39      6.02
            CAF            5.77      8.10
            PED            3.72      5.49
            STR            6.18      7.48
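
As referenced above, the per-environment and averaged WERs can be aggregated from per-utterance error counts along the following lines. The input format is a hypothetical assumption, and the averaged WER is computed here as the unweighted mean over the four environments (pooling all utterances is an alternative convention that gives nearly identical numbers on these balanced sets).

    from collections import defaultdict

    def aggregate_wers(scores):
        """scores: iterable of (dataset, datatype, environment, errors, words)
        tuples, e.g. ("dev", "real", "BUS", 3, 17) -- hypothetical format.
        Returns the 16 per-environment WERs and the 4 averaged WERs."""
        errs, words = defaultdict(int), defaultdict(int)
        for dataset, datatype, env, e, w in scores:
            errs[(dataset, datatype, env)] += e
            words[(dataset, datatype, env)] += w
        per_env = {k: 100.0 * errs[k] / words[k] for k in errs}
        averaged = {}
        for dataset in ("dev", "test"):
            for datatype in ("real", "simulated"):
                wers = [per_env[(dataset, datatype, env)]
                        for env in ("BUS", "CAF", "PED", "STR")
                        if (dataset, datatype, env) in per_env]
                if wers:
                    averaged[(dataset, datatype)] = sum(wers) / len(wers)
        return per_env, averaged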

Can I use different features, a different recogniser or more data?

You are entirely free in the development of your system, from the front end to the back end and beyond, and you may even use extra data, including clean data, additional noisy data created by running the provided simulation baseline (or an improved version thereof), or any other data.
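
As a generic illustration of how additional noisy data can be created (this is not the official simulation baseline, which is more elaborate), clean speech can be mixed with a noise excerpt at a chosen signal-to-noise ratio:

    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Additively mix a noise excerpt into a clean utterance at a target
        SNR (in dB). Assumes noise is at least as long as speech."""
        noise = noise[:len(speech)]
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2) + 1e-12
        gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
        return speech + gain * noise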

However, you should provide enough information, results, and comparisons that one can understand where the performance gains obtained by your system come from. For example, if your system is made of multiple blocks, we encourage you to separately evaluate and report the influence of each block on performance.

Specifically:

- The interface between the front end and the back end is taken to be either at the signal level or at the feature level, depending on whether your front end operates in the signal or the feature domain.

- Only the results obtained using the official training and development sets (including possible modifications of the acoustic simulation baseline as specified above) and one of the baseline language models will be taken into account in the final WER ranking of all systems.