A Spoken Query Interface (SQI) has been developed for the news retrieval system.
The user clicks the Start listening button and speaks a query.
The system records the speech and passes it to the Abbot recogniser.
Abbot produces a hypothesis of what was spoken, along with a lattice containing a set of alternative likely words. If the initial hypothesis contains errors, it may be possible to recover from them by parsing the lattice.
A natural language parser is used to find the best path through the lattice subject to linguistic query constraints. Portions of the new best word-sequence hypothesis are tagged as keywords (bottom-left corner of the display).
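The core of the lattice search can be sketched as a dynamic-programming best-path computation over a directed acyclic graph. The sketch below is illustrative only: the edge representation and scores are assumptions, and it omits the linguistic constraints that the actual parser applies.

```python
from collections import defaultdict

def best_path(edges, start, end):
    """Highest-scoring word sequence through a word lattice (a DAG).
    edges: list of (src_node, dst_node, word, score) tuples.
    Returns (total_score, word_list), or None if end is unreachable."""
    graph = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for s, d, w, sc in edges:
        graph[s].append((d, w, sc))
        indeg[d] += 1
        nodes.update((s, d))
    # Topological order (Kahn's algorithm), so each node is settled
    # before its successors are relaxed.
    order = []
    queue = [n for n in nodes if indeg[n] == 0]
    while queue:
        n = queue.pop()
        order.append(n)
        for d, _, _ in graph[n]:
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    best = {start: (0.0, [])}
    for n in order:
        if n not in best:
            continue
        score, words = best[n]
        for d, w, sc in graph[n]:
            cand = (score + sc, words + [w])
            if d not in best or cand[0] > best[d][0]:
                best[d] = cand
    return best.get(end)

# Toy lattice with two competing readings: "bbc news" vs. "b c news".
edges = [
    (0, 1, "bbc", 0.6), (0, 2, "b", 0.2), (2, 1, "c", 0.3),
    (1, 3, "news", 0.9),
]
print(best_path(edges, 0, 3))  # (1.5, ['bbc', 'news'])
```

In a real system the edge scores would combine acoustic and language-model probabilities, and the parser would prune paths that violate the query grammar rather than relying on scores alone.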
These keywords are then submitted to the thislIR information retrieval system, and retrieval is performed in a similar way to the BBC news retrieval demonstrator.
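A minimal sketch of keyword-based retrieval is shown below using a simple tf-idf weighting. This is an assumption for illustration, not the actual weighting scheme used by the thislIR system.

```python
import math
from collections import Counter

def retrieve(keywords, documents):
    """Rank documents by a simple tf-idf score against the query
    keywords; returns indices of matching documents, best first."""
    n = len(documents)
    tokenised = [doc.lower().split() for doc in documents]
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for toks in tokenised:
        df.update(set(toks))
    scores = []
    for i, toks in enumerate(tokenised):
        tf = Counter(toks)
        score = sum(tf[k] * math.log(n / df[k])
                    for k in keywords if df[k])
        scores.append((score, i))
    return [i for s, i in sorted(scores, reverse=True) if s > 0]

docs = ["election results announced today",
        "football results from the weekend",
        "weather forecast for tomorrow"]
print(retrieve(["election", "results"], docs))  # [0, 1]
```

Documents sharing more, and rarer, query terms rank higher, which is the basic behaviour the retrieval step relies on.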
Retrieved documents are summarised in the middle panel on the right. Clicking one of the summary lines displays the full recogniser transcription for that programme in the lower panel, scrolled to the relevant section.
Clicking anywhere in the transcription begins playback from that point in the original recording, and continues until the Stop playback button is pressed.
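Mapping a click in the transcription back to a position in the recording can be done with per-word timestamps from the recogniser output. The data layout below (character offset, start time) is a hypothetical one chosen for the sketch.

```python
import bisect

def playback_time(click_offset, word_index):
    """Given a character offset of a click in the transcription and a
    sorted list of (char_offset, start_time_sec) pairs, one per word,
    return the start time of the word containing the click."""
    offsets = [o for o, _ in word_index]
    # Rightmost word whose offset is <= the click position.
    i = bisect.bisect_right(offsets, click_offset) - 1
    return word_index[max(i, 0)][1]

# "the quick brown ..." with word start times in seconds.
index = [(0, 12.0), (4, 12.4), (10, 13.1)]
print(playback_time(6, index))  # 12.4
```

Playback would then start at the returned time and run until the Stop playback button is pressed.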
A new query can be spoken at any time, whereupon the process starts all over again.