Improved human performance
Tower controllers reduce workload with enhanced speech recognition
Advanced technologies, including automatic speech recognition, artificial intelligence and machine learning, offer potential enhancements to the controller working position (CWP). Building on work already performed during wave 1, SESAR 2020 is investigating control tower applications of these innovative technologies using advanced human-machine interface (HMI) methods of interaction.
The automatic speech recognition (ASR) system transforms audio signals into a sequence of words (speech-to-text transcription). The transcribed words are then transformed into air traffic control concepts (text-to-concept annotation). This process can be supported by modules of an assistant-based speech recognition (ABSR) system, such as command prediction and command extraction. The set of predicted commands is derived through machine learning algorithms from current and historical contextual knowledge (surveillance data, flight plans, meteorological data, routing information, earlier transcriptions and annotations, etc.) and helps the speech recognition engine choose the right recognition hypotheses more quickly. This increases the command recognition rate and minimises the command recognition errors of ABSR systems.
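The idea of using context-derived predicted commands to steer hypothesis selection can be sketched as follows. This is an illustrative simplification, not the SESAR implementation: the function names, the n-best input format and the simple additive scoring scheme are all assumptions made for the example.

```python
# Illustrative sketch: rescoring ASR n-best hypotheses with a set of
# predicted commands derived from context (surveillance data, flight
# plans, etc.). The scoring scheme here is an assumption, not the
# actual ABSR algorithm.

def rescore(nbest, predicted_commands):
    """Pick the hypothesis best supported by the predicted command set.

    nbest: list of (transcription, acoustic_score) pairs.
    predicted_commands: set of command strings the context deems plausible.
    """
    best, best_score = None, float("-inf")
    for text, acoustic_score in nbest:
        # Boost hypotheses that match a command the context predicts.
        context_bonus = 1.0 if text in predicted_commands else 0.0
        score = acoustic_score + context_bonus
        if score > best_score:
            best, best_score = text, score
    return best

# Two acoustically similar hypotheses; context knows runway 29 is not in use.
nbest = [
    ("DLH123 taxi to holding point runway 25", 0.62),
    ("DLH123 taxi to holding point runway 29", 0.65),
]
predicted = {"DLH123 taxi to holding point runway 25"}
print(rescore(nbest, predicted))  # the context-supported hypothesis wins
```

In a real ABSR system the predicted command set would itself be produced by machine learning models over the contextual data; here it is given as a literal set purely to show how context can override a slightly higher acoustic score.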
The command extraction module searches the transcription for relevant word sequences and extracts ATC concepts such as callsigns, command types, values, units, qualifiers and conditions. This annotation process is again based on machine learning algorithms and historical data. The extracted concepts are combined into recognised ATC commands and presented on an HMI for acceptance or correction by the controller.
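The text-to-concept annotation step can be illustrated with a toy, rule-based extractor. The real ABSR module uses machine learning rather than hand-written rules, and the patterns, field names and phraseology below are assumptions chosen only to show what "extracting callsigns, command types and values" means in practice.

```python
import re

# Illustrative, rule-based sketch of text-to-concept annotation.
# The actual ABSR module is ML-based; these patterns are assumptions.
CALLSIGN = re.compile(r"\b([A-Z]{3}\d{1,4}[A-Z]{0,2})\b")
COMMAND = re.compile(r"\b(taxi to|line up|cleared for takeoff|hold short)\b")
RUNWAY = re.compile(r"\brunway (\d{2}[LRC]?)\b")

def annotate(transcription):
    """Extract ATC concepts (callsign, command type, runway value) from text."""
    concepts = {}
    if m := CALLSIGN.search(transcription):
        concepts["callsign"] = m.group(1)
    if m := COMMAND.search(transcription):
        concepts["command"] = m.group(1)
    if m := RUNWAY.search(transcription):
        concepts["value"] = m.group(1)
        concepts["unit"] = "runway"
    return concepts

print(annotate("DLH123 line up runway 25L"))
```

The resulting dictionary of concepts corresponds to one recognised ATC command, which is what the HMI would then present to the controller for acceptance or correction.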