Validation of speech recognition to relieve tower controllers

posted in: PJ05 News | 0

SESAR 3 JU member DLR, together with air traffic controllers from AustroControl and Oro Navigacija, recently validated an assistant-based speech recognition system to support controllers working in future multiple remote tower centres. The validations are part of the project “Digital Technologies for Tower” (PJ05-W2-97 DTT, Solution 97.2), which is developing new technologies for airport towers, particularly in the areas of remotely controlling multiple airports and innovative human-machine interfaces.

Flight strips are an essential tool for tower air traffic controllers. DLR’s prototype electronic flight strip display shows the most relevant information for every flight at each of the three airports in separate bays.

The assistant-based speech recognition system first transcribes the controller’s speech into a sequence of words. Relevant air traffic control (ATC) concepts, such as the call sign, command type and command values, are then automatically extracted from this word sequence. These concepts are displayed and highlighted in the flight strip system, so the controller no longer needs to enter the information manually with an electronic pen. The aim is to keep controllers’ workload and situational awareness at an optimal level at all times.
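The extraction step described above can be sketched in a few lines of code. This is a minimal illustration only, not DLR’s actual implementation: the airline telephony map, command vocabulary and output format are assumptions made for the example.

```python
# Illustrative sketch: mapping a recognized word sequence to ATC
# concepts (call sign, command type, value). The vocabularies below
# are assumed examples, not the project's real models.

DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
          "five": "5", "six": "6", "seven": "7", "eight": "8",
          "nine": "9", "niner": "9"}
AIRLINES = {"lufthansa": "DLH", "speedbird": "BAW"}   # assumed telephony map
COMMANDS = {"descend": "DESCEND", "climb": "CLIMB", "reduce": "REDUCE"}

def extract_concepts(utterance):
    """Return (call sign, command type, value) from a word sequence."""
    callsign = command = None
    value_digits = []
    for token in utterance.lower().split():
        if token in AIRLINES and callsign is None:
            callsign = AIRLINES[token]
        elif token in COMMANDS:
            command = COMMANDS[token]
        elif token in DIGITS:
            if command is None and callsign:
                callsign += DIGITS[token]     # digits before a command
            else:                             # belong to the call sign
                value_digits.append(DIGITS[token])
    return callsign, command, "".join(value_digits) or None

print(extract_concepts("lufthansa two three four descend eight zero"))
# ('DLH234', 'DESCEND', '80')
```

In a real system this rule-based step would be replaced by the trained command extraction model mentioned below, but the input and output shapes are analogous.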

The assistant-based speech recognition system uses machine learning algorithms to automatically adapt the acoustic, language, command prediction and command extraction models to new environments. Furthermore, it uses contextual knowledge from radar data, flight plan data, and meteorological data to reduce command recognition error rates.
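One simple way contextual knowledge can reduce recognition errors is to check a recognized call sign against the call signs currently present in the radar and flight plan data, and snap it to the closest active one. The sketch below is a hedged illustration of that idea using fuzzy string matching; the similarity cutoff and the use of `difflib` are assumptions for the example, not the project’s actual method.

```python
import difflib

# Illustrative plausibility check: correct a possibly misrecognized
# call sign against those currently visible in (simulated) radar data.
# The cutoff value 0.6 is an assumed parameter.

def correct_callsign(recognized, active_callsigns, cutoff=0.6):
    """Snap a recognized call sign to the closest active one, if any."""
    matches = difflib.get_close_matches(recognized, active_callsigns,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

radar = ["DLH234", "BAW17C", "RYR98A"]  # call signs from radar/flight plans
print(correct_callsign("DLH334", radar))  # a confusable digit is corrected
# DLH234
```

Restricting hypotheses to aircraft that are actually present is one reason the contextual data mentioned above lowers the command recognition error rate.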

The three-week validation was carried out from 14 February to 3 March 2022 with ten air traffic controllers from Austria and Lithuania in the TowerLab of DLR’s Institute of Flight Guidance in Braunschweig. Working in a multiple remote tower setup, they remotely controlled three simulated airports in two scenarios: one with the developed speech recognition support and, for comparison, a second without it. During and after the simulation runs, researchers gathered data on command recognition performance, workload, situation awareness, and system usability.

The trials are intended to demonstrate that the automatic extraction of ATC commands supports and relieves controllers in their work. Following the successful completion of the test campaign, the project partners are now evaluating the collected data. First results will also be presented at an online open day a few weeks after the trials.