
€5.6M for project to develop machine translation for deaf sign language users

Published: 7 December 2020



The SignON project, a consortium of 17 partners co-coordinated by the Tilburg School of Humanities and Digital Sciences, has received €5.6M in Horizon 2020 funding from the European Union. SignON is a three-year project led by the ADAPT Centre at Dublin City University (DCU) that addresses the communication gap between users of spoken languages and deaf sign language users.

Prof. Andy Way, Professor of Computing at Dublin City University, Ireland (coordinator), and Dr. Dimitar Shterionov, Assistant Professor in Cognitive Science and Artificial Intelligence at Tilburg University, The Netherlands (scientific lead), have received this award to conduct state-of-the-art research and develop a mobile solution for automatic translation between sign and oral (written and spoken) languages.

SignON is a user-centric and community-driven project that aims to facilitate the exchange of information among deaf, hard of hearing and hearing individuals across Europe, targeting the Irish, British, Dutch, Flemish and Spanish sign languages and the English, Irish, Dutch and Spanish oral languages. There are 5,000 deaf Irish Sign Language (ISL) signers; in the UK, around 87,000 deaf signers use British Sign Language (BSL); in Flanders, Belgium, some 5,000 deaf people use Flemish Sign Language (VGT); approximately 13,000 signers use Sign Language of the Netherlands (NGT); and it is estimated that there are over 100,000 Spanish Sign Language (LSE) signers.

Collaboration with European deaf and hard of hearing communities

Through collaboration with these European deaf and hard of hearing communities, researchers will define use cases and co-design and co-develop the SignON service and application. The objective of the research project is the fair, unbiased and inclusive spread of information and digital content in European society.

The SignON communication service will be more than an advanced machine translation system. Behind the scenes, SignON will incorporate sophisticated machine learning capabilities that will allow (i) learning new sign and oral languages; (ii) style, domain and user adaptation; and (iii) automatic error correction based on user feedback. To the user, SignON will deliver signed conversations via a life-like avatar built with the latest graphics technologies.

To ensure wide uptake, improved sign language detection and synthesis, and multilingual speech processing for everyone, the project will deploy the SignON service as a smartphone application running on standard modern devices. While the application is designed as a lightweight interface, the SignON framework will be distributed in the cloud, where the computationally intensive tasks will be executed.

For more details, please contact Dr. Dimitar Shterionov.