Training Network on Automatic Processing of Pathological Speech (TAPAS)
Sheffield investigators
Partners
Idiap Research Institute, Switzerland
Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Germany
Interuniversitair Micro-Electronicacentrum IMEC VZW, Belgium
INESC ID – Instituto de Engenharia de Sistemas e Computadores, Investigacao e Desenvolvimento em Lisboa, Portugal
Ludwig-Maximilians-Universitaet Muenchen, Germany
Stichting Het Nederlands Kanker Instituut-Antoni Van Leeuwenhoek Ziekenhuis, Netherlands
Philips Electronics Nederland B.V., Netherlands
Philips Electronics UK Limited
Stichting Katholieke Universiteit, Netherlands
Universität Augsburg, Germany
Université Toulouse III-Paul Sabatier, France
Universitair Ziekenhuis Antwerpen, Belgium
Therapy Box, UK
Funder
EU Horizon 2020
About the project
TAPAS is an EU Horizon 2020 Marie Sklodowska-Curie Innovative Training Networks European Training Network (ETN). It will train a new generation of 15 researchers, two of whom will be hosted at the University of Sheffield and supervised by CATCH computer science academics.
An increasing number of people across Europe have debilitating speech pathologies (e.g. due to cerebral palsy or dementia). These groups face communication problems that can lead to social exclusion. They are now being further marginalised by a new wave of speech technology that is increasingly woven into everyday life but is not robust to atypical speech. TAPAS aims to transform the wellbeing of these people.
The training will be delivered through PhD research projects; the two projects hosted in CATCH will be:
1. Using speech analysis to detect onset and monitor cognitive decline
This project will develop speech technology that can detect, as early as possible and in an unobtrusive manner, the onset of cognitive decline that might lead to dementia. If a decline in cognitive ability is detected, the solution will then monitor its progression and predict future cognitive ability.
This project will contribute to the state of the art by investigating ways of monitoring and tracking signs of cognitive decline that work on incidental speech as it occurs in people's homes. This differs from most current research approaches, which tend to focus on planned recordings such as timed naming or picture description tasks.
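As a minimal illustration (not the project's actual method), the kind of incidental-speech measures such monitoring might build on can be sketched in Python with librosa; the feature set and the silence threshold below are purely hypothetical choices:

import librosa
import numpy as np

def pause_features(wav_path, top_db=30):
    """Simple pause/tempo statistics from a recording of incidental speech."""
    y, sr = librosa.load(wav_path, sr=16000)
    # Non-silent intervals in samples; top_db sets the (assumed) silence threshold
    intervals = librosa.effects.split(y, top_db=top_db)
    speech_s = sum((end - start) for start, end in intervals) / sr
    total_s = len(y) / sr
    return {
        "speech_ratio": speech_s / total_s if total_s else 0.0,
        "mean_segment_s": float(np.mean([(end - start) / sr for start, end in intervals])) if len(intervals) else 0.0,
        "pause_time_s": total_s - speech_s,
    }

Tracked over weeks or months, simple measures of this kind could in principle reveal gradual changes, such as longer pauses or shorter utterances, that one-off planned test sessions would miss.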
2. Phrase-based speech recognition for people with moderate to severe dysarthria
The objective is to explore methods for moving towards larger-vocabulary, phrase-based speech recognition of dysarthric speech, including different input strategies, better acoustic modelling, better data capture approaches, and better machine learning for an inherently sparse data domain.
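One simple baseline for such a sparse-data setting (offered only as an illustration, not as the project's chosen approach) is phrase-level template matching over MFCC features, which needs just a handful of recordings per phrase from the speaker. The sketch below assumes Python with librosa and a basic dynamic-time-warping distance:

import librosa
import numpy as np

def mfcc(wav_path):
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # frames x coefficients

def dtw_distance(a, b):
    """Basic dynamic time warping between two MFCC sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

def recognise(utterance_wav, phrase_templates):
    """phrase_templates: {phrase_text: [paths to recordings by the same speaker]}."""
    query = mfcc(utterance_wav)
    scores = {phrase: min(dtw_distance(query, mfcc(t)) for t in templates)
              for phrase, templates in phrase_templates.items()}
    return min(scores, key=scores.get)

Template matching of this kind scales poorly to large vocabularies, which is precisely why the project looks towards better acoustic modelling and machine learning methods that can generalise from very little dysarthric speech data.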