Research Fellows Directory


Dr Matthew Aylett

Research Fellow


CereProc Ltd

Research summary

The aim of this project is to give the multi-modal interfaces of the future the ability to convey personality and emotion through a novel and groundbreaking approach to the synthesis of expressive speech. New approaches to capturing, sharing and manipulating information in sectors such as health care and the creative industries require computers to enter the arena of human social interaction. Users readily adopt a social view of computers, and previous research has shown how this can be harnessed in applications such as health advice, tutoring, or helping children overcome bullying.

If we can generate voices that mimic the internal state of a speaker, conveying emotion and being expressive, we can give machines voices that address this social aspect of computing. Furthermore, we can offer those who have lost their voice a replacement, allowing them to participate in the rich world of vocal communication. In this project we have begun by looking at how voice quality affects our view of a person: do they sound tense or relaxed? We have investigated how to simulate this effect in synthesised speech and how to recognise it in human speech. We have also looked at how we can copy voices and give an artificial 'chatbot' a sense of character through a light-hearted website written for last year's US Presidential election. There, in collaboration with CereProc Ltd, you can find voice copies of Barack Obama and Mitt Romney created using state-of-the-art voice 'cloning' technology.

Interests and expertise (Subject groups)

Grants awarded

Personification using affective speech synthesis

Scheme: Industry Fellowship

Dates: Jan 2012 - Dec 2015

Value: £93,617