Tilburg Center for Cognition and Communication (TiCC)

We study how people communicate with each other and how computer systems can be taught to communicate with us.


TiCC Colloquium: Khiet Truong

What: Beyond words: recognizing affective and social signals in speech for socially interactive technology
Where: AZ 210
When: Wednesday, 25 May 2016, 12:30 - 13:30 hours


Abstract

When we interact with each other, not only does the content of our words matter (what we say), but so does the manner in which those words are spoken (how we speak), as well as our body language. Non-verbal behavior plays a key role in communicating affective and social information in human-human interaction. With the increasing acceptance of technology in our daily lives, such as virtual agents and robots, the need for technology that can sense and interpret these non-verbal behaviors increases as well.

In this talk, I will present some of our recent research on non-verbal behavior analysis in speech. For example, we have been investigating laughter in spoken interaction, as well as voice analysis for health applications and for child-robot interaction. What are relevant non-verbal speech behaviors, and how can we model these for (affective and socially) interactive technology?

About Khiet Truong

Khiet Truong is an assistant professor in the Human Media Interaction (HMI) group at the University of Twente, Enschede, the Netherlands. She completed her PhD at TNO on the topics of automatic emotion recognition in speech and automatic laughter detection, and is currently working in the areas of affective computing and social signal processing. Her main research interests lie in analysing and understanding emotionally expressive and social behaviors in interactions between humans, as well as in interactions between humans and virtual or physical agents (robots). Using this understanding, her aim is to develop socially intelligent and affective technology. She is particularly interested in paralinguistics: how do people talk in interaction, and how can we develop technology that can automatically analyse and interpret the way people talk? At HMI, she is or has been involved in several large EU projects, such as SQUIRREL, TERESA, and SSPNet, as well as national projects such as COMMIT/ and 3TU.H&T.
