Title: Improving features and models for automatic emotion prediction in acted speech
Speaker: Ani Nenkova (Penn)
Time: 2:15pm-3:30pm, Friday, March 30, 2012
Place: Room 4102, CUNY Graduate Center, 365 Fifth Ave (between 34th and 35th Streets).
		
Abstract:

In this talk I will present our recent work on emotion prediction in
acted speech, as well as plans to extend this effort to applications
involving spontaneous speech.

We introduce a class of spectral features computed over three phoneme
type classes of interest: stressed vowels, unstressed vowels, and
consonants in the utterance. Classification accuracies are
consistently higher for our features than for prosodic or
utterance-level spectral features, and combining our phoneme-class
features with prosodic features leads to further improvement.
Further analyses reveal that spectral features computed from consonant
regions of the utterance contain more information about emotion than
either stressed or unstressed vowel features. We also explore how
emotion recognition accuracy depends on utterance length. We show
that, while there is no significant dependence for utterance-level
prosodic features, the accuracy of emotion recognition using
class-level spectral features increases with utterance length.
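For concreteness, here is a minimal sketch of how class-level spectral
features of this kind might be computed. It assumes phone-level
alignments labeled as stressed vowel, unstressed vowel, or consonant
(e.g., from a forced aligner), and uses MFCC means and standard
deviations as the spectral summary; the function name, alignment
format, and choice of MFCC statistics are illustrative assumptions,
not necessarily the exact features used in this work.

    # Illustrative sketch only: summarize spectral content separately for
    # stressed vowels, unstressed vowels, and consonants, then concatenate
    # the per-class statistics into one utterance-level feature vector.
    import numpy as np
    import librosa

    def class_level_spectral_features(wav_path, alignments, n_mfcc=13):
        # alignments: list of (start_sec, end_sec, phone_class) tuples, where
        # phone_class is "stressed_vowel", "unstressed_vowel", or "consonant"
        # (a hypothetical format produced by a forced aligner).
        y, sr = librosa.load(wav_path, sr=None)
        hop = 512
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop)
        frame_times = librosa.frames_to_time(np.arange(mfcc.shape[1]),
                                             sr=sr, hop_length=hop)

        features = []
        for phone_class in ("stressed_vowel", "unstressed_vowel", "consonant"):
            # Select the MFCC frames that fall inside segments of this class.
            mask = np.zeros(mfcc.shape[1], dtype=bool)
            for start, end, cls in alignments:
                if cls == phone_class:
                    mask |= (frame_times >= start) & (frame_times < end)
            if mask.any():
                frames = mfcc[:, mask]
                # Per-coefficient mean and standard deviation over the class.
                features.extend(np.concatenate([frames.mean(axis=1),
                                                frames.std(axis=1)]))
            else:
                features.extend([0.0] * (2 * n_mfcc))
        # 3 classes x 2 statistics x n_mfcc coefficients.
        return np.array(features)

The resulting fixed-length vector can then be fed to any standard
classifier, alone or concatenated with utterance-level prosodic
features.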

This is joint work with Dmitri Bitouk, Houwei Cao and Ragini Verma.

Bio:

Ani Nenkova is an Assistant Professor of Computer and Information
Science at the University of Pennsylvania. Her main areas of research
are automatic summarization, discourse, and text quality. She obtained
her PhD in Computer Science from Columbia University in 2006.
She also spent a year and a half as a postdoctoral fellow at Stanford
University before joining Penn in Fall 2007.

http://www.cis.upenn.edu/~nenkova