Time: 12:45pm-1:45pm, Friday, March 19
Place: Room 6496, CUNY Graduate Center, 365 Fifth Ave (between 34th and 35th Streets).
Speaker: Paul Tepper (Northwestern)
Title: Modeling Iconic Gesture Generation in Humans and Virtual Humans

Abstract:

When expressing information about spatial domains, it is natural for
people to accompany their speech with gestures, and more specifically,
iconic gestures, which express visual and spatial information about
objects and actions. Virtual humans and embodied conversational agents
(ECAs) can produce these kinds of gestures to improve the naturalness
and clarity of their communication in human-computer dialogue. However,
unlike people, these systems typically rely on a library of pre-scripted
or "canned" gestures.

In this talk, I will discuss my research group's work on modeling the
generation of novel iconic gestures in coordination with natural
language. I will start by presenting our framework for analyzing
gestural images into semantic units (image description features) and
linking those units to features of gesture form, or morphology (e.g.,
hand shape and movement trajectory). I will describe our application of
this theory in a study of people giving directions around a college
campus. This will also include discussion of the use of perspective in
gestures that indicate the location of landmarks.

Then I will review the NUMACK system, wherein this framework has been
used to implement an ECA with a multimodal microplanner. This system
derives the form of both language and gesture directly from a common set
of communicative goals, enabling the generation of novel gestures and
coordinated language on the fly. Finally, I will present a new, ongoing
study aimed at extending this line of research with a more
domain-general approach, based on lessons learned from the work on
direction-giving.

Speaker Bio:

Paul Tepper is a Ph.D. candidate in the Technology and Social Behavior
program, a joint Ph.D. program between the School of Communication and
the Robert R. McCormick School of Engineering and Applied Science at
Northwestern University. Paul's research focuses on developing
computational models
of face-to-face conversation, implemented in Embodied Conversational
Agents (ECAs) - virtual humans capable of communicating using language
and non-verbal behavior. An intrinsically interdisciplinary project,
this work also involves the collection and analysis of empirical data to
inform and motivate the design of these models and their implementation.
His current work is on modeling the generation of coordinated language and
iconic gestures, based on careful study and analysis of people using
visual and spatial language and gestures, in tasks such as giving
directions and describing 3D shapes. His previous research includes the
use of ECAs for modeling rapport, common ground and interpersonal
relationships, based on theories and evidence about how people develop
and show rapport in interpersonal communication. In 2003, Paul completed
an M.Sc. in Artificial Intelligence, specializing in Human Language
Technology at the University of Edinburgh. He also holds a bachelor’s in
Computer Science, Cognitive Science and Linguistics from Rutgers
University.