Time: 2pm-3pm, Friday, December 4, 2009
Place: Room 4422, CUNY Graduate Center, 365 Fifth Ave (between 34th & 35th Streets).
Speaker: Matt Huenerfauth (CUNY)
Title: A Motion-Capture Corpus of American Sign Language for Generation Research


A majority of deaf 18-year-olds in the United States have a fourth-grade English 
reading level or below.  Software that can present information in the form of 
American Sign Language (ASL) animations or automatically translate English text 
to ASL could significantly improve these individuals' access to information, 
communication, and services.  ASL is a natural language with a grammar and 
vocabulary distinct from English, so computational linguistic tools for 
generating grammatically correct and understandable ASL sentences (to be 
performed by a virtual animated human character) must be developed.

The motion path of an individual sign in an ASL sentence can vary greatly, 
depending on various linguistic factors.  For instance, entities under 
discussion can be associated with 3D points in space around a signer, and the 
movements of verb signs are deflected from their standard motion path based 
on how the subject and object of the verb have been "set up" in the signing 
space.  Computational models of the motion path of signs and the use of space 
by signers are necessary for generating natural and understandable ASL 
sentences -- concatenation of animations of signs from a fixed lexicon is 
insufficient for generating correctly inflected signs or natural 
coarticulation effects.  
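As a rough illustration of the kind of spatial inflection described above (a minimal sketch only, not the project's actual model; the function and its inputs are hypothetical), a verb's citation-form motion path can be re-anchored so that it travels from the subject's reference point toward the object's reference point while preserving the sign's internal shape:

```python
def deflect_motion_path(citation_path, subject_point, object_point):
    """Deflect a verb sign's citation-form motion path (a list of (x, y, z)
    hand positions) so its endpoints land on the 3D reference points where
    the subject and object have been "set up" in the signing space.

    Illustrative sketch: the offset blends linearly from (subject - start)
    at the first frame to (object - end) at the last frame, so the path's
    internal shape is kept while its endpoints are re-anchored.
    """
    start, end = citation_path[0], citation_path[-1]
    n = len(citation_path) - 1
    deflected = []
    for i, point in enumerate(citation_path):
        t = i / n if n else 0.0
        new_point = tuple(
            p + (1 - t) * (s - st) + t * (o - en)
            for p, s, st, o, en in zip(point, subject_point,
                                       start, object_point, end)
        )
        deflected.append(new_point)
    return deflected
```

At the first frame the hand sits exactly at the subject's point, at the last frame exactly at the object's point; a real model would also need to handle hand orientation, handshape, and nonlinear path shapes.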

For this reason, CUNY has begun a multi-year project to build the first 
motion-capture corpus of multi-sentential ASL utterances.  Native ASL 
signers are being recorded performing spontaneous and directed ASL 
sentences while wearing motion-capture body suits, gloves, eye-trackers, 
and head-trackers.  This data is being linguistically annotated with 
syntactic and discourse information by native ASL signers to produce a 
permanent research resource.  As an initial use of this corpus, we are 
studying how to learn spatially-parameterized models of ASL verb signs so 
that our ASL animation technology can synthesize novel ASL verb 
performances for unseen arrangements of subject/object reference points 
in the signing space.  This would be a necessary component of a fluent 
ASL generation system.  This talk will give an overview of the project, 
our corpus collection and annotation techniques, our user-based ASL 
animation evaluation approach, and our current progress.
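To make "spatially-parameterized models learned from corpus data" concrete, here is a toy sketch (hypothetical; the talk's actual learning method is not specified in this abstract): fit a linear map from the subject/object reference points to a flattened hand trajectory using recorded examples, then synthesize a trajectory for an unseen arrangement of points.

```python
import numpy as np

def fit_verb_model(arrangements, trajectories):
    """Fit a linear spatially-parameterized verb model by least squares.

    arrangements: (N, 6) array; each row is subject xyz + object xyz.
    trajectories: (N, K*3) array; each row is K flattened 3D hand positions
    recorded for that arrangement.  Returns weights including a bias term.
    """
    X = np.hstack([arrangements, np.ones((len(arrangements), 1))])
    W, *_ = np.linalg.lstsq(X, trajectories, rcond=None)
    return W

def synthesize(W, subject_point, object_point, k):
    """Predict a K-point 3D hand trajectory for an unseen arrangement."""
    x = np.concatenate([subject_point, object_point, [1.0]])
    return (x @ W).reshape(k, 3)
```

The point of the sketch is only the workflow: annotated motion-capture performances supply (arrangement, trajectory) training pairs, and the fitted model generalizes to subject/object placements never observed in the corpus.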

Speaker Bio:

Matt Huenerfauth is an assistant professor of Computer Science at CUNY 
Graduate Center and CUNY Queens College.  His research focuses on the 
design of computer technology to benefit people who are deaf or have low 
levels of written-language literacy.  His work is at the intersection of 
the fields of assistive technology for people with disabilities, 
computational linguistics, virtual human animation, and the linguistics 
of American Sign Language (ASL).  In 2005 and 2007, he received the 
Best Paper Award at the ACM SIGACCESS Conference on Computers and 
Accessibility, the major computer science conference on assistive 
technology for people with disabilities.  In 2008, he received a five-year 
Faculty Early Career Development (CAREER) Award from the National Science 
Foundation to support his research on ASL.  In 2008, he became an 
Associate Editor of the ACM Transactions on Accessible Computing 
(TACCESS), the Association for Computing Machinery's journal in the field 
of assistive technology and accessibility for people with disabilities.