Title: Efficient Higher-Order Dependency Parsing with Cube Pruning
Speaker: Dr. Hao Zhang (Google Research) 
Date: Friday, March 8, 2013 
Time: 2:15pm
Place: Graduate Center Room 6496, 5th Ave & 34th St.


State-of-the-art graph-based parsers use features over higher-order
dependencies that rely on decoding algorithms that are slow and
difficult to generalize. On the other hand, transition-based dependency
parsers can easily utilize such features without increasing the linear
complexity of the shift-reduce system beyond a constant. We attempt to
address this imbalance for graph-based parsing by generalizing the
Eisner (1996) algorithm to handle arbitrary features over higher-order
dependencies. This generalization comes at the cost of asymptotic
efficiency; to counter this, we apply cube pruning (Chiang, 2007)
during decoding. For the first time, label tuple and structural features
such as valencies can be scored efficiently with third-order features
in a graph-based parser. Our parser achieves state-of-the-art
unlabeled accuracy of 93.06% and labeled accuracy of 91.86% on the
standard English test set, while running faster than a
re-implementation of the third-order model of Koo et al. (2010).
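The core step of cube pruning is combining two sorted k-best lists of partial hypotheses without scoring all |left| x |right| pairs. A minimal sketch of that combination step, assuming simple additive scores (the talk's parser scores with non-local higher-order features, where this frontier search becomes approximate):

```python
import heapq

def cube_prune_top_k(left, right, k):
    """Lazily enumerate the k best sums of one score from `left` and one
    from `right`. Both lists are assumed sorted best-first (descending).
    This is the k-best grid search at the heart of cube pruning
    (Chiang, 2007); `left`/`right`/`k` are illustrative names."""
    if not left or not right:
        return []
    # Max-heap via negated scores; start at the best corner (0, 0).
    heap = [(-(left[0] + right[0]), 0, 0)]
    seen = {(0, 0)}
    best = []
    while heap and len(best) < k:
        neg, i, j = heapq.heappop(heap)
        best.append(-neg)
        # Expand the two grid neighbors. With purely additive scores the
        # sorted inputs make this exact; with non-local features it is
        # only a (usually very good) heuristic.
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(left) and nj < len(right) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-(left[ni] + right[nj]), ni, nj))
    return best

# Example: top-3 combined scores from two sorted hypothesis lists.
print(cube_prune_top_k([9.0, 4.0, 1.0], [5.0, 3.0, 2.0], 3))
```

Only a small frontier of the 3x3 score grid is ever touched, which is why cube pruning keeps the generalized Eisner chart practical despite the richer feature set.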


Hao Zhang is a senior software engineer at Google, working on parsing
and syntax-based machine translation. He received his PhD degree in
Computer Science from the University of Rochester in 2008, his
master's degree from the Chinese Academy of Sciences in 2003, and his
bachelor's degree from Peking University in 2000. He has published
fifteen articles at ACL, EMNLP, and COLING and in the journal
Computational Linguistics, on synchronous grammars, efficient decoding
algorithms, word alignment, and dependency parsing.