Title: Neural Networks and Supervised Embedding Models for NLP and Retrieval
Speaker: Jason Weston (Google)
Place: Science Center, Rm 4102, CUNY Graduate Center, 5th Ave & 34th St.


Abstract:

I will give a summary of my work applying both simple ("supervised
embedding") and more complex ("deep learning") neural networks
to the fields of NLP and text retrieval:
- Multi-tasking multilayer neural networks for the tasks of
part-of-speech tagging, chunking, named entity recognition and
semantic role labeling.
- Document retrieval using supervised embedding models (including
dealing with scalability, diversity and ambiguity).
- Utilizing world knowledge (in the form of knowledge bases) to
improve concept tagging and word sense disambiguation.
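To make the "supervised embedding" idea above concrete, here is a minimal toy sketch: each word gets a learned vector, a query or document is embedded as the sum of its word vectors, and relevance is scored by a dot product between the two embeddings. The embedding values and vocabulary below are invented for illustration; the actual models are trained on large corpora with ranking losses, which this sketch omits.

```python
# Toy supervised-embedding ranking model (illustrative values, not trained).
# Score of a (query, document) pair: f(q, d) = <phi(q), phi(d)>,
# where phi(.) sums the word vectors of the text.

EMBED = {  # hypothetical 2-d "learned" word embeddings
    "neural":    [1.0, 0.2],
    "network":   [0.9, 0.1],
    "embedding": [0.8, 0.3],
    "cooking":   [-0.7, 0.9],
    "recipe":    [-0.8, 0.8],
}

def phi(text):
    """Embed a text as the sum of its word vectors (unknown words ignored)."""
    vec = [0.0, 0.0]
    for word in text.split():
        for i, x in enumerate(EMBED.get(word, [0.0, 0.0])):
            vec[i] += x
    return vec

def score(query, doc):
    """Relevance f(q, d) = dot(phi(q), phi(d))."""
    return sum(a * b for a, b in zip(phi(query), phi(doc)))

query = "neural embedding"
docs = ["neural network", "cooking recipe"]
ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
# the semantically related document ranks above the unrelated one
```

Because queries and documents share one embedding space, related texts score highly even without exact word overlap; scalability and ambiguity (mentioned above) are handled by additional machinery not shown here.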

Bio:

Jason Weston has been a Research Scientist at Google NY since July 2009. He
earned his PhD in machine learning at Royal Holloway, University of
London and at AT&T Research in Red Bank, NJ (advisor: Vladimir Vapnik)
in 2000. From 2000 to 2002, he was a Researcher at Biowulf
Technologies, New York. From 2002 to 2003, he was a Research Scientist
at the Max Planck Institute for Biological Cybernetics, Tuebingen,
Germany. From 2003 to June 2009, he was a Research Staff Member at NEC
Labs America, Princeton. His interests lie in statistical machine
learning and its application to text, audio and images. Jason has
published over 80 papers, including best paper awards at ICML and
ECML.