Using Neural Networks for Modeling and Representing Natural Languages

Tomas Mikolov (Facebook)

Artificial neural networks are powerful statistical models that have been shown
to provide excellent results in a number of domains. In the last few years, the 
computer vision and automatic speech recognition communities have been heavily 
influenced by these techniques. Applications to problems that involve natural 
language, such as machine translation and computational semantics, are becoming 
mainstream in NLP research.

In this talk, I will give an overview of some recent results where neural networks 
have been successfully used to push the state of the art in tasks such as 
language modeling and distributed word representations. I will explain how neural 
networks work and how they relate to the popular field of deep learning.
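
To make the distributed word representations idea concrete, the sketch below trains skip-gram 
word embeddings on a toy corpus. It uses the gensim library rather than the original word2vec 
C tool, and assumes gensim >= 4.0 parameter names; the corpus and all hyperparameters are 
illustrative only, not settings from the talk.

    # Minimal skip-gram word2vec sketch (gensim >= 4.0 assumed; toy corpus).
    from gensim.models import Word2Vec

    # A tiny tokenized corpus; real training would use millions of sentences.
    sentences = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["the", "man", "walks", "in", "the", "city"],
        ["the", "woman", "walks", "in", "the", "city"],
    ]

    # sg=1 selects the skip-gram architecture; vector_size is the embedding width.
    model = Word2Vec(sentences=sentences, vector_size=32, window=2,
                     min_count=1, sg=1, epochs=200, seed=1)

    # Each word is now a dense vector; words in similar contexts get similar vectors.
    print(model.wv["king"][:5])
    print(model.wv.most_similar("king", topn=3))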

Bio:

Tomas Mikolov has been a research scientist at Facebook AI Research since May 2014. 
Previously, he was a member of the Google Brain team, where he developed efficient 
algorithms for computing distributed representations of words (the word2vec project). 
He obtained his PhD from Brno University of Technology (Czech Republic) in 2012 
for research on recurrent neural network based language models (RNNLM). His long-term 
research goal is to develop intelligent machines capable of natural communication 
with people using language.