Title: Scaling language understanding in a multi-lingual context

Abstract
Natural language understanding is a pervasive human skill not yet fully achievable by automated computing systems. 
The main challenge is learning how to computationally model the semantics of natural language 
and scale this understanding across a diverse set of natural languages.

In this talk, I discuss a novel approach for scaling semantic understanding across a diverse set of 
natural languages. Specifically, I will present two probabilistic models that tackle two different 
linguistic tasks across multiple languages: syntactic parsing and joint learning of named entity 
recognition and coreference resolution.

The syntactic parsing model outperforms current state-of-the-art models by discovering linguistic 
information shared across languages at the granular level of a sentence. The coreference resolution 
system is one of the first attempts at jointly modeling named entity recognition and coreference 
resolution across multiple languages with limited linguistic resources. It performs second best on 
three out of four languages when compared against state-of-the-art systems built with rich linguistic 
resources. I show that we can simultaneously model both the depth and the breadth of natural languages 
using the underlying linguistic structure shared across languages. 

Bio:

Andreea Bodnari is the chief data scientist at Fountain.com, a micro-consulting app that connects users 
to a curated set of experts on demand. She received her PhD from the MIT Computer Science and Artificial 
Intelligence Laboratory, where she focused on Natural Language Processing and Healthcare Informatics. Throughout 
her research, she has investigated how technology can be used to improve healthcare outcomes, decipher the 
mechanisms behind human language, and bring education to new heights. She is involved with the local and 
international academic communities, and dedicates her spare time to causes such as reducing gender inequality 
in the STEM fields and promoting education in developing countries.