Learning Latent-Variable Models of Natural Language

Percy Liang
Assistant Professor of Computer Science, Stanford University
Given on: October 18, 2012

Abstract

A key property of natural language is that raw observations (e.g., sentences) are often associated with latent structures (e.g., parse trees). To infer these latent structures, we need to design sensible probabilistic models that connect them to observations, as well as develop efficient algorithms for estimating the model parameters. First, I will discuss syntactic and semantic parsing models, showing how the latter can be used for question answering. Second, I will present recent work on learning restricted PCFGs using eigenvalue methods.
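
As a concrete illustration of the estimation problem the abstract describes (not code from the talk itself), the sketch below runs expectation-maximization on a toy two-component Gaussian mixture, where each observation's component assignment plays the role of the latent structure; the data and all parameter values are illustrative assumptions.

import numpy as np

# Minimal EM sketch for a 1-D two-component Gaussian mixture.
# The latent variable is which component generated each observation.

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

# Initialize mixing weight, component means, and variances.
pi, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: posterior probability that each point came from component 1.
    p0 = (1 - pi) * np.exp(-(x - mu[0])**2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
    p1 = pi * np.exp(-(x - mu[1])**2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
    r = p1 / (p0 + p1)
    # M-step: re-estimate parameters from the soft assignments.
    pi = r.mean()
    mu = np.array([((1 - r) * x).sum() / (1 - r).sum(),
                   (r * x).sum() / r.sum()])
    var = np.array([((1 - r) * (x - mu[0])**2).sum() / (1 - r).sum(),
                    (r * (x - mu[1])**2).sum() / r.sum()])

print(pi, mu, var)  # should recover a weight near 0.5 and means near -2 and 3

The same E-step/M-step pattern scales up to the richer models in the talk (e.g., latent parse trees), where the E-step becomes structured inference rather than a pointwise posterior.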

Biography

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research focuses on methods for learning richly structured statistical models from limited supervision, most recently in the context of semantic parsing in natural language processing. He won a best student paper award at the International Conference on Machine Learning in 2008, received NSF, GAANN, and NDSEG fellowships, and is a 2010 Siebel Scholar.