Computational and Statistical Tradeoffs via Convex Relaxation

Venkat Chandrasekaran
Assistant Professor, California Institute of Technology
Given on: March 7, 2013


Modern massive datasets create a fundamental problem at the intersection of the computational and statistical sciences: how to provide guarantees on the quality of statistical inference given bounds on computational resources such as time or space. Our approach to this problem is to define a notion of “algorithmic weakening,” in which a hierarchy of algorithms is ordered by both computational efficiency and statistical efficiency, allowing the growing strength of the data at scale to be traded off against the need for sophisticated processing. We illustrate this approach in the setting of denoising problems, using convex relaxation as the core inferential tool. Hierarchies of convex relaxations have been widely used in theoretical computer science to yield tractable approximation algorithms for many computationally intractable problems. In this talk we show how to endow such hierarchies with a statistical characterization and thereby obtain concrete tradeoffs relating algorithmic runtime to the amount of data. (Joint work with Michael Jordan.)
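A minimal sketch of the denoising-via-convex-relaxation setting mentioned in the abstract: estimating a sparse signal from noisy observations by solving an ℓ1-regularized least-squares problem, whose solution is the soft-thresholding operator. The specific signal, noise level, and threshold below are illustrative assumptions, not parameters from the talk.

```python
import numpy as np

def l1_denoise(y, lam):
    """Soft-thresholding: the closed-form solution of the convex
    denoising problem  min_x 0.5 * ||x - y||^2 + lam * ||x||_1,
    where the l1 norm serves as a convex surrogate for sparsity."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# A sparse signal corrupted by Gaussian noise (illustrative values).
rng = np.random.default_rng(0)
x_true = np.zeros(1000)
x_true[:10] = 5.0
y = x_true + 0.5 * rng.standard_normal(1000)

x_hat = l1_denoise(y, lam=1.0)
err_raw = np.linalg.norm(y - x_true)
err_hat = np.linalg.norm(x_hat - x_true)
print(err_hat < err_raw)
```

Richer convex relaxations (e.g., hierarchies of tighter relaxations of a combinatorial signal model) trade more computation for better statistical accuracy at a fixed sample size, which is the tradeoff the talk makes precise.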


Venkat Chandrasekaran is an Assistant Professor at Caltech in the Department of Computing and Mathematical Sciences and in the Department of Electrical Engineering. He received a Ph.D. in Electrical Engineering and Computer Science in 2011 from MIT, under the supervision of Profs. Pablo Parrilo and Alan Willsky in the Laboratory for Information and Decision Systems.

He also received an S.M. in EECS in June 2007 from MIT, and a B.A. in Mathematics as well as a B.S. in Electrical and Computer Engineering in 2005 from Rice University in Houston, Texas. In 2011-2012, he was a postdoctoral researcher with Prof. Michael Jordan at UC Berkeley. He received the Jin-Au Kong Outstanding Doctoral Thesis award in 2012 from MIT's EECS department.