Recovery of Simultaneously Structured Models

Professor Maryam Fazel
Assistant Professor, University of Washington
Given on: April 4, 2013


Finding models with a low-dimensional structure, given a number of linear observations much smaller than the ambient dimension, has been well-studied in recent years. Examples of such models are sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others.

In many signal processing and machine learning applications, the desired model has multiple structures simultaneously. Examples include recovering a sparse signal from phaseless measurements (sparse phase retrieval), and learning models with several structural priors in machine learning tasks.

Often convex penalties that promote individual structures are known, and require a minimal number of generic measurements (e.g., the ℓ1 norm for sparsity, the nuclear norm for matrix rank), so it is reasonable to minimize a combination of such norms to recover a simultaneously structured model. We show that, surprisingly, if we use multiobjective optimization with the individual norms, we can do no better (order-wise) in terms of required measurements than an algorithm that exploits only one of the structures. This result holds in a general setting and suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, not one that is a function of relaxations used for each structure. Time permitting, we also consider denoising for simultaneously structured signals, and provide bounds on the minimax denoising risk.
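As a minimal illustration of the penalties mentioned above (not the talk's algorithm or proof), the sketch below evaluates the ℓ1 norm, the nuclear norm, and a weighted combination of the two on a matrix that is simultaneously sparse and low-rank; the weight `lam` is an arbitrary choice for illustration.

```python
import numpy as np

# Build a matrix that has both structures at once: the outer product of a
# sparse vector with itself is rank 1 and entrywise sparse.
u = np.zeros(10)
u[:3] = [1.0, -2.0, 0.5]   # sparse vector (3 nonzeros out of 10)
X = np.outer(u, u)          # rank-1, sparse 10x10 matrix

l1_norm = np.abs(X).sum()                    # convex penalty promoting sparsity
nuclear_norm = np.linalg.norm(X, ord='nuc')  # convex penalty promoting low rank

# A combined penalty, as in the multiobjective formulation discussed above:
lam = 0.5                                    # illustrative trade-off weight
combined = l1_norm + lam * nuclear_norm
print(l1_norm, nuclear_norm, combined)
```

For a rank-1 matrix X = u uᵀ, the ℓ1 norm is (Σ|uᵢ|)² and the nuclear norm equals ‖u‖², so the two penalties can be checked by hand here.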


Maryam Fazel is an assistant professor of Electrical Engineering at the University of Washington, Seattle, with adjunct appointments in the Computer Science and Engineering Department and the Mathematics Department. Maryam is an ISL alumna and received her MS and PhD in EE from Stanford University. She received her BS in EE from Sharif University of Technology in Iran, and was a Research Scientist at Caltech prior to joining UW in 2008. Maryam is a recipient of the NSF CAREER Award (2009) and the UW EE Outstanding Teaching Award (2009). Her current research interests include convex optimization and parsimonious models in machine learning and signal processing.