About the Event
Alternating minimization is a widely applicable and empirically successful approach to several optimization problems that frequently arise in ML applications, such as finding low-rank matrices that best fit given data. For the problem of low-rank matrix completion, for example, this method is considered one of the most accurate and efficient, and it formed a major component of the winning entry in the Netflix Challenge.
In the alternating minimization approach, the given optimization problem is written as an optimization over two or more blocks of variables, and the algorithm repeatedly fixes all but one block and optimizes over that single block. Typically, each alternating step in isolation is convex and tractable. However, the overall problem is non-convex, and there has been almost no theoretical understanding of when this approach yields a good result.
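To make the scheme concrete, here is a minimal illustrative sketch (not the speaker's exact algorithm) of alternating minimization for rank-1 matrix factorization: minimize ||M - u v^T||_F^2 by alternately fixing one factor and solving the resulting convex least-squares problem for the other, each of which has a closed form.

```python
def altmin_rank1(M, iters=50):
    """Alternating minimization for min_{u,v} ||M - u v^T||_F^2."""
    m, n = len(M), len(M[0])
    v = [1.0] * n  # arbitrary non-zero initialization
    u = [0.0] * m
    for _ in range(iters):
        # Fix v; the optimal u is the least-squares solution M v / ||v||^2.
        nv = sum(x * x for x in v)
        u = [sum(M[i][j] * v[j] for j in range(n)) / nv for i in range(m)]
        # Fix u; the optimal v is the least-squares solution M^T u / ||u||^2.
        nu = sum(x * x for x in u)
        v = [sum(M[i][j] * u[i] for i in range(m)) / nu for j in range(n)]
    return u, v

# On an exactly rank-1 matrix M = a b^T, the iterates recover the factors.
a, b = [1.0, 2.0, 3.0], [4.0, 5.0]
M = [[ai * bj for bj in b] for ai in a]
u, v = altmin_rank1(M)
residual = sum((M[i][j] - u[i] * v[j]) ** 2
               for i in range(len(a)) for j in range(len(b)))
print(residual)  # essentially zero: the fit is exact on rank-1 data
```

Each inner update is a convex (indeed closed-form) subproblem, while the joint objective in (u, v) is non-convex; this is exactly the structure the talk's analysis addresses.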
In this talk, we present a theoretical analysis of alternating minimization methods for several ML-related problems, such as low-rank matrix completion, phase retrieval, and dictionary learning. For these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our work shows that alternating minimization guarantees faster (in particular, geometric) convergence to the global optimum, while admitting a simpler analysis.
The talk is based on joint works with Praneeth Netrapalli, Sujay Sanghavi, Alekh Agarwal, Animashree Anandkumar, and Rashish Tandon.
Prateek Jain is a researcher in the Machine Learning and Optimization group at Microsoft Research, Bangalore, India. His research interests lie in machine learning, optimization, and high-dimensional statistics. Before joining Microsoft, Prateek obtained his Ph.D. from the CS department at the University of Texas at Austin. He and his collaborators were recognized with the ICML Best Student Paper Award in 2007, the SDM Best Paper Runner-up Award in 2008, and the CVPR Best Student Paper Award in 2008.