About the Event
Many problems in signal and image processing, machine learning, and estimation require minimizing convex cost functions. For convex cost functions with Lipschitz continuous gradients, Nesterov's fast gradient method decreases the cost at a rate inversely proportional to the square of the number of iterations, a rate order that is optimal for first-order methods. This talk presents a new first-order convex optimization method that converges twice as fast, yet has a remarkably simple implementation, comparable to that of Nesterov's method. This is work by doctoral student Donghwan Kim.
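As background, the following is a minimal sketch of Nesterov's fast gradient method applied to a toy quadratic problem; the test matrix, step size, and function names are illustrative assumptions, and the new method presented in the talk is not shown here.

```python
import numpy as np

def nesterov_fgm(grad, L, x0, iters=200):
    """Nesterov's fast gradient method for a convex f with L-Lipschitz gradient.

    Guarantees f(x_k) - f(x*) = O(1/k^2), versus O(1/k) for plain gradient descent.
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                    # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Hypothetical test problem: minimize f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
L = 3.0                                 # largest eigenvalue of A bounds the gradient's Lipschitz constant
x_star = np.linalg.solve(A, b)          # exact minimizer, for comparison
x_hat = nesterov_fgm(lambda x: A @ x - b, L, np.zeros(2))
```

The per-iteration cost is one gradient evaluation plus a few vector operations, which is why simple variants of this scheme scale to large imaging and learning problems.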