ON THE CONVERGENCE OF THE PROXIMAL GRADIENT METHOD WITH VARIABLE STEP SIZES
Date Issued
2025
Author(s)
Nikolovski, Filip
Abstract
Composite optimization problems arise frequently in modeling, since the objective function may contain components that lack "nice" properties such as differentiability; l1 (LASSO) regularization is one such example. Proximal gradient methods are designed to handle this kind of optimization problem and can solve it efficiently when the proximal mapping has a closed-form solution. Theoretical analyses of the convergence properties of the proximal gradient method with constant step size have shown sublinear and linear convergence for convex and strongly convex objective functions, respectively. In this paper we show that, under standard assumptions, the same convergence results can be established for the proximal gradient method with variable step sizes in the general setting of bounded step sizes. Further, a recently proposed step size selection for the proximal gradient method with variable step sizes is considered, and the above convergence analysis is carried out for this method.
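To illustrate the setting the abstract describes, below is a minimal sketch of the proximal gradient iteration applied to the LASSO problem, where the proximal mapping of the l1 term has the closed-form soft-thresholding solution; the variable step sizes enter as the sequence passed to the solver. All function and variable names here are illustrative and not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Closed-form proximal mapping of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, step_sizes, x0=None):
    # Proximal gradient iteration for min 0.5*||Ax - b||^2 + lam*||x||_1.
    # step_sizes is any iterable of step sizes t_k, so it covers both the
    # constant-step and the variable (bounded) step-size settings.
    _, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    for t in step_sizes:
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        x = soft_threshold(x - t * grad, t * lam)  # prox step on the l1 part
    return x

# Usage: constant step 1/L, with L the Lipschitz constant of the gradient
# (largest eigenvalue of A^T A); a variable-step scheme would instead
# supply a sequence of step sizes bounded away from 0 and above.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
L = np.linalg.eigvalsh(A.T @ A).max()
x = proximal_gradient_lasso(A, b, lam=0.1, step_sizes=[1.0 / L] * 500)
```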
