zbMATH — the first resource for mathematics

Optimality and complexity for constrained optimization problems with nonconvex regularization. (English) Zbl 1386.90167
Summary: In this paper, we consider a class of constrained optimization problems in which the feasible set is a general closed convex set and the objective function has a nonsmooth, nonconvex regularizer. Such regularizers include the widely used SCAD, MCP, logistic, fraction, hard-thresholding, and non-Lipschitz \(L_{p}\) penalties as special cases. Using the theory of the generalized directional derivative and the tangent cone, we derive a first-order necessary optimality condition for local minimizers of the problem and define its generalized stationary points. We show that a generalized stationary point is a Clarke stationary point when the objective function is Lipschitz continuous there, and that it satisfies existing necessary optimality conditions when the objective function is not Lipschitz continuous there. Moreover, we prove consistency between the generalized directional derivative and the limit of the classical directional derivatives associated with the smoothing function. Finally, we establish a lower-bound property for every local minimizer and show that finding a global minimizer is strongly NP-hard when the objective function has a concave regularizer.
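For concreteness, the following is a minimal Python sketch of three of the regularizers named above (SCAD, MCP, and the non-Lipschitz \(L_{p}\) penalty), written in their standard textbook forms; the parameter values shown are illustrative defaults and are not taken from the paper under review.

```python
def scad(t, lam=1.0, a=3.7):
    """SCAD penalty (Fan and Li, 2001); lam > 0, a > 2 (a = 3.7 is customary).

    Linear near 0, quadratic transition, then constant: nonconvex and
    nonsmooth but Lipschitz continuous.
    """
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2


def mcp(t, lam=1.0, gamma=2.0):
    """Minimax concave penalty (Zhang, 2010); lam > 0, gamma > 1.

    Concave quadratic up to gamma*lam, constant afterwards.
    """
    t = abs(t)
    if t <= gamma * lam:
        return lam * t - t * t / (2 * gamma)
    return gamma * lam * lam / 2


def lp(t, lam=1.0, p=0.5):
    """L_p penalty with 0 < p < 1: concave and not Lipschitz at t = 0."""
    return lam * abs(t) ** p
```

All three are concave and increasing in \(|t|\); the \(L_{p}\) penalty additionally loses Lipschitz continuity at the origin, which is the case the paper's generalized stationarity notion is designed to cover.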

90C46 Optimality conditions and duality in mathematical programming
49K35 Optimality conditions for minimax problems
90C30 Nonlinear programming
65K05 Numerical mathematical programming methods
Full Text: DOI