
zbMATH — the first resource for mathematics

Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions. (English) Zbl 1376.90046
Summary: This paper is concerned with convex composite minimization problems in a Hilbert space, in which the objective is the sum of two closed, proper, and convex functions, one smooth and the other admitting a computationally inexpensive proximal operator. We analyze a family of generalized inertial proximal splitting algorithms (GIPSA) for solving such problems and establish weak convergence of the generated sequence whenever the minimum is attained. Our analysis unifies and extends several previous results. We then focus on \(\ell _1\)-regularized optimization, the ubiquitous special case in which the nonsmooth term is the \(\ell _1\)-norm. For certain parameter choices, GIPSA is amenable to a local analysis for this problem. For these choices we show that GIPSA achieves finite “active manifold identification”, i.e., convergence in a finite number of iterations to the optimal support and sign, after which GIPSA reduces to minimizing a local smooth function. We prove local linear convergence under either restricted strong convexity or a strict complementarity condition, and we determine the rate in terms of the inertia, stepsize, and local curvature. Our local analysis applies to certain recent variants of the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), for which we establish active manifold identification and local linear convergence. Based on our analysis, we propose a momentum restart scheme for these FISTA variants that attains the optimal local linear convergence rate while maintaining desirable global properties.
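The ingredients named in the summary — an inertial (momentum) proximal-gradient step, the \(\ell _1\) proximal operator (soft-thresholding), and a momentum restart — can be sketched as follows. This is an illustrative FISTA-style sketch with the gradient-based restart of O’Donoghue and Candès, not the paper’s exact GIPSA parameterization; the function names, the step-size convention \(1/L\), and the restart test are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_prox_grad(grad_f, L, lam, x0, iters=500):
    """FISTA-style inertial proximal-gradient sketch for
    min_x f(x) + lam*||x||_1, with a gradient-based momentum restart.

    grad_f : gradient of the smooth term f
    L      : Lipschitz constant of grad_f (stepsize 1/L)
    lam    : l1 regularization weight
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        # forward (gradient) step on the smooth term, backward (prox) step on the l1 term
        x_new = soft_threshold(y - grad_f(y) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # restart momentum when the update opposes the momentum direction
        if np.dot(y - x_new, x_new - x) > 0:
            t_new = 1.0
            y = x_new.copy()
        else:
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

With \(f(x) = \tfrac12\|x - b\|^2\) (so `grad_f = lambda z: z - b` and `L = 1`), the minimizer is exactly `soft_threshold(b, lam)`, which gives a quick sanity check of the iteration.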

MSC:
90C25 Convex programming
90C48 Programming in abstract spaces
Software:
glmnet; UNLocBoX