zbMATH — the first resource for mathematics

Incremental proximal methods for large scale convex optimization. (English) Zbl 1229.90121
Summary: We consider the minimization of a sum \({\sum_{i=1}^{m}f_i(x)}\) consisting of a large number of convex component functions \(f_{i}\). For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in a few contexts, including signal processing and inference/machine learning.
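The incremental proximal step described in the summary — a proximal iteration applied to a single randomly selected component \(f_{i_k}\) per step — can be sketched for a least-squares instance. This is an illustrative assumption, not code from the paper: the function name, the quadratic components \(f_i(x) = \tfrac{1}{2}(a_i^\top x - b_i)^2\) (for which the proximal step has a closed form), and the constant step size are all chosen for the example.

```python
import random

def incremental_proximal_least_squares(A, b, alpha=0.5, iters=2000, seed=0):
    """Minimize (1/2) * sum_i (a_i . x - b_i)^2 by applying a proximal
    iteration to one randomly chosen component per step (an illustrative
    sketch; names and parameters are assumptions, not from the paper).

    For a quadratic component f_i(x) = 0.5*(a_i . x - b_i)^2, the step
        x+ = argmin_x { f_i(x) + (1/(2*alpha)) * ||x - x_k||^2 }
    has the closed form x+ = x_k - alpha*(a_i . x_k - b_i)/(1 + alpha*||a_i||^2) * a_i.
    """
    rng = random.Random(seed)
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(len(A))            # randomized component selection
        a, bi = A[i], b[i]
        r = sum(aj * xj for aj, xj in zip(a, x)) - bi        # residual a_i.x - b_i
        scale = alpha / (1.0 + alpha * sum(aj * aj for aj in a))
        x = [xj - scale * r * aj for aj, xj in zip(a, x)]    # exact proximal step
    return x
```

On a consistent system the iterates behave like an under-relaxed randomized Kaczmarz method and converge to the common solution; for inconsistent or nonsmooth sums, the paper's analysis covers diminishing step sizes and combinations with gradient and subgradient steps.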

90C25 Convex programming
90C06 Large-scale problems in mathematical programming
90C90 Applications of mathematical programming
[1] Blatt D., Hero A.O., Gauchman H.: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18, 29–51 (2008) · Zbl 1154.90015
[2] Bauschke H.H., Combettes P.L., Luke D.R.: Hybrid projection-reflection method for phase retrieval. J. Opt. Soc. Am. A 20, 1025–1034 (2003)
[3] Bauschke H.H., Combettes P.L., Kruk S.G.: Extrapolation algorithm for affine-convex feasibility problems. Numer. Algorithms 41, 239–274 (2006) · Zbl 1098.65060
[4] Ben-Tal A., Margalit T., Nemirovski A.: The ordered subsets mirror descent optimization method and its use for the positron emission tomography reconstruction. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, Elsevier, Amsterdam, Netherlands (2001) · Zbl 0992.92034
[5] Bertsekas D.P., Nedić A., Ozdaglar A.E.: Convex Analysis and Optimization. Athena Scientific, Belmont, MA (2003) · Zbl 1140.90001
[6] Bauschke, H.H. : Projection algorithms: results and open problems. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, Elsevier, Amsterdam, Netherlands (2001) · Zbl 0991.65050
[7] Bertsekas D.P., Tsitsiklis J.N.: Neuro-Dynamic Programming. Athena Scientific, Belmont, MA (1996) · Zbl 0924.68163
[8] Bertsekas D.P., Tsitsiklis J.N.: Gradient convergence in gradient methods. SIAM J. Optim. 10, 627–642 (2000) · Zbl 1049.90130
[9] Beck A., Teboulle M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009) · Zbl 1175.94009
[10] Beck, A., Teboulle, M.: Gradient-based algorithms with applications to signal-recovery problems. In: Eldar, Y., Palomar, D. (eds.) Convex Optimization in Signal Processing and Communications, pp. 42–88. Cambridge University Press, Cambridge (2010) · Zbl 1211.90290
[11] Bertsekas, D.P., Yu, H.: A unifying polyhedral approximation framework for convex optimization. In: Laboratory for Information and Decision Systems Report LIDS-P-2820. MIT (2009); SIAM J. Optim. (to appear)
[12] Bertsekas D.P.: Incremental least squares methods and the extended Kalman filter. SIAM J. Optim. 6, 807–822 (1996) · Zbl 0945.93026
[13] Bertsekas D.P.: Hybrid incremental gradient method for least squares. SIAM J. Optim. 7, 913–926 (1997) · Zbl 0887.49025
[14] Bertsekas, D.P.: Nonlinear Programming. 2nd edn. Athena Scientific, Belmont, MA (1999) · Zbl 1015.90077
[15] Bertsekas D.P.: Convex Optimization Theory. Athena Scientific, Belmont, MA (2009) · Zbl 1242.90001
[16] Bertsekas, D.P.: Incremental gradient, subgradient, and proximal methods for convex optimization: a survey. In: Laboratory for Information and Decision Systems Report LIDS-P-2848. MIT (2010)
[17] Bioucas-Dias J., Figueiredo M.A.T.: A new TwIST: two-step iterative shrinkage thresholding algorithms for image restoration. IEEE Trans. Image Process. 16, 2992–3004 (2007) · Zbl 05516510
[18] Chambolle A., DeVore R.A., Lee N.Y., Lucier B.J.: Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7, 319–335 (1998) · Zbl 0993.94507
[19] Cegielski A., Suchocka A.: Relaxed alternating projection methods. SIAM J. Optim. 19, 1093–1106 (2008) · Zbl 1180.65073
[20] Combettes P.L., Wajs V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005) · Zbl 1179.94031
[21] Daubechies I., Defrise M., De Mol C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004) · Zbl 1077.65055
[22] Elad M., Matalon B., Zibulevsky M.: Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization. J. Appl. Comput. Harmon. Anal. 23, 346–367 (2007) · Zbl 1133.65022
[23] Figueiredo M.A.T., Nowak R.D.: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 12, 906–916 (2003) · Zbl 1279.94015
[24] Figueiredo M.A.T., Nowak R.D., Wright S.J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007)
[25] Gubin L.G., Polyak B.T., Raik E.V.: The method of projections for finding the common point of convex sets. U.S.S.R. Comput. Math. Math. Phys. 7, 1–24 (1967) (English Translation) · Zbl 0199.51002
[26] Grippo L.: A class of unconstrained minimization methods for neural network training. Optim. Methods Softw. 4, 135–150 (1994)
[27] Grippo L.: Convergent on-line algorithms for supervised learning in neural networks. IEEE Trans. Neural Netw. 11, 1284–1299 (2000)
[28] Helou E.S., De Pierro A.R.: Incremental subgradients for constrained convex optimization: a unified framework and new methods. SIAM J. Optim. 20, 1547–1572 (2009) · Zbl 1207.65082
[29] Johansson B., Rabi M., Johansson M.: A randomized incremental subgradient method for distributed optimization in networked systems. SIAM J. Optim. 20, 1157–1170 (2009) · Zbl 1201.65100
[30] Kibardin V.M.: Decomposition into functions in the minimization problem. Autom. Remote Control 40, 1311–1323 (1980) · Zbl 0428.49026
[31] Kiwiel K.C.: Convergence of approximate and incremental subgradient methods for convex optimization. SIAM J. Optim. 14, 807–840 (2004) · Zbl 1063.90039
[32] Lions P.L., Mercier B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979) · Zbl 0426.65050
[33] Litvakov B.M.: On an iteration method in the problem of approximating a function from a finite number of observations. Avtom. Telemech. 4, 104–113 (1966) · Zbl 0217.52303
[34] Luo Z.Q., Tseng P.: Analysis of an approximate gradient projection method with applications to the backpropagation algorithm. Optim. Methods Softw. 4, 85–101 (1994)
[35] Luo Z.Q.: On the convergence of the LMS algorithm with adaptive learning rate for linear feedforward networks. Neural Comput. 3, 226–245 (1991)
[36] Mangasarian O.L., Solodov M.V.: Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. Optim. Methods Softw. 4, 103–116 (1994)
[37] Martinet B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 4, 154–159 (1970)
[38] Nedić A., Bertsekas D.P., Borkar V.: Distributed asynchronous incremental subgradient methods. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, Elsevier, Amsterdam, Netherlands (2001) · Zbl 0997.90102
[39] Nedić A., Bertsekas D.P.: Convergence rate of the incremental subgradient algorithm. In: Uryasev, S., Pardalos, P.M. (eds.) Stochastic Optimization: Algorithms and Applications, Kluwer Academic Publishers, Dordrecht (2000) · Zbl 0984.90033
[40] Nedić A., Bertsekas D.P.: Incremental subgradient methods for nondifferentiable optimization. SIAM J. Optim. 12, 109–138 (2001) · Zbl 0991.90099
[41] Nedić A., Bertsekas D.P.: The effect of deterministic noise in subgradient methods. Math. Program. Ser. A 125, 75–99 (2010) · Zbl 1205.90225
[42] Nedić, A.: Random projection algorithms for convex minimization problems. University of Illinois Report (2010); Math. Program. J. (to appear)
[43] Neveu J.: Discrete Parameter Martingales. North-Holland, Amsterdam, The Netherlands (1975) · Zbl 0345.60026
[44] Predd J.B., Kulkarni S.R., Poor H.V.: A collaborative training algorithm for distributed learning. IEEE Trans. Inf. Theory 55, 1856–1871 (2009) · Zbl 1368.68285
[45] Passty G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383–390 (1979) · Zbl 0428.47039
[46] Ram S.S., Nedić A., Veeravalli V.V.: Incremental stochastic subgradient algorithms for convex optimization. SIAM J. Optim. 20, 691–717 (2009) · Zbl 1231.90312
[47] Ram S.S., Nedić A., Veeravalli V.V.: Distributed stochastic subgradient projection algorithms for convex optimization. J. Optim. Theory Appl. 147, 516–545 (2010) · Zbl 1254.90171
[48] Rabbat, M.G., Nowak, R.D.: Distributed optimization in sensor networks. In: Proceedings of Information Processing Sensor Networks, pp. 20–27. Berkeley, CA (2004)
[49] Rabbat M.G., Nowak R.D.: Quantized incremental algorithms for distributed optimization. IEEE J. Sel. Areas Commun. 23, 798–808 (2005)
[50] Rockafellar R.T.: Convex Analysis. Princeton University Press, Princeton (1970) · Zbl 0193.18401
[51] Rockafellar R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976) · Zbl 0358.90053
[52] Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A.: Pegasos: primal estimated subgradient solver for SVM. In: ICML '07, pp. 807–814. New York, NY (2007) · Zbl 1211.90239
[53] Solodov M.V., Zavriev S.K.: Error stability properties of generalized gradient-type algorithms. J. Optim. Theory Appl. 98, 663–680 (1998) · Zbl 0913.90245
[54] Solodov M.V.: Incremental gradient algorithms with stepsizes bounded away from zero. Comput. Optim. Appl. 11, 28–35 (1998) · Zbl 0915.65061
[55] Tseng P.: An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM J. Optim. 8, 506–531 (1998) · Zbl 0922.90131
[56] Vonesch, C., Unser, M.: Fast iterative thresholding algorithm for wavelet-regularized deconvolution. In: Proceedings of the SPIE Optics and Photonics 2007 Conference on Mathematical Methods: Wavelet XII, vol. 6701, pp. 1–5. San Diego, CA (2007) · Zbl 1371.94389
[57] Wright, S.J., Nowak, R.D., Figueiredo, M.A.T.: Sparse reconstruction by separable approximation. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2008), pp. 3373–3376 (2008)
[58] Widrow, B., Hoff, M.E.: Adaptive switching circuits. Institute of Radio Engineers, Western Electronic Show and Convention, Convention Record, Part 4, 96–104 (1960)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.