
Path-following gradient-based decomposition algorithms for separable convex optimization. (English) Zbl 1317.90239

Summary: A new decomposition optimization algorithm, called path-following gradient-based decomposition, is proposed to solve separable convex optimization problems. Unlike the path-following Newton methods considered in the literature, this algorithm does not require any smoothness assumption on the objective function, which allows us to handle more general classes of problems arising in many real applications than path-following Newton methods can. The new algorithm combines three techniques: smoothing, Lagrangian decomposition, and a path-following gradient framework. It decomposes the original problem into smaller subproblems via dual decomposition and smoothing with self-concordant barriers, updates the dual variables using a path-following gradient method, and allows the subproblems to be solved in parallel. Moreover, compared to augmented Lagrangian approaches, our algorithmic parameters are updated automatically without any tuning strategy. We prove the global convergence of the new algorithm and analyze its convergence rate. We then modify the proposed algorithm by applying Nesterov's acceleration scheme, obtaining a new variant with a better convergence rate. Finally, we present preliminary numerical tests that confirm the theoretical development.
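The combination the summary describes — smooth the nonsmooth part with a barrier, dualize the coupling constraint, and take gradient steps on the dual while driving the smoothing parameter to zero along the path — can be illustrated on a toy problem. The sketch below is an assumption of this review, not the authors' algorithm: the quadratic objectives, log-barrier, step sizes, and all function names are invented for illustration.

```python
# Illustrative sketch only (NOT the paper's method): path-following gradient
# steps on the dual of a smoothed separable problem.
# Toy problem:  min (x1-3)^2 + (x2-5)^2   s.t.  x1 + x2 = 4,  x1, x2 >= 0.
# The bounds x_i >= 0 are smoothed by the self-concordant barrier -t*log(x_i),
# the coupling constraint is dualized with a multiplier y, and t is driven
# to zero while y follows gradient ascent on the smoothed dual function.

import math

def solve_subproblem(c, y, t):
    """argmin_x (x-c)^2 + y*x - t*log(x): positive root of 2x^2+(y-2c)x-t=0."""
    b = y - 2.0 * c
    return (-b + math.sqrt(b * b + 8.0 * t)) / 4.0

def path_following_dual(cs, rhs, t0=1.0, shrink=0.95, step=0.5, iters=500):
    y, t = 0.0, t0
    for _ in range(iters):
        # Each subproblem depends only on its own data: solvable in parallel.
        xs = [solve_subproblem(c, y, t) for c in cs]
        # Dual gradient is the coupling-constraint residual sum(xs) - rhs.
        y += step * (sum(xs) - rhs)
        # Follow the central path by shrinking the smoothing parameter.
        t *= shrink
    return xs, y

xs, y = path_following_dual([3.0, 5.0], 4.0)
# Approaches the constrained minimizer x = (1, 3) with multiplier y = 4.
```

The point of the sketch is structural: only the scalar dual variable is exchanged between subproblems, and the smoothing parameter is updated by a fixed geometric schedule rather than a hand-tuned penalty, mirroring the "no tuning strategy" property claimed above.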

MSC:

90C25 Convex programming
