Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\). (English) Zbl 1391.90466

Summary: This paper describes an algorithm for solving structured nonsmooth convex optimization problems using the optimal subgradient algorithm (OSGA), a first-order method with the complexity \(\mathcal {O}(\varepsilon ^{-2})\) for Lipschitz continuous nonsmooth problems and \(\mathcal {O}(\varepsilon ^{-1/2})\) for smooth problems with Lipschitz continuous gradients. If the nonsmoothness of the problem is manifested in a structured way, we reformulate the problem so that it can be solved efficiently by a new setup of OSGA (called OSGA-O) with the complexity \(\mathcal {O}(\varepsilon ^{-1/2})\). Further, to solve the reformulated problem, we equip OSGA-O with an appropriate prox-function for which the OSGA-O subproblem can be solved either in closed form or by a simple iterative scheme, which decreases the computational cost of applying the algorithm to large-scale problems. We show that applying the new scheme is feasible for many problems arising in applications. Some numerical results are reported confirming the theoretical foundations.
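The gap between the two rates can be made concrete with a toy example. The sketch below (plain Python, not the OSGA-O scheme from the paper, and not taken from it) illustrates the slow \(\mathcal {O}(\varepsilon ^{-2})\) regime: a plain subgradient method with diminishing steps \(1/\sqrt{k}\) applied to the nonsmooth function \(f(x)=|x-3|\); the choice of objective and step size is illustrative only.

```python
import math

def f(x):
    # A simple nonsmooth (Lipschitz continuous) objective with minimizer x = 3.
    return abs(x - 3.0)

def subgrad(x):
    # A subgradient of |x - 3|: sign(x - 3), taking 0 at the kink.
    return (x > 3.0) - (x < 3.0)

def subgradient_method(x0, iters):
    # Plain subgradient descent with diminishing steps 1/sqrt(k);
    # the best value seen converges at the slow O(eps^-2) rate.
    x, best = x0, f(x0)
    for k in range(1, iters + 1):
        x = x - subgrad(x) / math.sqrt(k)  # step along a negative subgradient
        best = min(best, f(x))             # track the best objective value
    return best

best = subgradient_method(0.0, 5000)
```

Because the worst-case error of this scheme decays only like \(1/\sqrt{k}\), thousands of iterations are needed for modest accuracy; exploiting the structure of the nonsmoothness, as the paper does, is what allows the much faster \(\mathcal {O}(\varepsilon ^{-1/2})\) rate.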

MSC:

90C25 Convex programming
90C60 Abstract computational complexity for mathematical programming problems
49M37 Numerical methods based on nonlinear programming
65K05 Numerical mathematical programming methods
68Q25 Analysis of algorithms and problem complexity
