Convergence rate of incremental subgradient algorithms. (English) Zbl 0984.90033

Uryasev, Stanislav (ed.) et al., Stochastic optimization: Algorithms and applications. Conference, Univ. of Florida, Gainesville, FL, USA, February 20-22, 2000. Dordrecht: Kluwer Academic Publishers. Appl. Optim. 54, 223-264 (2001).
Summary: We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method.
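[Reviewer's illustration: a minimal Python sketch of the cyclic incremental subgradient idea described above, assuming each component function is available through a subgradient oracle. The function name, the diminishing step-size rule, and the cycle count are illustrative choices, not the specific variants analyzed in the paper.]

```python
import numpy as np

def incremental_subgradient(subgrads, x0, num_cycles=100,
                            step=lambda k: 1.0 / (k + 1)):
    """Cyclic incremental subgradient sketch for minimizing sum_i f_i(x).

    subgrads : list of callables; subgrads[i](x) returns a subgradient of f_i at x
    x0       : starting point (NumPy array)
    step     : diminishing step-size rule alpha_k (illustrative choice)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_cycles):
        alpha = step(k)
        # One cycle: step along a subgradient of each component in turn,
        # updating x after every component (the "intermediate adjustment").
        for g_i in subgrads:
            x = x - alpha * g_i(x)
    return x
```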
In this paper, we present convergence results and estimates of the convergence rate for a number of variants of incremental subgradient methods, including some that use randomization. The convergence rate estimates are consistent with our computational results and suggest that the randomized variants perform substantially better than their deterministic counterparts.
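[Reviewer's illustration: a sketch of a randomized variant in the same spirit, where the component to process is drawn uniformly at random at each step instead of cycling in a fixed order. The uniform sampling, step-size rule, and toy example are assumptions made for illustration only.]

```python
import numpy as np

def randomized_incremental_subgradient(subgrads, x0, num_steps=1000,
                                       step=lambda k: 1.0 / (k + 1), seed=0):
    """Randomized incremental subgradient sketch: at each iteration a component
    index is drawn uniformly at random and a subgradient step is taken for it."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = len(subgrads)
    for k in range(num_steps):
        i = rng.integers(m)              # uniform random component selection
        x = x - step(k) * subgrads[i](x)
    return x

# Toy usage: minimize sum_i |x - a_i|; the minimizer is the median of the a_i.
a = np.array([1.0, 2.0, 10.0])
subgrads = [lambda x, ai=ai: np.sign(x - ai) for ai in a]
x_star = randomized_incremental_subgradient(subgrads, x0=np.array([0.0]))
```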
For the entire collection see [Zbl 0964.00055].

MSC:

90C25 Convex programming
52A41 Convex functions and convex programs in convex geometry
90C15 Stochastic programming