zbMATH — the first resource for mathematics

Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications. (English) Zbl 06951767

65D18 Numerical aspects of computer graphics, image analysis, and computational geometry
65K10 Numerical optimization and variational techniques
74S60 Stochastic and other probabilistic methods applied to problems in solid mechanics
90C25 Convex programming
90C15 Stochastic programming
92C55 Biomedical imaging and signal processing
94A08 Image processing (compression, reconstruction, etc.) in information and communication theory
Full Text: DOI arXiv
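For context, the stochastic primal-dual hybrid gradient (SPDHG) iteration named in the title can be sketched as follows; this is an added summary assuming the standard formulation for problems of the form min_x g(x) + Σᵢ fᵢ(Aᵢx), and the step sizes τ, σᵢ and sampling probabilities pᵢ are assumed notation, not taken from this record.

```latex
% One SPDHG iteration (serial sampling), assuming the standard
% formulation  min_x  g(x) + \sum_{i=1}^n f_i(A_i x),
% with dual aggregate z^k = \sum_i A_i^* y_i^k.
\begin{aligned}
x^{k+1} &= \operatorname{prox}_{\tau g}\!\bigl(x^k - \tau \bar z^k\bigr)
  \quad \text{(primal step, extrapolated dual aggregate)} \\
&\text{draw index } i \text{ with probability } p_i, \\
y_i^{k+1} &= \operatorname{prox}_{\sigma_i f_i^*}\!\bigl(y_i^k + \sigma_i A_i x^{k+1}\bigr),
  \qquad y_j^{k+1} = y_j^k \ \ (j \ne i), \\
z^{k+1} &= z^k + A_i^*\bigl(y_i^{k+1} - y_i^k\bigr), \qquad
\bar z^{k+1} = z^{k+1} + \tfrac{1}{p_i}\, A_i^*\bigl(y_i^{k+1} - y_i^k\bigr).
\end{aligned}
```

Only the sampled dual block is updated per iteration, and the 1/pᵢ-weighted extrapolation makes the dual aggregate unbiased in expectation; arbitrary samplings generalize the single-index draw above.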
[1] J. Adler, H. Kohr, and O. Öktem, Operator Discretization Library (ODL), 2017.
[2] Z. Allen-Zhu, Z. Qu, P. Richtárik, and Y. Yuan, Even faster accelerated coordinate descent using non-uniform sampling, in International Conference on Machine Learning, Proc. Mach. Learn. Res. 48, 2016.
[3] P. Balamurugan and F. Bach, Stochastic variance reduction methods for saddle-point problems, in Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS’16), 2016, pp. 1416–1424.
[4] H. Bauschke and J. Borwein, Legendre functions and the method of random Bregman projections, J. Convex Anal., 4 (1997), pp. 27–67. · Zbl 0894.49019
[5] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, 2011. · Zbl 1218.47001
[6] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), pp. 183–202. · Zbl 1175.94009
[7] M. Benning, C.-B. Schönlieb, T. Valkonen, and V. Vlačić, Explorations on Anisotropic Regularisation of Dynamic Inverse Problems by Bilevel Optimisation, preprint, 2016.
[8] D. P. Bertsekas, Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey, in Optimization for Machine Learning, S. Sra, S. Nowozin, and S. J. Wright, eds., MIT Press, 2011, pp. 85–120.
[9] D. P. Bertsekas, Incremental proximal methods for large scale convex optimization, Math. Program., 129 (2011), pp. 163–195. · Zbl 1229.90121
[10] D. Blatt, A. O. Hero, and H. Gauchman, A convergent incremental gradient method with a constant step size, SIAM J. Optim., 18 (2007), pp. 29–51. · Zbl 1154.90015
[11] K. Bredies and M. Holler, A TGV-based framework for variational image decompression, zooming, and reconstruction. Part I: Analytics, SIAM J. Imaging Sci., 8 (2015), pp. 2814–2850. · Zbl 1333.94006
[12] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vision, 20 (2004), pp. 89–97. · Zbl 1366.94048
[13] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vision, 40 (2011), pp. 120–145. · Zbl 1255.68217
[14] A. Chambolle and T. Pock, An introduction to continuous optimization for imaging, Acta Numer., 25 (2016), pp. 161–319. · Zbl 1343.65064
[15] A. Chambolle and T. Pock, On the ergodic convergence rates of a first-order primal-dual algorithm, Math. Program., 159 (2016), pp. 253–287. · Zbl 1350.49035
[16] P. L. Combettes and J.-C. Pesquet, Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping, SIAM J. Optim., 25 (2015), pp. 1221–1248. · Zbl 1317.65135
[17] D. Csiba, Z. Qu, and P. Richtárik, Stochastic dual coordinate ascent with adaptive probabilities, J. Mach. Learn. Res., 37 (2015), pp. 674–683.
[18] C. D. Dang and G. Lan, Randomized Methods for Saddle Point Computation, preprint, 2014.
[19] A. Defazio, F. Bach, and S. Lacoste-Julien, SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives, in Advances in Neural Information Processing Systems 27 (NIPS 2014), Curran Associates, 2014, pp. 1646–1654.
[20] R. M. de Oliveira, E. S. Helou, and E. F. Costa, String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images, Inverse Problems, 32 (2016), 115014. · Zbl 1355.65080
[21] E. Esser, X. Zhang, and T. F. Chan, A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science, SIAM J. Imaging Sci., 3 (2010), pp. 1015–1046. · Zbl 1206.90117
[22] V. Estellers, S. Soatto, and X. Bresson, Adaptive regularization with the structure tensor, IEEE Trans. Image Process., 24 (2015), pp. 1777–1790.
[23] O. Fercoq and P. Bianchi, A Coordinate Descent Primal-Dual Algorithm with Large Step Size and Possibly Non Separable Functions, preprint, 2015.
[24] O. Fercoq and P. Richtárik, Accelerated, parallel, and proximal coordinate descent, SIAM J. Optim., 25 (2015), pp. 1997–2023. · Zbl 1327.65108
[25] X. Gao, Y. Xu, and S. Zhang, Randomized Primal-Dual Proximal Block Coordinate Updates, preprint, 2016.
[26] G. Gilboa, M. Moeller, and M. Burger, Nonlinear spectral analysis via one-homogeneous functionals: Overview and future prospects, J. Math. Imaging Vision, 56 (2016), pp. 300–319. · Zbl 1409.94182
[27] F. Knoll, M. Holler, T. Koesters, R. Otazo, K. Bredies, and D. K. Sodickson, Joint MR-PET reconstruction using a multi-channel image regularizer, IEEE Trans. Medical Imaging, 36 (2017), pp. 1–16.
[28] J. Konečný, J. Liu, P. Richtárik, and M. Takáč, Mini-batch semi-stochastic gradient descent in the proximal setting, IEEE J. Selected Topics Signal Process., 10 (2016), pp. 242–255.
[29] R. D. Kongskov, Y. Dong, and K. Knudsen, Directional Total Generalized Variation Regularization, preprint, 2017.
[30] P.-L. Lions and B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), pp. 964–979. · Zbl 0426.65050
[31] A. Nedić and D. P. Bertsekas, Incremental subgradient methods for nondifferentiable optimization, SIAM J. Optim., 12 (2001), pp. 109–138. · Zbl 0991.90099
[32] J. M. Ollinger and J. A. Fessler, Positron emission tomography, IEEE Signal Process. Mag., 14 (1997), pp. 43–55.
[33] N. Parikh and S. P. Boyd, Proximal algorithms, Found. Trends Optim., 1 (2014), pp. 123–231.
[34] Z. Peng, T. Wu, Y. Xu, M. Yan, and W. Yin, Coordinate friendly structures, algorithms and applications, Ann. Math. Sci. Appl., 1 (2016), pp. 1–54. · Zbl 1432.65076
[35] J.-C. Pesquet and A. Repetti, A Class of Randomized Primal-Dual Algorithms for Distributed Optimization, preprint, 2015. · Zbl 1336.65113
[36] T. Pock and A. Chambolle, Diagonal preconditioning for first order primal-dual algorithms in convex optimization, in Proceedings of the IEEE International Conference on Computer Vision, 2011, pp. 1762–1769.
[37] T. Pock, D. Cremers, H. Bischof, and A. Chambolle, An algorithm for minimizing the Mumford-Shah functional, in Proceedings of the IEEE International Conference on Computer Vision, 2009, pp. 1133–1140.
[38] Z. Qu and P. Richtárik, Coordinate descent with arbitrary sampling I: Algorithms and complexity, Optim. Methods Softw., 31 (2016), pp. 829–857. · Zbl 1365.90205
[39] Z. Qu and P. Richtárik, Quartz: Randomized dual coordinate ascent with arbitrary sampling, in Proceedings of the 28th International Conference on Neural Information Processing Systems, Volume 1, 2015, pp. 865–873.
[40] P. Richtárik and M. Takáč, Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function, Math. Program., 144 (2014), pp. 1–38. · Zbl 1301.65051
[41] P. Richtárik and M. Takáč, On optimal probabilities in stochastic coordinate descent methods, Optim. Lett., 10 (2016), pp. 1233–1243. · Zbl 1353.90148
[42] P. Richtárik and M. Takáč, Parallel coordinate descent methods for big data optimization, Math. Program., 156 (2016), pp. 433–484. · Zbl 1342.90102
[43] D. Rigie and P. La Riviere, Joint reconstruction of multi-channel, spectral CT data via constrained total nuclear variation minimization, Phys. Med. Biol., 60 (2015), pp. 1741–1762.
[44] R. T. Rockafellar, Convex Analysis, Princeton University Press, 1970. · Zbl 0193.18401
[45] L. Rosasco and S. Villa, Stochastic Inertial Primal-Dual Algorithms, preprint, 2015.
[46] M. Schmidt, N. Le Roux, and F. Bach, Minimizing finite sums with the stochastic average gradient, Math. Program., 162 (2017), pp. 83–112. · Zbl 1358.90073
[47] M. Takáč, A. Bijral, P. Richtárik, and N. Srebro, Mini-batch primal and dual methods for SVMs, in Proceedings of the 30th International Conference on Machine Learning, 2013, pp. 537–552.
[48] P. Tseng, An incremental gradient(-projection) method with momentum term and adaptive stepsize rule, SIAM J. Optim., 8 (1998), pp. 506–531. · Zbl 0922.90131
[49] T. Valkonen, Block-Proximal Methods with Spatially Adapted Acceleration, preprint, 2016.
[50] W. van Aarle, W. J. Palenstijn, J. Cant, E. Janssens, F. Bleichrodt, A. Dabravolski, J. De Beenhouwer, K. Joost Batenburg, and J. Sijbers, Fast and flexible X-ray tomography using the ASTRA toolbox, Optics Express, 24 (2016), pp. 25129–25147.
[51] W. van Aarle, W. J. Palenstijn, J. De Beenhouwer, T. Altantzis, S. Bals, K. J. Batenburg, and J. Sijbers, The ASTRA toolbox: A platform for advanced algorithm development in electron tomography, Ultramicroscopy, 157 (2015), pp. 35–47.
[52] M. Wen, S. Yue, Y. Tan, and J. Peng, A Randomized Inertial Primal-Dual Fixed Point Algorithm for Monotone Inclusions, preprint, 2016.
[53] Y. Zhang and L. Xiao, Stochastic primal-dual coordinate method for regularized empirical risk minimization, in Proceedings of the 32nd International Conference on Machine Learning, 2015, pp. 1–34.
[54] L. W. Zhong and J. T. Kwok, Fast stochastic alternating direction method of multipliers, J. Mach. Learn. Res., 32 (2014), pp. 46–54.
[55] Z. Zhu and A. J. Storkey, Adaptive stochastic primal-dual coordinate descent for separable saddle point problems, in Machine Learning and Knowledge Discovery in Databases, A. Appice, P. P. Rodrigues, V. Santos Costa, C. Soares, J. Gama, and A. Jorge, eds., Springer, 2015, pp. 643–657.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.