zbMATH — the first resource for mathematics

A continuous exact \(\ell_0\) penalty (CEL0) for least squares regularized problem. (English) Zbl 1325.65086

65K05 Numerical mathematical programming methods
90C26 Nonconvex programming, global optimization
90C55 Methods of successive quadratic programming type
[1] H. Attouch, J. Bolte, and B. F. Svaiter, Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods, Math. Program., 137 (2013), pp. 91–129. · Zbl 1260.49048
[2] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), pp. 183–202. · Zbl 1175.94009
[3] A. Blake and A. Zisserman, Visual Reconstruction, Vol. 2, MIT Press, Cambridge, MA, 1987.
[4] T. Blumensath and M. E. Davies, Iterative thresholding for sparse approximations, J. Fourier Anal. Appl., 14 (2008), pp. 629–654. · Zbl 1175.94060
[5] J. Bolte, A. Daniilidis, A. Lewis, and M. Shiota, Clarke critical values of subanalytic Lipschitz continuous functions, Ann. Polon. Math., 87 (2005), pp. 13–25. · Zbl 1090.35033
[6] L. Breiman, Better subset regression using the nonnegative garrote, Technometrics, 37 (1995), pp. 373–384. · Zbl 0862.62059
[7] E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006), pp. 489–509. · Zbl 1231.94017
[8] E. J. Candès, M. B. Wakin, and S. P. Boyd, Enhancing sparsity by reweighted \(ℓ_1\) minimization, J. Fourier Anal. Appl., 14 (2008), pp. 877–905. · Zbl 1176.94014
[9] S. Chen, C. F. N. Cowan, and P. M. Grant, Orthogonal least squares learning algorithm for radial basis function networks, IEEE Trans. Neural Networks, 2 (1991), pp. 302–309.
[10] S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., 20 (1998), pp. 33–61. · Zbl 0919.94002
[11] F. H. Clarke, Optimization and Nonsmooth Analysis, Classics Appl. Math. 5, SIAM, Philadelphia, 1990. · Zbl 0696.49002
[12] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), pp. 1168–1200. · Zbl 1179.94031
[13] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math., 57 (2004), pp. 1413–1457. · Zbl 1077.65055
[14] G. Davis, S. Mallat, and M. Avellaneda, Adaptive greedy approximations, Constr. Approx., 13 (1997), pp. 57–98. · Zbl 0885.41006
[15] T. P. Dinh and H. A. Le Thi, Recent advances in DC programming and DCA, in Transactions on Computational Intelligence XIII, Springer, New York, 2014, pp. 1–37.
[16] D. L. Donoho, For most large underdetermined systems of linear equations the minimal \(ℓ_1\)-norm solution is also the sparsest solution, Comm. Pure Appl. Math., 59 (2006), pp. 797–829. · Zbl 1113.15004
[17] J. Fan and R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, J. Amer. Statist. Assoc., 96 (2001), pp. 1348–1360. · Zbl 1073.62547
[18] S. Foucart and M.-J. Lai, Sparsest solutions of underdetermined linear systems via \(ℓ_q\)-minimization for \(0< q ≤1\), Appl. Comput. Harmon. Anal., 26 (2009), pp. 395–407. · Zbl 1171.90014
[19] G. M. Fung and O. L. Mangasarian, Equivalence of minimal \(ℓ_0\)- and \(ℓ_p\)-norm solutions of linear equalities, inequalities and linear programs for sufficiently small \(p\), J. Optim. Theory Appl., 151 (2011), pp. 1–10. · Zbl 1226.90104
[20] G. Gasso, A. Rakotomamonjy, and S. Canu, Recovering sparse signals with a certain family of nonconvex penalties and DC programming, IEEE Trans. Signal Process., 57 (2009), pp. 4686–4698. · Zbl 1391.90489
[21] S. Geman and D. Geman, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Trans. Pattern Anal. Machine Intell., PAMI-6 (1984), pp. 721–741. · Zbl 0573.62030
[22] P. Gong, C. Zhang, Z. Lu, J. Huang, and J. Ye, A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems, in Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, 2013, pp. 37–45.
[23] M. Kowalski, Thresholding rules and iterative shrinkage/thresholding algorithm: A convergence study, in Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris, France, 2014.
[24] H. A. Le Thi, H. M. Le, and T. P. Dinh, Feature selection in machine learning: An exact penalty approach using a difference of convex function algorithm, Machine Learning, July (2014), pp. 1–24. · Zbl 1343.68201
[25] Y. G. Leclerc, Constructing simple stable descriptions for image partitioning, Int. J. Comput. Vision, 3 (1989), pp. 73–102.
[26] S. G. Mallat and Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Trans. Signal Process., 41 (1993), pp. 3397–3415. · Zbl 0842.94004
[27] H. Mohimani, M. Babaie-Zadeh, I. Gorodnitsky, and C. Jutten, Sparse Recovery Using Smoothed \(ℓ_0\) (SL0): Convergence Analysis, preprint, http://arxiv.org/abs/1001.5073, 2010.
[28] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, A fast approach for overcomplete sparse decomposition based on smoothed \(ℓ_0\) norm, IEEE Trans. Signal Process., 57 (2009), pp. 289–301. · Zbl 1391.94325
[29] B. K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput., 24 (1995), pp. 227–234. · Zbl 0827.68054
[30] M. Nikolova, Markovian reconstruction using a GNC approach, IEEE Trans. Image Process., 8 (1999), pp. 1204–1220.
[31] M. Nikolova, Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares, Multiscale Model. Simul., 4 (2005), pp. 960–991. · Zbl 1091.94007
[32] M. Nikolova, Description of the minimizers of least squares regularized with \(ℓ_0\)-norm. Uniqueness of the global minimizer, SIAM J. Imaging Sci., 6 (2013), pp. 904–937. · Zbl 1281.65092
[33] M. Nikolova, Relationship Between the Optimal Solutions of Least Squares Regularized with \(ℓ_0\)-Norm and Constrained by k-Sparsity, preprint, hal-00944006v1, available online from https://hal.inria.fr/file/index/docid/944006/filename/KsparsLo.pdf, 2014.
[34] P. Ochs, A. Dosovitskiy, T. Brox, and T. Pock, On iteratively reweighted algorithms for nonsmooth nonconvex optimization in computer vision, SIAM J. Imaging Sci., 8 (2015), pp. 331–372. · Zbl 1326.65078
[35] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition, in Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers (Pacific Grove, CA), IEEE, Washington, DC, 1993, pp. 40–44.
[36] D. Peleg and R. Meir, A bilinear formulation for vector sparsity optimization, Signal Process., 88 (2008), pp. 375–389. · Zbl 1186.94273
[37] M. C. Robini, A. Lachal, and I. E. Magnin, A stochastic continuation approach to piecewise constant reconstruction, IEEE Trans. Image Process., 16 (2007), pp. 2576–2589.
[38] M. C. Robini and I. E. Magnin, Optimization by stochastic continuation, SIAM J. Imaging Sci., 3 (2010), pp. 1096–1121. · Zbl 1207.65083
[39] C. Soussen, J. Idier, D. Brie, and J. Duan, From Bernoulli–Gaussian deconvolution to sparse signal restoration, IEEE Trans. Signal Process., 59 (2011), pp. 4572–4584. · Zbl 1392.94466
[40] V. N. Temlyakov, Greedy approximation, Acta Numer., 17 (2008), pp. 235–409. · Zbl 1178.65050
[41] J. A. Tropp, Greed is good: Algorithmic results for sparse approximation, IEEE Trans. Inform. Theory, 50 (2004), pp. 2231–2242. · Zbl 1288.94019
[42] C.-H. Zhang, Nearly unbiased variable selection under minimax concave penalty, Ann. Statist., 38 (2010), pp. 894–942. · Zbl 1183.62120
[43] H. Zou, The adaptive lasso and its oracle properties, J. Amer. Statist. Assoc., 101 (2006), pp. 1418–1429. · Zbl 1171.62326