
DC approximation approach for \(\ell_0\)-minimization in compressed sensing. (English) Zbl 1406.94010

Le Thi, Hoai An (ed.) et al., Advanced computational methods for knowledge engineering. Proceedings of 3rd international conference on computer science, applied mathematics and applications – ICCSAMA 2015. Extended versions of papers, Metz, France, May 11–13, 2015. Cham: Springer (ISBN 978-3-319-17995-7/pbk; 978-3-319-17996-4/ebook). Advances in Intelligent Systems and Computing 358, 37-48 (2015).
Summary: In this paper, we study the effectiveness of several non-convex approximations of the \(\ell_0\)-norm in compressed sensing. Using four continuous non-convex approximations of the \(\ell_0\)-norm, we reformulate the compressed sensing problem as DC (Difference of Convex functions) programs and apply DCA (DC Algorithm) to solve them. Computational experiments show the efficiency and scalability of our method in comparison with other non-convex approaches such as iterative reweighted schemes (including reweighted \(\ell_1\) and iteratively reweighted least-squares algorithms).
For the entire collection see [Zbl 1316.68018].
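The scheme described in the summary can be illustrated with a minimal sketch (not the authors' code). It uses one common concave approximation, \(\phi(t) = 1 - e^{-\theta|t|}\); the paper studies four approximations, which are not reproduced here. Linearizing the concave part at the current iterate turns each DCA step into a weighted \(\ell_1\) subproblem with weights \(\theta e^{-\theta|x_i^k|}\) (the majorization view of the DCA step), solved below by plain ISTA on a penalized least-squares variant rather than the equality-constrained formulation. The values of \(\theta\), \(\lambda\), and the iteration counts are arbitrary illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * |.|, applied componentwise."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_l1_ista(A, b, w, lam, x0, n_iter=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i|."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam * w / L)
    return x

def dca_exp_l0(A, b, theta=5.0, lam=1e-3, n_outer=20):
    """DCA on the exponential l0 approximation phi(t) = 1 - exp(-theta*|t|).

    Each outer step linearizes the concave part at the current iterate,
    yielding a weighted l1 subproblem with weights theta*exp(-theta*|x_i|):
    weights shrink toward 0 on large entries and stay large near 0,
    which promotes sparsity.
    """
    _, n = A.shape
    x = np.zeros(n)
    for _ in range(n_outer):
        w = theta * np.exp(-theta * np.abs(x))   # DCA reweighting step
        x = weighted_l1_ista(A, b, w, lam, x)    # convex subproblem
    return x

# Toy usage: fit 40 random measurements of a 5-sparse vector in R^100.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42, 60, 88]] = [1.0, -2.0, 1.5, -1.0, 0.5]
b = A @ x_true
x_hat = dca_exp_l0(A, b)
```

With small \(\lambda\) this mainly drives the residual \(\|Ax - b\|\) to zero; exact support recovery additionally depends on the choice of \(\theta\), \(\lambda\), and the measurement matrix.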

MSC:

94A12 Signal theory (characterization, reconstruction, filtering, etc.)
90C26 Nonconvex programming, global optimization

Software:

PDCO

References:

[1] Attouch, H.; Bolte, J.; Redont, P.; Soubeyran, A., Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the Kurdyka-Lojasiewicz inequality, Mathematics of Operations Research, 35, 2, 438-457 (2010) · Zbl 1214.65036 · doi:10.1287/moor.1100.0449
[2] Bradley, P.S., Mangasarian, O.L.: Feature selection via concave minimization and support vector machines. In: Proceedings of the International Conference on Machine Learning (ICML 1998) (1998)
[3] Chen, S.; Donoho, D. L.; Saunders, M., Atomic decomposition by basis pursuit, SIAM Journal on Scientific Computing, 20, 1, 33-61 (1998) · Zbl 0919.94002 · doi:10.1137/S1064827596304010
[4] Candès, E. J.; Tao, T., Near-optimal signal recovery from random projections: universal encoding strategies?, IEEE Transactions on Information Theory, 52, 12, 5406-5425 (2006) · Zbl 1309.94033 · doi:10.1109/TIT.2006.885507
[5] Candès, E.J., Wakin, M.B., Boyd, S.: Enhancing Sparsity by Reweighted l1 Minimization. Journal of Fourier Analysis and Applications 14(5), 877-905 (2008); special issue on sparsity · Zbl 1176.94014
[6] Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52(2), 489-509 (2006) · Zbl 1231.94017
[7] Candès, E. J.; Randall, P. A., Highly robust error correction by convex programming, IEEE Transactions on Information Theory, 54, 7, 2829-2840 (2008) · Zbl 1332.94096 · doi:10.1109/TIT.2008.924688
[8] Chartrand, R., Exact Reconstruction of Sparse Signals via Nonconvex Minimization, IEEE Signal Process. Lett., 14, 10, 707-710 (2007) · doi:10.1109/LSP.2007.898300
[9] Chartrand, R., Yin, W.: Iteratively Reweighted Algorithms for Compressive Sensing. In: IEEE International Conference on Acoustics, Speech, and Signal Processing (2008)
[10] Daubechies, I.; DeVore, R.; Fornasier, M.; Güntürk, C. S., Iteratively reweighted least squares minimization for sparse recovery, Commun. Pure Appl. Math., 63, 1-38 (2010) · Zbl 1202.65046 · doi:10.1002/cpa.20303
[11] Donoho, D. L., Compressed sensing, IEEE Trans. Inform. Theory, 52, 4, 1289-1306 (2006) · Zbl 1288.94016 · doi:10.1109/TIT.2006.871582
[12] Donoho, D. L.; Huo, X., Uncertainty principles and ideal atomic decomposition, IEEE Transactions on Information Theory, 47, 7, 2845-2862 (2001) · Zbl 1019.94503 · doi:10.1109/18.959265
[13] Fan, J.; Li, R., Variable selection via nonconcave penalized likelihood and its oracle properties, J. Amer. Stat. Ass., 96, 456, 1348-1360 (2001) · Zbl 1073.62547 · doi:10.1198/016214501753382273
[14] Fu, W. J., Penalized regressions: The bridge versus the Lasso, Journal of Computational and Graphical Statistics, 7, 397-416 (1998)
[15] Foucart, S.; Lai, M., Sparsest solutions of underdetermined linear systems via ℓ_q-minimization for 0 < q ≤ 1, Appl. Comput. Harmon. Anal., 26, 395-407 (2009) · Zbl 1171.90014 · doi:10.1016/j.acha.2008.09.001
[16] Gasso, G.; Rakotomamonjy, A.; Canu, S., Recovering sparse signals with a certain family of nonconvex penalties and DC programming, IEEE Transactions on Signal Processing, 57, 12, 4686-4698 (2009) · Zbl 1391.90489 · doi:10.1109/TSP.2009.2026004
[17] Mohimani, G. H.; Babaie-Zadeh, M.; Jutten, C.; Davies, M. E.; James, C. J.; Abdallah, S. A.; Plumbley, M. D., Fast Sparse Representation Based on Smoothed ℓ_0 Norm, Independent Component Analysis and Signal Separation, 389-396 (2007), Heidelberg: Springer, Heidelberg · Zbl 1173.94376 · doi:10.1007/978-3-540-74494-8_49
[18] Mohimani, H.; Babaie-Zadeh, M.; Jutten, C., A fast approach for overcomplete sparse decomposition based on smoothed L0 norm, IEEE Transactions on Signal Processing, 57, 1, 289-301 (2009) · Zbl 1173.94376 · doi:10.1109/TSP.2008.2007606
[19] Lai, M.-J.; Xu, Y.; Yin, W., Improved iteratively reweighted least squares for unconstrained smoothed ℓ_p minimization, SIAM J. Numer. Anal., 51, 2, 927-957 (2013) · Zbl 1268.49038 · doi:10.1137/110840364
[20] Pham Dinh, T.; Le Thi, H. A., Convex analysis approach to DC programming: Theory, algorithms and applications, Acta Math. Vietnamica, 22, 1, 289-357 (1997) · Zbl 0895.90152
[21] Le Thi, H. A.; Pham Dinh, T., DC Optimization Algorithm for Solving The Trust Region Problem, SIAM Journal on Optimization, 8, 2, 476-505 (1998) · Zbl 0913.65054 · doi:10.1137/S1052623494274313
[22] Le Thi, H. A.; Pham Dinh, T., The DC (difference of convex functions) Programming and DCA revisited with DC models of real world nonconvex optimization problems, Annals of Operations Research, 133, 23-46 (2005) · Zbl 1116.90122 · doi:10.1007/s10479-004-5022-1
[23] Le Thi, H. A.; Van Nguyen, V.; Ouchani, S.; Tang, C.; Ling, C. X.; Zhou, X.; Cercone, N. J.; Li, X., Gene Selection for Cancer Classification Using DCA, Advanced Data Mining and Applications, 62-72 (2008), Heidelberg: Springer, Heidelberg · doi:10.1007/978-3-540-88192-6_8
[24] Le Thi, H. A.; Le Hoai, M.; Nguyen, V. V.; Pham Dinh, T., A DC Programming approach for feature selection in support vector machines learning, Adv. Data Analysis and Classification, 2, 3, 259-278 (2008) · Zbl 1284.90057 · doi:10.1007/s11634-008-0030-7
[25] Le Thi, H.A.: A new approximation for the ℓ_0-norm. Research report LITA EA 3097, University of Lorraine, France (2012)
[26] Le Thi, H. A.; Nguyen Thi, B. T.; Le, H. M.; Selamat, A.; Nguyen, N. T.; Haron, H., Sparse signal recovery by difference of convex functions algorithms, Intelligent Information and Database Systems, 387-397 (2013), Heidelberg: Springer, Heidelberg · doi:10.1007/978-3-642-36543-0_40
[27] Le Thi, H. A.; Pham Dinh, T.; Le, H. M.; Vo, X. T., DC approximation approaches for sparse optimization, European Journal of Operational Research, 244, 1, 26-46 (2015) · Zbl 1346.90819 · doi:10.1016/j.ejor.2014.11.031
[28] Le, H.M., Le Thi, H.A., Nguyen, M.C.: Sparse Semi-Supervised Support Vector Machines by DC Programming and DCA. Neurocomputing (published online November 27, 2014), doi:10.1016/j.neucom.2014.11.051
[29] Le Thi, H. A.; Nguyen, M. C.; Pham Dinh, T., A DC programming approach for finding Communities in networks, Neural Computation, 26, 12, 2827-2854 (2014) · Zbl 1415.68168 · doi:10.1162/NECO_a_00673
[30] Le Thi, H. A.; Vo, X. T.; Pham Dinh, T., Feature Selection for linear SVMs under Uncertain Data: Robust optimization based on Difference of Convex functions Algorithms, Neural Networks, 59, 36-50 (2014) · Zbl 1327.90236 · doi:10.1016/j.neunet.2014.06.011
[31] Ong, C. S.; Le Thi, H. A., Learning sparse classifiers with difference of convex functions algorithms, Optimization Methods and Software, 28, 4, 830-854 (2013) · Zbl 1282.90181 · doi:10.1080/10556788.2011.652630
[32] Peleg, D., Meir, R.: A bilinear formulation for vector sparsity optimization. Signal Processing 88(2), 375-389 (2008) ISSN 0165-1684 · Zbl 1186.94273
[33] Rinaldi, F., Concave programming for finding sparse solutions to problems with convex constraints, Optimization Methods and Software, 26, 6, 971-992 (2011) · Zbl 1254.90179 · doi:10.1080/10556788.2010.511668
[34] Rinaldi, F.; Schoen, F.; Sciandrone, M., Concave programming for minimizing the zero-norm over polyhedral sets, Comput. Opt. Appl., 46, 3, 467-486 (2010) · Zbl 1229.90170 · doi:10.1007/s10589-008-9202-9
[35] Rao, B. D.; Kreutz-Delgado, K., An affine scaling methodology for best basis selection, IEEE Trans. Signal Processing, 47, 187-200 (1999) · Zbl 0984.94010 · doi:10.1109/78.738251
[36] Thiao, M., Pham Dinh, T., Le Thi, H.A.: DC Programming Approach for a Class of Nonconvex Programs Involving ℓ_0 Norm. In: Le Thi, H.A., Bouvry, P., Pham Dinh, T. (eds.) MCO 2008. CCIS, vol. 14, pp. 348-357. Springer, Heidelberg (2008) · Zbl 1160.90626
[37] Zhang, T., Some sharp performance bounds for least squares regression with regularization, Ann. Statist., 37, 2109-2144 (2009) · Zbl 1173.62029 · doi:10.1214/08-AOS659
[38] Zhang, C.; Shao, Y.; Tan, J.; Deng, N., Mixed-norm linear support vector machine, Neural Computing and Applications, 23, 7-8, 2159-2166 (2013) · doi:10.1007/s00521-012-1166-0
[39] Zhao, Y.; Li, D., Reweighted l1-Minimization for Sparse Solutions to Underdetermined Linear Systems, SIAM J. Opt., 22, 3, 1065-1088 (2012) · Zbl 1261.65042 · doi:10.1137/110847445
[40] Zou, H., The adaptive lasso and its oracle properties, J. Amer. Stat. Ass., 101, 1418-1429 (2006) · Zbl 1171.62326 · doi:10.1198/016214506000000735
[41] Zou, H.; Li, R., One-step sparse estimates in nonconcave penalized likelihood models, The Annals of Statistics, 36, 4, 1509-1533 (2008) · Zbl 1142.62027 · doi:10.1214/009053607000000802