Sparsest solutions of underdetermined linear systems via \( \ell _q\)-minimization for \(0<q\leqslant 1\).

*(English)* Zbl 1171.90014

The paper of Simon Foucart and Ming-Jun Lai lies at the interface of applied linear algebra, the theory of inverse problems, and optimization theory. Indeed, many real-world problems, or discretized versions of continuous problems (of integral-equation form), ask for an approximate solution of a linear system of equations, often with "the" solution underdetermined. They may be motivated by data mining, statistics, science, or engineering. While the classical goal is the highest possible accuracy of the solution, the frequent ill-conditionedness (or ill-posedness) of the problem is related to a high sensitivity (or instability, complexity) of the solution and to its selection; the two targets form a tradeoff. For this reason, regularization (e.g., Tikhonov regularization) was introduced to reduce the complexity and thereby stabilize (regularize) the solution, basically by making the solution "small" in the sense of its norm, or of the norm of a "differentiated" solution (first- or second-order derivatives, etc.). The present paper can be regarded within this framework of a tradeoff between accuracy and stability; it involves various notions of smallness (or size) of the solution, geometric ones and, in particular, ones with a numerical-computational or statistical meaning, related to vanishing entries ("features"), i.e., sparsity. Thus, the paper studies the constrained minimization of some (quasi)norm of the unknown state variable subject to a system of linear equations or, approximately, subject to inequality constraints on such systems under error tolerances. The study covers theory, methods, and comparative examples.
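In the notation common in compressed sensing (the paper's exact symbols may differ), the constrained minimization problem just described can be written as

\[
\min_{z \in \mathbb{R}^N} \; \|z\|_q^q := \sum_{j=1}^{N} |z_j|^q
\quad \text{subject to} \quad A z = y, \qquad 0 < q \leqslant 1,
\]

where \(A\) is the \(m \times N\) matrix of the underdetermined system (\(m < N\)) and \(y\) the measurement vector; for imperfect data the equality constraint is relaxed to \(\|A z - y\|_2 \leqslant \theta\) for an error tolerance \(\theta \geqslant 0\).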

In the paper, the authors present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal \(\ell_q\)-quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the \(\ell_1\)-norm. Then, the authors introduce a simple numerical scheme to compute solutions with minimal \(\ell_q\)-quasinorm, and they study its convergence. Finally, they display the results of some experiments which indicate that the \(\ell_q\)-method performs better than other available methods.
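To give a concrete feel for \(\ell_q\)-minimization with \(0 < q \leqslant 1\), the sketch below uses iteratively reweighted least squares (IRLS), a well-known nonconvex scheme for this problem. It is an illustrative stand-in, not Foucart and Lai's algorithm (theirs solves a sequence of reweighted \(\ell_1\) problems); the function name, parameters, and test instance are the reviewer's own.

```python
# Hypothetical IRLS sketch for  min ||z||_q^q  s.t.  A z = y  (0 < q <= 1).
# NOT the authors' scheme: Foucart-Lai iterate reweighted l_1 problems;
# IRLS is a closely related, self-contained alternative shown here.
import numpy as np

def irls_lq(A, y, q=0.5, iters=100, eps0=1.0):
    """Approximately minimize the l_q quasinorm subject to A z = y.

    Each iterate solves a weighted minimum-norm problem in closed form;
    the smoothing parameter eps is annealed toward zero.
    """
    z = np.linalg.pinv(A) @ y          # start at the minimum l_2-norm solution
    eps = eps0
    for _ in range(iters):
        # weights w_j = (z_j^2 + eps)^(q/2 - 1): small entries get big weights,
        # pushing them further toward zero (this promotes sparsity)
        w = (z**2 + eps) ** (q / 2 - 1)
        D = np.diag(1.0 / w)
        # closed-form minimizer of  sum_j w_j z_j^2  subject to  A z = y
        z = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
        eps *= 0.8                     # anneal the smoothing parameter
    return z

# toy instance: a 4-sparse vector observed through 30 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x0 = np.zeros(60)
x0[[3, 17, 42, 55]] = [2.0, -1.5, 1.0, 3.0]
y = A @ x0
x = irls_lq(A, y, q=0.5)
```

On easy random instances such as this one the iterate typically matches the sparse vector to high accuracy, although, the problem being nonconvex, no general guarantee attaches to this particular sketch.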

The article is well structured, with five sections: an introduction, exact recovery via \(\ell_q\)-minimization, approximate recovery from imperfect data, a description of the algorithm, and numerical experiments. The experiments include plots of the frequency of success vs. the number of iterations \(n\), vs. the exponent \(q\), and vs. the sparsity \(s\), together with a comparison of five algorithms on sparse vectors with arbitrary entries: the procedure introduced here (which is indeed competitive), the orthogonal greedy algorithm, regularized orthogonal matching pursuit, \(\ell_1\)-minimization, and reweighted \(\ell_1\)-minimization.

In fact, as indicated in this review, real-world challenges of various kinds and backgrounds, from financial mathematics and risk management to Operations Research, the natural sciences, engineering, and high tech, may in the future benefit from this clear, well-written, and well-demonstrated contribution and from the solution methods it proposes.


\textit{S. Foucart} and \textit{M.-J. Lai}, Appl. Comput. Harmon. Anal. 26, No. 3, 395--407 (2009; Zbl 1171.90014)

