
Sparse polynomial chaos expansions via compressed sensing and D-optimal design. (English) Zbl 1441.65005

Summary: In the field of uncertainty quantification, sparse polynomial chaos (PC) expansions are commonly used for a variety of purposes, such as surrogate modeling. Ideas from compressed sensing may be employed to exploit this sparsity and reduce computational costs. A class of greedy compressed sensing algorithms uses least squares minimization to approximate the PC coefficients. This least squares problem lends itself to the theory of optimal design of experiments (ODE). Our work focuses on selecting an experimental design that improves the accuracy of sparse PC approximations for a fixed computational budget. We propose a novel sequential-design greedy algorithm for sparse PC approximation. At each iteration, the algorithm augments the experimental design according to the set of basis polynomials deemed important by the magnitude of their coefficients. The algorithm incorporates ideas from ODE to estimate the PC coefficients. A variety of numerical simulations on three physical models and on manufactured sparse PC expansions provide a comparative study between the proposed algorithm and other non-adaptive methods. Further, we examine the importance of sampling by comparing different strategies in terms of their ability to generate a candidate pool from which an optimal experimental design is chosen. We demonstrate that the most accurate PC coefficient approximations, with the least variability, are produced by our design-adaptive greedy algorithm combined with a studied importance sampling strategy. We provide theoretical and numerical results showing that using an optimal sampling strategy for the candidate pool is key, both for the accuracy of the approximation and for constructing an optimal design.
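To make the loop concrete, the following is a minimal Python sketch of such a design-adaptive greedy iteration for a one-dimensional model with a Legendre PC basis. The toy model, the Monte Carlo candidate pool, the coefficient-magnitude thresholding, and all identifiers (model, d_optimal_augment, and so on) are illustrative assumptions rather than the authors' exact algorithm; in particular, the paper studies importance sampling strategies for generating the candidate pool and a fuller optimal design machinery that this sketch does not reproduce.

import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)

def model(x):
    # Toy "expensive" model; in practice each evaluation is a simulation run.
    return np.exp(0.7 * x) + 0.3 * x**4

p = 10                      # total polynomial order, so p + 1 Legendre basis terms
n_candidates = 2000         # size of the candidate pool
sparsity = 4                # number of basis terms retained at each iteration
budget = 20                 # total number of model evaluations allowed

# Candidate pool sampled from the input distribution (uniform on [-1, 1],
# which pairs with the Legendre basis); the paper instead studies importance
# sampling strategies for this pool.
x_pool = rng.uniform(-1.0, 1.0, n_candidates)
Psi_pool = legvander(x_pool, p)          # candidate measurement matrix

def d_optimal_augment(design_idx, active_cols, pool_idx):
    # One-point greedy D-optimal exchange: pick the candidate whose addition
    # maximizes det(Psi^T Psi) restricted to the currently active basis columns.
    best_idx, best_logdet = pool_idx[0], -np.inf
    for j in pool_idx:
        A = Psi_pool[np.ix_(design_idx + [j], active_cols)]
        _, logdet = np.linalg.slogdet(A.T @ A)
        if logdet > best_logdet:
            best_logdet, best_idx = logdet, j
    return best_idx

# Start from a few randomly chosen runs.
design_idx = list(rng.choice(n_candidates, size=6, replace=False))
y = model(x_pool[design_idx])

while len(design_idx) < budget:
    # Least-squares estimate of all PC coefficients on the current design
    # (min-norm solution while the design is still underdetermined).
    A = Psi_pool[design_idx, :]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Keep the basis polynomials with the largest coefficient magnitudes.
    active_cols = list(np.argsort(np.abs(coef))[::-1][:sparsity])

    # Augment the design D-optimally with respect to the active basis.
    remaining = [j for j in range(n_candidates) if j not in design_idx]
    new_idx = d_optimal_augment(design_idx, active_cols, remaining)
    design_idx.append(new_idx)
    y = np.append(y, model(x_pool[new_idx]))

# Final sparse PC coefficients restricted to the last active set.
A_final = Psi_pool[np.ix_(design_idx, active_cols)]
coef_active, *_ = np.linalg.lstsq(A_final, y, rcond=None)
print(dict(zip(active_cols, np.round(coef_active, 3))))

The d_optimal_augment step is a one-point greedy exchange that maximizes det(Psi^T Psi) over the currently important basis columns, mimicking the role that D-optimal design augmentation plays in the algorithm described above.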

MSC:

65C20 Probabilistic models, generic numerical methods in probability and statistics
62K05 Optimal statistical designs

Software:

NetQuest; CoSaMP; EGO
