
Characterization of sampling patterns for low-TT-rank tensor retrieval. (English) Zbl 1466.62455

Summary: In this paper, we analyze the fundamental conditions for low-rank tensor completion given the separation or tensor-train (TT) rank, i.e., the ranks of the TT unfoldings. We exploit the algebraic structure of the TT decomposition to obtain deterministic necessary and sufficient conditions on the locations of the sampled entries that ensure finite completability. Specifically, we propose an algebraic-geometric analysis on the TT manifold that can incorporate the whole rank vector simultaneously, in contrast to the existing approach based on the Grassmannian manifold, which can only incorporate one rank component at a time. Our technique characterizes the algebraic independence of a set of polynomials defined by the sampling pattern and the TT decomposition, which is instrumental in obtaining the deterministic condition on the sampling pattern for finite completability. In addition, based on this analysis and assuming that the entries of the tensor are sampled independently with probability \(p\), we derive a lower bound on \(p\), or equivalently on the number of sampled entries, that ensures finite completability with high probability. Finally, we provide both deterministic and probabilistic conditions for unique completability.
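The TT (separation) rank vector referred to in the summary can be made concrete with a small numerical sketch: the \(k\)-th TT rank of a \(d\)-way tensor is the matrix rank of its \(k\)-th unfolding, which groups the first \(k\) modes as rows. The dimensions, rank vector, and random cores below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Build a random d-way tensor with known low TT ranks by contracting TT cores
# G_k of shape (r_{k-1}, n_k, r_k).  Shapes and ranks here are illustrative.
rng = np.random.default_rng(0)
dims = (4, 5, 6, 4)          # n_1, ..., n_d
ranks = (1, 2, 3, 2, 1)      # r_0, ..., r_d with r_0 = r_d = 1

cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1]))
         for k in range(len(dims))]

# Contract the cores G_1, ..., G_d into the full tensor.
T = cores[0]
for G in cores[1:]:
    T = np.tensordot(T, G, axes=([-1], [0]))
T = T.reshape(dims)

# The k-th TT unfolding groups modes 1..k as rows; its matrix rank is the
# k-th TT (separation) rank.  For generic cores it recovers (r_1, ..., r_{d-1}).
tt_ranks = [np.linalg.matrix_rank(T.reshape(int(np.prod(dims[:k])), -1))
            for k in range(1, len(dims))]
print(tt_ranks)  # generically equals [2, 3, 2] for the rank vector above
```

Observing a subset of the entries of such a tensor and asking when the remaining entries are determined up to finitely many choices (finite completability) is exactly the question the paper's algebraic-geometric analysis addresses.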

MSC:

62R07 Statistical aspects of big data and data science
62D05 Sampling theory, sample surveys
15A69 Multilinear algebra, tensor calculus
68W01 General topics in the theory of algorithms

Software:

MCTDH; PhaseLift
