Deterministic tensor completion with hypergraph expanders. (English) Zbl 1476.15041

Summary: We provide a novel analysis of low-rank tensor completion based on hypergraph expanders. As a proxy for rank, we minimize the max-quasinorm of the tensor, which generalizes the max-norm for matrices. Our analysis is deterministic and shows that the number of samples required to approximately recover an order-\(t\) tensor with at most \(n\) entries per dimension is linear in \(n\), under the assumption that the rank and order of the tensor are \(O(1)\). As steps in our proof, we find a new expander mixing lemma for a \(t\)-partite, \(t\)-uniform regular hypergraph model and prove several new properties about the tensor max-quasinorm. To the best of our knowledge, this is the first deterministic analysis of tensor completion. We develop a practical algorithm that solves a relaxed version of the max-quasinorm minimization problem, and we demonstrate its efficacy with numerical experiments.

MSC:

15A69 Multilinear algebra, tensor calculus
05C90 Applications of graph theory
05C65 Hypergraphs
05C48 Expander graphs

Software:

Cross
