zbMATH — the first resource for mathematics

Data science applications to string theory. (English) Zbl 1452.81004
In the paper under review, the author provides a pedagogical introduction to the data science techniques that are used to study large data sets and outlines their applications to string theory.
The problem with the string landscape is that it is unfathomably big. There is a huge number of different choices for the compact component of the string’s target space, and a huge number of additional data or boundary conditions, known as fluxes and branes, that are necessary to uniquely specify string theory in four dimensions. Early estimates argue that there are \(\mathcal{O}(10^{500})\) choices of boundary data for any typical six-dimensional compactification space [S. K. Ashok and M. R. Douglas, J. High Energy Phys. 2004, No. 1, 060, 36 p. (2004; Zbl 1243.83060)]. Estimates for the entire landscape are much larger still, \(\mathcal{O}(10^{272,000})\) [W. Taylor and Y.-N. Wang, J. High Energy Phys. 2015, No. 12, Paper No. 164, 21 p. (2015; Zbl 1388.81367)]. In addition, finding mathematically consistent and phenomenologically viable background configurations requires solving problems that are generically NP-complete, NP-hard, or even undecidable.
The paper under review consists of two parts. In Sections 2 to 9, the author introduces concepts of data science that are relevant for string theory studies. This introduction is general and does not make reference to string theory concepts. Sections 2 to 4 introduce neural networks (NNs) and Section 5 describes genetic algorithms. Section 6 describes persistent homology as an example of topological data analysis. Section 7 describes machine learning algorithms other than NNs that can be used in unsupervised machine learning to cluster data or to detect outliers and anomalies in a data set. After explaining a general problem that occurs in all these algorithms, the author introduces common algorithms such as principal component analysis, \(K\)-means clustering, mean shift clustering, Gaussian expectation-maximization clustering, and clustering with BIRCH and with DBSCAN. Section 8 introduces reinforcement learning to search for solutions in a large space of possibilities, and finally Section 9 discusses classification and regression algorithms besides NNs that can be used in supervised machine learning. The algorithms discussed are the \(k\)-nearest neighbor algorithm, decision trees and random forests, and support vector machines.
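As a minimal sketch of the unsupervised clustering setups surveyed above (using scikit-learn, which the paper's reference list cites; the two Gaussian blobs are invented stand-ins for families of points in some feature space, not data from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic, purely illustrative data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
blob_b = rng.normal(loc=5.0, scale=0.1, size=(50, 2))
data = np.vstack([blob_a, blob_b])

# K-means with K=2 recovers the two blobs without seeing any labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
labels = km.labels_
```

The same `data` array could equally be fed to DBSCAN or BIRCH from `sklearn.cluster`; K-means is shown here only because it is the simplest of the algorithms the review lists.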
In Section 10, the author explains the hardness of the problems encountered in string theory, reviews the existing machine learning literature, and illustrates applications of the techniques explained in Sections 2 to 9 to problems that arise in string theory. These include computing cohomologies of line bundles over Calabi-Yau manifolds; generating and proving conjectures based on observations made by the AI in some data sets; predicting the types of non-Higgsable gauge groups that appear in F-theory on toric, elliptically fibered Calabi-Yau fourfolds; generating superpotentials for \(4D\) \(\mathcal{N} = 1\) theories; studying the structure of string vacua; and searching through the landscape of string vacua to identify viable models. The author also illustrates the use of genetic algorithms to distinguish high-scale SUSY breaking models, and the use of convolutional neural networks on toric diagrams to predict volumes of Sasaki-Einstein manifolds. Furthermore, the author introduces the idea of using NNs to approximate the bulk metric in AdS/CFT, and discusses deep Boltzmann machines and their relation to AdS/CFT and Riemann theta functions.
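The supervised setups above share a common shape: a regressor is trained to predict a numerical invariant from feature vectors describing a geometry. A hedged toy sketch of that shape (the features and the target function here are invented; they stand in for, e.g., volumes or cohomology dimensions, and do not reproduce any computation from the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy supervised regression: the target is a simple known function of
# random feature vectors, standing in for a geometric invariant.
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))
y = X.sum(axis=1)

# Hold out part of the data to measure generalization, as in the review's
# discussion of training/validation splits.
X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # coefficient of determination on held-out data
```

Swapping `RandomForestRegressor` for a neural network or a support vector machine changes only the model line; the train/predict/evaluate loop is the same across the applications listed above.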

81-08 Computational methods for problems pertaining to quantum theory
55N31 Persistent homology and applications, topological data analysis
68T05 Learning and adaptive systems in artificial intelligence
68T07 Artificial neural networks and deep learning
68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)
81T30 String and superstring theories; other extended objects (e.g., branes) in quantum field theory
Full Text: DOI
[1] Y.-H. He, Deep-Learning the Landscape, 1706.02714.
[2] Krefl, D.; Seong, R.-K., Machine learning of Calabi-Yau volumes, Phys. Rev. D, 96, 6, Article 066014 pp. (2017), 1706.03346
[3] Ruehle, F., Evolving neural networks with genetic algorithms to study the String Landscape, J. High Energy Phys., 08, 038 (2017), 1706.07024
[4] Carifio, J.; Halverson, J.; Krioukov, D.; Nelson, B. D., Machine learning in the string landscape, J. High Energy Phys., 09, 157 (2017), 1707.00655 · Zbl 1382.81155
[5] Green, M. B.; Schwarz, J. H.; Witten, E., Superstring Theory Vol. 1: Introduction, Cambridge Monographs on Mathematical Physics (1987), Cambridge University Press, Cambridge, UK, 469 pp.
[6] Green, M. B.; Schwarz, J. H.; Witten, E., Superstring Theory Vol. 2: Loop Amplitudes, Anomalies and Phenomenology, Cambridge Monographs on Mathematical Physics (1987), Cambridge University Press, Cambridge, UK, 596 pp.
[7] Ibanez, L. E.; Uranga, A. M., String Theory and Particle Physics: An Introduction to String Phenomenology (2012), Cambridge University Press · Zbl 1260.81001
[8] Blumenhagen, R.; Lüst, D.; Theisen, S., Basic Concepts of String Theory, Theoretical and Mathematical Physics (2013), Springer, Heidelberg, Germany
[9] Witten, E., String theory dynamics in various dimensions, Nuclear Phys. B, 443, 85-126 (1995), hep-th/9503124 · Zbl 0990.81663
[10] Vafa, C., Evidence for F-theory, Nuclear Phys. B, 469, 403-418 (1996), hep-th/9602022 · Zbl 1003.81531
[11] Candelas, P.; Horowitz, G. T.; Strominger, A.; Witten, E., Vacuum configurations for superstrings, Nuclear Phys. B, 258, 46-74 (1985)
[12] Douglas, M. R., The statistics of string / M theory vacua, J. High Energy Phys., 05, 046 (2003), hep-th/0303194
[13] C. Vafa, The String landscape and the swampland, hep-th/0509212. · Zbl 1117.81117
[14] Ashok, S.; Douglas, M. R., Counting flux vacua, J. High Energy Phys., 01, 060 (2004), hep-th/0307049 · Zbl 1243.83060
[15] Halverson, J.; Long, C.; Sung, B., Algorithmic universality in F-theory compactifications, Phys. Rev. D, 96, 12, Article 126006 pp. (2017), 1706.02299
[16] Taylor, W.; Wang, Y.-N., The F-theory geometry with most flux vacua, J. High Energy Phys., 12, 164 (2015), 1511.03209 · Zbl 1388.81367
[17] Coppersmith, D.; Winograd, S., Matrix multiplication via arithmetic progressions, J. Symbolic Comput., 9, 3, 251-280 (1990), Computational algebraic complexity editorial · Zbl 0702.65046
[18] Douglas, M. R.; Taylor, W., The landscape of intersecting brane models, J. High Energy Phys., 01, 031 (2007), hep-th/0606109
[19] Halverson, J.; Ruehle, F., Computational complexity of vacua and near-vacua in field and string theory, Phys. Rev. D, 99, 4, Article 046015 pp. (2019), 1809.08279
[20] (Mayr, E. W.; Prömel, H. J.; Steger, A., Lectures on Proof Verification and Approximation Algorithms, Lecture Notes in Computer Science, vol. 1367 (1998), Springer) · Zbl 1043.68579
[21] Siegel, C. L., Zur Theorie der quadratischen Formen, Nachr. Akad. Wiss. Göttingen Math.-Phys. Kl. II, 21-46 (1972) · Zbl 0252.10019
[22] Manders, K.; Adleman, L., NP-complete decision problems for quadratic polynomials, (Proceedings of the Eighth Annual ACM Symposium on Theory of Computing, STOC ’76 (1976), ACM, New York, NY, USA), 23-29 · Zbl 0381.68044
[23] Lenstra, A. K.; Lenstra, H. W.; Lovász, L., Factoring polynomials with rational coefficients, Math. Ann., 261, 515-534 (1982) · Zbl 0488.12001
[24] Halverson, J.; Nelson, B.; Ruehle, F., Branes with brains: Exploring string vacua with deep reinforcement learning, J. High Energy Phys., 06, 003 (2019), 1903.11616 · Zbl 1416.83125
[25] Wolfram Research, Inc., Mathematica, Champaign, IL, 2019.
[26] F. Ruehle, GitHub page with material for this review, http://github.com/ruehlef/Physics-Reports.
[27] Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A., Automatic differentiation in PyTorch, (NIPS Autodiff Workshop (2017))
[28] Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G. S.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Goodfellow, I.; Harp, A.; Irving, G.; Isard, M.; Jia, Y.; Jozefowicz, R.; Kaiser, L.; Kudlur, M.; Levenberg, J.; Mané, D.; Monga, R.; Moore, S.; Murray, D.; Olah, C.; Schuster, M.; Shlens, J.; Steiner, B.; Sutskever, I.; Talwar, K.; Tucker, P.; Vanhoucke, V.; Vasudevan, V.; Viégas, F.; Vinyals, O.; Warden, P.; Wattenberg, M.; Wicke, M.; Yu, Y.; Zheng, X., Tensorflow: Large-scale machine learning on heterogeneous systems (2015), Software available from tensorflow.org
[29] F. Chollet, et al. Keras, 2015.
[30] S. Tokui, K. Oono, S. Hido, J. Clayton, Chainer: a next-generation open source framework for deep learning, in: Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS). 2015.
[31] ChainerRL.
[32] Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; Zaremba, W., OpenAI Gym (2016)
[33] Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; Duchesnay, E., Scikit-learn: Machine learning in Python, J. Mach. Learn. Res., 12, 2825-2830 (2011) · Zbl 1280.68189
[34] De Rainville, F.-M.; Fortin, F.-A.; Gardner, M.-A.; Parizeau, M.; Gagné, C., DEAP: A Python framework for evolutionary algorithms, (Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, GECCO ’12 (2012), ACM, New York, NY, USA), 85-92, Software available at http://deap.readthedocs.io/
[35] Charbonneau, P., Release notes for PIKAIA 1.2 (2002)
[36] J. Williams, Pikaia, FORTRAN90.
[37] Otter, N.; Porter, M. A.; Tillmann, U.; Grindrod, P.; Harrington, H. A., A roadmap for the computation of persistent homology, EPJ Data Sci., 6, 17 (2017)
[38] Tausz, A.; Vejdemo-Johansson, M.; Adams, H., JavaPlex: A research software package for persistent (co)homology, (Hong, H.; Yap, C., Proceedings of ICMS 2014. Proceedings of ICMS 2014, Lecture Notes in Computer Science, vol. 8592 (2014)), 129-136, Software available at http://appliedtopology.github.io/javaplex/ · Zbl 1402.65186
[39] Bauer, U., Ripser: A lean C++ code for the computation of Vietoris-Rips persistence barcodes (2017)
[40] The GUDHI Project, GUDHI User and Reference Manual (2015), GUDHI Editorial Board
[41] Lee, J.; Bahri, Y.; Novak, R.; Schoenholz, S. S.; Pennington, J.; Sohl-Dickstein, J., Deep neural networks as Gaussian processes (2017), arXiv e-prints, arXiv:1711.00165
[42] Zhang, L.; Naitzat, G.; Lim, L.-H., Tropical geometry of deep neural networks, (Dy, J.; Krause, A., Proceedings of the 35th International Conference on Machine Learning. Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 80 (2018), PMLR Stockholmsmässan: PMLR Stockholmsmässan Stockholm Sweden), 5824-5832, 1805.07091
[43] Su, J.; Vasconcellos Vargas, D.; Kouichi, S., One pixel attack for fooling deep neural networks (2017), arXiv e-prints, arXiv:1710.08864
[44] Goodfellow, I. J.; Shlens, J.; Szegedy, C., Explaining and harnessing adversarial examples (2014), arXiv e-prints, arXiv:1412.6572
[45] Zeiler, M. D.; Fergus, R., Visualizing and understanding convolutional networks, (Fleet, D.; Pajdla, T.; Schiele, B.; Tuytelaars, T., Computer Vision - ECCV 2014 (2014), Springer International Publishing: Springer International Publishing Cham), 818-833
[46] D. Masters, C. Luschi, Revisiting small batch training for deep neural networks, 1804.07612.
[47] Chawla, N. V.; Bowyer, K. W.; Hall, L. O.; Kegelmeyer, W. P., SMOTE: Synthetic minority over-sampling technique, J. Artif. Int. Res., 16, 321-357 (2002), 1106.1813 · Zbl 0994.68128
[48] Kreuzer, M.; Skarke, H., PALP: A package for analyzing lattice polytopes with applications to toric geometry, Comput. Phys. Comm., 157, 87-106 (2004), math/0204356 · Zbl 1196.14007
[49] Grinis, R.; Kasprzyk, A., Normal forms of convex lattice polytopes (2013), arXiv e-prints, arXiv:1301.6641
[50] Murty, K. G.; Kabadi, S. N., Some NP-complete problems in quadratic and nonlinear programming, Math. Program., 39, 117-129 (1987) · Zbl 0637.90078
[51] Anandkumar, A.; Ge, R., Efficient approaches for escaping higher order saddle points in non-convex optimization (2016), arXiv e-prints, arXiv:1602.05908
[52] Nie, J., Optimality conditions and finite convergence of Lasserre’s hierarchy (2012), arXiv e-prints, arXiv:1206.0319
[53] Rumelhart, D. E.; Hinton, G. E.; Williams, R. J., Learning representations by back-propagating errors, (Anderson, J. A.; Rosenfeld, E., Neurocomputing: Foundations of Research (1988), MIT Press, Cambridge, MA, USA), 696-699
[54] Y. LeCun, Une procédure d’apprentissage pour réseau à seuil asymétrique, in: Proceedings of Cognitiva, Vol. 85, Paris, 1985, pp. 599-604.
[55] Parker, D. B., Learning-Logic, Tech. Rep. TR-47 (1985), Center for Comp. Research in Economics and Management Sci. MIT
[56] Schmidhuber, J., Deep learning in neural networks: An overview, Neural Netw., 61, 85-117 (2015), Published online 2014; based on TR arXiv:1404.7828 [cs.NE]
[57] S. Ruder, An overview of gradient descent optimization algorithms, 1609.04747.
[58] Ge, R.; Huang, F.; Jin, C.; Yuan, Y., Escaping from saddle points — Online stochastic gradient for tensor decomposition (2015), arXiv e-prints, arXiv:1503.02101
[59] Sagun, L.; Ugur Guney, V.; Ben Arous, G.; LeCun, Y., Explorations on high dimensional landscapes (2014), arXiv e-prints, arXiv:1412.6615
[60] Yao, Z.; Gholami, A.; Lei, Q.; Keutzer, K.; Mahoney, M. W., Hessian-based analysis of large batch training and robustness to adversaries, (Bengio, S.; Wallach, H.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; Garnett, R., Advances in Neural Information Processing Systems 31 (2018), Curran Associates, Inc.), 4949-4959, 1802.08241
[61] N.S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, P.T.P. Tang, On large-batch training for deep learning: Generalization gap and sharp minima, in: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April, 24-26, 2017, Conference Track Proceedings. 2017. 1609.04836.
[62] N. Golmant, N. Vemuri, Z. Yao, V. Feinberg, A. Gholami, K. Rothauge, M.W. Mahoney, J. Gonzalez, On the computational inefficiency of large batch sizes for stochastic gradient descent, 1811.12941.
[63] S. McCandlish, J. Kaplan, D. Amodei, O.D. Team, An empirical model of large-batch training, 1812.06162.
[64] Nesterov, Y., A method for unconstrained convex minimization problem with the rate of convergence \(O(1/k^2)\), Dokl. AN USSR, 269, 543-547 (1983)
[65] Duchi, J.; Hazan, E.; Singer, Y., Adaptive subgradient methods for online learning and stochastic optimization, J. Mach. Learn. Res., 12, 2121-2159 (2011) · Zbl 1280.68164
[66] M.D. Zeiler, ADADELTA: an adaptive learning rate method, 1212.5701.
[67] D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, in: 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May, 7-9, 2015, Conference Track Proceedings. 2015. 1412.6980.
[68] He, K.; Zhang, X.; Ren, S.; Sun, J., Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, (Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV ’15 (2015), IEEE Computer Society, Washington, DC, USA), 1026-1034
[69] Glorot, X.; Bengio, Y., Understanding the difficulty of training deep feedforward neural networks, (Teh, Y. W.; Titterington, M., Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, vol. 9 (2010), PMLR Chia Laguna Resort: PMLR Chia Laguna Resort Sardinia, Italy), 249-256
[70] Ioffe, S.; Szegedy, C., Batch normalization: Accelerating deep network training by reducing internal covariate shift, (Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37 ICML’15 (2015), JMLR.org), 448-456, 1502.03167
[71] G. Klambauer, T. Unterthiner, A. Mayr, S. Hochreiter, Self-normalizing neural networks, in: Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 2017, Long Beach, CA, USA, pp. 971-980. 2017. 1706.02515.
[72] Janocha, K.; Czarnecki, W. M., On loss functions for deep neural networks in classification (2017), arXiv e-prints, arXiv:1702.05659
[73] Kullback, S.; Leibler, R. A., On information and sufficiency, Ann. Math. Stat., 22, 79-86 (1951) · Zbl 0042.38403
[74] Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R., Dropout: A simple way to prevent neural networks from overfitting, J. Mach. Learn. Res., 15, 1929-1958 (2014) · Zbl 1318.68153
[75] Hastie, T.; Tibshirani, R.; Friedman, J., The Elements of Statistical Learning, Springer Series in Statistics (2001), Springer New York Inc., New York, NY, USA · Zbl 0973.62007
[76] Yao, Y.; Rosasco, L.; Caponnetto, A., On early stopping in gradient descent learning, Constr. Approx., 26, 289-315 (2007) · Zbl 1125.62035
[77] M. Kubo, R. Banno, H. Manabe, M. Minoji, Implicit regularization in over-parameterized neural networks, 1903.01997.
[78] C. Zhang, S. Bengio, M. Hardt, B. Recht, O. Vinyals, Understanding deep learning requires rethinking generalization, in: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. 1611.03530.
[79] Gunasekar, S.; Woodworth, B. E.; Bhojanapalli, S.; Neyshabur, B.; Srebro, N., Implicit regularization in matrix factorization, (Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; Garnett, R., Advances in Neural Information Processing Systems, Vol. 30 (2017), Curran Associates, Inc.), 6151-6159
[80] Novak, R.; Bahri, Y.; Abolafia, D. A.; Pennington, J.; Sohl-Dickstein, J., Sensitivity and generalization in neural networks: an empirical study (2018), arXiv e-prints, arXiv:1802.08760
[81] Z. Allen-Zhu, Y. Li, Y. Liang, Learning and generalization in overparameterized neural networks, going beyond two layers, 1811.04918.
[82] Salimans, T.; Kingma, D. P., Weight normalization: A simple reparameterization to accelerate training of deep neural networks, (Lee, D. D.; Sugiyama, M.; Luxburg, U. V.; Guyon, I.; Garnett, R., Advances in Neural Information Processing Systems, Vol. 29 (2016), Curran Associates, Inc.), 901-909, 1602.07868
[83] Lei Ba, J.; Kiros, J. R.; Hinton, G. E., Layer normalization (2016), arXiv e-prints, arXiv:1607.06450
[84] T. van Laarhoven, L2 regularization versus batch and weight normalization, 1706.05350.
[85] Cybenko, G., Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems, 2, 303 (1989) · Zbl 0679.94019
[86] Nielsen, M. A., Neural Networks and Deep Learning (2015), Determination Press
[87] Wilson, A. C.; Roelofs, R.; Stern, M.; Srebro, N.; Recht, B., The marginal value of adaptive gradient methods in machine learning (2017), arXiv e-prints, arXiv:1705.08292
[88] Neal, R. M., Bayesian Learning for Neural Networks (1996), Springer-Verlag: Springer-Verlag Berlin, Heidelberg · Zbl 0888.62021
[89] Lee, J.; Xiao, L.; Schoenholz, S. S.; Bahri, Y.; Sohl-Dickstein, J.; Pennington, J., Wide neural networks of any depth evolve as linear models under gradient descent (2019), arXiv e-prints, arXiv:1902.06720
[90] Carpenter, G. A.; Grossberg, S.; Reynolds, J. H., ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network, Neural Netw., 4, 5, 565-588 (1991)
[91] Tan, A.-H., Self-organizing neural architecture for reinforcement learning, (Wang, J.; Yi, Z.; Zurada, J. M.; Lu, B.-L.; Yin, H., Advances in Neural Networks - ISNN 2006 (2006), Springer Berlin Heidelberg, Berlin, Heidelberg), 470-475
[92] Grossberg, S., Adaptive resonance theory: How a brain learns to consciously attend, learn, and recognize a changing world, Neural Netw., 37 (2012)
[93] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, in: Proceedings of the IEEE, 1998, pp. 2278-2324.
[94] Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A., Going deeper with convolutions (2014), arXiv e-prints, arXiv:1409.4842
[95] Lin, M.; Chen, Q.; Yan, S., Network in network (2013), arXiv e-prints, arXiv:1312.4400
[96] Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z., Rethinking the inception architecture for computer vision (2015), arXiv e-prints, arXiv:1512.00567
[97] Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A., Inception-v4, Inception-ResNet and the impact of residual connections on learning (2016), arXiv e-prints, arXiv:1602.07261
[98] He, K.; Zhang, X.; Ren, S.; Sun, J., Deep residual learning for image recognition (2015), arXiv e-prints, arXiv:1512.03385
[99] Bianco, S.; Cadène, R.; Celona, L.; Napoletano, P., Benchmark analysis of representative deep neural network architectures, IEEE Access, 6, 64270-64277 (2018), 1810.00736
[100] Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; Fei-Fei, L., ImageNet large scale visual recognition challenge (2014), arXiv e-prints, 1409.0575
[101] Krizhevsky, A.; Sutskever, I.; Hinton, G. E., ImageNet classification with deep convolutional neural networks, Commun. ACM, 60, 84-90 (2017)
[102] Ackley, D. H.; Hinton, G. E.; Sejnowski, T. J., A learning algorithm for Boltzmann machines, Cogn. Sci., 9, 1, 147-169 (1985)
[103] Hinton, G. E.; Osindero, S.; Teh, Y.-W., A fast learning algorithm for deep belief nets, Neural Comput., 18, 1527-1554 (2006) · Zbl 1106.68094
[104] Larochelle, H.; Bengio, Y., Classification using discriminative restricted Boltzmann machines, (Proceedings of the 25th International Conference on Machine Learning, ICML ’08 (2008), ACM, New York, NY, USA), 536-543
[105] Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y., Generative adversarial nets, (Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; Weinberger, K. Q., Advances in Neural Information Processing Systems, Vol. 27 (2014), Curran Associates, Inc.), 2672-2680
[106] Goodfellow, I., NIPS 2016 tutorial: Generative adversarial networks (2016), arXiv e-prints, arXiv:1701.00160
[107] Pfau, D.; Vinyals, O., Connecting generative adversarial networks and actor-critic methods, (NIPS Workshop on Adversarial Training (2016)), 1610.01945
[108] Lucic, M.; Kurach, K.; Michalski, M.; Gelly, S.; Bousquet, O., Are GANs created equal? a large-scale study (2017), arXiv e-prints, 1711.10337
[109] Kurach, K.; Lucic, M.; Zhai, X.; Michalski, M.; Gelly, S., The GAN landscape: Losses, architectures, regularization, and normalization (2018), arXiv e-prints, 1807.04720
[110] Lin, J., Divergence measures based on the Shannon entropy, IEEE Trans. Inform. Theory, 37, 145-151 (1991) · Zbl 0712.94004
[111] Arjovsky, M.; Bottou, L., Towards principled methods for training generative adversarial networks (2017), arXiv e-prints, 1701.04862
[112] Arjovsky, M.; Chintala, S.; Bottou, L., Wasserstein GAN (2017), arXiv e-prints, 1701.07875
[113] Kingma, D. P.; Welling, M., Auto-encoding variational Bayes (2013), arXiv e-prints, arXiv:1312.6114
[114] Jimenez Rezende, D.; Mohamed, S.; Wierstra, D., Stochastic backpropagation and approximate inference in deep generative models (2014), arXiv e-prints, arXiv:1401.4082
[115] Doersch, C., Tutorial on variational autoencoders (2016), arXiv e-prints, arXiv:1606.05908
[116] Goldberg, D. E., Genetic Algorithms in Search, Optimization and Machine Learning (1989), Addison-Wesley Longman Publishing Co. Inc.: Addison-Wesley Longman Publishing Co. Inc. Boston, MA, USA · Zbl 0721.68056
[117] Yamaguchi, A.; Nakajima, H., Landau gauge fixing supported by genetic algorithm, Nuclear Phys. Proc. Suppl., 83, 840-842 (2000), hep-lat/9909064
[118] Allanach, B. C.; Grellscheid, D.; Quevedo, F., Genetic algorithms and experimental discrimination of SUSY models, J. High Energy Phys., 07, 069 (2004), hep-ph/0406277
[119] Akrami, Y.; Scott, P.; Edsjo, J.; Conrad, J.; Bergstrom, L., A profile likelihood analysis of the constrained MSSM with genetic algorithms, J. High Energy Phys., 04, 057 (2010), 0910.3950 · Zbl 1272.81204
[120] Blåbäck, J.; Danielsson, U.; Dibitetto, G., Fully stable dS vacua from generalised fluxes, J. High Energy Phys., 08, 054 (2013), 1301.7073 · Zbl 1342.83334
[121] Blåbäck, J.; Danielsson, U.; Dibitetto, G., Accelerated universes from type IIA compactifications, J. Cosmol. Astropart. Phys., 1403, 003 (2014), 1310.8300
[122] Abel, S.; Rizos, J., Genetic algorithms and the search for viable string vacua, J. High Energy Phys., 08, 010 (2014), 1404.7359
[123] S. Abel, D. G. Cerdeño, S. Robles, The power of genetic algorithms: what remains of the pMSSM?, 1805.03615.
[124] Altenberg, L., The schema theorem and Price’s theorem, (Whitley, L. D.; Vose, M. D., Foundations of Genetic Algorithms, vol. 3 (1995), Elsevier), 23-49
[125] Holland, J. H., Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence (1992), MIT Press Cambridge: MIT Press Cambridge MA, USA
[126] Bridges, C. L.; Goldberg, D. E., An analysis of reproduction and crossover in a binary-coded genetic algorithm, (Proceedings of the Second International Conference on Genetic Algorithms and their Application (1987), L. Erlbaum Associates Inc., Hillsdale, NJ, USA), 9-13
[127] Blickle, T.; Thiele, L., A comparison of selection schemes used in evolutionary algorithms, Evol. Comput., 4, 361-394 (1996)
[128] Charbonneau, P.; Knapp, B., A user’s guide to PIKAIA 1.0 (1995)
[129] N. Mohd Razali, J. Geraghty, Genetic algorithm performance with different selection strategies in solving TSP, in: Proceedings of the World Congress on Engineering, vol. 2. 2011.
[130] Mendes, J., A comparative study of crossover operators for genetic algorithms to solve the job shop scheduling problem, WSEAS Trans. Comput., 12, 164-173 (2013)
[131] (Davis, L., Handbook of Genetic Algorithms, Vol. 115 (1991), Van Nostrand Reinhold, New York)
[132] Luke, S.; Spector, L., A revised comparison of crossover and mutation in genetic programming, (Genetic Programming 1997: Proceedings of the Second Annual Conference (1998), Morgan Kaufmann), 240-248
[133] Cole, A.; Shiu, G., Persistent homology and non-gaussianity, J. Cosmol. Astropart. Phys., 1803, 03, 025 (2018), [1712.08159]
[134] Cirafici, M., Persistent homology and string vacua, J. High Energy Phys., 03, 045 (2016), [1512.01170] · Zbl 1388.81505
[135] Cole, A.; Shiu, G., Topological data analysis for the string landscape, J. High Energy Phys., 03, 054 (2019), [1812.06960]
[136] Edelsbrunner, H.; Letscher, D.; Zomorodian, A., Topological persistence and simplification, Discrete Comput. Geom., 28, 511-533 (2002) · Zbl 1011.68152
[137] Oudot, S., Persistence Theory: From Quiver Representations to Data Analysis, Mathematical Surveys and Monographs (2015), American Mathematical Society · Zbl 1335.55001
[138] Zomorodian, A.; Carlsson, G., Computing persistent homology, Discrete Comput. Geom., 33, 249-274 (2005) · Zbl 1069.55003
[139] Bellman, R., Dynamic Programming (1957), Princeton University Press: Princeton University Press Princeton, NJ, USA · Zbl 0077.13605
[140] Lloyd, S., Least squares quantization in PCM, IEEE Trans. Inform. Theory, 28, 129-137 (1982) · Zbl 0504.94015
[141] Du, Q.; Faber, V.; Gunzburger, M., Centroidal Voronoi tessellations: Applications and algorithms, SIAM Rev., 41, 4, 637-676 (1999), http://doi.org/10.1137/S0036144599352836 · Zbl 0983.65021
[142] Bottou, L.; Bengio, Y., Convergence properties of the k-means algorithms, (Proceedings of the 7th International Conference on Neural Information Processing Systems NIPS’94 (1994), MIT Press: MIT Press Cambridge, MA, USA), 585-592
[143] Zhang, T.; Ramakrishnan, R.; Livny, M., BIRCH: An efficient data clustering method for very large databases, SIGMOD Rec., 25, 103-114 (1996)
[144] Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X., A density-based algorithm for discovering clusters in large spatial databases with noise, (Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96), Portland, Oregon, USA (1996), AAAI Press), 226-231
[145] Sutton, R. S.; Barto, A. G., Reinforcement Learning: An Introduction (1998), MIT Press Cambridge: MIT Press Cambridge MA
[146] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T.; Hassabis, D., Mastering the game of Go with deep neural networks and tree search, Nature, 529, 484-489 (2016)
[147] Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; Chen, Y.; Lillicrap, T.; Hui, F.; Sifre, L.; van den Driessche, G.; Graepel, T.; Hassabis, D., Mastering the game of go without human knowledge, Nature, 550, 354 (2017)
[148] Yellott, J. I., The relationship between Luce’s choice axiom, Thurstone’s theory of comparative judgment, and the double exponential distribution, J. Math. Psych., 15, 2, 109-144 (1977) · Zbl 0362.92024
[149] Maddison, C. J.; Tarlow, D.; Minka, T., A \({}^\ast\) sampling, (Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; Weinberger, K. Q., Advances in Neural Information Processing Systems 27 (2014), Curran Associates, Inc.), 3086-3094
[150] Mahalanobis, P. C., On the generalized distance in statistics, Proc. Natl. Inst. Sci. (Calcutta), 2, 49-55 (1936) · Zbl 0015.03302
[151] Goldberger, J.; Hinton, G. E.; Roweis, S. T.; Salakhutdinov, R. R., Neighbourhood components analysis, (Saul, L. K.; Weiss, Y.; Bottou, L., Advances in Neural Information Processing Systems 17 (2005), MIT Press), 513-520
[152] Weinberger, K. Q.; Blitzer, J.; Saul, L. K., Distance metric learning for large margin nearest neighbor classification, (Weiss, Y.; Schölkopf, B.; Platt, J. C., Advances in Neural Information Processing Systems 18 (2006), MIT Press), 1473-1480
[153] Hyafil, L.; Rivest, R. L., Constructing optimal binary decision trees is NP-complete, Inform. Process. Lett., 5, 1, 15-17 (1976) · Zbl 0333.68029
[154] Murthy, S. K., Automatic Construction of Decision Trees from Data: A Multidisciplinary Survey, in Data Mining and Knowledge Discovery, 1-49 (1998), Kluwer Academic Publishers, Boston
[155] Breiman, L.; Friedman, J. H.; Olshen, R. A.; Stone, C. J., Classification and regression trees, (The Wadsworth statistics/probability series (1984), Wadsworth & Brooks/Cole Advanced Books & Software: Wadsworth & Brooks/Cole Advanced Books & Software Monterey, CA) · Zbl 0541.62042
[156] Quinlan, J. R., Induction of decision trees, Mach. Learn., 1, 81-106 (1986)
[157] Salzberg, S. L., C4.5: Programs for Machine Learning by J. Ross Quinlan. Morgan Kaufmann Publishers, Inc., 1993, Mach. Learn., 16, 235-240 (1994)
[158] Chen, T.; Guestrin, C., XGBoost: A scalable tree boosting system, (Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16 (2016), ACM, New York, NY, USA), 785-794
[159] Freund, Y.; Schapire, R. E., A decision-theoretic generalization of on-line learning and an application to boosting, J. Comput. System Sci., 55, 119-139 (1997) · Zbl 0880.68103
[160] Cortes, C.; Vapnik, V., Support-vector networks, Mach. Learn., 20, 273-297 (1995) · Zbl 0831.68098
[161] Aizerman, M. A.; Braverman, E. A.; Rozonoer, L., Theoretical foundations of the potential function method in pattern recognition learning, (Automation and Remote Control, 25 (1964)), 821-837 · Zbl 0151.24701
[162] Boser, B. E.; Guyon, I. M.; Vapnik, V. N., A training algorithm for optimal margin classifiers, (Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT '92 (1992), ACM, New York, NY, USA), 144-152
[163] Vapnik, V. N., The Nature of Statistical Learning Theory (1995), Springer-Verlag, Berlin, Heidelberg · Zbl 0833.62008
[164] Schwarz, J. H., The power of M theory, Phys. Lett. B, 367, 97-103 (1996), [hep-th/9510086]
[165] Horava, P.; Witten, E., Heterotic and type I string dynamics from eleven dimensions, Nuclear Phys. B, 460, 506-524 (1996), [hep-th/9510209] · Zbl 1004.81525
[166] Candelas, P.; Dale, A. M.; Lutken, C. A.; Schimmrigk, R., Complete intersection Calabi-Yau manifolds, Nuclear Phys. B, 298, 493 (1988)
[167] Gray, J.; Haupt, A. S.; Lukas, A., All complete intersection Calabi-Yau four-folds, J. High Energy Phys., 07, 070 (2013), [1303.1832] · Zbl 1342.14086
[168] Kreuzer, M.; Skarke, H., Complete classification of reflexive polyhedra in four dimensions, Adv. Theor. Math. Phys., 4, 1209-1230 (2002), [hep-th/0002240] · Zbl 1017.52007
[169] F. Schöller, H. Skarke, All Weight Systems for Calabi-Yau Fourfolds from Reflexive Polyhedra, [1808.02422]. · Zbl 1429.32042
[170] Bouchard, V.; Skarke, H., Affine Kac-Moody algebras, CHL strings and the classification of tops, Adv. Theor. Math. Phys., 7, 2, 205-232 (2003), [hep-th/0303218]
[171] Candelas, P.; Font, A., Duality between the webs of heterotic and type II vacua, Nuclear Phys. B, 511, 295-325 (1998), [hep-th/9603170] · Zbl 0947.81054
[172] Braun, V.; Grimm, T. W.; Keitel, J., Complete intersection fibers in F-theory, J. High Energy Phys., 03, 125 (2015), [1411.2615] · Zbl 1388.81294
[173] Morrison, D. R.; Taylor, W., Toric bases for 6D F-theory models, Fortschr. Phys., 60, 1187-1216 (2012), [1204.0283] · Zbl 1255.81210
[174] Taylor, W.; Wang, Y.-N., Scanning the skeleton of the 4D F-theory landscape, J. High Energy Phys., 01, 111 (2018), [1710.11235] · Zbl 1384.83066
[175] Wall, C., Classification problems in differential topology. V. On certain 6-manifolds, Invent. Math., 1, 355-374 (1966) · Zbl 0149.20601
[176] Reid, M., The moduli space of 3-folds with \(K = 0\) may nevertheless be irreducible, Math. Ann., 278, 329-334 (1987) · Zbl 0649.14021
[177] Grassi, A., On minimal models of elliptic threefolds, Math. Ann., 290, 2, 287-302 (1991) · Zbl 0719.14006
[178] Gross, M., A finiteness theorem for elliptic Calabi-Yau threefolds, Duke Math. J., 74, 271-299 (1994) · Zbl 0838.14033
[179] Di Cerbo, G.; Svaldi, R., Birational boundedness of low dimensional elliptic Calabi-Yau varieties with a section (2016), arXiv e-prints, [1608.02997]
[180] Anderson, L. B.; Gao, X.; Gray, J.; Lee, S.-J., Fibrations in CICY threefolds, J. High Energy Phys., 10, 077 (2017), [1708.07907] · Zbl 1383.83147
[181] Huang, Y.-C.; Taylor, W., On the prevalence of elliptic and genus one fibrations among toric hypersurface Calabi-Yau threefolds, J. High Energy Phys., 03, 014 (2019), [1809.05160]
[182] Denef, F.; Douglas, M. R., Distributions of flux vacua, J. High Energy Phys., 05, 072 (2004), [hep-th/0404116]
[183] Dijkstra, T. P.T.; Huiszoon, L. R.; Schellekens, A. N., Supersymmetric standard model spectra from RCFT orientifolds, Nuclear Phys. B, 710, 3-57 (2005), [hep-th/0411129] · Zbl 1115.81378
[184] Nilles, H. P.; Vaudrevange, P. K.S., Geography of fields in extra dimensions: String theory lessons for particle physics, Modern Phys. Lett. A, 30, 10, Article 1530008 pp. (2015), [1403.1597] · Zbl 1310.81007
[185] Anderson, L. B.; Constantin, A.; Gray, J.; Lukas, A.; Palti, E., A comprehensive scan for heterotic SU(5) GUT models, J. High Energy Phys., 01, 047 (2014), [1307.4787]
[186] Groot Nibbelink, S.; Loukas, O.; Ruehle, F., (MS)SM-like models on smooth Calabi-Yau manifolds from all three heterotic string theories, Fortschr. Phys., 63, 609-632 (2015), [1507.07559] · Zbl 1338.81337
[187] Faraggi, A. E.; Rizos, J.; Sonmez, H., Classification of standard-like heterotic-string vacua, Nuclear Phys. B, 927, 1-34 (2018), [1709.08229] · Zbl 1380.81267
[188] Cvetič, M.; Halverson, J.; Lin, L.; Liu, M.; Tian, J., Quadrillion \(F\)-theory compactifications with the exact chiral spectrum of the standard model, Phys. Rev. Lett., 123, 10, Article 101601 pp. (2019), [1903.00009]
[189] Constantin, A.; He, Y.-H.; Lukas, A., Counting string theory standard models, Phys. Lett. B, 792, 258-262 (2019), [1810.00444] · Zbl 1416.81128
[190] DeWolfe, O.; Giryavets, A.; Kachru, S.; Taylor, W., Type IIA moduli stabilization, J. High Energy Phys., 07, 066 (2005), [hep-th/0505160]
[191] B.S. Acharya, M.R. Douglas, A finite landscape? [hep-th/0606212].
[192] Buchbinder, E. I.; Constantin, A.; Lukas, A., The moduli space of heterotic line bundle models: a case study for the tetra-quadric, J. High Energy Phys., 03, 025 (2014), [1311.1941]
[193] Groot Nibbelink, S.; Loukas, O.; Ruehle, F.; Vaudrevange, P. K.S., Infinite number of MSSMs from heterotic line bundles?, Phys. Rev. D, 92, 4, Article 046002 pp. (2015), [1506.00879]
[194] Denef, F.; Douglas, M. R., Computational complexity of the landscape I, Ann. Physics, 322, 1096-1142 (2007), [hep-th/0602072] · Zbl 1113.83007
[195] Denef, F.; Douglas, M. R.; Greene, B.; Zukowski, C., Computational complexity of the landscape II—Cosmological considerations, Ann. Physics, 392, 93-127 (2018), [1706.06430] · Zbl 1390.83337
[196] Cvetic, M.; Garcia-Etxebarria, I.; Halverson, J., On the computation of non-perturbative effective potentials in the string theory landscape: IIB/F-theory perspective, Fortschr. Phys., 59, 243-283 (2011), [1009.5386] · Zbl 1209.81162
[197] Garey, M. R.; Johnson, D. S., Computers and Intractability: A Guide to the Theory of NP-Completeness (1979), W. H. Freeman & Co., New York, NY, USA · Zbl 0411.68039
[198] Vavasis, S., Nonlinear Optimization: Complexity Issues, (International Series of Monographs on Computer Science (1991), Oxford University Press) · Zbl 0785.90091
[199] Aaronson, S., P =? NP, Electronic Colloquium on Computational Complexity (ECCC), 24, 4 (2017)
[200] Shor, P. W., Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM J. Comput., 26, 1484 (1997), [quant-ph/9508027] · Zbl 1005.11065
[201] Matijasevič, J. V., The Diophantineness of enumerable sets, Dokl. Akad. Nauk SSSR, 191, 279-282 (1970)
[202] Altman, R.; Carifio, J.; Halverson, J.; Nelson, B. D., Estimating Calabi-Yau hypersurface and triangulation counts with equation learners, J. High Energy Phys., 03, 186 (2019), [1811.06490] · Zbl 1414.83080
[203] Klaewer, D.; Schlechter, L., Machine learning line bundle cohomologies of hypersurfaces in toric varieties, Phys. Lett. B, 789, 438-443 (2019), [1809.02547] · Zbl 1406.14001
[204] A. Constantin, A. Lukas, Formulae for Line Bundle Cohomology on Calabi-Yau Threefolds, [1808.09992].
[205] Bull, K.; He, Y.-H.; Jejjala, V.; Mishra, C., Getting CICY high, Phys. Lett. B, 795, 700-706 (2019), [1903.03113] · Zbl 1420.14002
[206] Bull, K.; He, Y.-H.; Jejjala, V.; Mishra, C., Machine learning CICY threefolds, Phys. Lett. B, 785, 65-72 (2018), [1806.03121]
[207] Y.-H. He, S.-J. Lee, Distinguishing Elliptic Fibrations with AI, [1904.08530].
[208] Halverson, J.; Tian, J., Cost of seven-brane gauge symmetry in a quadrillion F-theory compactifications, Phys. Rev. D, 95, 2, Article 026005 pp. (2017), [1610.08864]
[209] G. Martius, C.H. Lampert, Extrapolation and learning equations, in: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. 2017. [1610.02995].
[210] Wang, Y.-N.; Zhang, Z., Learning non-higgsable gauge groups in 4D F-theory, J. High Energy Phys., 08, 009 (2018), [1804.07296]
[211] H. Erbin, S. Krippendorf, GANs for generating EFT models, [1809.02612].
[212] Kreuzer, M.; Skarke, H., No mirror symmetry in Landau-Ginzburg spectra!, Nuclear Phys. B, 388, 113-130 (1992), [hep-th/9205004]
[213] Kreuzer, M.; Skarke, H., All Abelian symmetries of Landau-Ginzburg potentials, Nuclear Phys. B, 405, 305-325 (1993), [hep-th/9211047] · Zbl 0990.81635
[214] Mütter, A.; Parr, E.; Vaudrevange, P. K.S., Deep learning in the heterotic orbifold landscape, Nuclear Phys. B, 940, 113-129 (2019), [1811.05993] · Zbl 1409.81099
[215] Nilles, H. P.; Ramos-Sanchez, S.; Vaudrevange, P. K.S.; Wingerter, A., The orbifolder: A tool to study the low energy effective theory of heterotic orbifolds, Comput. Phys. Comm., 183, 1363-1380 (2012), [1110.5229]
[216] Gmeiner, F.; Blumenhagen, R.; Honecker, G.; Lust, D.; Weigand, T., One in a billion: MSSM-like D-brane statistics, J. High Energy Phys., 01, 004 (2006), [hep-th/0510170]
[217] Cvetic, M.; Li, T.; Liu, T., Supersymmetric Pati-Salam models from intersecting D6-branes: A road to the standard model, Nuclear Phys. B, 698, 163-201 (2004), [hep-th/0403061] · Zbl 1123.81392
[218] Plauschinn, E., Non-geometric backgrounds in string theory, Phys. Rep., 798, 1-122 (2019), [1811.11203]
[219] G. Obied, H. Ooguri, L. Spodyneiko, C. Vafa, De Sitter space and the swampland, [1806.08362].
[220] Maldacena, J. M., The large N limit of superconformal field theories and supergravity, Adv. Theor. Math. Phys., 2, 231-252 (1998); reprinted in Internat. J. Theoret. Phys., 38, 1113-1133 (1999), [hep-th/9711200] · Zbl 0914.53047
[221] Hashimoto, K.; Sugishita, S.; Tanaka, A.; Tomiya, A., Deep learning and the AdS/CFT correspondence, Phys. Rev. D, 98, 4, Article 046019 pp. (2018), [1802.08313]
[222] Hashimoto, K.; Sugishita, S.; Tanaka, A.; Tomiya, A., Deep learning and holographic QCD, Phys. Rev. D, 98, 10, Article 106014 pp. (2018), [1809.10536]
[223] Hashimoto, K., AdS/CFT correspondence as a deep Boltzmann machine, Phys. Rev. D, 99, 10, Article 106017 pp. (2019), [1903.04951]
[224] D. Krefl, S. Carrazza, B. Haghighat, J. Kahlen, Riemann-Theta Boltzmann Machine, [1712.07581].
[225] S. Carrazza, D. Krefl, Sampling the Riemann-Theta Boltzmann Machine, [1804.07768].
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.