
Fully parallel mesh I/O using PETSc DMPlex with an application to waveform modeling. (English) Zbl 1462.65005

Summary: Large-scale PDE simulations using high-order finite-element methods on unstructured meshes are an indispensable tool in science and engineering. The widely used open-source PETSc library offers an efficient representation of generic unstructured meshes within its DMPlex module. This paper details our recent implementation of parallel mesh reading and topological interpolation (computation of edges and faces from a cell-vertex mesh) in DMPlex. We apply these developments to seismic wave propagation scenarios on Mars as an example application. The principal motivation is to overcome single-node memory limits and reach mesh sizes that were previously unattainable. Moreover, we demonstrate that I/O and topological interpolation scale beyond 12,000 cores and that memory-imposed limits on mesh size are eliminated.
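The core idea of topological interpolation, as described in the summary, can be illustrated with a minimal serial sketch: starting from cell-vertex connectivity alone, the intermediate mesh entities (here, the edges of a triangle mesh) are derived and deduplicated. This is a conceptual illustration only, not PETSc code; the function name and data layout are invented for this example, and the paper's parallel algorithm inside DMPlex is considerably more involved.

```python
# Conceptual sketch of "topological interpolation": given only cell-vertex
# connectivity, recover the intermediate entities (here, edges of triangles).
# Illustrative only -- this is not the DMPlex data structure or API.

def interpolate_edges(cells):
    """Return the unique edges of a triangle mesh given as vertex triples.

    Each edge is stored as a sorted vertex pair so that an edge shared by
    two neighbouring cells is counted only once -- the deduplication step
    that makes parallel topological interpolation nontrivial at scale.
    """
    edges = set()
    for v0, v1, v2 in cells:
        for a, b in ((v0, v1), (v1, v2), (v2, v0)):
            edges.add((min(a, b), max(a, b)))
    return sorted(edges)

# Two triangles sharing the edge (1, 2): 5 unique edges result, not 6.
cells = [(0, 1, 2), (1, 3, 2)]
print(interpolate_edges(cells))
# -> [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```

In the distributed-memory setting addressed by the paper, the shared entities created this way must additionally be identified consistently across process boundaries, which is what ties the interpolation step to the parallel mesh-reading machinery.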

MSC:

65-04 Software, source code, etc. for problems pertaining to numerical analysis
65Y05 Parallel numerical computation
65M50 Mesh generation, refinement, and adaptive methods for the numerical solution of initial value and initial-boundary value problems involving PDEs
05C90 Applications of graph theory
35L05 Wave equation
