
A new multi-resolution parallel framework for SPH. (English) Zbl 1440.76112
Summary: In this paper we present a new multi-resolution parallel framework designed for large-scale SPH simulations of fluid dynamics. An adaptive rebalancing criterion and a monitoring system are developed to integrate the CVP partitioning method as the rebalancer and achieve dynamic load balancing of the system. A localized nested hierarchical data structure, combined with a tailored parallel fast-neighbor-search algorithm, handles problems with arbitrarily adaptive smoothing lengths and constructs ghost buffer particles on remote processors. The concept of a “diffused graph” is proposed to improve the performance of the graph-based communication strategy. By utilizing a hybrid parallel model, the framework is able to exploit the full parallel potential of current state-of-the-art clusters based on Distributed Shared Memory (DSM) architectures. A range of gas-dynamics benchmarks is investigated to demonstrate the capability of the framework and its unique characteristics, and the performance is assessed in detail through intensive numerical experiments at various scales.
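
The summary refers to a tailored fast-neighbor-search algorithm for arbitrarily adaptive smoothing lengths, so a minimal sketch of the underlying problem may help the reader. The code below is not the authors' implementation: the paper uses a localized nested hierarchical cell structure, whereas this stand-in uses a single flat hash grid sized by the largest smoothing length, keeping only the symmetric interaction criterion. All identifiers (Particle, cellKey, findNeighbours, kappa) are illustrative assumptions, not names from the paper.

```cpp
// Hypothetical sketch: cell-linked-list neighbour search for SPH
// particles with per-particle (adaptive) smoothing lengths.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Particle { double x, y, h; };  // 2-D position, smoothing length h > 0

// Pack a 2-D integer cell index into a single 64-bit hash key.
static std::uint64_t cellKey(std::int32_t ix, std::int32_t iy) {
    return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(ix)) << 32)
         | static_cast<std::uint32_t>(iy);
}

// For every particle i, collect all j within the symmetric cut-off
// kappa * max(h_i, h_j), where kappa is the kernel support factor.
std::vector<std::vector<std::size_t>>
findNeighbours(const std::vector<Particle>& p, double kappa = 2.0) {
    // Cell edge = largest possible cut-off, so the 3x3 block of cells
    // around each particle is guaranteed to contain all its neighbours.
    double hmax = 0.0;
    for (const auto& q : p) hmax = std::max(hmax, q.h);
    const double cell = kappa * hmax;

    // Bin particles into a sparse hash grid.
    std::unordered_map<std::uint64_t, std::vector<std::size_t>> grid;
    for (std::size_t i = 0; i < p.size(); ++i) {
        const auto ix = static_cast<std::int32_t>(std::floor(p[i].x / cell));
        const auto iy = static_cast<std::int32_t>(std::floor(p[i].y / cell));
        grid[cellKey(ix, iy)].push_back(i);
    }

    // Query the 3x3 neighbourhood of each particle's cell.
    std::vector<std::vector<std::size_t>> nbr(p.size());
    for (std::size_t i = 0; i < p.size(); ++i) {
        const auto cx = static_cast<std::int32_t>(std::floor(p[i].x / cell));
        const auto cy = static_cast<std::int32_t>(std::floor(p[i].y / cell));
        for (std::int32_t dx = -1; dx <= 1; ++dx)
            for (std::int32_t dy = -1; dy <= 1; ++dy) {
                const auto it = grid.find(cellKey(cx + dx, cy + dy));
                if (it == grid.end()) continue;
                for (std::size_t j : it->second) {
                    if (j == i) continue;
                    const double r = std::hypot(p[i].x - p[j].x,
                                                p[i].y - p[j].y);
                    if (r < kappa * std::max(p[i].h, p[j].h))
                        nbr[i].push_back(j);
                }
            }
    }
    return nbr;
}
```

The flat grid is correct but wasteful when smoothing lengths span several scales, because the finest particles are binned into cells sized for the coarsest ones; avoiding exactly this cost is the motivation for the nested hierarchical data structure that the summary describes.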

MSC:
76M28 Particle methods and lattice-gas methods
65M75 Probabilistic methods, particle methods, etc. for initial value and initial-boundary value problems involving PDEs
65Y05 Parallel numerical computation