GRADSPMHD: A parallel MHD code based on the SPH formalism

https://doi.org/10.1016/j.cpc.2013.11.006

Abstract

We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the SPMHD equations in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, and a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code to compute the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code, which we previously added to the CPC Program Library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2- and 3-dimensional standard benchmark tests, and we find good agreement with previous work by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture.

Program summary

Program title: GRADSPMHD 1.0

Catalogue identifier: AERP_v1_0

Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html

Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland

Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html

No. of lines in distributed program, including test data, etc.: 620503

No. of bytes in distributed program, including test data, etc.: 19837671

Distribution format: tar.gz

Programming language: FORTRAN 90/MPI.

Computer: HPC cluster.

Operating system: Unix.

Has the code been vectorized or parallelized?: Yes, parallelized using MPI.

RAM: ∼30 MB for a Sedov test with 15625 particles on a single CPU.

Classification: 12.

Nature of problem:

Evolution of a plasma in the ideal MHD approximation.

Solution method:

The equations of magnetohydrodynamics are solved using the SPH method.

Running time:

The test provided takes approximately 20 min using 4 processors.

Introduction

Smoothed Particle Hydrodynamics (SPH) is by now widely used in many areas of astrophysics and engineering. Numerous reviews on the basic concepts and applications of SPH exist in the literature  [1], [2], [3], and we have added our own implementation of this algorithm in the form of the publicly available GRADSPH code ([4], hereafter referred to as Paper I). At the same time, in many areas of astrophysics, a transition from purely hydrodynamical to magnetohydrodynamical (MHD) simulations is evident. While many grid-based methods have been in active use to solve the MHD equations on fixed as well as adaptive grids (e.g.  [5], [6], [7]), meshfree simulation methods applied to MHD are still an area of active research  [8], [9], [10], [11], [12], [13], [14], [15]. Examples of recent efforts in algorithmic design are the weighted particle MHD scheme introduced by Gaburov et al.  [11] and the Phurbas code by McNally et al.  [12], [13]. Novel meshfree methods for simulating magnetohydrodynamic flows have already found a diverse range of applications in astrophysical MHD, including star formation  [16], [17], accretion disks and spiral structure in galactic disks  [11], [18], [19], collisions of magnetized neutron stars  [20] and structure formation in cosmology  [21], [22].

Although attempts to include magnetic fields into the elegant framework provided by SPH date back to the earliest papers on SPH  [23], a numerically stable implementation of an SPMHD algorithm (an abbreviation for “Smoothed Particle Magnetohydrodynamics”) has proven challenging, with pioneering efforts including the work by Price and collaborators  [8], [9], [10], [24]. An excellent review article  [25] explains how the main technical difficulties relate to the tensile instability and the maintenance of the divergence constraint ∇·B = 0. The tensile instability manifests itself as particle clumping in the regime where the magnetic pressure exceeds the gas pressure  [25], [26], [27], [28]. Several remedies have been proposed, including introducing repulsive correction forces between particles and modifying the tension forces in the SPMHD momentum equation  [29], [24]. Børve et al.  [26], [27], [28] reported a satisfactory way to remove the instability by adding a source term proportional to the divergence of the magnetic field. Børve’s approach is in fact a special case of the source terms introduced by Powell  [30] to stabilize the MHD equations by subtracting the effect of spurious magnetic tension forces induced by ∇·B errors.
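To make this correction concrete, the Powell-type source term amounts to subtracting from the acceleration the unphysical force that a non-zero ∇·B would exert along the field lines; schematically (in units with μ₀ = 1, and not necessarily in the exact discretized form used in the code),

\[
\left(\frac{d\mathbf{v}}{dt}\right)_{\mathrm{corrected}} = \frac{d\mathbf{v}}{dt} - \frac{\mathbf{B}\,(\nabla\cdot\mathbf{B})}{\rho},
\]

so that the subtracted term vanishes identically whenever ∇·B = 0.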

The second problem of maintaining the divergence constraint has been tackled by means of artificial resistivity terms and periodic smoothing to damp away the divergence error  [21], as well as by vector potential schemes  [29], [18], [19]. Although the latter are manifestly divergence-free by construction, they can be limited to topologically trivial field configurations and corresponding boundary conditions  [29], [19]. Attempts to remove this limitation on vector potential schemes in SPMHD have met with serious numerical difficulties  [31]. An important advance was made recently  [32], in which the authors describe a stable and conservative SPMHD implementation of the hyperbolic/parabolic correction scheme introduced by Dedner et al.  [33]. This scheme is also used in many grid-based MHD codes (e.g.  [34], [35]). The scheme reduces the divergence errors by an order of magnitude and has been successfully validated in long term simulations of magnetic cloud collapse  [36]. The range of applications of SPMHD is growing steadily thanks to these technical improvements, and here we contribute by testing an SPMHD scheme on simulations of MHD turbulence driven by the magnetorotational instability (MRI)  [37], [38]. This paper therefore presents results of SPMHD simulations of the magnetorotational instability in a 2.5D shearing box test and a global accretion disk simulation.
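For orientation, the hyperbolic/parabolic cleaning of Dedner et al.  [33] couples an additional scalar field ψ to the induction equation; in schematic continuum form (the conservative SPMHD discretization follows  [32]),

\[
\frac{d\mathbf{B}}{dt} \;\rightarrow\; \frac{d\mathbf{B}}{dt} - \nabla\psi,
\qquad
\frac{d\psi}{dt} = -c_h^2\,(\nabla\cdot\mathbf{B}) - \frac{\psi}{\tau},
\]

so that divergence errors are propagated away at the cleaning speed c_h (hyperbolic part) and damped on a timescale τ (parabolic part).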

The structure of the paper is as follows. In Section  2, we describe the general formalism of the SPMHD equations implemented in our code, the treatment of dissipative terms, as well as the correction scheme used for maintaining the divergence constraint. This provides a detailed description of our implementation of SPMHD within the framework of our parallel SPH code GRADSPH  [4]; the new code is referred to as GRADSPMHD. In Section  3, we validate the scheme by presenting the results of various standard MHD code tests in 1D, 2D and 3D. We apply the code to simulations of the magnetorotational instability in a 2.5D shearing box test, and to a 3D simulation of the development of the magnetorotational instability in a global disk setting. The performance of the code is illustrated in Section  4. Finally, our conclusions are summarized in Section  5.

Section snippets

Implementation of SPMHD in GRADSPH

The equations of ideal MHD in Lagrangian form can be written as

\[
\frac{d\rho}{dt} = -\rho\,\nabla\cdot\mathbf{v},\qquad
\frac{d\mathbf{v}}{dt} = -\frac{\nabla P}{\rho} + \frac{(\nabla\times\mathbf{B})\times\mathbf{B}}{\rho} + \mathbf{g},
\]
\[
\frac{du}{dt} = -\frac{P}{\rho}\,\nabla\cdot\mathbf{v},\qquad
\frac{d\mathbf{B}}{dt} = (\mathbf{B}\cdot\nabla)\,\mathbf{v} - \mathbf{B}\,(\nabla\cdot\mathbf{v}),
\]

where ρ denotes the density, v is the fluid velocity, P denotes the pressure, u is the thermal energy per unit mass, and B is the magnetic field. Ohmic dissipation is neglected and the constraint ∇·B = 0 is used in the induction equation. We choose a system of units in which μ₀ = 1. The vector g optionally includes the effect of an external
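As background on the “GRAD-h” formalism referred to above, the density and smoothing length of each particle a are typically obtained self-consistently from the pair of relations (standard notation of the SPMHD literature, e.g.  [24], [25]; the code's internal conventions may differ in detail)

\[
\rho_a = \sum_b m_b\, W(|\mathbf{r}_a - \mathbf{r}_b|, h_a),
\qquad
h_a = \eta \left( \frac{m_a}{\rho_a} \right)^{1/d},
\]

where d is the number of spatial dimensions and η sets the ratio of the smoothing length to the mean particle spacing. These relations are iterated to convergence, and the resulting dependence of h on ρ gives rise to the grad-h correction factors appearing in the discretized momentum and induction equations.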

Test simulations

We checked the overall performance of our SPMHD implementation by running a series of well-known MHD test problems from the literature. Besides one-dimensional shock tube problems, we performed a number of standard two-dimensional planar tests with a non-trivial solution, including the Orszag–Tang vortex test  [47] and the fast rotor test  [48]. The Orszag–Tang test involves the complex interaction of MHD shock waves and the gradual transition to MHD turbulence. The rotor test, on the other

Scalability and performance

Finally, we present a number of scalability tests of GRADSPMHD performed on VIC3, the successor of the VIC HPC cluster at KU Leuven that was previously used to develop and test GRADSPH. We used the Intel FORTRAN compiler with aggressive optimization

and we measure the performance of the code by means of the science rate, which is defined as the average number of SPH particles processed per second by the HPC system  [4]. Figs. 17 and 18 illustrate the typical scalability of GRADSPMHD on the KU
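For reference, the science rate can be computed as (our illustrative formula, consistent with the definition above)

\[
R = \frac{N_{\mathrm{part}}\, N_{\mathrm{steps}}}{T_{\mathrm{wall}}},
\]

i.e. the total number of particle updates performed during a run divided by the wall-clock time.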

Summary and conclusions

We have given a detailed account of the implementation of the system of MHD equations into our SPH code GRADSPMHD. Following Price et al.  [25], [24], the SPMHD equations are derived self-consistently from a Lagrangian principle and include the grad-h correction terms, which account for the effect of a spatially varying smoothing length. We suppress the tensile instability in low-β plasmas by adding source terms to the equation of motion which subtract the effect of a non-zero ∇·B according to

Acknowledgments

We would like to thank Dr. Daniel Price and Dr. Evghenii Gaburov for interesting discussions on meshfree methods for MHD. We gratefully acknowledge the assistance of Dr. Steven Delrue while preparing some of the figures in this work. The visualizations presented in this paper were produced using MATLAB and the VisIt visualization framework, which is freely available at https://wci.llnl.gov/codes/visit/. We thank Jim Stone for making his ATHENA code publicly available to the scientific

References (68)

  • S. Vanaverbeke et al., CPC (2009)
  • R. Keppens et al., CPC (2003)
  • B. Van der Holst et al., JCP (2007)
  • D.J. Price, JCP (2012)
  • K.G. Powell et al., JCP (1999)
  • T.S. Tricco et al., JCP (2012)
  • A. Dedner et al., JCP (2002)
  • A. Mignone et al., JCP (2010)
  • G. Toth et al., JCP (2012)
  • D.J. Price, JCP (2008)
  • J.J. Monaghan, JCP (1997)
  • V. Springel et al., New Astron. (2001)
  • D.S. Balsara et al., JCP (1999)
  • M. Brio et al., JCP (1988)
  • D.S. Balsara, JCP (2001)
  • N. Dzyurkevich et al., A&A (2010)
  • J.J. Monaghan, ARA&A (1992)
  • J.J. Monaghan, Rep. Prog. Phys. (2005)
  • P. Bodenheimer et al.
  • A. Mignone et al., ApJS (2007)
  • D.J. Price et al., MNRAS (2004)
  • D.J. Price et al., MNRAS (2004)
  • D.J. Price et al., MNRAS (2005)
  • E. Gaburov et al., MNRAS (2011)
  • J.L. Maron et al., ApJS (2012)
  • C.P. McNally et al., ApJS (2012)
  • D.J. Barnes et al., MNRAS (2012)
  • F.A. Stasyszyn et al., MNRAS (2013)
  • F. Bürzle et al., MNRAS (2011)
  • D.J. Price et al., MNRAS (2007)
  • H. Kotarba et al., ApJ (2010)
  • H. Kotarba et al., MNRAS (2009)
  • D.J. Price et al., Science (2006)
  • K. Dolag et al., MNRAS (2009)

    This paper and its associated computer program are available via the Computer Physics Communication homepage on ScienceDirect (http://www.sciencedirect.com/science/journal/00104655).
