GPU accelerated molecular dynamics simulation of thermal conductivities
Introduction
Molecular dynamics (MD) is a widely used computational tool for simulating the properties of liquids and solids at the atomistic level in research areas such as chemistry and thermodynamics [1]. However, MD simulation is computationally intensive. The time scales and system sizes that can be probed with MD are still limited by the computational power of current computers. Studying problems such as nanofluids and stiction in MEMS devices requires millions or even billions of atoms, so the computation time becomes prohibitively long.
Over the past few years, the graphics processing units (GPUs) in commodity PC hardware have evolved into powerful programmable parallel streaming processors. For example, the G70, Nvidia’s most recent GPU, performs up to 165 billion floating-point multiplies per second (165 Gflops) [2], whereas a 3 GHz Pentium 4 CPU can theoretically issue only about 6 Gflops [3]. The enormous computational potential of GPUs has led to an explosion of research into general-purpose computation on GPUs (GPGPU) [4], [5]. As Macedonia [6] pointed out, the GPU has entered computing’s mainstream.
At present, Monte Carlo methods and the lattice Boltzmann method (LBM) have been implemented on the GPU. Tomov et al. [7] reported a Monte Carlo-type simulation on the GPU; their experiments showed the GPU-based simulation to be about three times faster than the CPU version. Li et al. [8], [9] used the GPU to accelerate the computation of the LBM and applied it to a variety of fluid flow problems, obtaining speedups of a factor of 8–15 [9]. Buck et al. [10] are planning to use the massive compute power available in today’s GPUs to accelerate all the key components of Gromacs, one of the leading software packages used to simulate protein folding.
In this paper, we present an implementation of MD simulation on the GPU. As an example, the MD algorithm is used to calculate the thermal conductivities of solid argon. Our goal is to reduce the total computational time of MD simulation at a very high performance/cost ratio by introducing a GPU algorithm.
The paper is organized as follows. In Section 2 the MD algorithm for thermal conductivity calculation is described. The details of the implementation of MD simulation on GPU are presented in Section 3. In Section 4 the performance results for our implementation are given and compared with the CPU version. Finally, our conclusions are summarized in Section 5.
Section snippets
MD simulation
In MD simulation, each of the atoms or molecules is treated as a point mass. Given the interaction potential between atoms, the force acting on each atom can be calculated. Based on Newton’s second law, the motion of a large number of atoms can be described. From the motion of the ensemble of atoms, a variety of useful microscopic and macroscopic information can be extracted, such as transport coefficients and structural properties.
In this paper, the Lennard-Jones (LJ) potential is used to describe the interatomic interactions of argon.
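The snippet is truncated here; as background, the following is a minimal sketch of the MD step it outlines, with a Lennard-Jones pair force and a velocity-Verlet integrator (a common choice, not necessarily the paper’s scheme). All names and parameters (eps, sigma, m, dt) are illustrative placeholders, not values from the paper.

```cpp
// Minimal sketch: Lennard-Jones forces + velocity-Verlet integration.
// The O(N^2) all-pairs loop is for illustration; real codes use cutoffs
// and neighbor lists.
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };
struct Atom { Vec3 r, v, f; };

// LJ potential U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6); the pair force
// on atom i along r_ij is 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2 * r_ij.
void computeForces(std::vector<Atom>& atoms, double eps, double sigma) {
    for (auto& a : atoms) a.f = {0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < atoms.size(); ++i) {
        for (std::size_t j = i + 1; j < atoms.size(); ++j) {
            Vec3 d = { atoms[i].r.x - atoms[j].r.x,
                       atoms[i].r.y - atoms[j].r.y,
                       atoms[i].r.z - atoms[j].r.z };
            double r2 = d.x*d.x + d.y*d.y + d.z*d.z;
            double s2 = sigma * sigma / r2;
            double s6 = s2 * s2 * s2;
            double fr = 24.0 * eps * (2.0 * s6 * s6 - s6) / r2;
            atoms[i].f.x += fr * d.x;  atoms[j].f.x -= fr * d.x;
            atoms[i].f.y += fr * d.y;  atoms[j].f.y -= fr * d.y;
            atoms[i].f.z += fr * d.z;  atoms[j].f.z -= fr * d.z;
        }
    }
}

// One velocity-Verlet step (mass m, time step dt); forces must be
// initialized by one computeForces() call before the first step.
void verletStep(std::vector<Atom>& atoms, double m, double dt,
                double eps, double sigma) {
    for (auto& a : atoms) {                    // half-kick + drift
        a.v.x += 0.5 * dt * a.f.x / m;  a.r.x += dt * a.v.x;
        a.v.y += 0.5 * dt * a.f.y / m;  a.r.y += dt * a.v.y;
        a.v.z += 0.5 * dt * a.f.z / m;  a.r.z += dt * a.v.z;
    }
    computeForces(atoms, eps, sigma);          // forces at new positions
    for (auto& a : atoms) {                    // second half-kick
        a.v.x += 0.5 * dt * a.f.x / m;
        a.v.y += 0.5 * dt * a.f.y / m;
        a.v.z += 0.5 * dt * a.f.z / m;
    }
}
```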
Details of GPU implementation
Currently, GPUs contain two types of programmable processors: the vertex processors and the fragment processors. In this study, we implement the MD simulation on the fragment processors. There are two main reasons for this [13]. First, there are more fragment processors than vertex processors on a typical programmable GPU; for example, the GeForce 7800 GTX graphics card used in this paper has 24 fragment processors but only 8 vertex processors. Second, the output of the fragment processors goes directly to texture memory, so intermediate results can be fed back as textures into subsequent rendering passes.
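The fragment-processor approach described above is the classic texture-based GPGPU mapping: per-atom state is packed into the texels of floating-point textures, a screen-aligned quad is drawn so that one fragment-program invocation runs per atom, and results are rendered into a second texture that is then swapped with the first (ping-pong), since a texture cannot be read and written in the same pass. The sketch below shows only the index-to-texel bookkeeping this implies; the texture width and all variable names are illustrative assumptions, not values from the paper.

```cpp
// Sketch of the atom <-> texel mapping used in texture-based GPGPU:
// atom i lives in one RGBA texel of a texW x texH float texture.
#include <cstdio>
#include <utility>
#include <vector>

struct Texel { float r, g, b, a; };  // e.g. position (x, y, z) + padding

int main() {
    const int nAtoms = 4000;
    const int texW = 64;                          // illustrative width
    const int texH = (nAtoms + texW - 1) / texW;  // rows needed

    std::vector<Texel> posTexA(texW * texH), posTexB(texW * texH);

    // atom index -> texel coordinate (what the fragment's TEXCOORD encodes)
    auto toTexel = [&](int i) { return std::pair<int, int>(i % texW, i / texW); };
    // texel coordinate -> atom index (what the shader reconstructs)
    auto toIndex = [&](int x, int y) { return y * texW + x; };

    auto [x, y] = toTexel(1234);
    std::printf("atom 1234 -> texel (%d, %d) -> atom %d\n", x, y, toIndex(x, y));

    // Ping-pong: each pass reads posTexA and writes posTexB, then swaps,
    // because a texture cannot be read and written in the same pass.
    std::swap(posTexA, posTexB);
    return 0;
}
```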
Experimental results
The following simulations were carried out on a PC with a 3.0 GHz Intel Pentium 4 CPU and 1 GB of main memory. The graphics card is a GeForce 7800 GTX with 256 MB of video memory. The fragment programs were written in Cg [16].
In all the following simulations, a 10 fs time step was adopted. Each simulation involves 600,000 iterations (6 ns); the first 4 ns were used to reach thermal equilibrium, and the heat flux and instantaneous temperature were calculated over the final 2 ns.
After the simulation, the HCACF in Eq. (5) was integrated over time to obtain the thermal conductivity.
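The snippet breaks off at the heat current autocorrelation function (HCACF) of Eq. (5). In an equilibrium Green-Kubo calculation, the thermal conductivity follows from integrating the HCACF, k = 1/(3·V·kB·T²) ∫₀^∞ ⟨J(0)·J(t)⟩ dt. The following post-processing sketch assumes heat-current samples J(t) were recorded during the production run; the function name and truncation scheme are illustrative, not the paper’s.

```cpp
// Sketch: Green-Kubo post-processing of recorded heat-current samples.
// J[t] holds the instantaneous heat-current vector at sample t; dt is the
// sampling interval, volume and temperature describe the simulation cell.
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// k = 1/(3 V kB T^2) * integral of <J(0).J(t)> dt, with the HCACF averaged
// over time origins and the integral done by a simple rectangle rule up to
// a cutoff correlation length maxLag.
double greenKubo(const std::vector<Vec3>& J, double dt,
                 double volume, double temperature, std::size_t maxLag) {
    const double kB = 1.380649e-23;  // Boltzmann constant, J/K
    double integral = 0.0;
    for (std::size_t lag = 0; lag < maxLag; ++lag) {
        double acf = 0.0;            // <J(0).J(lag)>, averaged over origins
        std::size_t nOrigins = J.size() - lag;
        for (std::size_t t0 = 0; t0 < nOrigins; ++t0) {
            acf += J[t0].x * J[t0 + lag].x
                 + J[t0].y * J[t0 + lag].y
                 + J[t0].z * J[t0 + lag].z;
        }
        integral += (acf / nOrigins) * dt;
    }
    return integral / (3.0 * volume * kB * temperature * temperature);
}
```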
Conclusion
An equilibrium MD simulation of thermal conductivity based on programmable graphics hardware is presented in this paper. The calculated thermal conductivities of solid argon are consistent with the experimental data. The GPU-based implementation is faster than the CPU-based one: for systems of 4000 atoms or more, we obtain a speedup factor between 10 and 11. To attain these speeds, we have incorporated several optimization techniques into the GPU implementation.
Acknowledgements
We acknowledge Jesper Mosegaard and Youquan Liu for productive discussions. This work is supported by the National Natural Science Foundation of China (50506008 and 50505007).
References (18)
- S. Tomov et al., Benchmarking and implementation of probability-based simulations on programmable graphics card, Comput. Graph. (2005)
- A.J.H. McGaughey, M. Kaviany, Thermal conductivity decomposition and analysis using molecular dynamics simulations. Part I. Lennard-Jones argon, Int. J. Heat Mass Transfer (2004)
- M.P. Allen, D.J. Tildesley, Computer Simulation of Liquids (1987)
- Taking the graphics processor beyond graphics, IEEE Computer (2005)
- I. Buck et al., A toolkit for computation on GPUs
- Computation on programmable graphics hardware, IEEE Comput. Graph. Appl. (2005)
- J.D. Owens, D. Luebke, N. Govindaraju, M. Harris, J. Krüger, A.E. Lefohn, T.J. Purcell, A survey of general-purpose computation on graphics hardware
- M. Macedonia, The GPU enters computing’s mainstream, IEEE Computer (2003)
- W. Li et al., Implementing lattice Boltzmann computation on graphics hardware, Visual Comput. (2003)
Cited by (107)
- Protein structural bioinformatics: An overview, Computers in Biology and Medicine (2022)
- OpenACC + Athread collaborative optimization of Silicon-Crystal application on Sunway TaihuLight, Parallel Computing (2022)
- Automated enumeration of block cipher differentials: An optimized branch-and-bound GPU framework, Journal of Information Security and Applications (2022)
- MDScale: Scalable multi-GPU bonded and short-range molecular dynamics, Journal of Parallel and Distributed Computing (2021)
- In silico methods and tools for drug discovery, Computers in Biology and Medicine (2021)