Molecular dynamics with Asap and ASE

Typical computer simulations involve moving the atoms around, either to optimize a structure (energy minimization) or to do molecular dynamics. The Atomic Simulation Environment (ASE) defines a number of objects for doing just that. Documentation of these methods can be found on the page called Molecular Dynamics in the ASE online documentation.

Using ASE Molecular Dynamics objects with Asap

The standard ASE molecular dynamics objects can be used directly with Asap simulations, except when the Asap simulation runs on a parallel computer, where some of the objects will fail. To handle this situation, special versions of most of the ASE dynamics objects are provided, modified to handle both serial and parallel simulations. We recommend always using the Asap-provided dynamics objects, also for serial simulations. That way you will not forget to modify your script when switching to parallel simulations, and you will benefit from any Asap-specific optimizations.

The Asap molecular dynamics objects are used exactly like the ASE versions, as documented on the ASE Molecular Dynamics page. The only difference is that you should import them from asap3.md instead of from ase.md, i.e. replace

from ase.md.verlet import VelocityVerlet

with

from asap3.md.verlet import VelocityVerlet
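
As a minimal sketch of how such a dynamics object is used (assuming Asap's EMT calculator and a small copper crystal; the element, temperature and timestep are illustrative choices, adapt them to your own system):

from ase.build import bulk
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase import units
from asap3 import EMT
from asap3.md.verlet import VelocityVerlet

# 500-atom copper crystal with Asap's EMT potential.
atoms = bulk('Cu', cubic=True).repeat((5, 5, 5))
atoms.calc = EMT()

# Initial velocities corresponding to 300 K.
MaxwellBoltzmannDistribution(atoms, temperature_K=300)

# Constant-energy (NVE) dynamics with a 5 fs timestep.
dyn = VelocityVerlet(atoms, timestep=5 * units.fs)
dyn.run(1000)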

Overview of available MD algorithms

Numerical integration of Newton’s 2nd law conserves the total energy. It therefore generates the NVE or microcanonical ensemble. To generate ensembles with constant temperature (NVT) or constant pressure (NpT) a thermostat and possibly a barostat couples the system to a simple model of the surroundings. This can be done in multiple ways.

The “strength” of thermostats and barostats is typically given by a characteristic time, often called \(\tau_T\) and \(\tau_p\), respectively. In algorithms such as Langevin, Bussi and Berendsen, starting a simulation of an ideal gas with a temperature far from the specified temperature will lead to the temperature approaching the desired value exponentially as \(\exp(-t/\tau_T)\). Doing the same with a liquid or solid will instead give a temperature evolving as \(\exp(-t/(2\tau_T))\), since there is twice as much thermal energy. Other algorithms, such as NoseHooverChainNVT, approach the desired temperature in a more complicated manner, typically taking somewhat longer to reach a stable temperature. The old NPT module will not be able to equilibrate properly.

In Langevin dynamics, a friction coefficient is specified instead. It is related to the damping time as \(\lambda = \frac{1}{2 \tau_T}\).

Problematic algorithms

Algorithm                   | Ensemble             | Correct fluctuations | Serial perf (1) | Parallel perf (1) | Comment
NVTBerendsen                | NVT                  | No                   | 1.82            | 2.47              | Fluctuations suppressed (4,5)
NPTBerendsen                | NpT                  | No                   | 4.19            | 5.35              | Fluctuations suppressed (4,6)
Inhomogeneous_NPTBerendsen  | N \(\sigma\) T       | No                   | 4.19            | 5.34              | Fluctuations suppressed (4,6)
NPT                         | NpT / N \(\sigma\) T | Maybe                | 2.79            | 3.67              | Often oscillates in T and p / \(\sigma\)

Notes:

  1. Performance measured in \(\mu\text{s/atom/core}\) with 100000 atoms per core on a Xeon24 (our slowest nodes) with the script performance/algoperformance.py. Parallel simulations are on 24 cores. The VelocityVerlet and Langevin algorithms have been rewritten in C++ for improved performance (this requires at most a single FixAtoms constraint and no other constraints). Performance for the ASE version is given for comparison.

  2. See documentation of the Langevin dynamics below for information about seeding the RNG for deterministic runs and selecting the friction.

  3. Bussi dynamics uses a single stochastic variable for its thermostat. With ASE version 3.24.0, this variable is not shared between cores, so each core is thermalized independently. This could perhaps lead to slightly wrong fluctuations; it is fixed in ASE 3.25.

  4. The Berendsen thermostat essentially switches off when the temperature is correct, leading to something close to a microcanonical simulation. This leads to almost complete suppression of the fluctuations of the total energy that should otherwise be present in the canonical ensemble. Something similar happens with the pressure. The fluctuations are only relevant for very small systems.

  5. Consider using Bussi dynamics instead. It is just as efficient as Berendsen during initial thermalization, but gives correct fluctuations once the right temperature is reached.

  6. NPTBerendsen is very efficient for initially setting up a system with the correct temperature and pressure/stress, before switching to another algorithm for data collection.
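
A possible workflow following note 6 is sketched below: equilibrate with NPTBerendsen, then switch to Langevin dynamics for data collection. The asap3.md.nptberendsen module path mirrors the ASE layout per the import rule above, and the coupling times and the compressibility value for copper are illustrative assumptions; check them against your installation and material.

from ase.build import bulk
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase import units
from asap3 import EMT
from asap3.md.nptberendsen import NPTBerendsen
from asap3.md.langevin import Langevin

atoms = bulk('Cu', cubic=True).repeat((5, 5, 5))
atoms.calc = EMT()
MaxwellBoltzmannDistribution(atoms, temperature_K=300)

# Equilibrate temperature and pressure with Berendsen coupling.
# The compressibility of Cu is roughly 7e-7 / bar (assumed value).
equil = NPTBerendsen(atoms, timestep=5 * units.fs, temperature_K=300,
                     taut=500 * units.fs, pressure_au=1.0 * units.bar,
                     taup=1000 * units.fs, compressibility_au=7e-7 / units.bar)
equil.run(2000)

# Switch to Langevin dynamics for data collection at fixed volume.
prod = Langevin(atoms, timestep=5 * units.fs, temperature_K=300, friction=0.002)
prod.run(10000)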

Notes on Langevin dynamics

The friction coefficient (sometimes called \(\lambda\)) has units of reciprocal time, and is related to the thermalization time as \(\lambda = \frac{1}{2\tau_T}\). The factor 2 appears because \(1/\lambda\) gives the decay time of the velocities, but the kinetic energy decays twice as fast, since it is proportional to the square of the velocity.
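
For example (a small sketch; the 500 fs relaxation time is just an illustrative choice), converting a desired \(\tau_T\) to the friction argument:

from ase import units

tau_T = 500 * units.fs          # desired temperature relaxation time
friction = 1 / (2 * tau_T)      # lambda = 1 / (2 * tau_T), in ASE units of inverse time
# Pass this value as the friction argument of the Langevin object.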

Inhomogeneous thermalization. Langevin dynamics is the only dynamics that supports inhomogeneous thermalization. It is possible to specify a friction coefficient per atom (pass a numpy array with one value per atom). This can be used to thermalize the boundaries of the system while setting the friction to zero in the interesting region, thus avoiding perturbing the dynamics there. It is also possible to set a per-atom temperature, e.g. to introduce a temperature gradient, but be aware that if the friction coefficient is low, the heat flow to/from the thermalized regions may not keep up with the thermal conduction in the system. If there is significant diffusion, be aware that atoms retain their friction and temperature as they diffuse, so you may need to reset these values regularly.
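
A sketch of boundary thermalization for a slab (the fcc111 geometry, the 5 Å boundary criterion and the friction value are illustrative assumptions):

import numpy as np
from ase.build import fcc111
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase import units
from asap3 import EMT
from asap3.md.langevin import Langevin

atoms = fcc111('Cu', size=(6, 6, 12), vacuum=10.0)
atoms.calc = EMT()
MaxwellBoltzmannDistribution(atoms, temperature_K=300)

# Per-atom friction: thermostat only the atoms within 5 Angstrom of the
# bottom of the slab, and leave the rest of the system unperturbed.
z = atoms.get_positions()[:, 2]
friction = np.where(z < z.min() + 5.0, 0.002, 0.0)

dyn = Langevin(atoms, timestep=5 * units.fs, temperature_K=300, friction=friction)
dyn.run(5000)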

Reproducible simulations. Langevin dynamics applies a random force to each atom in each time step. Providing a pre-seeded RNG to the dynamics (as is usually done in ASE) runs the risk of generating the same numbers on different cores in parallel simulations. To avoid this, the Asap version of the Langevin object does not take an rng argument. Instead it takes a seed argument expecting a numpy.random.SeedSequence object, which it uses to generate different seeds on the different cores. It does not matter whether these seed objects are the same or different on the cores; the seeds actually used are guaranteed to differ.
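
A sketch of a reproducible setup based on the description above (the seed value and the rest of the system setup are arbitrary illustrative choices):

import numpy as np
from ase.build import bulk
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase import units
from asap3 import EMT
from asap3.md.langevin import Langevin

atoms = bulk('Cu', cubic=True).repeat((5, 5, 5))
atoms.calc = EMT()
MaxwellBoltzmannDistribution(atoms, temperature_K=300)

# Pass a SeedSequence, not a pre-seeded RNG: each core derives its own
# distinct seed from it, so the same script gives reproducible runs both
# in serial and in parallel.
dyn = Langevin(atoms, timestep=5 * units.fs, temperature_K=300,
               friction=0.002, seed=np.random.SeedSequence(42))
dyn.run(1000)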