Simulations on parallel computers

The newest versions of Asap support two methods for running simulations on parallel computers.

Parallelization using Message Passing

For clusters

A large number of computers in a cluster can collaborate on a simulation, communicating with the Message Passing Interface (MPI). Atoms are distributed among the participating CPUs and will migrate between them as the atoms move. The simulation script must be able to handle this migration.

Read more on the page Parallel simulations on clusters using Message Passing.
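The migration mentioned above can be sketched in plain Python. This is an illustration of the general domain-decomposition idea, not Asap's actual API: the names `owner` and `migrate` are invented for this example. Each "rank" owns a slab of the simulation box, and after an integration step any atom that has left its slab is handed to the rank that now owns it.

```python
# Illustrative sketch (not Asap's API): 1-D spatial domain decomposition.
# Each rank owns a slab of the box; atoms that move out of a slab
# migrate to the rank owning their new position.

def owner(x, box_length, n_ranks):
    """Return the rank whose slab contains coordinate x."""
    slab = box_length / n_ranks
    return min(int(x // slab), n_ranks - 1)

def migrate(per_rank_atoms, box_length, n_ranks):
    """Reassign every atom to the rank that now owns it."""
    new = [[] for _ in range(n_ranks)]
    for atoms in per_rank_atoms:
        for x in atoms:
            new[owner(x, box_length, n_ranks)].append(x)
    return new

# Four ranks share a box of length 40; each initially owns one slab.
ranks = [[1.0, 9.5], [11.0, 19.0], [21.0, 29.9], [31.0, 39.0]]
# An integration step moves one atom from 9.5 to 10.2 - it now
# belongs to rank 1, so the next migration hands it over.
ranks[0][1] = 10.2
ranks = migrate(ranks, box_length=40.0, n_ranks=4)
print(ranks[1])  # rank 1 now holds three atoms
```

Because atoms change owners like this, a script that indexes "atom number 17" cannot assume it stays on the same CPU; this is what is meant by the script having to handle migration.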

Performance

With a large number of atoms per CPU, performance scales almost linearly with the number of CPUs. See also the page Parallel Performance.
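Why many atoms per CPU helps can be seen from Amdahl's law: the per-step communication cost acts like a serial fraction, and it shrinks relative to the force computation as each CPU gets more atoms. The sketch below is purely illustrative; the 2% serial fraction is an assumed number, not a measured Asap figure.

```python
# Illustrative only: Amdahl's-law estimate of parallel speedup.
# serial_fraction is an assumption for the example, not Asap data.

def speedup(n_cpus, serial_fraction):
    """Amdahl's law: speedup with a fixed non-parallelizable fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

# With 2% serial work (e.g. communication per time step), 16 CPUs
# still reach roughly 77% parallel efficiency.
s = speedup(16, 0.02)
print(round(s, 1), round(100 * s / 16))
```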

Supported potentials

Not all potentials support all parallelization methods, as summarized in the table below.

Potential                                Message passing   Multi-threading   Combined
Effective Medium Theory (EMT)            YES               YES               YES
MonteCarloEMT                            NO                NO                NO
The Molybdenum potential (MoPotential)   YES               NO                NO
The Lennard-Jones potential              YES               YES               YES
The Brenner potential                    NO                NO                NO
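The table above can be captured as a plain dictionary so a script can check compatibility before starting a run. The keys below are descriptive labels for this example, not Asap class names.

```python
# The support table, as data. Labels are illustrative, not Asap classes.
SUPPORT = {
    "EMT":           {"message_passing": True,  "multi_threading": True,  "combined": True},
    "MonteCarloEMT": {"message_passing": False, "multi_threading": False, "combined": False},
    "MoPotential":   {"message_passing": True,  "multi_threading": False, "combined": False},
    "LennardJones":  {"message_passing": True,  "multi_threading": True,  "combined": True},
    "Brenner":       {"message_passing": False, "multi_threading": False, "combined": False},
}

def supports(potential, method):
    """True if the given potential supports the parallelization method."""
    return SUPPORT[potential][method]

print(supports("EMT", "combined"))               # True
print(supports("MoPotential", "multi_threading"))  # False
```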

Multi-threaded (or shared memory) parallelization

For multi-CPU and multi-core computers - requires a specially compiled Asap

The Python script runs on a single processor, and parallelization happens entirely “behind the scenes”, so Python scripts do not have to be modified.

Read more on the page Multi-threaded parallelization.

Performance

This parallelization strategy was not introduced for performance reasons, but because it is so much easier to use than Message Passing.

Performance is fine on up to eight cores for most systems, and acceptable up to 20 cores for large systems (a million atoms or more).

Combined parallelization

Message passing and multi-threading can be combined without problems - but also without any obvious gains.

Performance

This is currently slower than a pure Message Passing simulation.