Asap on Niflheim
Enabling the pre-installed Asap
Currently (July 2023) the recommended pre-installed version of Asap on Niflheim is ASAP3/3.12.7-intel-2020b-ASE-3.21.1. Newer releases of Asap mainly contain improvements to the installation procedure and documentation. Note that this module uses the Intel toolchain; a similar module using the foss toolchain exists, but it can be up to a factor of two slower.
To load the module (and the prerequisites) do:
module load ASAP3/3.12.7-intel-2020b-ASE-3.21.1
For more information about modules on Niflheim, read the description on the Niflheim wiki.
Check the installation
Please check that you can load Asap, and that you get the version you expect. (The transcript below is from an older installation; with the module above you should see a Python 3 interpreter and ASAP version 3.12.7, but the form of the output is the same.)
[thul] ~$ python
Python 2.4.3 (#1, Jul 27 2009, 17:56:30)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import asap3
>>> asap3.print_version()
ASAP version 3.2.2 serial, compiled 2009-08-31 15:10 on thul.fysik.dtu.dk using 'icpc -O3 -g -xS -fPIC -vec_report0 -DSTACKTRACE=linux'
>>>
>>>
>>> asap3.print_version(1)
ASAP version 3.2.2 serial, compiled 2009-08-31 15:10 on thul.fysik.dtu.dk using 'icpc -O3 -g -xS -fPIC -vec_report0 -DSTACKTRACE=linux'
Python module: /opt/campos-asap3/3.2.2/1.el5.fys.ifort.11.0.python2.4.openmpi.1.3.3/lib64/python2.4/site-packages/asap3/__init__.pyc
C++ module: /opt/campos-asap3/3.2.2/1.el5.fys.ifort.11.0.python2.4.openmpi.1.3.3/lib64/python2.4/site-packages/asapserial3.so
ase module: /opt/campos-ase3/3.1.0.846/1.el5.fys.python2.4/lib64/python2.4/site-packages/ase/__init__.pyc
>>>
The second form also shows where Asap is installed.
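The same check can be done non-interactively, for example when verifying a batch environment; this one-liner simply wraps the print_version call shown above:
python -c "import asap3; asap3.print_version(1)"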
Since the developers use their own installation, there is a risk that the default installation is outdated. If you discover that the installed version is ancient, complain to Jakob Schiøtz.
Developer installation on Niflheim
If you need the newest (unreleased) version of Asap, or if you contribute to Asap, you need a developer installation. It is recommended to do this in a virtual environment (venv), together with a developer installation of ASE and possibly of GPAW.
You can use either the Intel or the FOSS toolchain, but the Intel toolchain is strongly recommended, as it gives around a factor of two in performance for common Asap operations.
Developer installation with GPAW
1. Install GPAW in a venv (if you have not already done so): see the corresponding GPAW installation guide. Remember to choose the Intel toolchain, if desired.
2. Edit the file bin/activate (inside the virtual environment folder), and add these lines to the bottom of the file:
export FYS_PLATFORM=Nifl7_${CPU_ARCH}_${TCHAIN}
export PYTHONPATH=$VIRTUAL_ENV/asap/Python:$VIRTUAL_ENV/asap/$FYS_PLATFORM:$PYTHONPATH
export PATH=$VIRTUAL_ENV/asap/${FYS_PLATFORM}:$PATH
3. Activate the virtual environment (assuming that you have already changed into the directory of the virtual environment):
source bin/activate
4. Go to point 6 in the next section.
Developer installation without GPAW
If you do not need GPAW, then there is no need to install it.
1. Load the Python module for the relevant toolchain. For the 2023a toolchains, load Python/3.11.3-GCCcore-12.3.0:
module purge
module load Python/3.11.3-GCCcore-12.3.0
2. Create the virtual environment, giving it a sensible name (here: asap-intel):
python -m venv --system-site-packages asap-intel
3. Edit the file bin/activate (in the asap-intel folder). Add these lines to the top of the file:
TCHAIN=intel
SUBTCHAIN=iimkl
module purge
unset PYTHONPATH
module load matplotlib/3.7.2-${SUBTCHAIN}-2023a
module load ${TCHAIN}/2023a
#module load openkim-models/20210128-GCCcore-10.2.0
and the following lines to the bottom of the file:
export FYS_PLATFORM=Nifl7_${CPU_ARCH}_${TCHAIN}
export PYTHONPATH=$VIRTUAL_ENV/asap/Python:$VIRTUAL_ENV/asap/$FYS_PLATFORM:$PYTHONPATH
export PATH=$VIRTUAL_ENV/asap/${FYS_PLATFORM}:$PATH
If you use the foss toolchain, the first two lines should read TCHAIN=foss and SUBTCHAIN=gfbf instead.
4. Activate the virtual environment:
cd asap-intel
source bin/activate
5. Install ASE and prerequisites:
pip install --upgrade pip
git clone git@gitlab.com:ase/ase.git
pip install -e ase
Note that the git command assumes that you already have an account on gitlab.com, and have set up your user for command line access with an SSH key (if not, see https://gitlab.com/help/ssh/README). If you do not want to use a gitlab.com account, you can make an anonymous checkout with the command git clone https://gitlab.com/ase/ase.git instead.
6. Clone Asap, and check that the right makefile is used:
git clone git@gitlab.com:asap/asap.git
cd asap
make version
You should see output like this. It is important that the "FYS_PLATFORM detected" line is present; it indicates that the Niflheim architecture is being detected:
(asap-intel) 15:50 [slid] asap$ make version
Getting configuration from makefile-Nifl7_broadwell_intel
FYS_PLATFORM detected - building for CAMd/CINF/Niflheim with OBJDIR=Nifl7_broadwell_intel.
ASAP version 3.12.1
7. Compile Asap for all the Niflheim architectures (currently four, so this takes some time):
./compile-niflheim.sh
8. Test that it works:
cd Test
python TestAll.py
./TestParallel.sh
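Once the tests pass, you can verify that Python resolves asap3 to the clone inside the venv rather than to a centrally installed module. This quick check uses only plain Python, nothing Asap-specific; with the PYTHONPATH set in bin/activate, the printed path should point into the asap folder inside your virtual environment:
python -c "import asap3; print(asap3.__file__)"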
Submitting ASAP jobs to Niflheim
A special program, asap-sbatch, is used to submit ASAP jobs to Niflheim. You specify Slurm options on the command line or in comments beginning with #SBATCH, just as when submitting with the ordinary sbatch command. The asap-sbatch command detects whether you are submitting a serial or a parallel job, and submits a helper script that starts your simulation in the right way.
Example:
(asap-intel) 08:49 [slid] Test$ asap-sbatch TestAll.py
Virtual environment detected: /home/niflheim/schiotz/development/asap-intel
Submitted batch job 3397275
The TestAll.py script begins like this:
#SBATCH --partition=xeon8
#SBATCH -N 1
#SBATCH -n 4
#SBATCH --time=1:00:00
#SBATCH --mem=2G
#SBATCH --output=Timing-%J.out
The first comment specifies the Slurm partition; the next two request that the job runs on a single node and uses four cpu cores. Then the maximal wall time is specified, and the maximal memory usage per node. The output (both stdout and stderr) goes to a file named Timing-NNNNN.out, where NNNNN is the job number. See the Slurm section on the Asap on Niflheim wiki for more information. Options specified on the command line overrule options in the Python script.
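As an illustration, a complete serial job script could look like the sketch below. The Slurm header follows the example above (but requests a single core); the simulation part, a small Cu crystal evaluated with Asap's EMT calculator, and the file name emt_test.py are only assumptions for this example:
#SBATCH --partition=xeon8
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --time=0:10:00
#SBATCH --mem=2G
#SBATCH --output=emt_test-%J.out

# A small fcc copper crystal evaluated with Asap's EMT calculator.
from ase.build import bulk
from asap3 import EMT

atoms = bulk("Cu", "fcc", a=3.61).repeat((6, 6, 6))
atoms.calc = EMT()
print("Potential energy:", atoms.get_potential_energy(), "eV")
Submit it with asap-sbatch emt_test.py.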
Serial simulations
Serial simulations are submitted as described above. Please submit serial scripts to the xeon8 partition, as the machines with more cores should preferably be reserved for parallel jobs.
Parallel simulations
Parallel simulations are submitted in exactly the same way. Your script will be started on all the requested CPUs. Just be sure to request the right number of nodes and processors using the -N nodes and -n processors sbatch options, either on the asap-sbatch command line or as #SBATCH comments in the Python script.
Aim to use a number of processors (cpu cores) that is divisible by the number of cpu cores per node in the partition of Niflheim you are using. For example, on the xeon40 partitions the number of cores should be a multiple of 40.
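For example, a job filling two complete nodes could be submitted like this (assuming the partition is named xeon40, and myscript.py is a hypothetical name for your simulation script):
asap-sbatch --partition=xeon40 -N 2 -n 80 myscript.py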
Multithreaded simulations
Multithreaded simulations are currently unsupported by asap-sbatch.