Job Scripts
Each calculation's job is handled by `JobManager`. This class can submit a job, store its `jobid`, and monitor its progress.
Creation of the actual `vasp.q` SLURM submission script is handled by `VaspInputCreator`, using settings from `calc_config.json` and `computing_config.json`.
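To make this concrete, here is a minimal sketch of the submit/store/monitor cycle using the standard SLURM command line. `TinyJobManager` is an illustrative stand-in, not the actual `JobManager` API:

```python
# Illustrative sketch of what a JobManager-style class does; this is NOT
# the project's actual implementation or API.
import subprocess


class TinyJobManager:
    """Submit vasp.q with sbatch, remember the jobid, and poll its state."""

    def __init__(self, calc_dir):
        self.calc_dir = calc_dir
        self.jobid = None

    def submit(self):
        # `sbatch --parsable` prints just the jobid on success
        result = subprocess.run(
            ["sbatch", "--parsable", "vasp.q"],
            cwd=self.calc_dir, capture_output=True, text=True, check=True,
        )
        self.jobid = result.stdout.strip().split(";")[0]
        return self.jobid

    def is_finished(self):
        # squeue prints nothing to stdout once the job has left the queue
        result = subprocess.run(
            ["squeue", "-h", "-j", self.jobid],
            capture_output=True, text=True,
        )
        return result.stdout.strip() == ""
```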
Implementation Details
The `vasp.q` file is just a template string that is filled in with three separate sections:
vasp.q

```bash
#!/bin/bash
{sbatch_params}
{preamble}
starttime=$(date +%s)
{command}
stoptime=$(date +%s)
tottime=$(echo "$stoptime - $starttime" | bc -l)
echo "total time (s): $tottime"
to_hours=$(echo "scale=3; $tottime/3600" | bc -l)
echo "total time (hr): $to_hours"
```
- `{sbatch_params}`: sbatch tags (e.g. `#SBATCH -N`)
- `{preamble}`: environment or module settings (e.g. `export OMP_NUM_THREADS=1`)
- `{command}`: the `mpi` or `srun` command to launch VASP
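Since `vasp.q` is a template with named placeholders, it can presumably be rendered with a single `str.format` call. A minimal sketch, assuming Python and illustrative placeholder values for the three sections:

```python
# Sketch: render vasp.q by formatting the template with its three sections.
# The section strings below are illustrative placeholders.
VASP_Q_TEMPLATE = """#!/bin/bash
{sbatch_params}
{preamble}
starttime=$(date +%s)
{command}
stoptime=$(date +%s)
tottime=$(echo "$stoptime - $starttime" | bc -l)
echo "total time (s): $tottime"
to_hours=$(echo "scale=3; $tottime/3600" | bc -l)
echo "total time (hr): $to_hours"
"""

script = VASP_Q_TEMPLATE.format(
    sbatch_params="#SBATCH -N 1\n#SBATCH -t 04:00:00",
    preamble="export OMP_NUM_THREADS=1",
    command="srun -n 128 vasp_std > stdout.txt 2> stderr.txt",
)

with open("vasp.q", "w") as f:
    f.write(script)
```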
The actual commands for each supercomputer and job type are determined by `q_mapper.json`, which lists the path of a YAML file containing the `sbatch_params`, `preamble`, and `command` to use.
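For illustration, resolving that mapping might look like the sketch below; the exact layout of `q_mapper.json` (keyed here by computer and job type) is an assumption, while the three YAML keys come from the file shown further down:

```python
# Sketch: find the YAML file for a given computer/job type via q_mapper.json
# and read out its three sections. The JSON layout shown is an assumption.
import json

import yaml  # PyYAML

with open("q_mapper.json") as f:
    q_mapper = json.load(f)

# Assumed layout: {"perlmutter": {"normal": "vasp.yml"}, ...}
yaml_path = q_mapper["perlmutter"]["normal"]

with open(yaml_path) as f:
    sections = yaml.safe_load(f)

sbatch_params = sections["sbatch_params"]  # "#SBATCH ..." lines
preamble = sections["preamble"]            # module/environment setup
command = sections["command"]              # the srun launch line
```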
You can modify the exact commands in the YAML file of your choice if needed. For reference, here is the YAML file for normal jobs on Perlmutter:
vasp.yml

```yaml
sbatch_params: |-
  #SBATCH -N {n_nodes}
  #SBATCH -q {queuetype}
  #SBATCH -J {jobname}
  #SBATCH -A {allocation}
  #SBATCH -t {walltime}
  #SBATCH -C {constraint}
  #SBATCH --mem=0
preamble: |-
  ulimit -s unlimited
  export OMP_NUM_THREADS=1
  module load {vasp_module}
  mpitasks=$(echo "$SLURM_JOB_NUM_NODES * {ncore_per_node}" | bc)
command: |-
  srun -t {timeout} -u -n $mpitasks --cpu_bind=cores vasp_std > stdout.txt 2> stderr.txt
```
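In this file, `preamble` computes `$mpitasks` from `$SLURM_JOB_NUM_NODES` and `{ncore_per_node}`, so `command` launches one MPI rank per requested core with `--cpu_bind=cores` pinning. The separate `srun -t {timeout}` limit (presumably set below the job's `{walltime}`) lets the VASP step end before the allocation expires, so the timing lines at the end of `vasp.q` can still run.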