## Setting Up
In order to use this package, you must:

- Create a calculations folder where you'd like to run your calculations. Each subfolder of `calculations/` should have a unique name and contain a `POSCAR`. A sample method of creating the calculations folder from a `json` with names and cifs is available in `run_vasp_calculations.py`, and an example calculations folder is provided in `calculations/`.
- Configure `computing_config.json` and place it in the `calculations/` directory.
- Make any desired modifications to `calc_config.json` and place it in the `calculations/` directory.
When you're done, your calculation directory should look roughly like this:
```mermaid
graph TD
    A[calculations/] --> B([calc_config.json]) & C([computing_config.json])
    A[calculations/] --> D[Material 1] --> E([POSCAR])
    A[calculations/] --> F[Material 2] --> G([POSCAR])
    A[calculations/] --> H[Material ...] --> I([POSCAR])
```
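As a minimal sketch of building this layout, the snippet below assumes a hypothetical `materials.json` that maps material names directly to POSCAR strings (the actual `run_vasp_calculations.py` starts from names and cifs instead):

```python
import json
from pathlib import Path

# Sketch only: build the calculations/ layout from a json file.
# Assumes json_path maps material names to POSCAR file contents;
# run_vasp_calculations.py itself starts from cifs instead.
def make_calculations_folder(json_path, root="calculations"):
    with open(json_path) as f:
        materials = json.load(f)
    for name, poscar in materials.items():
        mat_dir = Path(root) / name  # each subfolder needs a unique name
        mat_dir.mkdir(parents=True, exist_ok=True)
        (mat_dir / "POSCAR").write_text(poscar)
```

Calling `make_calculations_folder("materials.json")` then produces one `POSCAR`-containing subfolder per material, matching the tree above.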
## Computing Configuration
The computing configuration is set up in `computing_config.json`.
Here, you can configure settings for each supercomputer you would like to use.
Be sure to check that the `computer` field at the top matches the supercomputer you're
running jobs on.

- As of now, only Perlmutter at NERSC, Bridges-2 at PSC, and QUEST at Northwestern University are supported. Any other SLURM-based supercomputer can be easily added, but modifications would be needed for other queue-management systems.
### Supported tags
- `user_id` (str)
- `potcar_dir` (str)
- `queuetype` (str)
- `allocation` (str)
- `constraint` (str)
- `vasp_module` (str)
- `ncore` (int)
- `ncore_per_node` (int)
- `atoms_per_node` (int, optional) — default `32`
- `rerun_increase` (str, optional) — `"nodes"` or `"walltime"`, default `"walltime"`
- `rerun_increase_factor` (int, optional) — default `2`
Note
- `potcar_dir` should be a global path to a folder containing VASP POTCARs.
- `constraint` is only needed for Perlmutter (to specify cpu nodes).
- `ncore` is NCORE in the VASP INCAR.
- `ncore_per_node` should be the number of CPUs on each node.
- For `vasp_module`, a VASP 6 module is strongly recommended.
- The `personal` "computer" is only used for internal unit testing, not to run any actual jobs.
Job resource tuning
The following fields are optional — omitting them preserves the previous default behavior.
- `atoms_per_node` controls how many compute nodes are requested per job (nodes = floor(total atoms / `atoms_per_node`) + 1).
- `rerun_increase` chooses which resource to scale when a job times out: `"walltime"` (increase the walltime) or `"nodes"` (increase the node count).
- `rerun_increase_factor` is the multiplier applied to the chosen resource on rerun (e.g., `2` means double).
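The resource rules above can be sketched as follows (an illustration of the arithmetic, not the package's actual code):

```python
# Sketch of the resource arithmetic described above (not the package's code).
def nodes_for_job(total_atoms, atoms_per_node=32):
    # nodes = floor(total atoms / atoms_per_node) + 1
    return total_atoms // atoms_per_node + 1

def rerun_resources(nodes, walltime_hours, rerun_increase="walltime",
                    rerun_increase_factor=2):
    # On timeout, scale either the walltime or the node count by the factor.
    if rerun_increase == "walltime":
        return nodes, walltime_hours * rerun_increase_factor
    return nodes * rerun_increase_factor, walltime_hours
```

For example, a 100-atom structure with the default `atoms_per_node` of 32 requests floor(100/32) + 1 = 4 nodes.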
Warning
Be sure that your settings of KPAR and NCORE are compatible with the computing architecture you're using!
- It's okay if `ncore * kpar != ncore_per_node`. All cores on the node will still be requested, and some of them will be left empty for extra memory. This can be useful for computing architectures with a weird number of cores (e.g. 52 on Quest).
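As an illustration (not part of the package), a quick sanity check that `ncore * kpar` fits within `ncore_per_node`:

```python
# Sketch: verify the requested parallelization fits on one node.
def check_parallel_settings(ncore, kpar, ncore_per_node):
    cores_used = ncore * kpar
    if cores_used > ncore_per_node:
        raise ValueError(
            f"ncore * kpar = {cores_used} exceeds ncore_per_node = {ncore_per_node}"
        )
    # Leftover cores are still requested but left idle for extra memory.
    return ncore_per_node - cores_used
```

With the Quest settings below (`ncore` 12, `kpar` 4, `ncore_per_node` 52), 48 cores do work and 4 sit idle.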
Example
computing_config.json
```json
{
    "computer": "perlmutter",
    "personal": {
        "user_id": "dwg4898",
        "potcar_dir": "vasp_manager/tests/POTCARS",
        "queuetype": "regular",
        "allocation": "m1673",
        "constraint": "cpu",
        "vasp_module": "vasp/6.4.3-cpu",
        "ncore": 16,
        "ncore_per_node": 128
    },
    "quest": {
        "user_id": "dwg4898",
        "potcar_dir": "/projects/b1004/potpaw_PBE_OQMD",
        "queuetype": "short",
        "allocation": "p31151",
        "vasp_module": "vasp/6.4.3-openmpi-intel-hdf5-cpu-only",
        "ncore": 12,
        "ncore_per_node": 52,
        "atoms_per_node": 16,
        "rerun_increase": "walltime",
        "rerun_increase_factor": 2
    },
    "perlmutter": {
        "user_id": "dwg4898",
        "potcar_dir": "/global/homes/d/dwg4898/vasp_potentials/54/potpaw_pbe",
        "queuetype": "regular",
        "allocation": "m4545",
        "constraint": "cpu",
        "vasp_module": "vasp/6.4.3-cpu",
        "ncore": 16,
        "ncore_per_node": 128,
        "atoms_per_node": 32,
        "rerun_increase": "nodes",
        "rerun_increase_factor": 2
    },
    "bridges2": {
        "user_id": "daleg",
        "potcar_dir": "/jet/home/daleg/vasp_potentials/potpaw_PBE_OQMD",
        "queuetype": "RM",
        "allocation": "dmr160027p",
        "vasp_module": "",
        "ncore": 16,
        "ncore_per_node": 128,
        "atoms_per_node": 32,
        "rerun_increase": "nodes",
        "rerun_increase_factor": 2
    }
}
```
## Calculation Configuration
Tip
See more about VASP INCAR tags here
The calculation configuration is set up in `calc_config.json`.
For each desired calculation mode, set the INCAR tags in this `json`.
- Each mode has its own configuration settings with sensible defaults, but these can be easily customized by the user.
- Note: `bulkmod` does not have its own section in `calc_config.json` — it uses the `static` configuration.
- See more about spin polarization settings (`"ispin": "auto"`) here: Spin Configuration
- See more about DFT+U settings (`"hubbards": "wang"`) here: DFT+U Configuration
- Note: `VaspManager` uses 1 compute node per `atoms_per_node` atoms (see Computing Configuration), so don't be surprised if your job requests more than a single node.
### Supported tags
- `prec` (str): ["Normal" | "Accurate"]
- `ispin` (str | int): ["auto" | 1]
- `hubbards` (str | None): ["wang" | null]
- `kspacing` (float)
- `symprec` (float)
- `nsw` (int)
- `ibrion` (int)
- `isif` (int)
- `lreal` (bool)
- `potim` (float)
- `ediffg` (float)
- `iopt` (int)
- `nelm` (int)
- `encut` (float)
- `ediff` (float)
- `algo` (str): ["Normal" | "Fast" | "VeryFast"]
- `ismear` (int)
- `sigma` (float)
- `amix` (float)
- `bmix` (float)
- `lwave` (bool)
- `lcharge` (bool)
- `lvot` (bool)
- `kpar` (int)
- `gga` (str)
- `walltime` (str)
Note
- In
jsonfiles, the equivalent of python'sNoneisnull. VASP(Fortran) expectsbooldata to be passed as".FALSE."or".TRUE."; this is automatically converted for you.walltimeshould be passed as"hh:mm:ss". You should try to set this such that your job hits NSW before it runs out of time. If your job times out, an archive will be made and the calculation will be resubmitted with increased resources according torerun_increaseandrerun_increase_factorin your computing configuration.- You can place another
calc_config.jsonfile in a specific material's calculation directory (e.g.NaCl/rlx) to use custom settings for only that material. You only need to include settings you want to change from the maincalc_config.json.
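The per-material override behaves like a per-mode dictionary merge; a minimal sketch of that idea (an illustration, not the package's internals):

```python
import json
from pathlib import Path

# Sketch: merge a material-specific calc_config.json over the main one.
# Only the settings listed in the override change; the rest are kept.
# (Illustration only, not the package's actual implementation.)
def load_calc_config(main_path, material_dir):
    config = json.loads(Path(main_path).read_text())
    override_path = Path(material_dir) / "calc_config.json"
    if override_path.exists():
        overrides = json.loads(override_path.read_text())
        for mode, settings in overrides.items():
            config.setdefault(mode, {}).update(settings)
    return config
```

So a `NaCl/calc_config.json` containing only `{"rlx": {"encut": 700}}` raises ENCUT for NaCl's relaxation while leaving every other setting at the main configuration's value.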
Example
calc_config.json
```json
{
    "rlx-coarse": {
        "prec": "ACC",
        "ispin": "auto",
        "hubbards": "wang",
        "kspacing": 0.2,
        "symprec": "1e-05",
        "nsw": 60,
        "ibrion": 2,
        "isif": 3,
        "lreal": false,
        "potim": 0.1,
        "ediffg": "1e-04",
        "iopt": 0,
        "nelm": 60,
        "encut": 520,
        "ediff": "1e-05",
        "algo": "Normal",
        "ismear": 0,
        "sigma": 0.05,
        "kpar": 4,
        "gga": "PE",
        "walltime": "01:00:00"
    },
    "rlx": {
        "prec": "ACC",
        "ispin": "auto",
        "hubbards": "wang",
        "kspacing": 0.15,
        "symprec": "1e-05",
        "nsw": 60,
        "ibrion": 3,
        "isif": 3,
        "lreal": false,
        "potim": 0.1,
        "ediffg": "-1e-02",
        "iopt": 0,
        "nelm": 60,
        "encut": 520,
        "ediff": "1e-07",
        "algo": "Normal",
        "ismear": 0,
        "sigma": 0.05,
        "kpar": 4,
        "gga": "PE",
        "walltime": "01:00:00"
    },
    "static": {
        "prec": "ACC",
        "ispin": "auto",
        "hubbards": "wang",
        "kspacing": 0.15,
        "symprec": "1e-05",
        "nsw": 0,
        "ibrion": -1,
        "isif": 3,
        "lreal": false,
        "potim": 0,
        "ediffg": "-1e-02",
        "iopt": 0,
        "nelm": 60,
        "encut": 520,
        "ediff": "1e-07",
        "algo": "Normal",
        "ismear": -5,
        "sigma": 0.05,
        "kpar": 4,
        "gga": "PE",
        "walltime": "01:00:00"
    },
    "elastic": {
        "prec": "ACC",
        "ispin": "auto",
        "hubbards": "wang",
        "kspacing": 0.125,
        "write_kpoints": true,
        "nfree": 4,
        "symprec": "1e-05",
        "nsw": 60,
        "ibrion": 6,
        "isif": 3,
        "lreal": false,
        "potim": 0.005,
        "ediffg": "-1e-02",
        "iopt": 0,
        "nelm": 60,
        "encut": 700,
        "ediff": "1e-07",
        "algo": "Normal",
        "ismear": 0,
        "sigma": 0.05,
        "kpar": 4,
        "gga": "PE",
        "walltime": "04:00:00"
    }
}
```