HADDOCK3 and SLURM

Hi
I have successfully installed the HADDOCK3 beta version on our cluster with the SLURM queue manager. I am new to HADDOCK3 and I like the new approach. Good job.
One quick question: in order to use it with SLURM, do I need to install mpi4py and just follow one of the examples in the examples dir? Or is there an alternative way to run HADDOCK3 in an embarrassingly parallel fashion, as HADDOCK2 did?
Best

Set the mode to HPC in the config file. This will use SLURM.
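For example, in your run configuration file (a minimal sketch; the exact parameter value may differ between HADDOCK3 versions, so check the documentation of your installation):

# input.toml
# ...
mode = "hpc"
# ...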

Check our new tutorial for haddock3 - it explains various run modes

https://www.bonvinlab.org/education/HADDOCK3/


Thanks Alexandre. Actually I could not find it on the HADDOCK3 documentation page (Welcome to HADDOCK3 Documentation! — haddock3 3.0.0 documentation).

Check: https://www.bonvinlab.org/education/HADDOCK3/HADDOCK3-antibody-antigen/#haddock3-execution-modes


Hello there,

HADDOCK3 is different from most applications because it handles the scheduling internally. It currently ships with a few modes: Local, HPC and MPI.

This requires some clarification: in our development environment/culture we call it "HPC mode" when HADDOCK3 is itself making submissions to the queue. So when you are running in HPC mode, the main HADDOCK3 process sits on the login node and handles the submissions to your queue (SLURM/TORQUE). Normally this scheme of submission is called a BATCH mode; it requires extra configuration concerning the target partition to be used and how many complexes should be calculated in each batch. It's also a good idea to let the HPC administrators know beforehand that this mode will be used, since it will generate hundreds of submissions and might go over your queue allocation limit.
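A minimal sketch of such a configuration (the queue, queue_limit and concat parameter names are assumptions based on the HADDOCK3 general parameters; verify them against the documentation of your installed version):

# input.toml
# ...
mode = "hpc"
# target partition/queue to submit to (hypothetical value)
queue = "short"
# maximum number of jobs HADDOCK3 keeps in the queue at once
queue_limit = 100
# number of models (complexes) calculated per submitted job
concat = 5
# ...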

Now if you want to send the HADDOCK3 process to the node - which is probably the way that is most compliant with HPC center guidelines - you will instead submit it using either LOCAL or MPI mode. The difference between LOCAL and MPI is the number of nodes they can access: LOCAL is bound to one node, while with MPI you can spread the work across multiple nodes.

See below two example run configurations with their SBATCH scripts:

MPI

# input.toml
# ...
mode = "mpi"
ncores = 192
# ...

#!/bin/bash
#SBATCH --nodes=4
#SBATCH --tasks-per-node=48
#SBATCH -J haddock3mpi

# This will request 48 cores on each of 4 nodes = 192 in total

# make sure anaconda environment is activated
conda activate haddock3

# go to your data directory
cd /your/data/directory

# execute
haddock3 input.toml
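Note that MPI mode relies on mpi4py (which answers the original question); assuming it is not already in your environment, you would install it alongside HADDOCK3 before submitting, for example:

# install mpi4py into the haddock3 environment
# (a working MPI library must be available on the cluster)
conda activate haddock3
pip install mpi4py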

Local

# input.toml
# ...
mode = "local"
ncores = 48
# ...

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --tasks-per-node=48
#SBATCH -J haddock3local

# This will request 48 cores on 1 node = 48 in total

# make sure anaconda environment is activated
conda activate haddock3

# go to your data directory
cd /your/data/directory

# execute
haddock3 input.toml
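In both cases the intention is that ncores in the TOML file matches the total number of cores requested from SLURM (nodes × tasks-per-node), as in the comments above.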

Thanks @honoratorv ! Very clear
