Hi, I apologize for starting so many new topics lately, but beyond setting delenph = true, how can one reduce the computational time of HADDOCK runs without compromising the quality of the output? I have a massive number of docking runs to perform, because I have many proteins and DNAs to dock. It gets worse: since I am sampling normal modes of each partner into ensemble pdb files, I need to set the sampling parameters of each HADDOCK run to a very high number and partition the normal modes into different ensemble pdb files.
I am using haddock3 locally.
Thanks in advance
Dear stianale,
Thanks for your interest in using haddock3 for your research.
Unfortunately, running computationally heavy runs will take some time.
To reduce the run time, you could:
- increase the number of cores used (with the `ncores` parameter)
- reduce the sampling in `[rigidbody]` (at the cost of possibly missing good poses)
- provide meaningful ambiguous restraints to better guide the docking and therefore safely reduce the sampling
- reduce the number of top models selected for refinement (`[seletop] select = 100`)
- perform some clustering to only refine diverse models (`[clustfcc]`, or `[rmsdmatrix]` followed by `[clustrmsd]`, then `[seletopclusts]`)
- skip the `[mdref]` step and rely on `[emref]` instead (several of these options are combined in the sketch below)
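As a rough illustration, here is a minimal haddock3 workflow sketch combining several of these options. All file names and parameter values are placeholders to adapt to your own system, not recommendations:

```toml
# Hypothetical workflow file; tune the values to your own case
run_dir = "run-protein-dna-fast"
ncores = 16                 # use as many cores as your machine allows

molecules = [
    "protein.pdb",
    "dna.pdb",
]

[topoaa]
delenph = true              # as you are already doing

[rigidbody]
sampling = 200              # reduced from the default of 1000
ambig_fname = "ambig.tbl"   # meaningful ambiguous restraints

[seletop]
select = 100                # carry fewer models forward

[clustfcc]                  # cluster the selected rigid-body models

[seletopclusts]             # keep only the top models of each cluster

[flexref]
ambig_fname = "ambig.tbl"

[emref]                     # energy minimisation only; no [mdref] step
ambig_fname = "ambig.tbl"
```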
Please also note that you can provide ensemble pdb files directly as input to haddock3.
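For instance (hypothetical file names), the global `molecules` parameter accepts multi-model PDB files, so the normal-mode samples of each partner can go into one ensemble file per partner:

```toml
molecules = [
    "protein_nm_ensemble.pdb",  # multi-model PDB: one MODEL per normal-mode sample
    "dna_nm_ensemble.pdb",
]
```

Keep in mind that the `[rigidbody]` sampling is then spread over all combinations of the input conformations, which is why large ensembles call for higher sampling values, as you noted.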
I hope this answer helps you.