Random docking fails in it0 and question about running job via batch system

Hi, Admin! I'm here for help again. I have two questions:

  1. protein-protein random docking fails
    (a) Firstly, I chose surface residues with relative SASA larger than 50% as the active site. In my experience, the choice of active residues in random docking does not affect the restraints in the simulation, so I picked the active site casually. Is my understanding correct?
    (b) Secondly, I set ranair=true in run.cns for random docking. Other modifications include iseed and the number of structures in each stage (50000 in it0, 400 in it1 and water).
    (c) This works for other proteins, but one of the cases fails at the it0 stage with the error "HADDOCK cannot continue due to too many (>20%) failed structures in it0. The following structures could not be docked". However, its *.out.gz files look normal, although they end with this output:
 PRIEND:    1 levels not terminated
                    LEVEL=   1 KEY=CNSsolve>        ACTION=GO
 ============================================================
 Maximum dynamic memory allocation:   973500288 bytes
Maximum dynamic memory overhead:          2656 bytes

However, this output also appears in the successful jobs, and the failed jobs' docked complexes also look normal. So why do they fail? Is there any other output information I should check?

(d) I also checked other successful cases, in which there are also hundreds of failed structures (but fewer than 20%), so those runs can continue.

  2. Could this be triggered by the batch system? I modified jobmax['it0'] = 200 and jobmax['it1'] = 10 in QueueSubmit.py and QueueSubmit_cont.py. I want to use 40 CPUs for the run, so I set cpunumber_1=8000:
    {===>} queue_1="/home/work/dock1/run2/ssub";
    {===>} cns_exe_1="/home/soft/haddock/cns_solve_1.3/intel-x86_64bit-linux/bin/cns";
    {===>} cpunumber_1=8000;

    I hoped it would run about 8000 docking poses as 40 jobs, with each job computing 200 docking poses. But contrary to my expectation, far more than 40 jobs were submitted to the batch system.
    Is my understanding wrong, and how can I limit the batch system to only 40 submitted jobs?
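    For reference, here is the batching arithmetic I expected, as a short sketch. This reflects my own assumption (that cpunumber_1 counts poses rather than jobs), not necessarily HADDOCK's actual submission logic:

    ```python
    # Sketch of the batching I expected (my assumption, not HADDOCK's documented behavior):
    total_poses = 8000       # cpunumber_1, which I assumed to be the number of poses
    poses_per_job = 200      # jobmax['it0'], structures computed per batch job

    # Under that assumption, the batch system would receive this many jobs:
    expected_jobs = total_poses // poses_per_job
    print(expected_jobs)  # 40
    ```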

The question description is really long. Thanks for your time and patience.

In my other test on the same docking case, it still fails with error in log file:

HADDOCK cannot continue due to failed structures in it0
HADDOCK could not copy failed structures from previous iteration
The following structures could not be docked:

Are all structures failing at it1?

Do check the content of an it1 out file in the run directory and search for error messages (start from the end of the file)

No, it fails at the it0 stage.
How can I control the number of jobs submitted so that the run uses only 40 cores, rather than submitting hundreds of jobs? I have now set jobmax["it0"]=50 and cpunumber_1=2000, which in my understanding equals 50 structures per job × 40 cores. But more than 40 jobs were submitted simultaneously.


cpunumber_1 effectively defines the number of jobs you are sending to the batch system; each job will compute 50 models (the jobmax["it0"] value).

And each job uses only one core.
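So, under that reading (cpunumber_1 = number of single-core batch jobs, jobmax['it0'] = models per job), getting exactly 40 jobs for an 8000-model it0 run would look like this sketch:

```python
# Sketch of how the two parameters combine, per the explanation above.
# Assumption: HADDOCK submits cpunumber_1 jobs, each computing jobmax['it0'] models.
cpunumber_1 = 40     # jobs sent to the batch system -> at most 40 single-core jobs
jobmax_it0 = 200     # models computed by each job (jobmax['it0'])

total_models = cpunumber_1 * jobmax_it0
print(total_models)  # 8000
```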

Did you check for error messages in the output files?