Error running haddock2.5

Hi,
I am getting this error when running a local version of haddock2.5:

[2025-05-04 21:27:54] Waiting to restart job…
Modifying random seed for it 0 structure 14
[2025-05-04 21:27:38] Waiting to restart job…
[2025-05-04 21:27:38] FIX: Modifying random seed for it 0 structure 3
[2025-05-04 21:27:38] Waiting to restart job…
[2025-05-04 21:27:38] stage 0: 9889 structures remaining, 20 running, 11 completed, 9900 total
[2025-05-04 21:27:40] stage 0: 9872 structures remaining, 21 running, 28 completed, 9900 total
[2025-05-04 21:27:40] stage 0: 9868 structures remaining, 19 running, 32 completed, 9900 total
[2025-05-04 21:27:40] stage 0: 9868 structures remaining, 22 running, 32 completed, 9900 total
[2025-05-04 21:27:41] stage 0: 9861 structures remaining, 19 running, 39 completed, 9900 total
[2025-05-04 21:27:41] stage 0: 9861 structures remaining, 22 running, 39 completed, 9900 total
[2025-05-04 21:27:43] FIX: Modifying random seed for it 0 structure 51
[2025-05-04 21:27:43] Waiting to restart job…
[2025-05-04 21:27:43] FIX: Modifying random seed for it 0 structure 52
[2025-05-04 21:27:43] Waiting to restart job…
[2025-05-04 21:27:43] HADDOCK has detected an error
[2025-05-04 21:27:43] Check the FAILED file in /var/spool/scratch/drorimi2/7pll/runs_s917/run_7pll_s917_3_rep_2
[2025-05-04 21:27:43] Stopping…
[2025-05-04 21:27:43] Cleaning up the run directory…
Only files for structure #1 will be kept…
[2025-05-04 21:27:44] ##############################################################################
[2025-05-04 21:27:44] Finishing HADDOCK
[2025-05-04 21:27:44] Au revoir. Tot ziens. Bye bye.

What can this be?
Thank you

As the log file states: check the content of the FAILED file in your run directory.

I did. It's empty.

Are there any pdb files written in structures/it0?

Also check some of the out files in the run directory. Look at the end of those files for possible error messages.
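For instance, something along these lines from inside the run directory (a rough sketch; the paths assume the usual HADDOCK layout, and the out files may be gzipped):

# count how many it0 models have been written
ls structures/it0/*.pdb | wc -l

# scan the end of each CNS out file for error messages
for f in structures/it0/*.out*; do
    zcat -f "$f" | tail -n 30 | grep -iE "error|abort" && echo "--> $f"
done

zcat -f prints plain files unchanged, so the loop works for both compressed and uncompressed out files.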

I don't see anything weird.
When I run exactly the same command again (with the same files), the run does work, sometimes only on the second or third try.
What can cause this?

It can be various things:

  • the way you are running it (how did you configure it?)
  • if running on a cluster, it could be that one node is misconfigured

I assume it is not the full run that is failing, but just a few models? I.e., are models generated in structures/it0?

2.5 is the version running behind the HADDOCK web server

Yes, models are still generated in it0, but not all of them, and that causes the run to stop.

I configured HADDOCK using:
cd /home/qnt/drorimi2/software/haddock2.5-2024-12
source /home/qnt/drorimi2/anaconda3/etc/profile.d/conda.csh
conda activate haddock2.5
source ./haddock_configure.csh

and I am running on a cluster.
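As a sanity check after sourcing the configure script (csh syntax to match the above; I am assuming haddock_configure.csh sets a HADDOCK environment variable pointing at the installation):

echo $HADDOCK                  # should point at the haddock2.5 installation
ls $HADDOCK/haddock/main       # the module holding the cluster-related parameters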

Also, I have made these changes to the run.cns file:

sed -i 's/delenph=true/delenph=false/g' run.cns
sed -i 's/{===>} runana="cluster";/{===>} runana="full";/g' run.cns
sed -i 's/{===>} prot_link_mol1="protein-allhdg5-4-noter.link";/{===>} prot_link_mol1="protein-allhdg5-4.link";/g' run.cns
sed -i 's/{===>} structures_0=1000;/{===>} structures_0=9900;/g' run.cns
sed -i 's/{===>} structures_1=200;/{===>} structures_1=400;/g' run.cns
sed -i 's/{===>} waterrefine=200;/{===>} waterrefine=400;/g' run.cns
sed -i 's/{===>} initiosteps=500;/{===>} initiosteps=2000;/g' run.cns
sed -i 's/{===>} cool1_steps=500;/{===>} cool1_steps=2000;/g' run.cns
sed -i 's/{===>} cool2_steps=1000;/{===>} cool2_steps=4000;/g' run.cns
sed -i 's/{===>} cool3_steps=1000;/{===>} cool3_steps=4000;/g' run.cns
sed -i 's/{===>} nfle_2=0;/{===>} nfle_2=1;/g' run.cns
sed -i 's/{===>} start_fle_2_1="";/{===>} start_fle_2_1="1";/g' run.cns
sed -i 's/{===>} end_fle_2_1="";/{===>} end_fle_2_1="70";/g' run.cns
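To check that the substitutions actually took effect (sed -i fails silently when a pattern does not match, e.g. because of curly versus straight quotes), the edited parameters can be grepped back out:

grep -E 'structures_[01]=|waterrefine=|initiosteps=|cool[123]_steps=|nfle_2=|fle_2_1=' run.cns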

How did you configure haddock2.5?

What queue commands do you have defined in run.cns? Are you using Slurm?

And other parameters that might affect execution on a cluster can be found in haddock/main/__init__.py.

Check the INSTALLATION.md file
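For reference, on a Slurm cluster queue_1 would typically point at a small submission wrapper rather than at a shell, e.g. (a hypothetical sketch, not part of the HADDOCK distribution; adjust the path, partition and resources to your cluster):

{===>} queue_1="/home/qnt/drorimi2/bin/ssub";

with ssub being something like:

#!/bin/bash
# hypothetical wrapper: forward the HADDOCK-generated job script to Slurm.
# Handles the script arriving either as an argument or on stdin.
if [ -n "$1" ]; then
    sbatch --partition=short --time=02:00:00 --mem=2G "$1"
else
    sbatch --partition=short --time=02:00:00 --mem=2G
fi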

I am not using Slurm.

In run.cns:

{===>} queue_1="/bin/bash";
{===>} cns_exe_1="/home/qnt/drorimi2/software/cns_solve-1.31-UU-Linux-x86.exe";
{===>} cpunumber_1=20;

in haddock/main/__init__.py:

# define job concatenation option
jobconcat = dict()

# values for running locally in bash (or node) mode
jobconcat["0"] = 1
jobconcat["1"] = 1
jobconcat["2"] = 1

# values for running via a batch system
#jobconcat["0"] = 20
#jobconcat["1"] = 5
#jobconcat["2"] = 5

# values for grid submission
#jobconcat["0"] = 20
#jobconcat["1"] = 5
#jobconcat["2"] = 5

# define the job behavior (using local /tmp or not)
# - for grid submission set to false
# - for local runs, if the network is a bottleneck better set it to true
tmpout = False

# in case of grid or batch package mode submission define a wait behavior
# to lower the CPU load
# - for grid or batch package submission set to true
batchmode = False
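If I were submitting through a batch system instead, the comments above suggest activating the batch values and the wait behavior, i.e.:

jobconcat["0"] = 20
jobconcat["1"] = 5
jobconcat["2"] = 5
batchmode = True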

The INSTALLATION.md file is long. What do you want me to check in it?

The installation instructions 🙂

But your setup seems perfectly fine.
It could simply be “bad luck”.
Does this also happen with the examples provided with haddock2.5? Or only with your specific system?

I have only noticed it with my own systems, and it happens randomly (I’d say maybe a quarter of the runs fail).

Maybe you are hitting some memory issues while running it… difficult to tell.

Is there a way to set up the run so it prints out more logs and errors? Maybe then I can find the error.

Nope - but you could turn off the cleaning in run.cns

That way, out files are kept for each model. For any missing models, check whether there is an error in the corresponding out file.
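Something like this should do it, assuming your run.cns names the flag cleanup as in recent HADDOCK versions (check the exact spelling in your file first):

sed -i 's/{===>} cleanup=true;/{===>} cleanup=false;/g' run.cns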

And some issues might be linked to the batch system / cluster you are running on