Hi!
I’ve been able to complete CNS setup, but the final steps of the HADDOCK setup (for Haddock2.4 from the instruction manual webpage) don’t seem to be working for me:
Then edit a configuration file specific to your system.
This configuration file should contain the following information:
CNSTMP defining the location of your CNS executable
QUEUETMP defining the submission command for running the jobs (e.g. either via csh or through a specific command submitting to your local batch system)
NUMJOB defining the number of concurrent jobs executed (or submitted).
QUEUESUB defining the HADDOCK python script used to run the jobs (the default QueueSubmit_concat.py should do in most cases).
An example configuration file for running on local resources, assuming a 4-core system, would be:
set CNSTMP=/home/software/cns/cns_solve-1.31-UU-Linux64bits.exe
set QUEUETMP=/bin/csh
set NUMJOB=4
set QUEUESUB=QueueSubmit_concat.py
For submitting to a batch system instead you might want to use a wrapper script. An example for torque can be found here.
In order to configure HADDOCK, call the install.csh script with as argument the configuration script you just created:
./install.csh my-config-file
I also tried using a wrapper script for running on slurm instead but this also didn’t seem to work for me.
This is my current configuration file:
cat haddock_config.ini
[HADDOCK]
CNS_EXE=/scratch/dkarunat/software/haddock2.5-2024-12/bin/cns
QUEUE_CMD=/cvmfs/soft.computecanada.ca/gentoo/2023/x86-64-v3/usr/bin/csh
NUM_JOBS=20
SUBMIT_SCRIPT=ssub_slurm
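For comparison, the manual excerpt above uses plain csh `set` statements rather than an INI section with different key names. A csh-style version of the same settings (values copied verbatim from the file above, untested) would look like:

```shell
# csh-style HADDOCK configuration using the variable names from the manual.
# CNS_EXE/QUEUE_CMD/NUM_JOBS/SUBMIT_SCRIPT map to CNSTMP/QUEUETMP/NUMJOB/QUEUESUB.
set CNSTMP=/scratch/dkarunat/software/haddock2.5-2024-12/bin/cns
set QUEUETMP=/cvmfs/soft.computecanada.ca/gentoo/2023/x86-64-v3/usr/bin/csh
set NUMJOB=20
set QUEUESUB=ssub_slurm
```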
This is my current SLURM wrapper script:
#!/bin/csh -f
if ($#argv < 1) then
  echo "Usage: ssub_slurm jobname"
  exit 1
endif
# check if job exists + make it executable
set jobname=$1
if (! -e $1) then
  echo "job file does not exist"
  exit 1
endif
if (! -x $jobname) chmod +x $jobname
# write temporary slurm script
set slurmjob=$jobname.slurmjob.$$
if (! -e $slurmjob) then
  touch $slurmjob
else
  \rm $slurmjob
  touch $slurmjob
endif
set PWD=`pwd`
echo "#!/bin/csh" >> $slurmjob
echo "#SBATCH --job-name=$jobname" >> $slurmjob
echo "#SBATCH --output=$PWD/$jobname.out.%j" >> $slurmjob
echo "#SBATCH --error=$PWD/$jobname.err.%j" >> $slurmjob
echo "#SBATCH --ntasks=1" >> $slurmjob
echo "#SBATCH --cpus-per-task=1" >> $slurmjob
echo "cd $PWD" >> $slurmjob
echo "./$jobname" >> $slurmjob
chmod +x $slurmjob
sbatch $slurmjob
rm -f $slurmjob
exit
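As a side note, the run of echo lines can be collapsed into a single here-document, which cuts down on quoting mistakes. A POSIX-sh sketch of just the file-writing step (the job name `myjob` is a placeholder, and the final `sbatch` plus cleanup are left out):

```shell
#!/bin/sh
# Write the same SLURM batch file the csh wrapper builds, using one
# here-document instead of eight echo lines. "myjob" is hypothetical;
# the real wrapper also appends .$$ to make the filename unique.
jobname=myjob
workdir=$(pwd)
slurmjob=$jobname.slurmjob

cat > "$slurmjob" <<EOF
#!/bin/csh
#SBATCH --job-name=$jobname
#SBATCH --output=$workdir/$jobname.out.%j
#SBATCH --error=$workdir/$jobname.err.%j
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
cd $workdir
./$jobname
EOF

chmod +x "$slurmjob"
# at this point the wrapper would run: sbatch $slurmjob
```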