Using a cluster
This section focuses on using an HPC cluster through a qsub-style interface such as SGE, or through a DRMAA v1 implementation such as Slurm.
The workflow has to be in a folder that is available on all nodes, for example on an NFS share.
Command line
You can use all flags described in the local section. The -j
flag now gives the number of jobs to submit to the cluster at the same time, rather than the number of local cores. Rules marked as localrules
still run on the machine executing snakemake.
snakemake --use-conda --cluster "qsub -t {threads} -l mem={resources.mem_mb}mb" -j 256 -kpr --ri data
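For the {threads} and {resources.mem_mb} placeholders in the submit command to resolve, the rules need to declare these values, and lightweight rules can be marked as local. A minimal Snakefile sketch (rule names, file names, and values here are hypothetical):

```
# "all" is a cheap target rule, so it runs locally instead of being submitted
localrules: all

rule all:
    input: "results/summary.txt"

rule heavy_step:
    input: "data/input.txt"
    output: "results/summary.txt"
    # these values are substituted into the --cluster submit command
    threads: 8
    resources:
        mem_mb=16000
    shell:
        "my_tool --threads {threads} {input} > {output}"
```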
Profiles
To reduce the length of your Snakemake calls, you can save common configurations as profiles.
Every profile is a folder containing a config.yaml, placed either in ~/.config/snakemake or in /etc/xdg/snakemake/.
Content of ~/.config/snakemake/cluster/config.yaml:
cluster: "qsub -t {threads} -l mem={resources.mem_mb}mb"
jobs: 256
keep-going: true
printshellcmds: true
use-conda: true
reason: true
rerun-incomplete: true
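As a sketch, the profile above can be created directly from the shell (the profile name "cluster" matches the example):

```shell
# create the profile folder in the per-user search path
mkdir -p ~/.config/snakemake/cluster

# write the profile settings shown above
cat > ~/.config/snakemake/cluster/config.yaml <<'EOF'
cluster: "qsub -t {threads} -l mem={resources.mem_mb}mb"
jobs: 256
keep-going: true
printshellcmds: true
use-conda: true
reason: true
rerun-incomplete: true
EOF
```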
This call is equivalent to the one shown above:
snakemake --profile cluster data
Per Unneberg has created a cookiecutter profile template for Slurm.