Hi everyone,
Is it possible to run multithreaded jobs on Docker Galaxy via SLURM? According to the SLURM documentation, the --cpus-per-task flag allocates multiple CPUs to a single task. However, submitting a job through Galaxy with this flag results in the following error:
Unable to run job due to a misconfiguration of the Galaxy job running system. Please contact a site administrator
The flag seems to work via the command line (a quick example of that check is included after the config below), so the problem does not appear to be with the SLURM installation inside the container. Is there a special flag or parameter option that has to be used? Currently, the relevant portion of job_conf.xml looks like this (most of it comes from the sample file):
<destinations default_from_environ="GALAXY_DESTINATIONS_DEFAULT" default="slurm_cluster">
    <destination id="slurm_cluster" runner="slurm">
        <param id="nativeSpecification" from_environ="NATIVE_SPEC">--cpus-per-task=2 --share</param>
        <env file="/galaxy_venv/bin/activate"/>
        <param id="docker_enabled" from_environ="GALAXY_DOCKER_ENABLED">False</param>
        <param id="docker_sudo" from_environ="GALAXY_DOCKER_SUDO">False</param>
        <!-- The empty volumes_from shouldn't affect Galaxy; set GALAXY_DOCKER_VOLUMES_FROM to use it. -->
        <param id="docker_volumes_from" from_environ="GALAXY_DOCKER_VOLUMES_FROM"></param>
        <!-- For a stock Galaxy instance and the traditional job runner, $defaults will expand to:
             $galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:rw -->
        <param id="docker_volumes" from_environ="GALAXY_DOCKER_VOLUMES">$defaults</param>
    </destination>
</destinations>
<tools>
    <tool id="bowtie2" destination="slurm_cluster"/>
</tools>
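For reference, the "works via the command line" check mentioned above is something along these lines, run inside the container (exact invocation may vary):

# Submitted directly to SLURM, outside of Galaxy, the flag is honoured
# and the task reports the allocated CPU count (2 here):
srun --cpus-per-task=2 bash -c 'echo $SLURM_CPUS_PER_TASK'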
There was another post here (https://biostar.usegalaxy.org/p/21044/#21068) that seems to address requesting multiple CPUs via a request_cpus param id, but trying that results in the same error as above.
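For completeness, what I tried based on that post was roughly the following (the exact value and placement in that thread may differ; the rest of the destination was left as in the snippet above):

<destination id="slurm_cluster" runner="slurm">
    <param id="request_cpus">2</param>
    <!-- remaining params unchanged from the destination shown earlier -->
</destination>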
Thanks in advance for the help!