Connecting Galaxy to a compute cluster

Last modification: Apr 6, 2021

Galaxy Job Configuration



Why cluster?

• Running jobs on the Galaxy server negatively impacts Galaxy UI performance
• Even adding one other host helps
• Galaxy can be restarted without interrupting jobs



Plugins

Correspond to job runner plugins in lib/galaxy/jobs/runners

Plugins for:

  • local
  • DRMAA (Slurm, SGE, PBS, and others)
  • HTCondor
  • Kubernetes
  • Pulsar
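
A minimal sketch of declaring two runner plugins in job_conf.xml, reusing the local runner from the sample config and adding the Slurm runner (both load paths are real modules under lib/galaxy/jobs/runners):

<plugins>
    <!-- local runner: runs jobs directly on the Galaxy server -->
    <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
    <!-- Slurm runner: submits jobs to Slurm via DRMAA -->
    <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
</plugins>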



Cluster library stack (DRMAA)

[Figure: cluster library stack]


Handlers

Define which Galaxy processes are job handlers
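
As a sketch, assuming two Galaxy server processes named handler0 and handler1 (the ids must match your actual process names):

<handlers>
    <!-- each id must correspond to a running Galaxy handler process -->
    <handler id="handler0"/>
    <handler id="handler1"/>
</handlers>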



Destinations

Define how jobs should be run
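
For example, a sketch of a Slurm destination; the nativeSpecification value is an arbitrary assumption, passed through to the scheduler at submit time:

<destinations default="slurm">
    <destination id="slurm" runner="slurm">
        <!-- extra submission options handed to sbatch -->
        <param id="nativeSpecification">--time=01:00:00</param>
    </destination>
</destinations>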



The default job configuration

config/job_conf.xml.sample_basic:

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
    </plugins>
    <destinations>
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>



Job Config - Tags

Both destinations and handlers can be grouped by tags
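
A sketch, assuming two Slurm destinations that share a tag; a tool mapped to the tag may run on either (the ids and tag name are illustrative):

<destinations default="cluster">
    <!-- "cluster" can be used anywhere a destination id is expected -->
    <destination id="slurm_normal" runner="slurm" tags="cluster"/>
    <destination id="slurm_highmem" runner="slurm" tags="cluster"/>
</destinations>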



Job Environment

The <env> tag in destinations configures the job execution environment:

tag syntax                       function
<env id="NAME">VALUE</env>       Set $NAME to VALUE
<env file="/path/to/file" />     Source shell file at /path/to/file
<env exec="CMD" />               Execute CMD

Sourcing files and executing commands happen on the remote destination, so they do not need to work on the Galaxy server.
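
A sketch combining all three forms in one destination; the paths and the module name are assumptions:

<destination id="cluster" runner="slurm">
    <!-- set $TMPDIR in the job's environment -->
    <env id="TMPDIR">/scratch/tmp</env>
    <!-- source a shell file on the compute node before the job runs -->
    <env file="/etc/profile.d/modules.sh"/>
    <!-- execute a command on the compute node before the job runs -->
    <env exec="module load samtools"/>
</destination>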



Limits

Available limits include job walltime and total output size.
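
A sketch, assuming a 24-hour walltime cap and a 50 GB output cap (both values are arbitrary):

<limits>
    <!-- kill jobs that run longer than this -->
    <limit type="walltime">24:00:00</limit>
    <!-- fail jobs whose outputs exceed this total size -->
    <limit type="output_size">50GB</limit>
</limits>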



Concurrency Limits

Concurrency limits cap how many jobs can run at once, per user or per destination.
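
A sketch of two common concurrency limits; the values and the destination id are assumptions:

<limits>
    <!-- each registered user may run at most 4 jobs at once -->
    <limit type="registered_user_concurrent_jobs">4</limit>
    <!-- the "local" destination may run at most 2 jobs at once -->
    <limit type="destination_total_concurrent_jobs" id="local">2</limit>
</limits>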



Shared Filesystem

Most job plugins require a shared filesystem between the Galaxy server and compute.

The exception is Pulsar. More on this in Using heterogeneous compute resources



Shared Filesystem

Our simple example works because of two important principles:

  1. Some things are located at the same path on Galaxy server and node(s)
    • Galaxy application (/srv/galaxy/server)
    • Tool dependencies
  2. Some things are the same on Galaxy server and node(s)
    • Job working directory
    • Input and output datasets

The first can be worked around with symlinks or Pulsar embedded (later)

The second can be worked around with Pulsar REST/MQ (with a performance/throughput penalty)



Multiprocessing

Some tools can greatly improve performance by using multiple cores

Galaxy automatically sets $GALAXY_SLOTS to the CPU/core count specified at job submission (for example, 4 when the destination submits with --ntasks=4)

Tool configs consume \${GALAXY_SLOTS:-4}, where :-4 supplies a default if the variable is unset
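
A sketch of both halves, reusing the slurm runner from the examples above; hisat2 and the Cheetah variables $index and $input are illustrative:

<!-- job_conf.xml: request 4 cores from Slurm -->
<destination id="multi" runner="slurm">
    <param id="nativeSpecification">--ntasks=4</param>
</destination>

<!-- tool XML: consume the slot count, defaulting to 4 threads -->
<command>hisat2 -p "\${GALAXY_SLOTS:-4}" -x '$index' -U '$input'</command>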



Memory requirements

For Slurm only, Galaxy will set $GALAXY_MEMORY_MB and $GALAXY_MEMORY_MB_PER_SLOT as integers.

Other DRMs: please submit a pull request with the appropriate code.

For Java tools, be sure to set -Xmx, e.g.:

<destination id="foo" ...>
    <env id="_JAVA_OPTIONS">-Xmx4096m</env>
</destination>



Run jobs as the “real” user

If your Galaxy users correspond one-to-one to system users, jobs can be submitted to the cluster as the real user rather than as the Galaxy service user.

See: Cluster documentation



Job Config - Mapping Tools to Destinations

Problem: Tool A uses a single core; Tool B uses multiple cores



Job Config - Mapping Tools to Destinations

Solution:

    <destinations default="single">
        <destination id="single" runner="slurm" />
        <destination id="multi" runner="slurm">
            <param id="nativeSpecification">--ntasks=4</param>
        </destination>
    </destinations>
    <tools>
        <tool id="hisat2" destination="multi"/>
    </tools>



The Dynamic Job Runner

For when basic tool-to-destination mapping isn’t enough



The Dynamic Job Runner

A special built-in job runner plugin

Map jobs to destinations on more than just tool IDs

Two types:

  • Dynamic Tool Destinations
  • Arbitrary Python functions

See: Dynamic Destination Mapping



Dynamic Tool Destinations

Configurable mappings without programming: rules live in a YAML file (config/tool_destinations.yml) and can route tools based on input sizes and tool parameters.
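
A sketch of enabling it in job_conf.xml: a dynamic destination of type dtd tells Galaxy to route jobs using the rules in config/tool_destinations.yml (the destination id is an assumption):

<destinations>
    <destination id="dtd" runner="dynamic">
        <!-- read mapping rules from config/tool_destinations.yml -->
        <param id="type">dtd</param>
    </destination>
</destinations>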



Arbitrary Python Functions

Programmable mappings: a Python function, placed under lib/galaxy/jobs/rules/, chooses a destination based on the tool, the user, input sizes, and more.
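
A sketch of wiring a rule function into job_conf.xml; route_by_size is a hypothetical function that would need to be defined in a module under lib/galaxy/jobs/rules/ and return the id of a real destination:

<destinations>
    <destination id="dynamic" runner="dynamic">
        <param id="type">python</param>
        <!-- hypothetical rule function defined in lib/galaxy/jobs/rules/ -->
        <param id="function">route_by_size</param>
    </destination>
    <!-- destinations the rule function may return -->
    <destination id="single" runner="slurm"/>
    <destination id="multi" runner="slurm">
        <param id="nativeSpecification">--ntasks=4</param>
    </destination>
</destinations>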



Thank you!

This material is the result of a collaborative work. Thanks to the Galaxy Training Network and all the contributors! This material is licensed under the Creative Commons Attribution 4.0 International License.