Managing Galaxy on Kubernetes

Overview
Questions:
  • How do I change Galaxy configs?

  • How can I upgrade to a new version?

  • How do I rollback my changes?

  • How do I scale Galaxy?

Objectives:
  • Have an understanding of how to modify Galaxy configuration

  • Be able to upgrade and scale Galaxy

Requirements:
Time estimation: 30 minutes
Level: Intermediate
Supporting Materials:
Last modification: Oct 18, 2022
License: Tutorial Content is licensed under Creative Commons Attribution 4.0 International License The GTN Framework is licensed under MIT

Managing Galaxy on Kubernetes

Overview

A primary advantage of Galaxy on Kubernetes is the ease with which common administrative tasks can be performed reliably and without disruption of service. In particular, because of containerization, Kubernetes provides a significant advantage over managing individual virtual machines, where updates to system libraries or components can cause unexpected breakage of dependent components. With containerization, this becomes a simpler problem of swapping out a container and replacing it with an updated version. It also reduces reliance on the underlying operating system, allowing the OS to be upgraded and have the latest security patches applied without having to worry about how it will affect the applications running within. Kubernetes has built-in functionality for draining a node of all containers and for transparently moving those containers to a different node, allowing maintenance tasks to be performed on the underlying node without disruption of service.
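
For example, taking a worker node out of service for maintenance is a matter of cordoning it (so no new pods are scheduled onto it), draining its existing pods onto other nodes, and uncordoning it once the work is done. A minimal sketch, assuming a node named node-1 (an illustrative name; list your actual nodes with kubectl get nodes):

    kubectl get nodes                          # list the nodes in the cluster
    kubectl cordon node-1                      # mark the node as unschedulable
    kubectl drain node-1 --ignore-daemonsets   # evict its pods so they are rescheduled elsewhere
    # ... perform OS upgrades or other maintenance on node-1 ...
    kubectl uncordon node-1                    # allow pods to be scheduled on the node again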

In this section, we will look at how to perform common management tasks on a Galaxy deployment on Kubernetes, including how to:

  • Upgrade a deployment
  • Change the configuration of a running Galaxy instance
  • Map arbitrary files into Galaxy’s config folder
  • Roll back changes in case of an error
  • Scale the number of job and web handlers
  • Delete a deployment
Agenda
  1. Managing Galaxy on Kubernetes
    1. Overview
    2. Prerequisites
  2. Changing the configuration of a Galaxy instance
    1. Changing tool configuration
    2. Setting the admin user and changing the brand
  3. Scaling Galaxy
  4. Testing Kubernetes resilience
  5. Deleting Galaxy
  6. Next Steps

Prerequisites

This tutorial builds on the material of the previous tutorial, and we recommend following it first to set up the required environment. You must have some familiarity with Helm commands, know how to change values in a Helm chart, and know how to use the kubectl command.
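
If you want to confirm that the environment from the previous tutorial is still in place before continuing, a quick check (assuming the release was named galaxy, as in that tutorial) looks like this:

    helm list         # the galaxy release should be listed with STATUS DEPLOYED
    kubectl get pods  # the web, job and postgres pods should all be Running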

Changing the configuration of a Galaxy instance

We will start off by looking at how to change the configuration of a Galaxy instance. We will first reduce the number of tools that are loaded for faster startup, and then change some common settings in galaxy.yml.

Changing tool configuration

We will change the Galaxy configuration to limit the initial list of tools that Galaxy loads by pointing to our custom tool_conf.xml. This will make Galaxy start up much faster for the remainder of this tutorial, as the default configuration loads the full list of tools used by https://usegalaxy.org/.

Hands-on: Creating a custom tool set
  1. First, let’s create a simpler list of tools by saving the following tool config as a file called custom_tool_conf.xml.

    <?xml version="1.0" ?>
    <toolbox tool_path="/cvmfs/main.galaxyproject.org/shed_tools">
        <section id="get_data" name="Get Data">
            <tool file="data_source/upload.xml" />
        </section>
        <section id="chip_seq" name="ChIP-seq" version="">
            <tool file="toolshed.g2.bx.psu.edu/repos/rnateam/chipseeker/1b9a9409831d/chipseeker/chipseeker.xml" guid="toolshed.g2.bx.psu.edu/repos/rnateam/chipseeker/chipseeker/1.18.0+galaxy1">
                <tool_shed>toolshed.g2.bx.psu.edu</tool_shed>
                <repository_name>chipseeker</repository_name>
                <repository_owner>rnateam</repository_owner>
                <installed_changeset_revision>1b9a9409831d</installed_changeset_revision>
                <id>toolshed.g2.bx.psu.edu/repos/rnateam/chipseeker/chipseeker/1.18.0+galaxy1</id>
                <version>1.18.0+galaxy1</version>
            </tool>
        </section>
        <section id="fastq_quality_control" name="FASTQ Quality Control" version="">
            <tool file="toolshed.g2.bx.psu.edu/repos/pjbriggs/trimmomatic/51b771646466/trimmomatic/trimmomatic.xml" guid="toolshed.g2.bx.psu.edu/repos/pjbriggs/trimmomatic/trimmomatic/0.36.6">
                <tool_shed>toolshed.g2.bx.psu.edu</tool_shed>
                <repository_name>trimmomatic</repository_name>
                <repository_owner>pjbriggs</repository_owner>
                <installed_changeset_revision>51b771646466</installed_changeset_revision>
                <id>toolshed.g2.bx.psu.edu/repos/pjbriggs/trimmomatic/trimmomatic/0.36.6</id>
                <version>0.36.6</version>
            </tool>
        </section>
        <section id="fastq_quality_control" name="FASTQ Quality Control" version="">
            <tool file="toolshed.g2.bx.psu.edu/repos/devteam/fastqc/e7b2202befea/fastqc/rgFastQC.xml" guid="toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.72+galaxy1">
                <tool_shed>toolshed.g2.bx.psu.edu</tool_shed>
                <repository_name>fastqc</repository_name>
                <repository_owner>devteam</repository_owner>
                <installed_changeset_revision>e7b2202befea</installed_changeset_revision>
                <id>toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.72+galaxy1</id>
                <version>0.72+galaxy1</version>
            </tool>
        </section>
    </toolbox>
    
  2. Next, let’s create a new galaxy.yml file that uses this custom_tool_conf.xml.

    Note that the content below is the same as the configs section of the values-cvmfs.yaml file from the Galaxy Helm chart, with one exception: the tool_config_file entry points to our custom tool list instead of the full list from CVMFS.

    uwsgi:
      virtualenv: /galaxy/server/.venv
      processes: 1
      http: 0.0.0.0:8080
      static-map: /static/style=/galaxy/server/static/style/blue
      static-map: /static=/galaxy/server/static
      static-map: /favicon.ico=/galaxy/server/static/favicon.ico
      pythonpath: /galaxy/server/lib
      thunder-lock: true
      manage-script-name: true
      mount: {{.Values.ingress.path}}=galaxy.webapps.galaxy.buildapp:uwsgi_app()
      buffer-size: 16384
      offload-threads: 2
      threads: 4
      die-on-term: true
      master: true
      hook-master-start: unix_signal:2 gracefully_kill_them_all
      enable-threads: true
      py-call-osafterfork: true
    galaxy:
      database_connection: 'postgresql://{{.Values.postgresql.galaxyDatabaseUser}}:{{.Values.postgresql.galaxyDatabasePassword}}@{{ template "galaxy-postgresql.fullname" . }}/galaxy'
      integrated_tool_panel_config: "/galaxy/server/config/mutable/integrated_tool_panel.xml"
      sanitize_whitelist_file: "/galaxy/server/config/mutable/sanitize_whitelist.txt"
      tool_config_file: "{{.Values.persistence.mountPath}}/config/editable_shed_tool_conf.xml,/galaxy/server/config/custom_tool_conf.xml"
      tool_data_table_config_path: "{{ .Values.cvmfs.main.mountPath }}/config/shed_tool_data_table_conf.xml,{{.Values.cvmfs.data.mountPath}}/managed/location/tool_data_table_conf.xml,{{.Values.cvmfs.data.mountPath}}/byhand/location/tool_data_table_conf.xml"
      tool_dependency_dir: "{{.Values.persistence.mountPath}}/deps"
      builds_file_path: "{{.Values.cvmfs.data.mountPath}}/managed/location/builds.txt"
      datatypes_config_file: "{{ .Values.cvmfs.main.mountPath }}/config/datatypes_conf.xml"
      containers_resolvers_config_file: "/galaxy/server/config/container_resolvers_conf.xml"
      workflow_schedulers_config_file: "/galaxy/server/config/workflow_schedulers_conf.xml"
      build_sites_config_file: "/galaxy/server/config/build_sites.yml"
    
  3. Now, let’s upgrade the chart to use custom_tool_conf.xml and galaxy.yml by running the helm upgrade command.

    helm upgrade --reuse-values --set-file "configs.custom_tool_conf\.xml"=custom_tool_conf.xml --set-file "configs.galaxy\.yml"=galaxy.yml galaxy galaxy/galaxy
    

    Note the --reuse-values flag, which instructs Helm to reuse any previously set values and apply the new ones on top. The --set-file option sets the value of the configs.custom_tool_conf.xml key in your values to the contents of the specified file, as a text string. Each file under the configs key in values.yaml is automatically mapped into Galaxy’s config directory within the running container; an equivalent approach using a local values file is sketched after this list.

  4. Notice that while the chart is upgrading, the existing version continues to function. The changeover will occur when the new container is online and signals readiness to Kubernetes by responding to web requests on the relevant port. Log into the Kubernetes dashboard and watch the logs as the new pods come online.

  5. List the installed Helm charts again and note that the revision of the chart has changed. These revisions are useful because they allow us to roll back our changes if something goes wrong, which will be covered in a later section.

    helm list
    NAME  	REVISION	UPDATED                 	STATUS  	CHART                 	APP VERSION	NAMESPACE
    cvmfs 	1       	Wed Jun 26 14:47:46 2019	DEPLOYED	galaxy-cvmfs-csi-1.0.1	1.0        	cvmfs
    galaxy	2       	Wed Jun 26 14:51:17 2019	DEPLOYED	galaxy-3.0.0          	v19.05     	default
    
  6. Let’s now exec into the running container and check where the files were mapped in. First, let’s get a list of running pods.

    kubectl get pods
    NAME                          READY   STATUS    RESTARTS   AGE
    galaxy-galaxy-postgres-0      1/1     Running   0          2d6h
    galaxy-job-69864b6797-zs5mn   1/1     Running   0          2d6h
    galaxy-web-7568c58b94-jzkvm   1/1     Running   0          2d6h
    

    Exec into the web pod by running:

    kubectl exec -it galaxy-web-7568c58b94-jzkvm -- /bin/bash
    

    Now run ls /galaxy/server/config/ and note that galaxy.yml contains the content you provided and that custom_tool_conf.xml has also been mapped into the config folder. In the same way, any of Galaxy’s config files can be overridden simply by mapping the relevant file into the config folder.
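
As mentioned in step 3, --set-file is only one way of populating the configs key. An equivalent approach keeps the same settings in a local values file and passes it to Helm with -f. A minimal sketch (the file name extra-values.yaml is only an example):

    # extra-values.yaml
    configs:
      custom_tool_conf.xml: |
        <?xml version="1.0" ?>
        <toolbox tool_path="/cvmfs/main.galaxyproject.org/shed_tools">
            <!-- ... the same content as the file created in step 1 ... -->
        </toolbox>

    helm upgrade --reuse-values -f extra-values.yaml galaxy galaxy/galaxy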

Setting the admin user and changing the brand

Next, we will set the admin user and change the brand in galaxy.yml. We will then roll back the change to see how Helm manages configuration revisions.

Hands-on: Setting admin user and changing the brand
  1. Modify the following entries in your galaxy.yml. Make sure to add these keys under the galaxy: section of the file.

    brand: "Hello World"
    admin_users: "admin@mydomain.com"
    
  2. Now, let’s upgrade the chart to apply the new configuration.

    helm upgrade --reuse-values --set-file "configs.galaxy\.yml"=galaxy.yml galaxy galaxy/galaxy
    
  3. Inspect the currently set Helm values by running:

    helm get values galaxy
    
  4. List the installed Helm charts again and note that the revision of the chart has changed as expected.

    helm list
    NAME  	REVISION	UPDATED                 	STATUS  	CHART                 	APP VERSION	NAMESPACE
    cvmfs 	1       	Wed Jun 26 14:47:46 2019	DEPLOYED	galaxy-cvmfs-csi-1.0.1	1.0        	cvmfs
    galaxy	3       	Wed Jun 26 14:51:17 2019	DEPLOYED	galaxy-3.0.0          	v19.05     	default
    
  5. Let’s now roll back to the previous revision.

    helm rollback galaxy 2
    

    Use helm get values again to observe that the values have reverted to the previous revision. After a short while, once the new container is up and running, Kubernetes will automatically switch over to it and you can see that the previous configuration has been restored.
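
The revision numbers that helm rollback targets can be listed at any time with helm history, which also shows which revision is currently deployed:

    helm history galaxy      # list all revisions of the galaxy release
    helm rollback galaxy 3   # any revision shown by helm history can be restored this way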

Scaling Galaxy

In a Galaxy deployment on Kubernetes, there are two Galaxy containers by default: one web handler and one job handler. We will now look at how these can be scaled.

Hands-on: Scaling Galaxy
  1. View the values-cvmfs.yaml file in the Galaxy Helm chart and note down the number of web and job handlers.

    webHandlers:
        replicaCount: 1
    jobHandlers:
        replicaCount: 1
    
  2. Let’s increase the number of web handlers by simply setting new values for the number of replicas.

    helm upgrade --reuse-values --set webHandlers.replicaCount=2 galaxy galaxy/galaxy
    
  3. Check whether the new replicas have been created.

    kubectl get pods
    NAME                          READY   STATUS    RESTARTS   AGE
    galaxy-galaxy-postgres-0      1/1     Running   0          2d9h
    galaxy-job-5cc75c6588-8dsbg   1/1     Running   0          7m13s
    galaxy-web-7c9576cf89-49nlm   1/1     Running   0          7m13s
    galaxy-web-7c9576cf89-r6rcj   0/1     Running   0          9s
    
  4. Follow the pod logs and check whether the new handler is receiving web requests as expected.

    kubectl logs -f galaxy-web-7c9576cf89-r6rcj
    

    You will notice that Kubernetes automatically load balances requests between the available web handler replicas in a round-robin fashion.
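
The load balancing happens through the Kubernetes Service that sits in front of the web handler Deployment. To confirm that both replicas are registered behind it, you can inspect the Service's endpoints (the Service name galaxy-web is an assumption based on the pod names above; verify it with kubectl get svc):

    kubectl get svc                    # list Services and note the one fronting the web handlers
    kubectl get endpoints galaxy-web   # both web pod IP addresses should be listed as endpoints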

Testing Kubernetes resilience

To observe how Kubernetes handles failures, let’s exec into a running container and manually kill its main process to simulate a process failure. Kubernetes continuously monitors running containers and attempts to bring the environment back to the “desired” state: the moment it notices a failure, it restarts the failed container or schedules a replacement pod. Typically, a Kubernetes container will also have a liveness probe defined. A liveness probe can be an HTTP request to a port or a command executed inside the container, and it tests whether the container is healthy; if the probe fails, Kubernetes immediately provisions a replacement.
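
For illustration, a liveness probe for the web container has roughly the following shape (a hedged sketch only; the authoritative definition lives in the chart's templates/deployment-web.yaml, and the values below are assumptions):

    livenessProbe:
      httpGet:
        path: /                 # request the Galaxy root URL
        port: 8080              # the port uWSGI listens on (see the 'http' setting in galaxy.yml above)
      initialDelaySeconds: 60   # give Galaxy time to start before the first probe
      periodSeconds: 30         # probe every 30 seconds
      failureThreshold: 3       # restart the container after 3 consecutive failures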

Hands-on: Handling failures
  1. First list the available pods.

    kubectl get pods
    NAME                          READY   STATUS    RESTARTS   AGE
    galaxy-galaxy-postgres-0      1/1     Running   0          2d9h
    galaxy-job-5cc75c6588-8dsbg   1/1     Running   0          14m
    galaxy-web-7c9576cf89-49nlm   1/1     Running   0          14m
    galaxy-web-7c9576cf89-r6rcj   0/1     Running   1          7m36s
    

    Then exec into one:

    kubectl exec -it galaxy-web-7c9576cf89-r6rcj -- /bin/bash
    
  2. Now kill the main container process.

    kill 1
    
  3. If we run kubectl get pods, we can see that Kubernetes immediately restarts the failed container (note the RESTARTS count incrementing), bringing the environment back to the desired state; if the pod itself were deleted, the Deployment controller would likewise schedule a replacement. Take a look at the liveness probe defined for the Galaxy web container in the Helm chart source code (templates/deployment-web.yaml).
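
You can also watch the recovery happen in real time from a second terminal, and inspect the pod's events and probe configuration (the pod name is the one listed above):

    kubectl get pods -w                               # watch the RESTARTS counter increase as the container recovers
    kubectl describe pod galaxy-web-7c9576cf89-r6rcj  # shows the Liveness probe settings and the restart events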

Deleting Galaxy

Finally, let’s take a look at how we can uninstall Galaxy and remove all related containers.

Hands-on: Deleting Galaxy
  1. To permanently delete the Galaxy release, run:

    helm delete --purge galaxy
    

    The --purge flag instructs Helm to permanently remove the galaxy release from its history.

  2. Use kubectl get pods to verify that the pods have been deleted.
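
Depending on how persistence was configured, some resources may outlive the release; most commonly, PersistentVolumeClaims (e.g. the one backing the PostgreSQL data) are left behind. If you want a completely clean slate, check for leftover claims and delete them explicitly (the claim name below is a placeholder):

    kubectl get pvc                   # list any PersistentVolumeClaims left behind
    kubectl delete pvc <claim-name>   # delete a leftover claim; this removes the stored data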

Next Steps

This tutorial provided an overview of some common Galaxy administration tasks. Advanced customizations could include running custom shell scripts on Galaxy startup to perform additional tasks, running additional containers on startup, administering and managing storage, building custom Galaxy containers with desired modifications, and so on. For more information on some of these topics, take a look at the Galaxy Helm chart repository, as well as other tutorials tagged with kubernetes. Also, feel free to reach out on Gitter: https://gitter.im/galaxyproject/FederatedGalaxy.

Key points
  • Modifying configuration is a matter of having some local config files that are mapped in their entirety into the Galaxy container.

  • Scaling is a simple matter of changing the number of replicas.

  • Kubernetes enables zero-downtime upgrades and sets the stage for continuous delivery.

Frequently Asked Questions

Have questions about this tutorial? Check out the tutorial FAQ page or the FAQ page for the Galaxy Server administration topic to see if your question is listed there. If not, please ask your question on the GTN Gitter channel or the Galaxy Help Forum.

Feedback

Did you use this material as an instructor? Feel free to give us feedback on how it went.

Citing this Tutorial

  1. Nuwan Goonasekera, Enis Afgan, Alex Mahmoud, Pablo Moreno, John Davis, 2022. Managing Galaxy on Kubernetes (Galaxy Training Materials). https://training.galaxyproject.org/training-material/topics/admin/tutorials/k8s-managing-galaxy/tutorial.html. Online; accessed TODAY.
  2. Batut et al., 2018. Community-Driven Data Analysis Training for Biology. Cell Systems. doi:10.1016/j.cels.2018.05.012



Congratulations on successfully completing this tutorial!