Scripting Galaxy using the API and BioBlend
Author(s): Nicola Soranzo
Editor(s): Clare Sloggett, Nitesh Turaga, Helena Rasche
Overview
Questions:
- What is a REST API?
- How can I interact with Galaxy programmatically?
- Why and when should I use BioBlend?
Objectives:
- Interact with Galaxy via BioBlend.
Time estimation: 2 hours
Level: Introductory
Last modification: Sep 28, 2022
Best viewed in a Jupyter Notebook
This tutorial is best viewed in a Jupyter notebook! You can load this notebook in one of the following ways.
Launching the notebook in Jupyter in Galaxy
- Instructions to Launch JupyterLab
- Open a Terminal in JupyterLab with File -> New -> Terminal
- Run
wget https://training.galaxyproject.org/training-material/topics/dev/tutorials/bioblend-api/dev-bioblend-api.ipynb
- Select the notebook that appears in the list of files on the left.
Downloading the notebook
- Right click one of these links: Jupyter Notebook (With Solutions), Jupyter Notebook (Without Solutions)
- Choose “Save Link As…”
Agenda
In this tutorial, we will cover:
1. Interacting with histories in Galaxy API
2. Interacting with histories in BioBlend
3. Interacting with histories in BioBlend.objects
Interacting with histories in Galaxy API
We are going to use the requests Python library to communicate via HTTP with the Galaxy server. To start, let’s define the connection parameters.
You need to insert the API key for your Galaxy server in the cell below:
- Open the Galaxy server in another browser tab
- Click on “User” on the top menu, then “Preferences”
- Click on “Manage API key”
- Generate an API key if needed, then copy the alphanumeric string and paste it as the value of the api_key variable below.
import json
from pprint import pprint
from urllib.parse import urljoin
import requests
server = 'https://usegalaxy.eu/'
api_key = ''
base_url = urljoin(server, 'api')
base_url
We now make a GET request to retrieve all histories owned by a user:
headers = {"Content-Type": "application/json", "x-api-key": api_key}
r = requests.get(base_url + "/histories", headers=headers)
print(r.text)
hists = r.json()
pprint(hists)
As you can see, GET requests in Galaxy API return JSON strings, which need to be deserialized into Python data structures. In particular, GETting a resource collection returns a list of dictionaries.
Each dictionary returned when GETting a resource collection gives basic info about a resource, e.g. for a history you have:
- id: the unique identifier of the history, needed for all specific requests about this resource
- name: the name of this history as given by the user
- deleted: whether the history has been deleted.
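As a quick check, you can loop over the list retrieved above and print these basic fields for each history:

# Print the basic fields of each history returned by the GET request above
for hist in hists:
    print(hist['id'], hist['name'], hist['deleted'])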
There is no readily-available filtering capability, but it’s not difficult to filter histories by name:
pprint([_ for _ in hists if _['name'] == 'Unnamed history'])
If you are interested in more details about a given resource, you just need to append its id to the previous collection request, e.g. to get more info for a history:
hist0_id = hists[0]['id']
print(hist0_id)
r = requests.get(base_url + "/histories/" + hist0_id, headers=headers)
pprint(r.json())
As you can see, there are many more entries in the returned dictionary, e.g.:
- create_time
- size: total disk space used by the history
- state_ids: ids of history datasets for each possible state.
To get the list of datasets contained in a history, simply append /contents to the previous resource request.
r = requests.get(base_url + "/histories/" + hist0_id + "/contents", headers=headers)
hdas = r.json()
pprint(hdas)
The dictionaries returned when GETting the history contents give basic info about each dataset, e.g.: id, name, deleted, state, url…
To get the details about a specific dataset, you can use the datasets controller:
hda0_id = hdas[0]['id']
print(hda0_id)
r = requests.get(base_url + "/datasets/" + hda0_id, headers=headers)
pprint(r.json())
Some of the interesting additional dictionary entries are:
- create_time
- creating_job: id of the job which created this dataset
- download_url: URL to download the dataset
- file_ext: the Galaxy data type of this dataset
- file_size
- genome_build: the genome build (dbkey) associated with this dataset.
New resources are created with POST requests. The uploaded data needs to be serialized as a JSON string. For example, to create a new history:
data = {'name': 'New history'}
r = requests.post(base_url + "/histories", data=json.dumps(data), headers=headers)
new_hist = r.json()
pprint(new_hist)
The return value of a POST request is a dictionary with detailed info about the created resource.
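Note that requests does not raise an exception for HTTP error responses by default, so it is good practice to check the response before using it, e.g.:

r.raise_for_status()  # raise an exception if the server returned an HTTP error status code
print(new_hist['id'])  # the id of the newly created history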
To update a resource, make a PUT request, e.g. to change the history name:
data = {'name': 'Updated history'}
r = requests.put(base_url + "/histories/" + new_hist["id"], data=json.dumps(data), headers=headers)
print(r.status_code)
pprint(r.json())
The return value of a PUT request is usually a dictionary with detailed info about the updated resource.
Finally, to delete a resource, make a DELETE request, e.g.:
r = requests.delete(base_url + "/histories/" + new_hist["id"], headers=headers)
print(r.status_code)
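To verify the deletion, you can make another GET request for the same history; assuming the server still returns deleted (but not purged) histories when requested by id, the deleted flag should now be True:

# Fetch the deleted history by id and check its 'deleted' flag
r = requests.get(base_url + "/histories/" + new_hist["id"], headers=headers)
print(r.json()["deleted"])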
Exercise: Galaxy API
Goal: Upload a file to a new history, import a workflow and run it on the uploaded dataset.
Question: Initialise
First, define the connection parameters. What variables do you need?
Solution:
import json
from pprint import pprint
from urllib.parse import urljoin

import requests

server = 'https://usegalaxy.eu/'
api_key = ''
base_url = urljoin(server, 'api')
# Try it out here!
Question: New History
Next, create a new Galaxy history via a POST request to the correct API endpoint.
Solution:
headers = {"Content-Type": "application/json", "x-api-key": api_key}
data = {"name": "New history"}
r = requests.post(base_url + "/histories", data=json.dumps(data), headers=headers)
new_hist = r.json()
pprint(new_hist)
# Try it out here!
Question: Upload a dataset
Upload the local file 1.txt to the new history. You need to run the special upload1 tool by making a POST request to /api/tools. You don't need to pass any inputs to it apart from attaching the file as files_0|file_data. Also, note that when attaching a file you need to drop Content-Type from the request headers.
You can obtain the 1.txt file from the following URL; you'll need to download it first.
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt
Solution:
data = {
    "history_id": new_hist["id"],
    "tool_id": "upload1",
}
with open("1.txt", "rb") as f:
    files = {"files_0|file_data": f}
    r = requests.post(base_url + "/tools", data=data, files=files, headers={"x-api-key": api_key})
ret = r.json()
pprint(ret)
# Try it out here!
Question: Find the dataset in your history
Find the newly uploaded dataset, either from the dict returned by the POST request above or from the history contents.
Solution:
hda = ret['outputs'][0]
pprint(hda)
# Try it out here!
Question: Import a workflow
Import a workflow from the local file convert_to_tab.ga by making a POST request to /api/workflows. The only needed data is workflow, which must be a deserialized JSON representation of the workflow.
You can obtain the convert_to_tab.ga file from the following URL; you'll need to download it first.
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/convert_to_tab.ga
Solution:
with open("convert_to_tab.ga", "r") as f:
    workflow_json = json.load(f)
data = {'workflow': workflow_json}
r = requests.post(base_url + "/workflows", data=json.dumps(data), headers=headers)
wf = r.json()
pprint(wf)
# Try it out here!
Question: View the workflow details
View the details of the imported workflow by making a GET request to /api/workflows.
Solution:
r = requests.get(base_url + "/workflows/" + wf["id"], headers=headers)
wf = r.json()
pprint(wf)
# Try it out here!
Question: Invoke the workflow
Run the imported workflow on the uploaded dataset inside the same history by making a POST request to /api/workflows/WORKFLOW_ID/invocations. The only needed data are history and inputs.
Solution:
inputs = {0: {'id': hda['id'], 'src': 'hda'}}
data = {
    'history': 'hist_id=' + new_hist['id'],
    'inputs': inputs,
}
r = requests.post(base_url + "/workflows/" + wf["id"] + "/invocations", data=json.dumps(data), headers=headers)
pprint(r.json())
# Try it out here!
Question: View the results
View the results on the Galaxy server with your web browser. Were you successful? Did it run?
Interacting with histories in BioBlend
You need to insert the API key for your Galaxy server in the cell below:
- Open the Galaxy server in another browser tab
- Click on “User” on the top menu, then “Preferences”
- Click on “Manage API key”
- Generate an API key if needed, then copy the alphanumeric string and paste it as the value of the api_key variable below.
The user interacts with a Galaxy server through a GalaxyInstance object:
from pprint import pprint
import bioblend.galaxy
server = 'https://usegalaxy.eu/'
api_key = ''
gi = bioblend.galaxy.GalaxyInstance(url=server, key=api_key)
The GalaxyInstance object gives you access to the various controllers, i.e. the resources you are dealing with, like histories, tools and workflows.
Therefore, method calls will have the format gi.controller.method(). For example, the call to retrieve all histories owned by the current user is:
pprint(gi.histories.get_histories())
As you can see, methods in BioBlend do not return JSON strings, but deserialize them into Python data structures. In particular, get_ methods return a list of dictionaries.
Each dictionary gives basic info about a resource, e.g. for a history you have:
- id: the unique identifier of the history, needed for all specific requests about this resource
- name: the name of this history as given by the user
- deleted: whether the history has been deleted.
New resources are created with create_ methods, e.g. the call to create a new history is:
new_hist = gi.histories.create_history(name='BioBlend test')
pprint(new_hist)
As you can see, to make POST requests in BioBlend it is not necessary to serialize data, you just pass them explicitly as parameters. The return value is a dictionary with detailed info about the created resource.
get_ methods usually have filtering capabilities, e.g. it is possible to filter histories by name:
pprint(gi.histories.get_histories(name='BioBlend test'))
To upload the local file 1.txt to the new history, you can run the special upload tool by calling the upload_file method of the tools controller.
You can obtain the 1.txt file from the following URL; you'll need to download it first.
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt
hist_id = new_hist["id"]
pprint(gi.tools.upload_file("1.txt", hist_id))
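Uploads are executed as background jobs on the server, so the new dataset may stay in the queued or running state for a while. If you capture the dictionary returned by upload_file() instead of just pretty-printing it, you can wait for the upload to finish; a minimal sketch, assuming your BioBlend version provides gi.datasets.wait_for_dataset():

ret = gi.tools.upload_file("1.txt", hist_id)  # note: running this again uploads a second copy of the file
upload_hda_id = ret['outputs'][0]['id']
gi.datasets.wait_for_dataset(upload_hda_id)  # block until the dataset reaches a terminal state, e.g. 'ok'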
If you are interested in more details about a given resource for which you know the id, you can use the corresponding show_ method. For example, to get more info for the history we have just populated:
pprint(gi.histories.show_history(history_id=hist_id))
As you can see, there are many more entries in the returned dictionary, e.g.:
- create_time
- size: total disk space used by the history
- state_ids: ids of history datasets for each possible state.
To get the list of datasets contained in a history, simply add contents=True to the previous call.
hdas = gi.histories.show_history(history_id=hist_id, contents=True)
pprint(hdas)
The dictionaries returned when showing the history contents give basic info about each dataset, e.g.: id, name, deleted, state, url…
To get the details about a specific dataset, you can use the datasets controller:
hda0_id = hdas[0]['id']
print(hda0_id)
pprint(gi.datasets.show_dataset(hda0_id))
Some of the interesting additional dictionary entries are:
- create_time
- creating_job: id of the job which created this dataset
- download_url: URL to download the dataset
- file_ext: the Galaxy data type of this dataset
- file_size
- genome_build: the genome build (dbkey) associated with this dataset.
To update a resource, use the update_ method, e.g. to change the name of the new history:
pprint(gi.histories.update_history(new_hist['id'], name='Updated history'))
The return value of update_ methods is usually a dictionary with detailed info about the updated resource.
Finally, to delete a resource, use the delete_ method, e.g.:
pprint(gi.histories.delete_history(new_hist['id']))
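delete_history() only marks the history as deleted without purging it from disk, so it can usually be recovered. A minimal sketch, assuming the server allows undeleting:

# Recover the deleted (but not purged) history
pprint(gi.histories.undelete_history(new_hist['id']))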
Exercise: BioBlend
Goal: Upload a file to a new history, import a workflow and run it on the uploaded dataset.
Question: Initialise
Create a GalaxyInstance object.
Solution:
from pprint import pprint

import bioblend.galaxy

server = 'https://usegalaxy.eu/'
api_key = ''
gi = bioblend.galaxy.GalaxyInstance(url=server, key=api_key)
# Try it out here!
Question: New History
Create a new Galaxy history.
Solution:
new_hist = gi.histories.create_history(name='New history')
pprint(new_hist)
# Try it out here!
Question: Upload a dataset
Upload the local file 1.txt to the new history using tools.upload_file().
You can obtain the 1.txt file from the following URL; you'll need to download it first.
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt
Solution:
ret = gi.tools.upload_file("1.txt", new_hist["id"])
pprint(ret)
# Try it out here!
Question: Find the dataset in your history
Find the newly uploaded dataset, either from the dict returned by tools.upload_file() or from the history contents.
Solution:
hda = ret['outputs'][0]
pprint(hda)
# Try it out here!
Question: Import a workflow
Import a workflow from the local file convert_to_tab.ga using workflows.import_workflow_from_local_path().
You can obtain the convert_to_tab.ga file from the following URL; you'll need to download it first.
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/convert_to_tab.ga
Solution:
wf = gi.workflows.import_workflow_from_local_path("convert_to_tab.ga")
pprint(wf)
# Try it out here!
Question: View the workflow details
View the details of the imported workflow using workflows.show_workflow().
Solution:
wf = gi.workflows.show_workflow(wf['id'])
pprint(wf)
# Try it out here!
Question: Invoke the workflow
Run the imported workflow on the uploaded dataset inside the same history using workflows.invoke_workflow().
Solution:
inputs = {0: {'id': hda['id'], 'src': 'hda'}}
ret = gi.workflows.invoke_workflow(wf['id'], inputs=inputs, history_id=new_hist['id'])
pprint(ret)
# Try it out here!
Question: View the results
View the results on the Galaxy server with your web browser. Were you successful? Did it run?
Interacting with histories in BioBlend.objects
You need to insert the API key for your Galaxy server in the cell below:
- Open the Galaxy server in another browser tab
- Click on “User” on the top menu, then “Preferences”
- Click on “Manage API key”
- Generate an API key if needed, then copy the alphanumeric string and paste it as the value of the api_key variable below.
The user interacts with a Galaxy server through a GalaxyInstance object:
from pprint import pprint
import bioblend.galaxy.objects
server = 'https://usegalaxy.eu/'
api_key = ''
gi = bioblend.galaxy.objects.GalaxyInstance(url=server, api_key=api_key)
All GalaxyInstance method calls have the client.method() format, where client is the name of the resource you are dealing with. There are two methods to get the list of resources:
- get_previews(): lightweight (one GET request), retrieves basic resources' info, returns a list of preview objects
- list(): one GET request for each resource, retrieves full resources' info, returns a list of full objects.
For example, the call to retrieve previews of all histories owned by the current user is:
pprint(gi.histories.get_previews())
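For comparison, the list() call below retrieves the same histories as full History objects; it is slower, since it makes one extra GET request per history:

# Retrieve full History objects (one GET request per history)
pprint(gi.histories.list())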
New resources are created with create() methods, e.g. to create a new history:
new_hist = gi.histories.create(name='BioBlend test')
new_hist
As you can see, the create() methods in BioBlend.objects return an object, not a dictionary.
Both get_previews() and list() methods usually have filtering capabilities, e.g. it is possible to filter histories by name:
pprint(gi.histories.list(name='BioBlend test'))
To upload the local file 1.txt to the new history, you can run the special upload tool by calling the upload_file method of the History object.
You can obtain the 1.txt file from the following URL; you'll need to download it first.
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt
hda = new_hist.upload_file("1.txt")
hda
Please note that with BioBlend.objects there is no need to find the uploaded dataset, since upload_file() already returns a HistoryDatasetAssociation object.
Both HistoryPreview and History objects have many of their properties available as attributes, e.g. the id.
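For example, a minimal sketch printing a few of these attributes for the objects created above:

# Properties of wrapped resources are available as Python attributes
print(new_hist.id, new_hist.name)
print(hda.id, hda.name, hda.state)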
If you need to specify the unique id of the resource to retrieve, you can use the get() method, e.g. to get back the history we created before:
gi.histories.get(new_hist.id)
To get the list of datasets contained in a history, simply look at the content_infos attribute of the History object.
pprint(new_hist.content_infos)
To get the details about one dataset, you can use the get_dataset() method of the History object:
new_hist.get_dataset(hda.id)
You can also filter history datasets by name using the get_datasets() method of History objects.
To update a resource, use the update() method of its object, e.g. to change the history name:
new_hist.update(name='Updated history')
The return value of update() methods is the updated object.
Finally, to delete a resource, you can use the delete() method of the object, e.g.:
new_hist.delete()
Exercise: BioBlend.objects
Goal: Upload a file to a new history, import a workflow and run it on the uploaded dataset.
Question: Initialise
Create a GalaxyInstance object.
Solution:
from pprint import pprint

import bioblend.galaxy.objects

server = 'https://usegalaxy.eu/'
api_key = ''
gi = bioblend.galaxy.objects.GalaxyInstance(url=server, api_key=api_key)
# Try it out here!
Question: New History
Create a new Galaxy history.
Solution:
new_hist = gi.histories.create(name='New history')
new_hist
# Try it out here!
Question: Upload a dataset
Upload the local file 1.txt to the new history using the upload_file() method of History objects.
You can obtain the 1.txt file from the following URL; you'll need to download it first.
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt
Solution:
hda = new_hist.upload_file("1.txt")
hda
# Try it out here!
Question: Import a workflow
Import a workflow from the local file convert_to_tab.ga using workflows.import_new().
You can obtain the convert_to_tab.ga file from the following URL; you'll need to download it first.
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/convert_to_tab.ga
Solution:
with open("convert_to_tab.ga", "r") as f:
    wf_string = f.read()
wf = gi.workflows.import_new(wf_string)
wf
# Try it out here!
Question: View the workflow inputs
View the inputs of the imported workflow.
Solution:
wf.inputs
# Try it out here!
Question: Invoke the workflow
Run the imported workflow on the uploaded dataset inside the same history using the invoke() method of Workflow objects.
Solution:
inputs = {'0': hda}
wf.invoke(inputs=inputs, history=new_hist)
# Try it out here!
Question: View the results
View the results on the Galaxy server with your web browser. Were you successful? Did it run?
Optional Extra Exercises
If you have completed the exercise, you can try to perform these extra tasks with the help of the online documentation (a possible solution sketch is shown after the list):
- Download the workflow result to your computer
- Publish your history
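If you get stuck, here is one possible sketch using the plain BioBlend interface; out_hda_id is a hypothetical variable holding the id of the workflow output dataset, and method availability may vary between BioBlend versions, so check the online documentation:

# Download the workflow result to the local file 'result.tabular'
# (out_hda_id is assumed to hold the id of the output dataset)
gi.datasets.download_dataset(out_hda_id, file_path='result.tabular', use_default_filename=False)
# Publish the history so that it is visible to other users
gi.histories.update_history(new_hist['id'], published=True)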
Key points
The API allows you to use Galaxy’s capabilities programmatically.
BioBlend makes using the Galaxy API from Python easier.
BioBlend.objects is an object-oriented interface for interacting with Galaxy.
Congratulations on successfully completing this tutorial!