Introducing Scalable and Enterprise-Grade Genomics Workflows in Azure ML
Published Feb 24 2023

Genomics workflows are essential in bioinformatics, helping researchers analyse and interpret vast amounts of genomic data. However, creating a consistent and repeatable environment for specialized software with complex dependencies can be challenging, which also makes integration with CI/CD tools difficult.

 

Azure Machine Learning (Azure ML) is a cloud-based platform that provides a comprehensive set of tools and services for developing, deploying, and managing machine learning models. Azure ML natively offers repeatability and auditability features that few workflow solutions provide. It delivers a highly integrated and standardised environment for running workflows, ensuring that each step is executed in a consistent and reproducible manner. This is particularly useful for genomics workflows, which require specific versions of multiple tools and software packages, each with its own dependencies.

 

In this blog post, we will show how Azure ML can run genomics workflows efficiently and effectively, in addition to being an end-to-end platform for machine learning model training and deployment. Figure 1 illustrates an example of such a workflow.

 

Figure 1: A sample genomics workflow running in Azure ML, consisting of 3 steps. A reference genome input dataset flows into the indexer step, while the sequence quality step gets its data from a folder of DNA sequences (".fastq" files).
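
To make this concrete, below is a minimal sketch of how such a three-step pipeline could be wired together with the Azure ML Python SDK v2 (azure-ai-ml). The component, asset, and compute names are illustrative placeholders rather than the exact ones used in the repository linked at the end of this post.

```python
from azure.ai.ml import MLClient, Input
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.dsl import pipeline
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholders for your own IDs).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Components assumed to be already registered in the workspace (hypothetical names/versions).
bwa_indexer = ml_client.components.get(name="bwa_indexer", version="1")
seq_quality = ml_client.components.get(name="sequence_quality", version="1")
bwa_aligner = ml_client.components.get(name="bwa_aligner", version="1")

@pipeline(default_compute="cpu-cluster")  # assumed compute cluster name
def genomics_pipeline(ref_genome, fastq_folder):
    # Step 1: build the BWA index from the reference genome.
    index_step = bwa_indexer(reference_genome=ref_genome)
    # Step 2: quality-check the raw .fastq files.
    quality_step = seq_quality(fastq_folder=fastq_folder)
    # Step 3: align the reads against the freshly built index.
    align_step = bwa_aligner(
        index_folder=index_step.outputs.index_folder,
        fastq_folder=fastq_folder,
    )
    return {"alignments": align_step.outputs.alignments}

# Overall pipeline inputs: a reference genome file and a folder of .fastq files.
job = genomics_pipeline(
    ref_genome=Input(type=AssetTypes.URI_FILE, path="azureml:reference-genome:1"),
    fastq_folder=Input(type=AssetTypes.URI_FOLDER, path="azureml:fastq-samples:1"),
)
submitted = ml_client.jobs.create_or_update(job, experiment_name="genomics-workflow")
print(submitted.studio_url)
```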

 

Azure ML has comprehensive audit and logging capabilities that track and record every step of the workflow, ensuring traceability and repeatability. A key feature behind these capabilities is the ability to specify a Docker and Conda environment for each workflow step, which guarantees consistent execution environments. These environments can be versioned and shared centrally, and workflow steps within pipelines can then refer to a particular environment. Figure 2 shows one such environment, bwa, version "5". If we modify the environment definition, the new version will be registered as "6"; however, we will still be able to use the older versions.

 


Figure 2: An example Azure ML environment, defining a Docker image containing the BWA bioinformatics software package. This is the fifth version of the environment registered under the name "bwa".
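
A minimal sketch of how an environment like this could be registered with the Python SDK v2 is shown below; the base image and Conda file are placeholders, not the exact definition behind Figure 2.

```python
# Sketch of registering a versioned "bwa" environment (placeholder image/Conda file).
# Assumes an MLClient handle `ml_client`, created as in the pipeline sketch above.
from azure.ai.ml.entities import Environment

bwa_env = Environment(
    name="bwa",
    description="Docker + Conda environment providing the BWA bioinformatics package",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",  # assumed base image
    conda_file="environments/bwa-conda.yml",  # hypothetical Conda spec pinning bwa's version
)

# Re-registering a modified definition creates the next version (e.g. "5" -> "6");
# earlier versions remain available and can still be referenced, e.g. as "bwa:5".
registered = ml_client.environments.create_or_update(bwa_env)
print(registered.name, registered.version)
```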

 

Like environments, user-created pipeline components can be registered centrally for reuse in other pipelines; they are also versioned and come with an audit log of their usage. Runs are automatically logged together with the standard output and error streams generated by the underlying processes. MLflow logging and custom tags on all assets and runs are supported, too. These features ensure that results are consistent and reproducible, saving users time. An example versioned component is shown in Figure 3.

 


Figure 3: An Azure ML component named "BWA Indexer". It is a self-contained, reusable, versioned piece of code that performs one step in a machine learning pipeline: in this instance, running the bwa index command.
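
The snippet below sketches how a component like this could be defined and registered with the Python SDK v2; the input/output names and the exact command line are illustrative assumptions.

```python
# Sketch of a reusable "BWA Indexer" command component (illustrative names/command).
# Assumes `ml_client` as in the first sketch.
from azure.ai.ml import command, Input, Output

bwa_indexer = command(
    name="bwa_indexer",
    display_name="BWA Indexer",
    description="Builds a BWA index from a reference genome",
    environment="bwa:5",  # pin a specific version of the registered environment
    inputs={"reference_genome": Input(type="uri_file")},
    outputs={"index_folder": Output(type="uri_folder")},
    command=(
        "cp ${{inputs.reference_genome}} ${{outputs.index_folder}}/ref.fa && "
        "bwa index ${{outputs.index_folder}}/ref.fa"
    ),
)

# Register the component centrally so other pipelines can reuse it; registering a
# changed definition creates a new version while older versions stay usable.
ml_client.components.create_or_update(bwa_indexer.component)
```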

 

Versioning is not limited to environments and pipeline components. Another essential feature of Azure ML is its support for versioning all input datasets and genomic data, including overall pipeline inputs as well as intermediate and final outputs, if needed. This enables users to keep track of dataset changes and ensure that the same version is used consistently across different runs of the workflow, or in other workflows.
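
As a small illustration, registering a versioned data asset for the reference genome input could look like the sketch below (the asset name and datastore path are hypothetical).

```python
# Sketch of versioning a genomics input as an Azure ML data asset
# (hypothetical name and datastore path). Assumes `ml_client` as in the first sketch.
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

ref_genome = Data(
    name="reference-genome",
    description="Reference genome FASTA consumed by the indexer step",
    type=AssetTypes.URI_FILE,
    path="azureml://datastores/workspaceblobstore/paths/genomics/ref.fa",
)

registered = ml_client.data.create_or_update(ref_genome)
print(registered.name, registered.version)  # re-registering a changed asset bumps the version

# Pipelines can then pin an exact version of the asset, e.g.:
#   Input(type=AssetTypes.URI_FILE, path="azureml:reference-genome:1")
```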

 

Many genomics workflow engines are very good at processing files in parallel. Azure ML parallel steps, however, support parallelism both at the file level (one file at a time, three files at a time, and so on) and, where the consuming application supports it, at the file-chunk level (for example, 50 MB of data per process or 20 KB of text per node), enabling large genomic datasets to be processed efficiently across elastic compute clusters that can auto-scale. Pipelines can also run locally on your laptop during test and development, and they support powerful CPU- and GPU-based VMs, low-priority or on-demand compute clusters, Spark engines, and other compute targets such as Azure Kubernetes Service, making the platform flexible for different use cases.
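
The sketch below illustrates a file-level parallel step built with the SDK v2's parallel_run_function; the entry script, environment, and sizing values are assumptions for illustration only.

```python
# Sketch of a parallel step that fans .fastq files out across an auto-scaling cluster
# (script, environment, and sizing are illustrative). Assumes `ml_client` as above.
from azure.ai.ml import Input, Output
from azure.ai.ml.parallel import parallel_run_function, RunFunction

fastq_quality_parallel = parallel_run_function(
    name="fastq_quality_parallel",
    display_name="Per-file sequence quality",
    inputs={"fastq_folder": Input(type="uri_folder")},
    outputs={"reports": Output(type="uri_folder")},
    input_data="${{inputs.fastq_folder}}",
    instance_count=4,                # cluster nodes to spread the work across
    max_concurrency_per_instance=2,  # worker processes per node
    mini_batch_size="3",             # hand each worker 3 .fastq files at a time
    task=RunFunction(
        code="./src",                          # hypothetical folder holding the entry script
        entry_script="sequence_quality.py",    # hypothetical script exposing init()/run(mini_batch)
        environment="sequence-quality-env:1",  # hypothetical registered environment
        program_arguments="--output_folder ${{outputs.reports}}",
    ),
)
```

The resulting step can then be used inside a @pipeline definition just like any other component, with chunk-based mini-batch sizes (for example "50mb") used instead of a file count where the consuming application supports it.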

 

Azure ML has integrations with Azure DevOps and GitHub Actions for CI/CD, making it easy to deploy and manage genomics workflows in a production environment and, in turn, enabling GenomicsOps. Well-established pipelines that are ready for production use can be published and called on demand or from other Azure services, including Azure Data Factory and Synapse. This means we can schedule pipelines to run automatically, or trigger them whenever new data become available.
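
As an example of the scheduling piece, a recurring trigger can be attached to a pipeline job with the SDK v2, roughly as sketched below (names and times are placeholders).

```python
# Sketch of scheduling the pipeline to run automatically every night
# (names and times are placeholders). Assumes `ml_client` and the `job`
# pipeline object from the first sketch.
from azure.ai.ml.entities import JobSchedule, RecurrencePattern, RecurrenceTrigger

nightly_trigger = RecurrenceTrigger(
    frequency="day",
    interval=1,
    schedule=RecurrencePattern(hours=[2], minutes=[0]),  # 02:00 each day
)

nightly_schedule = JobSchedule(
    name="nightly-genomics-pipeline",
    trigger=nightly_trigger,
    create_job=job,  # the pipeline job defined earlier
)

ml_client.schedules.begin_create_or_update(nightly_schedule).result()
```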

Thanks to its Python SDK, command-line interface (the az CLI with the ml extension), REST API, and user-friendly UI, Azure ML lets you develop pipelines and initiate pipeline runs through whichever means you prefer, and provides easy monitoring and management of workflows. Event-based triggers and notifications are also supported; for instance, one can set up an email alert that is triggered whenever a genomics pipeline finishes execution.

 

As compute and storage are decoupled, any pipeline input or output stored in an Azure ML datastore or blob storage can also be accessed from Azure ML's Jupyter notebooks for upstream or downstream analysis.
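
For instance, a completed run's outputs can be pulled into a notebook roughly as follows (the job and output names are assumptions).

```python
# Sketch: from an Azure ML notebook, fetch a finished pipeline run's output
# for downstream analysis (job name and output name are assumptions).
# Assumes `ml_client` as in the first sketch.
finished_job = ml_client.jobs.get(name="<pipeline-job-name>")
print(finished_job.status)

# Download a named pipeline output next to the notebook (it can equally be read
# in place through its datastore path on blob storage).
ml_client.jobs.download(
    name=finished_job.name,
    output_name="alignments",   # hypothetical output returned by the pipeline
    download_path="./analysis",
)
```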

 

Azure ML is a managed PaaS offering, making it an accessible and easy-to-set-up platform for genomics researchers and bioinformaticians. Additionally, it provides Visual Studio Code integration for local development and a workspace concept for managing pipeline projects, enabling collaboration and Azure role-based access control (RBAC).

 

In conclusion, Azure ML comes with advanced security features, including Azure AD authentication, public and private endpoints, subscription-based event triggers, storage backed by Azure Storage with encryption at rest and in transit, and Application Insights, making it a reliable and proven enterprise platform that can also be used natively for genomics research.

 

For a more detailed tutorial showing how to set up and run the example workflow in Figure 1, as well as the full source code for creating the sample environments and components described above, please check out this GitHub repository:

 

truehand/azureml-genomics (github.com)
