Pipelines API Examples

Examples for the Google Genomics Pipelines API.

This repository contains examples for the [Google Genomics Pipelines API](https://cloud.google.com/genomics/reference/rest/v1alpha2/pipelines).

This is an Alpha release of the Google Genomics API. This feature might be changed in backward-incompatible ways and is not recommended for production use. It is not subject to any SLA or deprecation policy.

The API provides an easy way to create, run, and monitor command-line tools on Google Compute Engine, with each tool running in a Docker container. You can use it as you would a job scheduler.

The most common use case is to run an off-the-shelf tool or custom script that reads and writes files. You may want to run such a tool over files in Google Cloud Storage, and to run it independently over hundreds or thousands of files.

The typical flow for a pipeline is:

  1. Create a Compute Engine virtual machine
  2. Copy one or more files from Cloud Storage to a disk
  3. Run the tool on the file(s)
  4. Copy the output to Cloud Storage
  5. Destroy the Compute Engine virtual machine
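The steps above map onto a single request to the v1alpha2 `pipelines.run()` method. Below is a minimal sketch of such a request body in Python; the project ID, bucket paths, and Docker image name are placeholders, not real resources, and the field names follow the v1alpha2 request shape:

```python
# Sketch of a v1alpha2 pipelines.run() request body.
# All names below (project, buckets, image) are placeholder assumptions.
PROJECT_ID = "my-project"

body = {
    "ephemeralPipeline": {
        "projectId": PROJECT_ID,
        "name": "samtools-index",
        # Step 1: the VM the service creates (and, step 5, later destroys).
        "resources": {
            "minimumCpuCores": 1,
            "minimumRamGb": 3.75,
            "disks": [{"name": "data", "sizeGb": 100,
                       "type": "PERSISTENT_HDD", "mountPoint": "/mnt/data"}],
        },
        # Step 2: files copied from Cloud Storage onto the disk.
        "inputParameters": [{"name": "inputFile",
                             "localCopy": {"path": "input.bam", "disk": "data"}}],
        # Step 3: the Docker image and command to run on the file(s).
        "docker": {
            "imageName": "quay.io/example/samtools",  # placeholder image
            "cmd": "samtools index /mnt/data/input.bam",
        },
        # Step 4: output files copied back to Cloud Storage on exit.
        "outputParameters": [{"name": "outputFile",
                              "localCopy": {"path": "input.bam.bai", "disk": "data"}}],
    },
    "pipelineArgs": {
        "projectId": PROJECT_ID,
        "inputs": {"inputFile": "gs://my-bucket/input.bam"},
        "outputs": {"outputFile": "gs://my-bucket/output/"},
        "logging": {"gcsPath": "gs://my-bucket/logs"},
    },
}
```

The body would then be submitted with `service.pipelines().run(body=body).execute()`; step 5, tearing down the VM, is handled by the service automatically.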

You can submit batch operations from your laptop and have them run in the cloud. You can package your tool into a Docker image yourself, or use existing Docker images.


Getting started

  1. Clone or fork this repository.
  2. If you plan to create your own Docker images, then install Docker: https://docs.docker.com/engine/installation/#installation
  3. Follow the Google Genomics getting started instructions to set up your Google Cloud Project. The Pipelines API requires that the following are enabled in your project:
    1. Genomics API
    2. Cloud Storage API
    3. Compute Engine API
  4. Follow the Google Genomics getting started instructions to install and authorize the Google Cloud SDK.
  5. Install or update the Python client via `pip install --upgrade google-api-python-client`. For more detail, see https://cloud.google.com/genomics/v1/libraries.

