Commit 296943b

Merge pull request #152 from nasa/DEV_Metagenomics_Illumina_NF_conversion

Metagenomics Illumina NF conversion and pipeline update

Pipeline updates:
- Update tool versions: FastQC, MultiQC, bowtie2, samtools, CAT, GTDB-Tk, HUMAnN, MetaPhlAn
- In step 14d, MAG taxonomic classification, added the new `--skip_ani_screen` argument to `gtdbtk classify_wf` to continue classifying genomes as in previous versions of GTDB-Tk, using mash and skani

Workflow updates:
- Update to the latest pipeline version (GL-DPPD-7107-A)
- Implement workflow in Nextflow (NF_MGIllumina) rather than Snakemake as in previous workflow versions
- Run checkm separately on each bin and combine results to improve performance

Workflow bug fixes:
- [Metagenomics Illumina] Allow explicit specification of the humann3 database location by explicitly setting the humann3 reference db locations in the rule; fixes issue #62
- [Metagenomics Illumina] Package bin and MAGs fasta files into per-sample zip archives, as some datasets have too many files in the MAGs and bins folders; fixes issue #76

2 parents: 7dfbb47 + eee1c4c

64 files changed: +8,880 −6 lines

Metagenomics/Illumina/Pipeline_GL-DPPD-7107_Versions/GL-DPPD-7107-A.md

Lines changed: 1216 additions & 0 deletions

Metagenomics/Illumina/README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,7 +1,7 @@
 # GeneLab bioinformatics processing pipeline for Illumina metagenomics sequencing data

-> **The document [`GL-DPPD-7107.md`](Pipeline_GL-DPPD-7107_Versions/GL-DPPD-7107.md) holds an overview and example commands for how GeneLab processes Illumina metagenomics sequencing datasets. See the [Repository Links](#repository-links) descriptions below for more information. Processed data output files and processing code are provided for each GLDS dataset in the [Open Science Data Repository (OSDR)](https://osdr.nasa.gov/bio/repo/).**
+> **The document [`GL-DPPD-7107-A.md`](Pipeline_GL-DPPD-7107_Versions/GL-DPPD-7107-A.md) holds an overview and example commands for how GeneLab processes Illumina metagenomics sequencing datasets. See the [Repository Links](#repository-links) descriptions below for more information. Processed data output files and processing code are provided for each GLDS dataset in the [Open Science Data Repository (OSDR)](https://osdr.nasa.gov/bio/repo/).**
 >
 > Note: The exact processing commands and MGIllumina version used for specific GLDS datasets can be found in the *_processing_info.zip file under "Files" for each respective GLDS dataset in the [Open Science Data Repository (OSDR)](https://osdr.nasa.gov/bio/repo/).
@@ -26,4 +26,4 @@
 ---

 **Developed and maintained by:**
-Michael D. Lee (Mike.Lee@nasa.gov)
+Michael D. Lee (Mike.Lee@nasa.gov) and Olabiyi A. Obayomi (olabiyi.a.obayomi@nasa.gov)
```
Lines changed: 28 additions & 0 deletions
# Workflow change log

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [1.0.0](https://github.com/nasa/GeneLab_Data_Processing/tree/NF_MGIllumina_1.0.0/Metagenomics/Illumina/Workflow_Documentation/NF_MGIllumina)

### Changed
- Update to the latest pipeline version [GL-DPPD-7107-A](../../Pipeline_GL-DPPD-7107_Versions/GL-DPPD-7107-A.md) of the GeneLab Metagenomics consensus processing pipeline.
- Pipeline implementation as a Nextflow workflow [NF_MGIllumina](./) rather than Snakemake as in previous workflow versions.
- Run checkm separately on each bin and combine results to improve performance.

### Fixed
- Allow explicit specification of the humann3 database location ([#62](https://github.com/nasa/GeneLab_Data_Processing/issues/62))
- Package bin and MAGs fasta files into per-sample zip archives ([#76](https://github.com/nasa/GeneLab_Data_Processing/issues/76))

<BR>

---

> ***Note:** All previous workflow changes were associated with the previous version of the GeneLab Metagenomics Pipeline [GL-DPPD-7107](../../Pipeline_GL-DPPD-7107_Versions/GL-DPPD-7107.md) and can be found in the [change log of the Snakemake workflow (SW_MGIllumina)](../SW_MGIllumina/CHANGELOG.md).*
Lines changed: 228 additions & 0 deletions
# Workflow Information and Usage Instructions

## General Workflow Info

### Implementation Tools

The current GeneLab Illumina metagenomics sequencing data processing pipeline (MGIllumina-A), [GL-DPPD-7107-A.md](../../Pipeline_GL-DPPD-7107_Versions/GL-DPPD-7107-A.md), is implemented as a [Nextflow](https://nextflow.io/) DSL2 workflow and utilizes [Singularity](https://docs.sylabs.io/guides/3.10/user-guide/introduction.html) containers, [Docker](https://docs.docker.com/get-started/) containers, or [conda](https://docs.conda.io/en/latest/) environments to install and run all tools. This workflow is run using the command line interface (CLI) of any Unix-based system. While knowledge of creating workflows in Nextflow is not required to run the workflow as is, [the Nextflow documentation](https://nextflow.io/docs/latest/index.html) is a useful resource for users who want to modify and/or extend this workflow.

> **Note on reference databases**
> Many reference databases are relied upon throughout this workflow. They will be installed and set up automatically the first time the workflow is run. Altogether, once installed and unpacked, they take up about 340 GB of storage, but they may require up to 500 GB during installation and initial unpacking, so be sure there is enough room on your system before running the workflow.
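Before the first run, it is worth confirming that the target filesystem actually has that much headroom. A minimal sketch of such a check (assumes a POSIX-style `df`; the 500 GB figure comes from the note above):

```shell
# Check free space (in GB) on the filesystem backing the current directory;
# the reference databases may need up to ~500 GB during initial setup.
required_gb=500
available_gb=$(df -Pk . | awk 'NR==2 {print int($4 / 1024 / 1024)}')
if [ "$available_gb" -lt "$required_gb" ]; then
    echo "WARNING: only ${available_gb} GB free; databases may need up to ${required_gb} GB"
else
    echo "OK: ${available_gb} GB free"
fi
```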
<br>

## Utilizing the Workflow

1. [Installing Nextflow, Singularity, and conda](#1-installing-nextflow-singularity-and-conda)
   1a. [Install Nextflow and conda](#1a-install-nextflow-and-conda)
   1b. [Install Singularity](#1b-install-singularity)
2. [Download the workflow files](#2-download-the-workflow-files)
3. [Fetch Singularity Images](#3-fetch-singularity-images)
4. [Run the workflow](#4-run-the-workflow)
   4a. [Approach 1: Start with an OSD or GLDS accession as input](#4a-approach-1-start-with-an-osd-or-glds-accession-as-input)
   4b. [Approach 2: Start with a runsheet csv file as input](#4b-approach-2-start-with-a-runsheet-csv-file-as-input)
   4c. [Modify parameters and compute resources in the Nextflow config file](#4c-modify-parameters-and-compute-resources-in-the-nextflow-config-file)
5. [Workflow outputs](#5-workflow-outputs)
   5a. [Main outputs](#5a-main-outputs)
   5b. [Resource logs](#5b-resource-logs)
6. [Post Processing](#6-post-processing)

<br>

---
### 1. Installing Nextflow, Singularity, and conda

#### 1a. Install Nextflow and conda

Nextflow can be installed either through [Anaconda](https://anaconda.org/bioconda/nextflow) or as documented on the [Nextflow documentation page](https://www.nextflow.io/docs/latest/getstarted.html).

> Note: If you want to install Anaconda, we recommend installing a Miniconda version appropriate for your system, as instructed by [Happy Belly Bioinformatics](https://astrobiomike.github.io/unix/conda-intro#getting-and-installing-conda).
>
> Once conda is installed on your system, you can install the latest version of Nextflow by running the following commands:
>
> ```bash
> conda install -c bioconda nextflow
> nextflow self-update
> ```
>
> You may also first install [mamba](https://mamba.readthedocs.io/en/latest/index.html), a faster implementation of conda that can be used as a drop-in replacement, and then install Nextflow with it:
>
> ```bash
> conda install -c conda-forge mamba
> mamba install -c bioconda nextflow
> nextflow self-update
> ```
<br>

#### 1b. Install Singularity

Singularity is a container platform that allows usage of containerized software. This enables the GeneLab workflow to retrieve and use all software required for processing without the need to install the software directly on the user's system.

We recommend installing Singularity system-wide as per the associated [documentation](https://docs.sylabs.io/guides/3.10/admin-guide/admin_quickstart.html).

> Note: Singularity is also available through [Anaconda](https://anaconda.org/conda-forge/singularity).

> Note: Alternatively, Docker can be used in place of Singularity. To get started with Docker, see the [Docker CE installation documentation](https://docs.docker.com/engine/install/).

<br>

---
### 2. Download the workflow files

All files required for utilizing the NF_MGIllumina GeneLab workflow for processing metagenomics Illumina data are in the [workflow_code](workflow_code) directory. To get a copy of the latest *NF_MGIllumina* version onto your system, download the zip file from the release page and unzip it by running the following commands:

```bash
wget https://github.com/nasa/GeneLab_Data_Processing/releases/download/NF_MGIllumina_1.0.0/NF_MGIllumina_1.0.0.zip
unzip NF_MGIllumina_1.0.0.zip && cd NF_MGIllumina_1.0.0
```

<br>

---
### 3. Fetch Singularity Images

Although Nextflow can fetch Singularity images from a URL, doing so may cause issues as detailed [here](https://github.com/nextflow-io/nextflow/issues/1210).

To avoid this issue, run the following command to fetch the Singularity images prior to running the NF_MGIllumina workflow:

> Note: This command should be run from within the `NF_MGIllumina_1.0.0` directory that was downloaded in [step 2](#2-download-the-workflow-files) above.

```bash
bash ./bin/prepull_singularity.sh nextflow.config
```

Once complete, a `singularity` folder containing the Singularity images will be created. Run the following command to export this folder as a Nextflow configuration environment variable to ensure Nextflow can locate the fetched images:

```bash
export NXF_SINGULARITY_CACHEDIR=$(pwd)/singularity
```

<br>

---
### 4. Run the Workflow

> ***Note:** All the commands in this step must be run from within the `NF_MGIllumina_1.0.0` directory that was downloaded in [step 2](#2-download-the-workflow-files) above.*

For options and detailed help on how to run the workflow, run the following command:

```bash
nextflow run main.nf --help
```

> Note: Nextflow commands use both single-hyphen arguments (e.g. `-help`), which denote general Nextflow arguments, and double-hyphen arguments (e.g. `--input_file`), which denote workflow-specific parameters. Take care to use the proper number of hyphens for each argument.

<br>

#### 4a. Approach 1: Start with an OSD or GLDS accession as input

```bash
nextflow run main.nf -resume -profile singularity --accession OSD-574
```

<br>

#### 4b. Approach 2: Start with a runsheet csv file as input

```bash
nextflow run main.nf -resume -profile singularity --input_file PE_file.csv
```

<br>
**Required Parameters For All Approaches:**

* `run main.nf` - Instructs Nextflow to run the NF_MGIllumina workflow

* `-resume` - Resumes workflow execution using previously cached results

* `-profile` - Specifies the configuration profile(s) to load (multiple options can be provided as a comma-separated list)
  * Software environment profile options (choose one):
    * `singularity` - instructs Nextflow to use Singularity container environments
    * `docker` - instructs Nextflow to use Docker container environments
    * `conda` - instructs Nextflow to use conda environments via the conda package manager. By default, Nextflow will create environments at runtime using the yaml files in the [workflow_code/envs](workflow_code/envs/) folder. You can change this behavior by using the `--conda_*` workflow parameters or by editing the [nextflow.config](workflow_code/nextflow.config) file to specify a centralized conda environments directory via the `conda.cacheDir` parameter
    * `mamba` - instructs Nextflow to use conda environments via the mamba package manager
  * Other option (can be combined with the software environment option above):
    * `slurm` - instructs Nextflow to use the [Slurm cluster management and job scheduling system](https://slurm.schedmd.com/overview.html) to schedule and run the jobs on a Slurm HPC cluster

* `--accession` - A GeneLab / OSD accession number, e.g. OSD-574.
  > *Required only if you would like to download and process data directly from OSDR*

* `--input_file` - A single-end or paired-end runsheet csv file containing assay metadata for each sample, including sample_id, forward, reverse, and/or paired. Please see the [runsheet documentation](./examples/runsheet) in this repository for examples on how to format this file.
  > *Required only if `--accession` is not passed as an argument*

<br>

> See `nextflow run -h` and [Nextflow's CLI run command documentation](https://nextflow.io/docs/latest/cli.html#run) for more options and details on how to run Nextflow.
> For additional information on editing the `nextflow.config` file, see [Step 4c](#4c-modify-parameters-and-compute-resources-in-the-nextflow-config-file) below.

<br>
#### 4c. Modify parameters and compute resources in the Nextflow config file

All parameters and workflow resources can also be specified directly in the [nextflow.config](./workflow_code/nextflow.config) file. For detailed instructions on how to modify and set parameters in the config file, please see the [documentation here](https://www.nextflow.io/docs/latest/config.html).

Once you've downloaded the workflow template, modify the parameters in the `params` scope and the cpu/memory requirements in the `process` scope of your downloaded [nextflow.config](workflow_code/nextflow.config) file as needed to match the study you want to process and the system you're using for processing.
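For orientation, a minimal sketch of what such an edit might look like. This excerpt is illustrative, not copied from the released config: `input_file` is a workflow parameter documented above, while the `process`-scope lines use standard Nextflow directives; check your downloaded copy for the actual names and defaults before editing.

```groovy
// Illustrative nextflow.config excerpt -- verify names against your copy.
params {
    input_file = "/path/to/PE_file.csv"   // runsheet csv (see section 4b)
}

process {
    cpus   = 8        // default CPUs per task
    memory = '40 GB'  // default memory per task
}
```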
<br>

---
### 5. Workflow outputs

#### 5a. Main outputs

> Note: The outputs from the GeneLab Illumina metagenomics sequencing data processing pipeline workflow are documented in the [GL-DPPD-7107-A.md](../../Pipeline_GL-DPPD-7107_Versions/GL-DPPD-7107-A.md) processing protocol.

#### 5b. Resource logs

Standard Nextflow resource usage logs are also produced as follows:

**Nextflow Resource Usage Logs**
- Output:
  - Resource_Usage/execution_report_{timestamp}.html (an html report that includes metrics about the workflow execution, including computational resources and exact workflow process commands)
  - Resource_Usage/execution_timeline_{timestamp}.html (an html timeline for all processes executed in the workflow)
  - Resource_Usage/execution_trace_{timestamp}.txt (an execution tracing file that contains information about each process executed in the workflow, including submission time, start time, completion time, cpu and memory used, in machine-readable output)

> Further details about these logs can also be found within [this Nextflow documentation page](https://www.nextflow.io/docs/latest/tracing.html#execution-report).
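Because the trace file is tab-separated with a header row, its fields are easy to pull out programmatically. A small sketch follows; the sample file and values below are fabricated for illustration, and while `name` and `peak_rss` are among Nextflow's default trace fields, verify them against your own trace file's header before relying on this.

```shell
# Create a tiny sample trace file (fabricated values) mimicking Nextflow's
# tab-separated trace format, then extract process name and peak memory.
printf 'name\tstatus\tpeak_rss\nFASTQC (sample1)\tCOMPLETED\t1.2 GB\n' > sample_trace.txt

# Map header names to column indexes, then print name and peak_rss per row
awk -F'\t' 'NR==1 {for (i=1; i<=NF; i++) col[$i]=i; next}
            {print $col["name"] " -> " $col["peak_rss"]}' sample_trace.txt
# prints: FASTQC (sample1) -> 1.2 GB
```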
<br>

---
### 6. Post Processing

The post-processing workflow generates a README file, a protocols file, an md5sums table, and a file association table suitable for uploading to OSDR.

For options and detailed help on how to run the post-processing workflow, run the following command:

```bash
nextflow run post_processing.nf --help
```

To generate the post-processing files after running the main processing workflow successfully, modify and set the parameters in [post_processing.config](workflow_code/post_processing.config), then run the following command:

```bash
nextflow -C post_processing.config run post_processing.nf -resume -profile singularity
```

The outputs of the post-processing workflow are described below:

**Post processing workflow**
- Output:
  - Post_processing/FastQC_Outputs/filtered_multiqc_GLmetagenomics_report.zip (Filtered sequence multiqc report with paths purged)
  - Post_processing/FastQC_Outputs/raw_multiqc_GLmetagenomics_report.zip (Raw sequence multiqc report with paths purged)
  - Post_processing/<GLDS_accession>_-associated-file-names.tsv (File association table for curation)
  - Post_processing/<GLDS_accession>_metagenomics-validation.log (Automated verification and validation log file)
  - Post_processing/processed_md5sum_GLmetagenomics.tsv (md5sums for the files to be released on OSDR)
  - Post_processing/processing_info_GLmetagenomics.zip (Zip file containing all files used to run the workflow and required logs, with paths purged)
  - Post_processing/protocol.txt (File describing the methods used by the workflow)
  - Post_processing/README_GLmetagenomics.txt (README file listing and describing the outputs of the workflow)
Lines changed: 23 additions & 0 deletions
# Runsheet File Specification

## Description

* The runsheet is a comma-separated file that contains the metadata required for processing metagenomics sequence datasets through the GeneLab Illumina metagenomics sequencing data processing pipeline (MGIllumina).

## Examples

1. Runsheet for an example [paired-end dataset](paired_end_dataset/PE_file.csv)
2. Runsheet for an example [single-end dataset](single_end_dataset/SE_file.csv)

## Required columns

| Column Name | Type | Description | Example |
|:------------|:-----|:------------|:--------|
| sample_id | string | Unique Sample Name, added as a prefix to sample-specific processed data output files. Should not include spaces or special characters. | RR23_FCS_FLT_F1 |
| forward | string (local path) | Location of the raw reads file. For paired-end data, this specifies the forward reads fastq.gz file. | /my/data/sample1_R1_HRremoved_raw.fastq.gz |
| reverse | string (local path) | Location of the raw reads file. For paired-end data, this specifies the reverse reads fastq.gz file. For single-end data, this column should be omitted. | /my/data/sample1_R2_HRremoved_raw.fastq.gz |
| paired | bool | Set to True if the samples were sequenced as paired-end. If set to False, samples are assumed to be single-end. | False |
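Putting the required columns together, a minimal paired-end runsheet might look like the following (values assembled from the Example column above; `paired` is set to True here since both forward and reverse reads are supplied):

```csv
sample_id,forward,reverse,paired
RR23_FCS_FLT_F1,/my/data/sample1_R1_HRremoved_raw.fastq.gz,/my/data/sample1_R2_HRremoved_raw.fastq.gz,True
```

For a single-end runsheet, the `reverse` column is omitted and `paired` is set to False.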
