nf-core/hic
Analysis of Chromosome Conformation Capture data (Hi-C)
Version 1.2.0. The latest stable release is 2.1.0.
Introduction
Nextflow handles job submissions on SLURM or other environments, and supervises the running jobs. Thus the Nextflow process must run until the pipeline is finished. We recommend that you put the process running in the background through screen / tmux or a similar tool. Alternatively you can run Nextflow within a cluster job submitted to your job scheduler.
It is recommended to limit the Nextflow Java virtual machine’s memory. We recommend adding the following line to your environment (typically in ~/.bashrc or ~/.bash_profile):
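For example, the setting commonly recommended for nf-core pipelines caps the Java heap at a few gigabytes (adjust the values to your system):

```bash
NXF_OPTS='-Xms1g -Xmx4g'
```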
Running the pipeline
The typical command for running the pipeline is as follows:
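A minimal sketch of such a command, assuming paired-end FastQ files and the docker profile (the read pattern is illustrative):

```bash
nextflow run nf-core/hic --reads '*_R{1,2}.fastq.gz' -profile docker
```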
This will launch the pipeline with the docker
configuration profile.
See below for more information about profiles.
Note that the pipeline will create the following files in your working directory:
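Typically, these are the standard Nextflow run artifacts:

```bash
work            # Directory containing the Nextflow working files
results         # Finished results (configurable, see below)
.nextflow.log   # Log file from Nextflow
# Other Nextflow hidden files, e.g. history of pipeline runs and old logs
```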
Updating the pipeline
When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you’re running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:
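The standard Nextflow command for this is:

```bash
nextflow pull nf-core/hic
```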
Reproducibility
It’s a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.
First, go to the
nf-core/hic releases page and find
the latest version number - numeric only (eg. 1.3.1
).
Then specify this when running the pipeline with -r
(one hyphen)
eg. -r 1.3.1
.
This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future.
Main arguments
-profile
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.
Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Conda) - see below.
We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility, however when this is not possible, Conda is also supported.
The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the nf-core/configs documentation.
Note that multiple profiles can be loaded, for example: -profile test,docker
- the order
of arguments is important!
They are loaded in sequence, so later profiles can overwrite earlier profiles.
If -profile
is not specified, the pipeline will run locally and expect all software to be
installed and available on the PATH
. This is not recommended.
docker
- A generic configuration profile to be used with Docker
- Pulls software from Docker Hub: nfcore/hic
singularity
- A generic configuration profile to be used with Singularity
- Pulls software from Docker Hub: nfcore/hic
conda
- A generic configuration profile to be used with Conda
test
- A profile with a complete configuration for automated testing
- Includes links to test data so needs no other parameters
--reads
Use this to specify the location of your input FastQ files. For example:
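A typical value (the path itself is illustrative):

```bash
--reads 'path/to/data/sample_*_{1,2}.fastq'
```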
Please note the following requirements:
- The path must be enclosed in quotes
- The path must have at least one * wildcard character
- When using the pipeline with paired-end data, the path must use {1,2} notation to specify read pairs.
If left unspecified, a default pattern is used: data/*{1,2}.fastq.gz
--single_end
By default, the pipeline expects paired-end data. If you have single-end data, you need to specify --single_end
on the command line when you launch the pipeline. A normal glob pattern, enclosed in quotation marks, can then be used for --reads
. For example:
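A sketch of such an option, with an illustrative glob pattern:

```bash
--single_end --reads '*.fastq'
```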
It is not possible to run a mixture of single-end and paired-end files in one run.
Reference genomes
The pipeline config files come bundled with paths to the Illumina iGenomes reference index files. If running with docker or AWS, the configuration is set up to use the AWS-iGenomes resource.
--genome
(using iGenomes)
There are 31 different species supported in the iGenomes references. To run the pipeline, you must specify which to use with the --genome
flag.
You can find the keys to specify the genomes in the iGenomes config file. Common genomes that are supported are:
- Human: --genome GRCh37
- Mouse: --genome GRCm38
- Drosophila: --genome BDGP6
- S. cerevisiae: --genome 'R64-1-1'
There are numerous others - check the config file for more.
Note that you can use the same configuration setup to save sets of reference files for your own use, even if they are not part of the iGenomes resource. See the Nextflow documentation for instructions on where to save such a file.
The syntax for this reference configuration is as follows:
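A minimal sketch of the expected structure, with placeholder paths (the GRCh37 key is only an example):

```nextflow
params {
  genomes {
    'GRCh37' {
      fasta = '<path to the genome fasta file>'
    }
    // Any number of additional genomes; the key is used with --genome
  }
}
```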
--fasta
If you prefer, you can specify the full path to your reference genome when you run the pipeline:
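For example (placeholder path):

```bash
--fasta '[path to Fasta reference]'
```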
--igenomesIgnore
Do not load igenomes.config
when running the pipeline. You may choose this
option if you observe clashes between custom parameters and those supplied
in igenomes.config
.
--bwt2_index
The bowtie2 indexes are required to run the Hi-C pipeline. If --bwt2_index is not specified, the pipeline will either use the iGenomes bowtie2 indexes (see the --genome option) or build the indexes on the fly (see the --fasta option).
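For example, pointing to the basename of an existing bowtie2 index (placeholder path):

```bash
--bwt2_index '[path to bowtie2 index basename]'
```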
--chromosome_size
The Hi-C pipeline also requires a two-column, tab-separated text file with the chromosome name and its size.
If not specified, this file will be automatically created by the pipeline. In the latter case, the --fasta reference genome has to be specified.
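A sketch of the expected format, using human (GRCh37) chromosome sizes as an illustration:

```bash
chr1    249250621
chr2    243199373
chr3    198022430
```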
--restriction_fragments
Finally, Hi-C experiments based on restriction enzyme digestion require a BED file with the coordinates of the restriction fragments.
If not specified, this file will be automatically created by the pipeline. In this case, the --fasta reference genome will be used.
Note that the --restriction_site parameter is mandatory to create this file.
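A sketch of the expected BED format (coordinates and fragment names are illustrative):

```bash
chr1    0        16007    HIC_chr1_1    0    +
chr1    16007    24571    HIC_chr1_2    0    +
chr1    24571    27981    HIC_chr1_3    0    +
```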
Hi-C specific options
The following options are defined in the hicpro.config
file, and can be
updated either using a custom configuration file (see -c
option) or using
command line parameters.
Reads mapping
The read mapping is currently based on the two-step strategy implemented in the HiC-Pro pipeline. The idea is to first align reads end-to-end. Reads that do not align are then trimmed at the ligation site, and their 5’ end is re-aligned to the reference genome. Note that the default options are quite stringent, and can be updated according to the read quality or the reference genome.
--bwt2_opts_end2end
Bowtie2 alignment option for end-to-end mapping. Default: ‘--very-sensitive -L 30 --score-min L,-0.6,-0.2 --end-to-end --reorder’
--bwt2_opts_trimmed
Bowtie2 alignment option for trimmed reads mapping (step 2). Default: ‘--very-sensitive -L 20 --score-min L,-0.6,-0.2 --end-to-end --reorder’
--min_mapq
Minimum mapping quality. Reads with lower quality are discarded. Default: 10
Digestion Hi-C
--restriction_site
Restriction motif(s) for the Hi-C digestion protocol. The restriction motif(s) is (are) used to generate the list of restriction fragments. The precise cutting site of the restriction enzyme has to be specified using the ‘^’ character. Default: ‘A^AGCTT’. Here are a few examples:
- MboI: ^GATC
- DpnII: ^GATC
- BglII: A^GATCT
- HindIII: A^AGCTT
- ARIMA kit: ^GATC,G^ANTC
Note that multiple restriction motifs can be provided (comma-separated) and that the ‘N’ base is supported.
--ligation_site
Ligation motif after read ligation. This motif is used for read trimming and depends on the fill-in strategy. Note that multiple ligation sites can be specified (comma-separated) and that the ‘N’ base is interpreted and replaced by ‘A’, ‘C’, ‘G’, ‘T’. Default: ‘AAGCTAGCTT’
Example for the ARIMA kit: GATCGATC,GANTGATC,GANTANTC,GATCANTC
--min_restriction_fragment_size
Minimum size of restriction fragments to consider for the Hi-C processing. Default: ‘’
--max_restriction_fragment_size
Maximum size of restriction fragments to consider for the Hi-C processing. Default: ‘’
--min_insert_size
Minimum read insert size. Shorter 3C products are discarded. Default: ‘’
--max_insert_size
Maximum read insert size. Longer 3C products are discarded. Default: ‘’
DNAse Hi-C
--dnase
In DNAse Hi-C mode, all options related to digestion Hi-C
(see previous section) are ignored.
In this case, it is highly recommended to use the --min_cis_dist parameter to remove spurious ligation products.
Hi-C processing
--min_cis_dist
Filter out short-range contacts below the specified distance. Mainly useful for DNase Hi-C. Default: ‘’
--rm_singleton
If specified, singleton reads are discarded at the mapping step.
--rm_dup
If specified, duplicated reads are discarded before building contact maps.
--rm_multi
If specified, reads that align multiple times on the genome are discarded. Note that the default mapping options are based on random hit assignment, meaning that only one position is kept per read.
Genome-wide contact maps
--bin_size
Resolution of contact maps to generate (comma separated). Default: ‘1000000,500000’
--ice_max_iter
Maximum number of iterations for ICE normalization. Default: 100
--ice_filer_low_count_perc
Define which percentage of bins with low counts should be forced to zero. Default: 0.02
--ice_filer_high_count_perc
Define which percentage of bins with high counts should be discarded before normalization. Default: 0
--ice_eps
The relative increment in the results before declaring convergence for ICE normalization. Default: 0.1
Inputs/Outputs
--splitFastq
By default, the nf-core Hi-C pipeline expects one pair of FastQ files per sample. However, for large Hi-C datasets, processing single FastQ files can be very time consuming.
The --splitFastq option automatically splits input read pairs into chunks of reads. In this case, all chunks are processed in parallel and merged before generating the contact maps, thus leading to a significant increase in processing performance.
--saveReference
If specified, annotation files automatically generated from the --fasta
file
are exported in the results folder. Default: false
--saveAlignedIntermediates
If specified, all intermediate mapping files are saved and exported in the results folder. Default: false
--saveInteractionBAM
If specified, write a BAM file with all classified reads (valid pairs, dangling ends, self-circles, etc.) and their tags.
Skip options
--skipMaps
If defined, the workflow stops with the list of valid interactions, and the genome-wide maps are not built. Useful for capture-C analysis. Default: false
--skipIce
If defined, the ICE normalization is not run on the raw contact maps. Default: false
--skipCool
If defined, cooler files are not generated. Default: false
--skipMultiQC
If defined, the MultiQC report is not generated. Default: false
Job resources
Automatic resubmission
Each step in the pipeline has a default set of requirements for number of CPUs,
memory and time. For most of the steps in the pipeline, if the job exits with
an error code of 143
(exceeded requested resources) it will automatically
resubmit with higher requests (2 x original, then 3 x original). If it still
fails after three times then the pipeline is stopped.
Custom resource requests
Wherever process-specific requirements are set in the pipeline, the default value
can be changed by creating a custom config file. See the files hosted
at nf-core/configs
for examples.
If you have any questions or issues please send us a message on Slack.
AWS Batch specific parameters
Running the pipeline on AWS Batch requires a couple of specific parameters to be
set according to your AWS Batch configuration. Please use
-profile awsbatch
and then specify all of the following parameters.
--awsqueue
The JobQueue that you intend to use on AWS Batch.
--awsregion
The AWS region in which to run your job. Default is set to eu-west-1
but can be adjusted to your needs.
--awscli
The AWS CLI
path in your custom AMI. Default: /home/ec2-user/miniconda/bin/aws
.
Please make sure to also set the -w/--work-dir and --outdir parameters to an S3 storage bucket of your choice - you’ll get an error message notifying you if you didn’t.
Other command line parameters
--outdir
The output directory where the results will be saved.
--email
Set this parameter to your e-mail address to get a summary e-mail with details
of the run sent to you when the workflow exits. If set in your user config file
(~/.nextflow/config
) then you don’t need to specify this on the command line for every run.
--email_on_fail
This works exactly as with --email
, except emails are only sent if the workflow is not successful.
--max_multiqc_email_size
Threshold size for the MultiQC report to be attached in the notification email. If the file generated by the pipeline exceeds the threshold, it will not be attached (Default: 25MB).
-name
Name for the pipeline run. If not specified, Nextflow will automatically generate a random mnemonic.
This is used in the MultiQC report (if not default) and in the summary HTML / e-mail (always).
NB: Single hyphen (core Nextflow option)
-resume
Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.
You can also supply a run name to resume a specific run: -resume [run-name]
.
Use the nextflow log
command to show previous run names.
NB: Single hyphen (core Nextflow option)
-c
Specify the path to a specific config file (this is a core Nextflow command).
NB: Single hyphen (core Nextflow option)
Note - you can use this to override pipeline defaults.
--custom_config_version
Provide git commit id for custom Institutional configs hosted at nf-core/configs
.
This was implemented for reproducibility purposes. Default: master
.
--custom_config_base
If you’re running offline, nextflow will not be able to fetch the institutional config files
from the internet. If you don’t need them, then this is not a problem. If you do need them,
you should download the files from the repo and tell nextflow where to find them with the
custom_config_base
option. For example:
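A sketch of the approach, with placeholder paths:

```bash
## Download and unzip the config files
cd /path/to/my/configs
wget https://github.com/nf-core/configs/archive/master.zip
unzip master.zip

## Run the pipeline, pointing it at the local copy of the configs
cd /path/to/my/data
nextflow run /path/to/nf-core-hic/ --custom_config_base /path/to/my/configs/configs-master/
```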
Note that the nf-core/tools helper package has a
download
command to download all required pipeline files + singularity containers + institutional configs in one go for you, to make this process easier.
--max_memory
Use to set a top-limit for the default memory requirement for each process.
Should be a string in the format integer-unit. eg. --max_memory '8.GB'
--max_time
Use to set a top-limit for the default time requirement for each process.
Should be a string in the format integer-unit. eg. --max_time '2.h'
--max_cpus
Use to set a top-limit for the default CPU requirement for each process.
Should be an integer. eg. --max_cpus 1
--plaintext_email
Set to receive plain-text e-mails instead of HTML formatted.
--monochrome_logs
Set to disable colourful command line output and live life in monochrome.
--multiqc_config
Specify a path to a custom MultiQC configuration file.