PLoS Comput Biol. 2019 Oct 15;15(10):e1007453. doi: 10.1371/journal.pcbi.1007453

TAPES: A tool for assessment and prioritisation in exome studies

Alexandre Xavier 1,*, Rodney J Scott 1,2, Bente A Talseth-Palmer 1,3
Editor: Mihaela Pertea
PMCID: PMC6814239  PMID: 31613886

Abstract

Next-generation sequencing continues to grow in importance for researchers. Exome sequencing has become a widespread tool to further study the genomic basis of Mendelian diseases. In an effort to identify pathogenic variants, reject benign variants and better predict variant effects in downstream analysis, the American College of Medical Genetics (ACMG) published a set of criteria in 2015. While multiple publicly available tools can assign the ACMG criteria, most of them do not take multi-sample variant call format (VCF) files into account. Here we present a tool for assessment and prioritisation in exome studies (TAPES, https://github.com/a-xavier/tapes), an open-source tool designed for small-scale exome studies. TAPES can quickly assign ACMG criteria using ANNOVAR- or VEP-annotated files and implements a model to transform the categorical ACMG criteria into a continuous probability, allowing for a more accurate classification of the pathogenicity or benignity of variants. In addition, TAPES can work with cohorts sharing a common phenotype by using a simple enrichment analysis that requires no control samples as input, and it provides powerful filtering and reporting options. Finally, benchmarks showed that TAPES outperforms available tools in detecting both pathogenic and benign variants, while also identifying variants enriched in a study cohort compared to the general population, making it an ideal tool for evaluating smaller cohorts ahead of larger-scale studies.

Author summary

New sequencing techniques allow researchers to study the genetic basis of diseases. Predicting the effect of genetic variants is critical to understanding the mechanisms underlying disease. Available software can predict how pathogenic a variant is, but does not take into account the abundance of a variant in a cohort. TAPES is a simple open-source tool that can both predict pathogenicity more accurately (using probabilities instead of categories) and provide insight into variant enrichment in a cohort sharing the same disease.


This is a PLOS Computational Biology Software paper.

Introduction

With the advances in Next-Generation Sequencing (NGS) technologies and the decline in price over the last few years, exome sequencing has become a standard tool to explore the genetic basis of inherited diseases [1]. It has become easy to annotate the ever-increasing number of variants identified by such methods, using tools such as VEP [2], snpEff [3] or ANNOVAR [4]. These tools help researchers to better predict the downstream effect of a variant and give insight into, for example, the frequency of the mutation in the general population, the impact on proteins or in silico predictions of pathogenicity.

In 2015, the American College of Medical Genetics (ACMG) published a set of criteria to assess the probability of variant pathogenicity, classifying variants into five categories [5], from benign to pathogenic, facilitating downstream analysis.

Since then, tools have been developed to assess individual variant pathogenicity using the ACMG criteria (such as CharGer [6] and InterVar [7]), but they do not have the ability to take into account the frequency of variants in a cohort. The categorical nature of the ACMG criteria also leaves many variants classified as "variants of unknown significance".

Here, we present TAPES, an open-source tool to both assess and prioritise variants by pathogenicity. TAPES assigns the ACMG criteria and, using one of the first implementations of the model described in Tavtigian et al. [8], provides a more nuanced and easy-to-understand estimated probability that a variant is either pathogenic or benign, thus transforming the categorical classification into a continuous prediction. Our goal during development was first to create a simple tool that can better predict pathogenicity and reject benign variants, and then to assess a cohort sharing a phenotype by detecting enriched variants compared to the general population without the need for control samples. In addition, we focused on providing simple yet powerful reporting and filtering systems, while allowing pathway analysis of pathogenic mutations, gene-burden calculations and per-sample reporting.

Design and implementation

ANNOVAR interface and annotated variant file

The TAPES sorting option can be used with both ANNOVAR- and VEP-annotated variant call format (VCF) files. However, we also provide users with simple wrapping tools for a local installation of ANNOVAR to simplify the workflow (this requires users to download ANNOVAR). Users can annotate VCF, gzipped VCF and binary VCF (BCF) files using two simple commands, without having to specify the databases and annotations to use.
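For example, using the wrapper (paths, file names and assembly are illustrative): python tapes.py db -b --acmg --assembly hg19 downloads the databases needed for ACMG annotation, and python tapes.py annotate -i input.vcf -o output.vcf --acmg -a hg19 then produces an annotated file ready for sorting.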

While a set of annotations is needed to assign all ACMG criteria (see https://github.com/a-xavier/tapes/wiki/Necessary-Annotations for the full list), TAPES will use as many of the available annotations as possible to assign the relevant ACMG criteria.

Variant classification

To use the sorting module, TAPES requires ANNOVAR-annotated files (VCF or tab/comma-separated values) or VEP-annotated VCF files.

Regular ACMG criteria assignment

For most of the ACMG criteria (PVS1, PS1, PS3, PM1, PM2, PM4, PM5, PP2, PP3, PP5, BS1, BS2, BS3, BP1, BP3, BP4, BP6, BP7 and BA1), we tried to stay as true to the original ACMG definitions as possible when implementing the criteria assignment. Please see Richards et al. [5] and S1 Table for more information on the ACMG criteria definitions.

Enrichment analysis / PS4 criteria

One of TAPES' unique features is the ability to calculate variant enrichment from public frequency data (ExAC or gnomAD [9]) without having to sequence control samples. In cohort studies, TAPES requires a multi-sample VCF file to extract genotyping data and derive frequencies for the studied cohort. It uses a simple one-sided Fisher's exact test to calculate both the odds ratio (OR) and the p-value of the enrichment. Only enrichment of the variant in the cohort relative to the general population is tested.

Since the OR calculation requires integer counts and the frequency in the general population is given as a 0–1 fraction, TAPES approximates the number of individuals affected using the following formula.

If $MAF_c$ is the Minor Allele Frequency (MAF) in a control population, $n_c$ is the number of individuals affected by the variant in the control population and $N_c$ is the number of individuals without the variant, then:

$$MAF_c = y \times 10^{-x}, \qquad n_c = \lceil y \rceil \quad \text{and} \quad N_c = 10^x - 2n_c.$$

For example, if:

$$MAF_c = 3.23 \times 10^{-5}, \text{ then } n_c = 4 \text{ and } N_c = 10^5 - 2 \times 4 = 99{,}992.$$

This approximation is only valid under two assumptions: the MAF in the control population is under 0.05, and very rare variants are mostly heterozygous.

The PS4 criterion was designed to be more stringent than a typical case-control study (choosing to overestimate the frequency in the general population) and will only be assigned if OR ≥ 20, the p-value is ≤ 0.001 and at least two individuals in the cohort share the variant.
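To illustrate, the sketch below mirrors this test using SciPy's implementation of Fisher's exact test. It is a minimal illustration of the approach described above, not TAPES' actual code; the function name and structure are ours.

```python
from math import ceil, floor, log10
from scipy.stats import fisher_exact

def ps4_enrichment(cohort_carriers, cohort_size, control_maf):
    """Sketch of the PS4 test: one-sided Fisher's exact test of variant
    enrichment in a cohort against a public control frequency."""
    # Decompose MAF_c = y * 10^-x, then n_c = ceil(y) and N_c = 10^x - 2*n_c
    x = -floor(log10(control_maf))   # exponent, e.g. 5 for 3.23e-5
    y = control_maf * 10 ** x        # mantissa, e.g. 3.23
    n_c = ceil(y)                    # carriers in controls (overestimated)
    big_n_c = 10 ** x - 2 * n_c      # non-carriers in controls
    table = [[cohort_carriers, cohort_size - cohort_carriers],
             [n_c, big_n_c]]
    odds_ratio, p_value = fisher_exact(table, alternative='greater')
    # PS4 thresholds from the text: OR >= 20, p <= 0.001, >= 2 carriers
    assign_ps4 = (odds_ratio >= 20 and p_value <= 0.001
                  and cohort_carriers >= 2)
    return odds_ratio, p_value, assign_ps4

# e.g. 4 of 100 cohort samples carry a variant with control MAF 3.23e-5
print(ps4_enrichment(4, 100, 3.23e-5))
```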

Trio analysis / PS2 assignment

TAPES allows researchers to work with trio studies. In trio studies, the user provides sample names, trio IDs and pedigree information in a tab-delimited file. PS2 is then assigned if a variant is identified as de novo, and healthy parents are removed from downstream analysis. PS2 is assigned to a variant if it is found de novo in any trio, but details from each trio are still provided.

Probability of pathogenicity calculation

TAPES includes the model developed by Tavtigian et al. [8] to transform the categorical ACMG classification into a continuous probability of pathogenicity, using the default parameters from that publication (prior probability = 0.10, O_PVSt = 350 and X = 2). This allows for a finer pathogenicity prediction and adjustable thresholds for deciding variant pathogenicity. It is important to keep in mind that this measure is a probability, not a measure of how pathogenic a variant is.
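For concreteness, the core of this model can be sketched as follows. This is a minimal illustration of the Tavtigian et al. [8] framework with the default parameters above; TAPES' criteria counting and special cases (e.g. BA1) are not shown, and the function name is ours.

```python
def tavtigian_posterior(pvs=0, ps=0, pm=0, pp=0, bs=0, bp=0,
                        prior=0.10, opvst=350):
    """Posterior probability of pathogenicity from counts of ACMG
    criteria, following the Tavtigian et al. Bayesian framework."""
    # Each evidence level is a power of O_PVSt; with X = 2 the exponents
    # are 1/8 (supporting), 1/4 (moderate), 1/2 (strong), 1 (very strong).
    # Benign evidence contributes negative exponents.
    exponent = (pp / 8 + pm / 4 + ps / 2 + pvs) - (bp / 8 + bs / 2)
    odds_path = opvst ** exponent
    return (odds_path * prior) / ((odds_path - 1) * prior + 1)

# e.g. one strong, two moderate and one supporting pathogenic criterion
print(round(tavtigian_posterior(ps=1, pm=2, pp=1), 3))  # ~0.988
```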

Cohort reporting

TAPES provides an array of useful reports.

Filtering

TAPES can easily perform advanced filtering; three different options are available. First, users can provide a custom list of gene symbols (either as a text file or directly on the command line) to output only variants present in those genes. Second, users can perform a reverse pathway search by providing the name of a pathway (extracted from KEGG pathways [10]) to output a report with variants in genes involved in that pathway. Finally, users can run searches based on terms contained in the description of each gene, e.g. 'autosomal dominant' genes or 'colorectal cancer' genes. These filtered reports keep the same format as the main report, making it possible to use them with other reporting tools.
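As an example, the command used to generate the example reports in S1 File combines several of these filtering and reporting options: python tapes.py sort -i ./input.csv -o ./Report/ --tab --by_gene --by_sample --enrichr --disease "autosomal dominant" --kegg "Pathways in cancer".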

By-sample report

For each individual in the cohort, a report containing the variants predicted to be pathogenic with the highest level of confidence is produced. This allows the study of individual samples and their specificities.

By-gene report

TAPES can also calculate a gene burden score for each gene. This score helps determine which genes harbour the most potentially pathogenic variants in a cohort. It can be useful when searching for variants in diseases caused by single genes, which cannot be discovered using pathway analysis. The gene burden score is calculated by summing, over a gene's variants, the probability of pathogenicity of each variant multiplied by the number of individuals with that genotype in the cohort:

$$\text{Gene burden score} = \sum_{i=1}^{n} P_i \times N_i$$

This is calculated for each gene, where $P_i$ is the probability of pathogenicity of variant $i$ and $N_i$ is the number of samples affected by that variant. Variants with $P_i \le 0.80$ are excluded.

This measure is useful for detecting which genes in the cohort are particularly enriched in pathogenic and probably pathogenic variants (it is important to remember that this measure is a sum of probabilities). However, there are a few caveats. The measure might be affected by very long genes or genes frequently mutated in exomes (FLAGS [11]). In some cases, poorly mapped reads (for example, due to pseudo-autosomal regions on the X or Y chromosome) might inflate the number of samples apparently affected by a variant. TAPES provides an appropriate warning in all of these cases.
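As a minimal sketch of this calculation (the field names below are hypothetical, not TAPES' internal data structures):

```python
def gene_burden(variants, threshold=0.80):
    """Sum P(pathogenic) * number of affected samples per gene,
    excluding variants whose probability is at or below the threshold."""
    scores = {}
    for v in variants:  # each variant: gene symbol, probability, sample count
        if v['prob'] <= threshold:
            continue
        scores[v['gene']] = scores.get(v['gene'], 0.0) + v['prob'] * v['n_samples']
    return scores

variants = [
    {'gene': 'BRCA2', 'prob': 0.97, 'n_samples': 3},
    {'gene': 'BRCA2', 'prob': 0.85, 'n_samples': 1},
    {'gene': 'MLH1',  'prob': 0.55, 'n_samples': 5},  # excluded (P <= 0.80)
]
print({g: round(s, 2) for g, s in gene_burden(variants).items()})  # {'BRCA2': 3.76}
```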

Pathway analysis

TAPES can also perform a pathway analysis using the Enrichr [12] API. Only genes containing variants predicted to be pathogenic are kept in the gene list. The user can then use any Enrichr library to analyse the gene list; the default is GO_Biological_Process_2018. Pathway analysis is important for understanding the possibly disrupted mechanisms and the commonalities between variants found in a cohort.
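For illustration, such a query can be issued against the public Enrichr API roughly as follows (endpoints per the Enrichr API documentation; TAPES' internal calls may differ, and the gene list here is illustrative):

```python
import requests

ENRICHR = 'https://maayanlab.cloud/Enrichr'

def enrich(genes, library='GO_Biological_Process_2018'):
    """Upload a gene list to Enrichr and query one enrichment library."""
    # 1. Upload the gene list (multipart form, one gene per line)
    r = requests.post(f'{ENRICHR}/addList',
                      files={'list': (None, '\n'.join(genes)),
                             'description': (None, 'pathogenic genes')})
    r.raise_for_status()
    user_list_id = r.json()['userListId']
    # 2. Query enrichment results for the chosen library
    r = requests.get(f'{ENRICHR}/enrich',
                     params={'userListId': user_list_id,
                             'backgroundType': library})
    r.raise_for_status()
    return r.json()[library]  # entries: [rank, term, p-value, ...]

for rank, term, p_value, *rest in enrich(['BRCA2', 'MLH1', 'MSH2', 'TP53'])[:5]:
    print(term, p_value)
```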

Results

Variant classification

TAPES variant classification was benchmarked against similar tools, CharGer [6] and InterVar [7], using the pathogenicity assessments of the expert panel of Zhang et al., 2015 as reference [13] (see S2 Table for the full table). This dataset was also used to benchmark CharGer in its original publication. The 'probably pathogenic' and 'pathogenic' predictions were pooled into one 'pathogenic' group; similarly, the 'probably benign' and 'benign' predictions were pooled into one 'benign' group.

To assess the predictive power of each tool, we used Receiver Operating Characteristic (ROC) curves and calculated the area under the curve (AUC), as well as precision-recall curves and average precision (AP). We compared TAPES' ACMG and probability-of-pathogenicity predictions with the CharGer score and the InterVar ACMG prediction (see Fig 1).

Fig 1. ROC curves and precision-recall curves.


a) ROC curves of the tools for pathogenicity prediction, with AUC. b) ROC curves of the tools for benignity prediction, with AUC. c) Precision-recall curves of the tools for pathogenicity prediction. d) Precision-recall curves of the tools for benignity prediction. (Metrics used: TAPES proba: TAPES probability of pathogenicity prediction; TAPES ACMG: TAPES ACMG prediction; CharGer score: CharGer prediction of pathogenicity based on a custom score; InterVar ACMG: InterVar ACMG prediction.)

TAPES' probability of pathogenicity, using the Tavtigian et al. [8] model, outperformed both other tools on AUC and AP for the prediction of both pathogenic and benign variants.

AUC and AP show that TAPES' ACMG criteria assignment remains less precise than CharGer's custom score (due to the additional information CharGer needs to function properly) and is closer to InterVar. Using the probability of pathogenicity should therefore be the preferred way to identify pathogenic variants and reject benign variants. Based on the ROC curves, a threshold of 0.80–0.85 on the probability of pathogenicity maintained a high true positive rate (TPR) and a low false positive rate (FPR) for predicting pathogenic variants. Similarly, a threshold of 0.20–0.35 gave a high TPR and a low FPR for predicting benignity.

To validate these findings and choose the best probability thresholds for pathogenicity and benignity, we ran TAPES, InterVar and CharGer on a different dataset (see S3 Table), using 530 hand-curated variants from the ClinGen evidence repository (https://erepo.clinicalgenome.org/evrepo/) as ground truth. TAPES outperformed both InterVar and CharGer (see Fig 2). In addition to the precision of the predictions, TAPES also outperformed the other tools in the absolute number of variants correctly identified.

Fig 2. Validation dataset software comparisons.


a) Percentage of identical calls between the ClinGen expert panel decisions and each tool's predictions. Lenient thresholds are 0.80 for pathogenicity and 0.35 for benignity; strict thresholds are 0.85 for pathogenicity and 0.20 for benignity. b) Absolute numbers of variant predictions: pathogenic and benign variants correctly and incorrectly identified relative to the expert panel, for each tool. (Metrics used: TAPES probability lenient: TAPES probability of pathogenicity prediction with thresholds 0.35/0.80; TAPES probability strict: TAPES probability of pathogenicity prediction with thresholds 0.20/0.85; TAPES ACMG: TAPES ACMG prediction; CharGer: CharGer prediction of pathogenicity based on a custom score; InterVar ACMG: InterVar ACMG prediction.)

We recommend using the TAPES probability of pathogenicity prediction with either the lenient thresholds of 0.80 and 0.35 (for pathogenicity and benignity, respectively) or the stricter thresholds of 0.85 and 0.20.

Variant enrichment / PS4 benchmark

We compared our method of OR calculation to the standard method using controls (see Fig 3).

Fig 3. PS4 calculation with one-sided Fisher's exact test.


Comparison of TAPES' extrapolation of odds ratios with the standard method (top graph). Comparison of the p-values of both methods (bottom graph). The vertical dotted line represents the known frequency of the variant in the studied cohort (0.025). The horizontal green dotted lines represent the thresholds used to assign PS4 (OR = 20, i.e. ln(OR) = 2.9957, top; p-value < 0.01, bottom).

The OR obtained with TAPES' extrapolation is always smaller than with the standard calculation, making it more stringent. Similarly, the p-value of the Fisher's exact test rises faster with frequency than with the standard method. This way, only the most significantly enriched variants are assigned PS4, ensuring very few false positives.

Reporting options

TAPES' reporting options are powerful and easy to use. Using a mock input file with variants from Zhang et al. [13], together with simulated samples forming a cohort, the pathway analysis correctly identified DNA repair as the pathway containing the most probably pathogenic variants.

The by-gene report also identified BRCA2 as the gene with the highest gene burden.

See S1 File for all report templates.

Availability and future directions

TAPES is available on GitHub at https://github.com/a-xavier/tapes, under the MIT licence, which allows anyone to both freely download and modify the source code. Help can be found both in the manual (located in the main repository) and on the wiki (https://github.com/a-xavier/tapes/wiki). Examples of inputs can also be found in the main repository. Dependencies can be easily installed from the PyPI repository (pip). All builds are verified through Travis continuous integration on Linux, Windows and macOS. All benchmarks and examples shown in this manuscript were generated using the initial TAPES release, 0.1 (https://github.com/a-xavier/tapes/releases).


TAPES will continue to evolve with advances in databases such as ExAC, dbSNP or dbNSFP. As these databases update their data and formats, TAPES will evolve to become more precise and accurate. In addition, future directions include more statistical measures to detect significant variants in different cohort studies.

We aim to keep TAPES as simple and useful as possible, making it an ideal endpoint tool for analysing variants from small-scale cohorts.

Supporting information

S1 Table. ACMG criteria assignment in TAPES and definitions from the original Richards et al. 2015 article.

(XLSX)

S2 Table. Comparison of Prediction between different pathogenicity assessment software and the expert panel from Zhang J et al. 2015.

Comparison between TAPES ACMG and pathogenicity probability prediction, CharGer Prediction Score and InterVar ACMG Prediction.

(XLSX)

S3 Table. Comparison of Prediction between different pathogenicity assessment software and the expert panel from ClinGen evidence repository variants.

Comparison between TAPES ACMG and pathogenicity probability prediction, CharGer Prediction Score and InterVar ACMG Prediction.

(TXT)

S1 File. Example reports from TAPES sort option.

Generated using the data from: Zhang, J., et al. Germline Mutations in Predisposition Genes in Pediatric Cancer. N Engl J Med 2015;373(24):2336–2346. Using the command: python tapes.py sort -i ./input.csv -o ./Report/ --tab --by_gene --by_sample --enrichr --disease "autosomal dominant" --kegg "Pathways in cancer". This file gives examples for the main report, the by-gene report, the by-sample report, the enrichr report, the disease report and the kegg report.

(XLSX)

S2 File. Files used for TAPES benchmark and validation.

The Initial Benchmark folder contains all files used for the original benchmark. CharGer_and_Panel_Benchmark.xlsx: CharGer pathogenicity predictions and expert panel decisions from Zhang, J., et al. 2015, extracted from the original CharGer publication. Synthetic_VCF_for_Benchmark.vcf.vcf: synthetic VCF file created from the variant information in CharGer_and_Panel_Benchmark.xlsx. InterVar_Benchmark.txt: InterVar predictions of pathogenicity after analysis of the synthetic VCF. TAPES_Benchmark.xlsx: TAPES predictions of pathogenicity after analysis of the synthetic VCF. The results of all three tools are compiled in S2 Table. The Validation folder contains all files used for the validation of the pathogenicity thresholds and the comparison with other tools. TAPES_validation_synthetic.vcf: synthetic VCF created with data extracted from the ClinGen evidence repository (https://erepo.clinicalgenome.org/evrepo/). TAPES_validation.charger.txt: CharGer predictions of pathogenicity after analysis of the synthetic VCF. TAPES_Validation.intervar.txt: InterVar predictions of pathogenicity after analysis of the synthetic VCF. TAPES_Validation.tapes.txt: TAPES predictions of pathogenicity after analysis of the synthetic VCF. The results of all three tools are compiled in S3 Table.

(ZIP)

Acknowledgments

The authors would like to thank Mr. Sean Burnard for his helpful advice regarding this manuscript.

Data Availability

All source code can be found at: https://github.com/a-xavier/tapes. Documentation is also available through this repository and at: https://github.com/a-xavier/tapes/wiki.

Funding Statement

The Hunter Cancer Research Alliance (https://www.hcra.com.au/) funded Bente Talseth-Palmer and Alexandre Xavier. The University of Newcastle (https://www.newcastle.edu.au/) funded Alexandre Xavier. The Cancer Institute NSW (https://www.cancercouncil.com.au/) funded Bente Talseth-Palmer. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Bamshad MJ, Ng SB, Bigham AW, Tabor HK, Emond MJ, Nickerson DA, et al. Exome sequencing as a tool for Mendelian disease gene discovery. Nat Rev Genet. 2011;12(11):745–55. Epub 2011/09/29. 10.1038/nrg3031 . [DOI] [PubMed] [Google Scholar]
  • 2.McLaren W, Gil L, Hunt SE, Riat HS, Ritchie GR, Thormann A, et al. The Ensembl Variant Effect Predictor. Genome Biol. 2016;17(1):122. Epub 2016/06/09. 10.1186/s13059-016-0974-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Cingolani P, Platts A, Wang le L, Coon M, Nguyen T, Wang L, et al. A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w1118; iso-2; iso-3. Fly (Austin). 2012;6(2):80–92. Epub 2012/06/26. 10.4161/fly.19695 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Wang K, Li M, Hakonarson H. ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Res. 2010;38(16):e164. Epub 2010/07/06. 10.1093/nar/gkq603 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Richards S, Aziz N, Bale S, Bick D, Das S, Gastier-Foster J, et al. Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genet Med. 2015;17(5):405–24. Epub 2015/03/06. 10.1038/gim.2015.30 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Scott AD, Huang KL, Weerasinghe A, Mashl RJ, Gao Q, Martins Rodrigues F, et al. CharGer: Clinical Characterization of Germline Variants. Bioinformatics. 2018. Epub 2018/08/14. 10.1093/bioinformatics/bty649 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Li Q, Wang K. InterVar: Clinical Interpretation of Genetic Variants by the 2015 ACMG-AMP Guidelines. Am J Hum Genet. 2017;100(2):267–80. Epub 2017/01/31. 10.1016/j.ajhg.2017.01.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Tavtigian SV, Greenblatt MS, Harrison SM, Nussbaum RL, Prabhu SA, Boucher KM, et al. Modeling the ACMG/AMP variant classification guidelines as a Bayesian classification framework. Genet Med. 2018;20(9):1054–60. Epub 2018/01/05. 10.1038/gim.2017.210 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Lek M, Karczewski KJ, Minikel EV, Samocha KE, Banks E, Fennell T, et al. Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016;536(7616):285–91. Epub 2016/08/19. 10.1038/nature19057 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Kanehisa M, Furumichi M, Tanabe M, Sato Y, Morishima K. KEGG: new perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Res. 2017;45(D1):D353–D61. Epub 2016/12/03. 10.1093/nar/gkw1092 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Shyr C, Tarailo-Graovac M, Gottlieb M, Lee JJ, van Karnebeek C, Wasserman WW. FLAGS, frequently mutated genes in public exomes. BMC Med Genomics. 2014;7:64. Epub 2014/12/04. 10.1186/s12920-014-0064-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Kuleshov MV, Jones MR, Rouillard AD, Fernandez NF, Duan Q, Wang Z, et al. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Res. 2016;44(W1):W90–7. Epub 2016/05/05. 10.1093/nar/gkw377 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Zhang J, Walsh MF, Wu G, Edmonson MN, Gruber TA, Easton J, et al. Germline Mutations in Predisposition Genes in Pediatric Cancer. N Engl J Med. 2015;373(24):2336–46. Epub 2015/11/19. 10.1056/NEJMoa1508054 [DOI] [PMC free article] [PubMed] [Google Scholar]
PLoS Comput Biol. doi: 10.1371/journal.pcbi.1007453.r001

Decision Letter 0

Mihaela Pertea

2 Aug 2019

Dear Dr Xavier,

Thank you very much for submitting your manuscript 'TAPES: a tool for assessment and prioritisation in exome studies' for review by PLOS Computational Biology. Your manuscript has been fully evaluated by the PLOS Computational Biology editorial team and in this case also by independent peer reviewers. The reviewers appreciated the attention to an important problem, but raised some substantial concerns about the manuscript as it currently stands. While your manuscript cannot be accepted in its present form, we are willing to consider a revised version in which the issues raised by the reviewers have been adequately addressed. We cannot, of course, promise publication at that time.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

Your revisions should address the specific points made by each reviewer. Please return the revised version within the next 60 days. If you anticipate any delay in its return, we ask that you let us know the expected resubmission date by email at ploscompbiol@plos.org. Revised manuscripts received beyond 60 days may require evaluation and peer review similar to that applied to newly submitted manuscripts.

In addition, when you are ready to resubmit, please be prepared to provide the following:

(1) A detailed list of your responses to the review comments and the changes you have made in the manuscript. We require a file of this nature before your manuscript is passed back to the editors.

(2) A copy of your manuscript with the changes highlighted (encouraged). We encourage authors, if possible, to show clearly where changes have been made to their manuscript, e.g. by highlighting text.

(3) A striking still image to accompany your article (optional). If the image is judged to be suitable by the editors, it may be featured on our website and might be chosen as the issue image for that month. These square, high-quality images should be accompanied by a short caption. Please note as well that there should be no copyright restrictions on the use of the image, so that it can be published under the Open-Access license and be subject only to appropriate attribution.

Before you resubmit your manuscript, please consult our Submission Checklist to ensure your manuscript is formatted correctly for PLOS Computational Biology: http://www.ploscompbiol.org/static/checklist.action. Some key points to remember are:

- Figures uploaded separately as TIFF or EPS files (if you wish, your figures may remain in your main manuscript file in addition).

- Supporting Information uploaded as separate files, titled Dataset, Figure, Table, Text, Protocol, Audio, or Video.

- Funding information in the 'Financial Disclosure' box in the online system.

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see here

We are sorry that we cannot be more positive about your manuscript at this stage, but if you have any concerns or questions, please do not hesitate to contact us.

Sincerely,

Mihaela Pertea

Software Editor

PLOS Computational Biology


A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

[LINK]

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: SUMMARY:

'TAPES: a tool for assessment and prioritisation in exome studies' implements a novel and more precise method for assessing variant pathogenicity by introducing new modelling for the integration of ACMG criteria. They leverage this model along with publicly available variant population frequencies to provide more accurate predictions of variant pathogenicity. Additionally, this software provides a comprehensive list of both reporting and analysis options.

MAJOR CODE PROBLEMS:

- Code doesn't seem to have any tests or an automated way to run them: https://github.com/a-xavier/tapes. Please add tests (preferably using a testing framework such as PyTest) that minimally take advantage of your toy datasets and cover most of your functionality. Integration with a free and automated continuous integration environment like Travis would also be highly recommended. Once tests are in place, potentially using branches to provide a more stable development path may aid development.

- Toy example provided doesn't work natively or within a virtualenv:

- python3 tapes.py sort -i ./Example_Output/input.csv -o ./Toy_dataset/ --tab --by_gene --by_sample --enrichr --disease "autosomal dominant" --kegg "Pathways in cancer":

No acmg_db path given and no db_config.json found

Default is: /home/ubuntu/repositories/tapes/acmg_db

***TAPES: SORT***

2019-07-15 10:37:05.....Output type: FOLDER

Traceback (most recent call last):

File "tapes.py", line 309, in <module>

main()

File "tapes.py", line 164, in main

output_prefix = args.output.split('\\\\')[-2]

IndexError: list index out of range

MINOR CODE FEEDBACK

- I would put code that is not top-level in a `src` or `source` directory.

- While the Manual is fine as a PDF, long-term maintenance might be easier if it is in markdown. It can be further extrapolated using something like Read the Docs or other services.

MINOR EDITS:

- line 25: should be: "share the same phenotype" , missing "the"

- line 27: I think it reads better to say "Benchmarks showed that TAPES outperforms available tools"

- line 34: "Available software can predict" drop "'s"

- line 90: "individuals affected" and "number of individuals", individuals I believe should be plural in both cases

- line 96: "very rare variants" I believe variants should be plural

- line 134: "cohort are in the class are probably" , missing "are"

- line 137-139: This is not a complete sentence.

- Figure 1: Charger should be "CharGer" in your legend

MINOR QUESTIONS:

- In this model there are no controls, which is novel. I'm mildly curious if it can be shown that providing controls offers little or no statistical benefit over the publicly available variant frequencies.

- A minor discussion of why the CharGer scores were so similar to the TAPES probability model might be useful in the context of Figure 1.

Reviewer #2: The article "TAPES: a tool for assessment and prioritisation in exome studies" describes a new software tool to identify pathogenic and benign variants. The aim described is very promising. However, I think the clarity of both the paper and the documentation could be improved. I will first comment on my experience with the software and then on the paper. (This review is written in Markdown format, so it can be converted to HTML or another format to see the code sections.)

## Comments on the software package.

I cloned the repository from GitHub, and could install it. Then I ran into a few problems.

1. I found a bug in `t_func.py` on line 3197 that made the program crash

```
with gzip.open(os.path.join(acmg_db_path, 'repeat_dict.{}.gz'.format(assembly)), "r") as dj:
```

Correcting the line to the following solved the issue.

```
with gzip.open(os.path.join(acmg_db_path, 'repeat_dict.{}.gz'.format(assembly)), "rt") as dj:
```

2. It was not clear at installation that I should install annovar to be able to use tapes. I found this information later in the manual.

3. After installing annovar I needed to run `python3 tapes.py db -s -A annovar` and `python3 tapes.py db -b annovar` before I could annotate vcf files. These commands were only mentioned at the end of the manual.

4. I did not manage to find a way to start with a vcf file, annotate it and finally obtain ACMG classification. I think a tutorial and an example dataset (starting from vcf files) would be valuable additions.

5. I would suggest adding a workflow diagram both to the manual and to the paper to make it clear what kind of steps are needed and what the potential input and output files are.

6. I could not identify what the input file used for the analysis shown in the paper was, so I could not check whether it is reproducible.

7. The program does not always produce the expected file name, or it does create the expected file but does not log it correctly. I think the code needs to be checked more thoroughly.

8. Please create a release for the publication version of the package so people can know which version/status of the software was used for the publication (this can be done at https://github.com/a-xavier/tapes/releases).

9. A docker image is always a nice addition, to make sure that everything is specified as it should be and there are no problems due to differences in the software environment. It is also a good way to test how a software package can be installed in a new environment.

## Comments on the paper

I have found several typos and grammatical mistakes. I think the text should be checked more thoroughly for mistakes.

1. Abstract line 17: What does "downstream" variants refer to?

2. On line 20 multi-sample variant calling formats are mentioned in the abstract, but this is never mentioned further in the article. I would either remove it from the abstract or add an explanation to a later section.

3. Lines 25-26. The Authors mention that cohort samples can be analyzed even without a control sample set. My question is whether it is possible to make use of a control set, or is it only possible to use the standard option where the databases are checked?

4. Lines 26-27: "Finally, it can provide powerful filtering and reporting options to help researchers make sense of cohort studies." I would say "it provides powerful filtering and reporting options". I find "make sense" to be too informal for a scientific paper.

5. The Author summary contains several typos and some grammatical mistakes as well.

6. Lines 34-35: "but does not take into account the fact that the variants belongs in a cohort." I don't know what this sentence refers to exactly. Also, I have the same comment for line 52: "any cohort characteristic". I think there should be a clear discussion of what these are and how they are used or not used by the different software tools.

7. ANNOVAR interface and annotated variant file: lines 66-69. This section contains grammatical mistakes and is not clearly structured. I think it would be good to have a workflow chart to make clear how different inputs can be used. Starting from VCF, either VCF --(3rd party tools)--> annotated VCF, or VCF --(TAPES as a wrapper for ANNOVAR)--> annotated VCF. And how to proceed with the annotated VCF. I could not use the sort function on a VCF, only on CSV.

8. Line 69: "without having to specify the databases and annotations to use." It is true that when running one does not have to specify them, but at set-up the user has to specify which databases are to be used, according to my experience. Also, this comment gives the impression that the user has no control over which databases are being used.

9. Lines 76-80. I think it would be nice to have a description of how each criterion was implemented, at least as supplementary material. Then the Authors could say that most criteria were straightforward to implement (see suppl.), but the others were solved in the following way, and then explain them.

10. Line 95. assumptions "are" made.

11. Line 125. I would not use "most confidence", but "highest level of confidence".

12. Lines 128-129. I would suggest reformulating the first sentence to make it clearer.

13. Line 134. What does "in the class" mean?

14. Line 169. "sheer number of" I find this a bit too informal.

15. Figure 1 and the surrounding text are not well formulated. It is difficult to interpret the difference between "TAPES proba" and "TAPES ACMG". I only realized what the difference was once I opened the supplementary table and saw the last two columns. I think this could be improved.

16. Lines 166-167. How did the Authors arrive at the threshold values (0.35 and 0.8) for probability scores? Was it to maximize the score on the example/training dataset? I would suggest using more than one dataset for benchmarking and testing. It should be avoided to optimize a method on the benchmarking set.

17. Figure 1. According to the benchmarks, TAPES falls either between the two other tools or performs worse than both if we use the ACMG results, according to the ROC and precision-recall analysis, while this is not discussed in the text. Also, would the Authors suggest using the probability instead of ACMG then?

18. Figure 3. I prefer graphs with two axes. Having two y-axes makes it difficult to interpret. The two graphs could be shown below each other (A and B) with the same x-axis, but separate y-axes for the two plots.

19. Lines 188-194. I think this section could be improved by adding example output and adding context on how it compares to a workflow without TAPES, to fully show the benefits of the method. I suggest separating real data and mock-up (made-up) data examples.

20. TAPES is able to assign variants to ACMG categories and then can do further sorting and reporting. Other software tools can also use ACMG categories, as mentioned in the introduction. Can TAPES use the output of those tools and then do sorting and reporting?

21. On which platforms was TAPES tested?

22. Please add a release for TAPES that is referred to in the article. Also add version numbers for the software used (or commit tags from GitHub).

## Summary

Overall, I find the tool promising. However, I do think both the software package and the article require significant revision. I do believe that TAPES can become a valuable tool.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: No: I could not identify the input data used for the benchmark.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nathan Dunn

Reviewer #2: No

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1007453.r003

Decision Letter 1

Mihaela Pertea

7 Sep 2019

Dear Dr Xavier,

Thank you very much for submitting your manuscript 'TAPES: a tool for assessment and prioritisation in exome studies' for review by PLOS Computational Biology. Your manuscript has been fully evaluated by the PLOS Computational Biology editorial team and in this case also by independent peer reviewers. The reviewers appreciated the attention to an important problem, but raised some substantial concerns about the manuscript as it currently stands. At this time we are not willing to consider a revised manuscript unless you can provide the following information, in addition to adequately answering the reviewers' concerns:

- the input file for the benchmark

- the reference set for the benchmark

- how the thresholds were calculated.

We cannot, of course, promise publication, even if you decide to send us a revised version.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

Your revisions should address the specific points made by each reviewer. Please return the revised version within the next 60 days. If you anticipate any delay in its return, we ask that you let us know the expected resubmission date by email at ploscompbiol@plos.org. Revised manuscripts received beyond 60 days may require evaluation and peer review similar to that applied to newly submitted manuscripts.

In addition, when you are ready to resubmit, please be prepared to provide the following:

(1) A detailed list of your responses to the review comments and the changes you have made in the manuscript. We require a file of this nature before your manuscript is passed back to the editors.

(2) A copy of your manuscript with the changes highlighted (encouraged). We encourage authors, if possible, to show clearly where changes have been made to their manuscript, e.g. by highlighting text.

(3) A striking still image to accompany your article (optional). If the image is judged to be suitable by the editors, it may be featured on our website and might be chosen as the issue image for that month. These square, high-quality images should be accompanied by a short caption. Please note as well that there should be no copyright restrictions on the use of the image, so that it can be published under the Open-Access license and be subject only to appropriate attribution.

Before you resubmit your manuscript, please consult our Submission Checklist to ensure your manuscript is formatted correctly for PLOS Computational Biology: http://www.ploscompbiol.org/static/checklist.action. Some key points to remember are:

- Figures uploaded separately as TIFF or EPS files (if you wish, your figures may remain in your main manuscript file in addition).

- Supporting Information uploaded as separate files, titled Dataset, Figure, Table, Text, Protocol, Audio, or Video.

- Funding information in the 'Financial Disclosure' box in the online system.

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see here

We are sorry that we cannot be more positive about your manuscript at this stage, but if you have any concerns or questions, please do not hesitate to contact us.

Sincerely,

Mihaela Pertea

Software Editor

PLOS Computational Biology


A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

[LINK]

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: Thank you for addressing my concerns.

Reviewer #2: In my opinion the manuscript is clearer now thanks to the corrections. I still think that adding a flowchart on input, output and processes could be very helpful to easily understand the pipeline and the possibilities. Below I have included a suggested workflow chart.

Most importantly, please indicate clearly the input file and the reference set for the benchmark, and how the thresholds were calculated. Otherwise, readers cannot evaluate the validity, it would be only faith or distrust.

In addition, I recommend further improving the git repo, because potential users will give up easily if it is not clear or there are too many mistakes. Adding a tutorial with example input (e.g. the benchmark would be an excellent example), commands to run and the interpretation would help users test that everything is installed correctly, and help them understand how the program works and what the steps are.

Example flowchart, based on the manual, in mermaid (it can be drawn using the online editor: https://mermaidjs.github.io/mermaid-live-editor/):

```
graph TD
VCF -->an{annotate}
VCF -->an2
subgraph ANNOVAR
an2{annotate: table_annovar.pl}
end
an2 -->VCF2[VCF: annotated variants]
an2 -->TSV[TSV: annotated variants]
VCF2-->an
TSV -->|?|sort
subgraph TAPES
subgraph wrapped ANNOVAR
an
end
an --> Annot[CSV: annotated variants]
Annot -->sort{sort}
sort -->Sorted[CSV: sorted variants]
Sorted -->X{analyse}
end
X -->|by_sample| S[By sample report]
X -->|by_gene| G[By gene report]
X -->|enrich| E[EnrichR report]
X -->|list| L[Kegg, List and Disease report]
```

# Questions based on the response from the Authors

In one of the answers the Authors mention that for the benchmarking they used the dataset from the CharGer publication. Please include this in the manuscript as well, and also add it to the repository. The same answer ends with the comment that a sentence has been added at line 176. I think the line numbering has changed, so I could not find the referenced sentence.

The Authors claim that ANNOVAR wrapping is totally optional, although I could not run any of the commands without setting up the database by first installing ANNOVAR.

I still don't understand how the threshold values 0.35 and 0.8 were chosen for the probability method which is the recommended method. My assumption is that based on the benchmark set the Authors identified which cutoffs would yield the maximum number of correctly identified variants. If this is true then an independent data set is needed to test how valid the calls are, because the benchmark and training set should be independent from each other. If this assumption is false, please include the method used for deciding the threshold values.

# Textual comments

Line 35: "does not take into account the abundance of a variants in a cohort" should be "do not take into account the abundance of variants in a cohort"

Line 66-70. I could not perform the described steps without installing ANNOVAR and then setting up the database. Either include clearly in the manual how this can be done or modify this paragraph.

Line 116 should read "TAPES provides an array of different useful reports."

Line 119. "on the command line" not "in"

Line 121. "a pathway", not plural

Line 122. "users do research" could be "run searches"

Line 145. TAPES will or does?

Line 158. Is the table only used as reference or also as input for the analysis? Please add the input.

Line 167-168. How do the ROC curves suggest the threshold values?

Figure 3: Why is the old TAPES curve used instead of the new one that is already in the git repo?

# Code review

The code still contains bugs that cause it to crash. Although the git repo suggests using `python tapes.py`, tapes.py is written in python3, and since the default python on many Linux systems is python2, the program crashes.

Many of the example commands contain an incorrect hyphen character that results in an error when copy-pasting them to the command line.

Attempting to run the "Quick Start" section

`python tapes.py db -s -A /path/to/annovar/` -> `python3 tapes.py db -s -A ~/temp/tapes/annovar/`

Worked fine with an absolute path, but fails with a relative path with a non-informative error.

`python3 tapes.py db -s -A ../tapes/annovar/` Gives the following output:

```

No acmg_db path given and no db_config.json found

Default is: /home/user/temp/tapes-0.1/acmg_db

***TAPES: SEE DATABASE***

2019-09-04 13:45:59.....Fetching ANNOVAR Alldb file

NOTICE: Web-based checking to see whether ANNOVAR new version is available ... Done

NOTICE: Downloading annotation database http://www.openbioinformatics.org/annovar/download/hg19_avdblist.txt.gz ... OK

NOTICE: Uncompressing downloaded files

NOTICE: Finished downloading annotation files for hg19 build version, with files saved at the '.' directory

NOTICE: Web-based checking to see whether ANNOVAR new version is available ... Done

NOTICE: Downloading annotation database http://www.openbioinformatics.org/annovar/download/hg38_avdblist.txt.gz ... OK

NOTICE: Uncompressing downloaded files

NOTICE: Finished downloading annotation files for hg38 build version, with files saved at the '.' directory

Traceback (most recent call last):

File "tapes.py", line 406, in <module>

tf.check_online_annovar_dbs(annovar_path)

File "/home/user/temp/tapes-0.1/src/t_func.py", line 883, in check_online_annovar_dbs

with open(outfile_hg19, 'r') as file:

FileNotFoundError: [Errno 2] No such file or directory: '../tapes/annovar/hg19_avdblist.txt'

```

`python tapes.py db -b --acmg --assembly hg19` -> `python3 tapes.py db -b --acmg --assembly hg19` Fails

```

No acmg_db path given and no db_config.json found

Default is: /home/user/temp/tapes-0.1/acmg_db

***TAPES: DOWNLOAD DATABASE***

No annovar path given and no db_config.json found

Traceback (most recent call last):

File "tapes.py", line 358, in <module>

tf.build_annovar_db(annovar_path, args.assembly, args.acmg)

NameError: name 'annovar_path' is not defined

```

`python3 tapes.py annotate -i toy_dataset/toy.vcf -o test/output.vcf --acmg –a hg19` does not run and prints out the help page plus the following warning:

```

tapes: error: unrecognized arguments: –a hg19

```

After changing the hyphen to the correct character ` python3 tapes.py annotate -i toy_dataset/toy.vcf -o test/output.vcf --acmg -a hg19`

```

No acmg_db path given and no db_config.json found

Default is: /home/user/temp/tapes-0.1/acmg_db

***TAPES: ANNOTATE***

No annovar path given and no db_config.json found

Traceback (most recent call last):

File "tapes.py", line 384, in <module>

tf.process_annotate_vcf(args.input, args.output, annovar_path, args.assembly, args.ref_anno, args.acmg)

NameError: name 'annovar_path' is not defined

```

`python3 tapes.py sort -i toy_dataset/toy_annovar_multi.vcf -o test-sort/ --tab` works and creates a folder with three plots (png) and `test-sort.txt`, which is a tab separated file

`python3 tapes.py analyse -i test-sort/test-sort.txt -o test-report/report.txt --single_option` Fails with the following error:

```
tapes: error: unrecognized arguments: --single_option
```

`python3 tapes.py analyse -i test-sort/test-sort.txt -o test-report/report.txt` Runs without error, but creates no output:

```
No acmg_db path given and no db_config.json found
Default is: /home/user/temp/tapes-0.1/acmg_db
***TAPES: RE-ANALYSE***
2019-09-04 14:07:10.....48 samples found
2019-09-04 14:07:10.....Output type: TXT/TSV + XLSX
2019-09-04 14:07:10.....Done
```

However, `python3 tapes.py sort -i toy_dataset/toy_annovar_multi.vcf -o test-full/ --tab --by_gene --by_sample --enrichr --list "MLH1 MSH6 MSH2" --disease "autosomal dominant" --kegg "pathways in cancer"` does work.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: No: Input or reference set for the benchmarking, or their clear description

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nathan Dunn

Reviewer #2: No

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1007453.r005

Decision Letter 2

Mihaela Pertea

1 Oct 2019

Dear Dr Xavier,

We are pleased to inform you that your manuscript 'TAPES: a tool for assessment and prioritisation in exome studies' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. Please be aware that it may take several days for you to receive this email; during this time no action is required by you. Once you have received these formatting requests, please note that your manuscript will not be scheduled for publication until you have made the required changes.

In the meantime, please log into Editorial Manager at https://www.editorialmanager.com/pcompbiol/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production and billing process.

One of the goals of PLOS is to make science accessible to educators and the public. PLOS staff issue occasional press releases and make early versions of PLOS Computational Biology articles available to science writers and journalists. PLOS staff also collaborate with Communication and Public Information Offices and would be happy to work with the relevant people at your institution or funding agency. If your institution or funding agency is interested in promoting your findings, please ask them to coordinate their releases with PLOS (contact ploscompbiol@plos.org).

Thank you again for supporting Open Access publishing. We look forward to publishing your paper in PLOS Computational Biology.

Sincerely,

Mihaela Pertea

Software Editor

PLOS Computational Biology


Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #2: Dear Authors,

Thank you for addressing all my comments. I think the manuscript has improved significantly since the submission. I find the new comparison results and figures very impressive and convincing.

I have two minor comments:

Is the release number still 0.1 as stated in the manuscript or is it 0.1.1? I would suggest the improved version. Otherwise, potential users might start with 0.1 and be discouraged by the bugs and give up.

I would include the version of Figure 3 that best represents the version of the software that is used for the latest version of the manuscript and github.

I leave both these comments up to the Authors' consideration when working on the final proof of the paper.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1007453.r006

Acceptance letter

Mihaela Pertea

9 Oct 2019

PCOMPBIOL-D-19-01091R2

TAPES: a tool for assessment and prioritisation in exome studies

Dear Dr Xavier,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Matt Lyles

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Associated Data


    Attachment

    Submitted filename: PlosCompBiol Response_BTP_RJS.docx

    Attachment

    Submitted filename: PlosCompBiol Response_2.docx


