
VariBench: a benchmark database for variations



1. Variation datasets affecting protein tolerance

ToolScores datasets

This archive contains filtered versions of five publicly available benchmark datasets for pathogenicity prediction, taken from the study referenced below.

A description of the columns used in these datasets is available here.
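
As a quick orientation, here is a minimal sketch of loading one of these files and inspecting its columns; the filename and the tab-separated layout are assumptions for illustration, not confirmed by this page:

import pandas as pd

# Hypothetical filename and separator; the actual archive layout may differ.
df = pd.read_csv("toolscores_dataset.tsv", sep="\t")

# List the score and annotation columns the file actually provides.
print(df.columns.tolist())
print(df.head())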

Reference:
Grimm DG, Azencott C-A, Aicheler F, Gieraths U, MacArthur DG, Samocha KE, Cooper DN, Stenson PD, Daly MJ, Smoller JW, Duncan LE, Borgwardt KM (2015).
The evaluation of tools used to predict the impact of missense variants is hindered by two types of circularity.
Hum Mutat. doi:10.1002/humu.22768

Abstract:
Prioritizing missense variants for further experimental investigation is a key challenge in current sequencing studies for exploring complex and Mendelian diseases. A large number of in silico tools have been employed for the task of pathogenicity prediction, including PolyPhen-2, SIFT, FatHMM, MutationTaster-2, MutationAssessor, CADD, LRT, phyloP and GERP++, as well as optimized methods of combining tool scores, such as Condel and Logit. Due to the wealth of these methods, an important practical question to answer is which of these tools generalize best, that is, correctly predict the pathogenic character of new variants. We here demonstrate in a study of ten tools on five datasets that such a comparative evaluation of these tools is hindered by two types of circularity: they arise due to (1) the same variants or (2) different variants from the same protein occurring both in the datasets used for training and for evaluation of these tools, which may lead to overly optimistic results. We show that comparative evaluations of predictors that do not address these types of circularity may erroneously conclude that circularity-confounded tools are most accurate among all tools, and may even outperform optimized combinations of tools.
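
To make the two types of circularity concrete, below is a minimal sketch of how an evaluation set could be filtered against a training set before benchmarking; the column names (variant_id, protein) are hypothetical and not taken from these datasets:

import pandas as pd

# Toy training and evaluation sets with hypothetical columns.
train = pd.DataFrame({
    "variant_id": ["P1:V1", "P1:V2", "P2:V1", "P3:V1"],
    "protein":    ["P1",    "P1",    "P2",    "P3"],
    "label":      [1, 0, 1, 0],
})
test = pd.DataFrame({
    "variant_id": ["P1:V1", "P2:V2", "P4:V1"],
    "protein":    ["P1",    "P2",    "P4"],
    "label":      [1, 1, 0],
})

# Type 1 circularity: the same variant occurs in both training and
# evaluation data. Drop test variants that also appear in the training set.
test_t1 = test[~test["variant_id"].isin(train["variant_id"])]

# Type 2 circularity: different variants from the same protein occur in
# both sets. Drop test variants whose protein was seen during training.
test_t2 = test_t1[~test_t1["protein"].isin(train["protein"])]

print(test_t2)  # only variants from proteins unseen in training remain

A protein-level split of this kind (for example, scikit-learn's GroupKFold with the protein identifier as the group) is one way to guard against type 2 circularity during cross-validation.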