Published in final edited form as: Hum Mutat. 2017 May 2;38(9):1085–1091. doi: 10.1002/humu.23199

Figure 1.

Different schemes used for method performance assessment. CAGI is a challenge; challenges are important for testing ideas and exploring what is possible with current approaches. Initial performance assessments are typically those included in the original publications describing prediction methods. They often suffer from small datasets and may be selective in which performance measures they report, although some approach the thoroughness of systematic performance assessments. The availability of benchmark datasets, in particular, has improved the quality of method comparisons. It is essential that cases used for training the methods are not used for testing their performance, and that all the necessary performance measures and details are reported. For a meaningful comparison, the assessment should be extensive and include related methods, especially those with good performance. Challenges provide estimates of method performance, while systematic comparisons facilitate ranking. Challenges allow more freedom for experimenting with prediction methods, whereas mature methods require systematic optimization.
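
As an illustration only (the function and counts below are hypothetical, not taken from the article), the following Python sketch shows the kind of contingency-table measures that a complete performance report of a binary pathogenic/benign variant predictor would typically include, computed on a benchmark test set that was not used for training.

```python
import math

def performance_measures(tp, tn, fp, fn):
    """Compute commonly reported measures for a binary variant predictor
    from confusion-matrix counts on an independent test set."""
    sensitivity = tp / (tp + fn)                       # true positive rate
    specificity = tn / (tn + fp)                       # true negative rate
    ppv = tp / (tp + fp)                               # positive predictive value
    npv = tn / (tn + fn)                               # negative predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = ((tp * tn) - (fp * fn)) / denom if denom else 0.0  # Matthews correlation
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "NPV": npv, "accuracy": accuracy, "MCC": mcc}

# Hypothetical counts from a benchmark test set kept separate from training data.
print(performance_measures(tp=80, tn=90, fp=10, fn=20))
```

Reporting the full set of measures, rather than a single favorable one, is what distinguishes a systematic assessment from a selective one.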