Table 1. Results of the competency-question evaluation using the MS Ontology compared with manual searches on PubMed.
| Question No. | 1 | 2 | 3 |
|---|---|---|---|
| No. of documents retrieved by MS Ontology | 26 | 9 | 27 |
| • Validated documents (MS Ontology) | 26 | 9 | 26 |
| Specificity of MS Ontology | 100% | 100% | 96% |
| No. of documents retrieved by PubMed advanced search | 0 | 3 | 1 |
| • Validated documents (PubMed advanced search) | 0 | 2 | 1 |
| No. of documents retrieved by expert search in PubMed | 18 | 11 | 14 |
| • Validated documents (expert search) | 15 | 9 | 12 |
| Sensitivity of MS Ontology | 100% | 100% | 100% |
Results are shown as the number of all retrieved documents and the number of “validated” documents, i.e., those confirmed by manual expert review to cover the topics of the competency questions. The gold standard for calculating sensitivity was the expert search in PubMed using keywords (combined with AND) followed by manual review of the abstracts. To calculate the sensitivity and specificity of MS Ontology-based searches, true positives are the “validated documents” retrieved by an MS Ontology-based search; false positives are the documents retrieved by an MS Ontology-based search that were not considered relevant in the expert review; and false negatives are the documents retrieved by the expert search in PubMed that were not retrieved by the MS Ontology. See S1 Methods for details of the searches.
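Written out under the definitions above (with TP, FP, and FN as just defined), the reported measures correspond to:

$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TP}{TP + FP}$$

For example, for Question 3 the MS Ontology retrieved 27 documents, of which 26 were validated, giving a specificity of 26/27 ≈ 96%; no validated expert-search document was missed (FN = 0), giving a sensitivity of 100%.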