| Evaluation technique | | |
|---|---|---|
| Evidence-based Evaluation: | | |
| Replicating existing medical discoveries | √ | – |
| Time-sliced evaluation | – | √ |
| Manual literature search | – | √ |
| Intersection evaluation | – | √ |
| Derive reference sets from literature | – | √ |
| Compare results with curated databases | √ | – |
| Compare results using other resources | √ | – |
| Comparison with baselines: | | |
| Comparison with existing LBD tools | √ | – |
| Comparison with previous LBD techniques | – | √ |
| Comparison with previous LBD work | – | √ |
| Comparison with other state-of-the-art methods | – | √ |
| Expert-oriented Evaluation: | | |
| Expert-based evaluation | – | √ |
| Qualitative analysis of several selected results | – | √ |
| User-oriented Evaluation: | | |
| User-based evaluation | – | √ |
| User-experience evaluation | – | √ |
| Proven by Experiments: | | |
| Clinical tests (or other relevant experiments) | – | √ |
| Scalability Analysis: | | |
| Processing time analysis | – | √ |
| Storage analysis | – | √ |
| Evaluation of Ranking Technique: | | |
| Evaluate ranking positions | – | √ |
| Evaluate ranking scores | – | √ |
| Evaluation of Output Quality: | | |
| Evaluate the interestingness of results | – | √ |
| Evaluation of quality and coherence of stories | – | √ |
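Two of the techniques above, time-sliced evaluation and ranking evaluation, can be illustrated concretely. The sketch below is a minimal, hypothetical example (the term pairs and the helper functions are illustrative, not from any cited system): the literature is split at a cut-off year, connections that appear only after the cut-off form the gold set, and the system's ranked predictions are scored with precision@k and reciprocal rank.

```python
# Hypothetical sketch: time-sliced evaluation of a ranked LBD output.
# Term pairs and function names are illustrative assumptions.

def precision_at_k(ranked, gold, k):
    """Fraction of the top-k ranked predictions found in the gold set."""
    return sum(1 for item in ranked[:k] if item in gold) / k

def reciprocal_rank(ranked, gold):
    """1/rank of the first gold prediction in the ranking, 0 if absent."""
    for rank, item in enumerate(ranked, start=1):
        if item in gold:
            return 1 / rank
    return 0.0

# Time slicing: pairs already co-mentioned before the cut-off year are
# excluded; only genuinely new post-cut-off connections count as gold.
pre_cutoff = {("fish oil", "blood viscosity"), ("raynaud", "blood viscosity")}
post_cutoff = {("fish oil", "raynaud"), ("magnesium", "migraine")}
gold = post_cutoff - pre_cutoff

# Ranked output of the hypothetical LBD system, best candidate first.
predictions = [("fish oil", "raynaud"), ("fish oil", "aspirin"),
               ("magnesium", "migraine")]

print(precision_at_k(predictions, gold, 2))  # 0.5
print(reciprocal_rank(predictions, gold))    # 1.0
```

Evaluating ranking positions in this way (rather than only checking set overlap) rewards systems that place true discoveries near the top of the candidate list.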