F1000Research. 2016 Aug 16;5:ELIXIR-2000. [Version 1] doi: 10.12688/f1000research.9206.1

Table 1. Prioritised top 10 metrics for assessment of life science software development good practice.

Each identified metric was scored for importance (to sustainability) and implementability. Importance scores range from 1 (little) to 5 (very much) and implementability scores from 1 (difficult) to 5 (easy). Average values are shown for importance (a) and implementability (b). A priority score (c) is calculated as the sum of (a) and (b). The priority scores were then discussed, and a final Manual Priority Evaluation (d) was agreed, reflecting the prioritisation judgement of the Working Group.

| Top 10 Ranked Metrics | Avg Importance (a) | Avg Implementability (b) | Priority Score (c) | Manual Priority Evaluation (d) |
|---|---|---|---|---|
| Is version control used? | 5 | 4.6 | 9.6 | 1 |
| Is the software discoverable? | 4.1 | 5 | 9.1 | 2 |
| Is an automated build system used? | 4.6 | 3.9 | 8.4 | 3 |
| Are test data available? | 3.8 | 4 | 7.8 | 4 |
| Does software contain parts that reimplement existing technology? | 4.4 | 2.9 | 7.3 | 5 |
| Is the software compliant with community standards? | 4.1 | 2.5 | 6.6 | 6 |
| Are code reviews performed? | 3.4 | 2.8 | 6.1 | 7 |
| Is automated testing performed? | 3.5 | 3.1 | 6.6 | 8 |
| Is the code documented? | 2.4 | 4.3 | 6.6 | 9 |
| How high is the code complexity? | 3.5 | 2.9 | 6.4 | 10 |
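To make the scoring rule in the caption concrete, the short sketch below recomputes the priority score (c) as the sum of average importance (a) and average implementability (b) and ranks the metrics by it. This is an illustrative calculation only, not part of the published methodology; the metric names and values are copied from Table 1, and minor differences between the recomputed sums and the published values presumably reflect rounding of the underlying averages.

```python
# Illustrative sketch (not from the paper): recompute the priority score
# c = a + b for each metric and rank the metrics by descending c.
metrics = [
    # (metric, avg importance a, avg implementability b) -- values from Table 1
    ("Is version control used?", 5.0, 4.6),
    ("Is the software discoverable?", 4.1, 5.0),
    ("Is an automated build system used?", 4.6, 3.9),
    ("Are test data available?", 3.8, 4.0),
    ("Does software contain parts that reimplement existing technology?", 4.4, 2.9),
    ("Is the software compliant with community standards?", 4.1, 2.5),
    ("Are code reviews performed?", 3.4, 2.8),
    ("Is automated testing performed?", 3.5, 3.1),
    ("Is the code documented?", 2.4, 4.3),
    ("How high is the code complexity?", 3.5, 2.9),
]

# Priority score (c) is the sum of the two averages; sort by it, highest first.
ranked = sorted(
    ((name, a, b, a + b) for name, a, b in metrics),
    key=lambda row: row[3],
    reverse=True,
)

for rank, (name, a, b, c) in enumerate(ranked, start=1):
    print(f"{rank:2d}. {name}  (a={a}, b={b}, c={c:.1f})")
```

Note that the published table additionally applies the Manual Priority Evaluation (d), so the final ordering reflects the Working Group's judgement rather than the raw sums alone.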