
Table 1. The Evaluation Instrument

| Task group | Evaluation question | Response type/scale |
|---|---|---|
| Task 1: Evaluation of the Website Hosting the System | Q.1 Based on the information provided on the website, are you able to find out the designed objectives of the system? You may write below what you learned, or you may write, "I couldn't figure it out!" | Open-ended§ |
| | Q.2 Is it easy to locate this information (i.e., the designed objectives of the system)? | |
| | Q.3 Does the website provide an online demo of the system? If so, is it easy to find the demo? | |
| | Q.4 If the website provides an online demo, does the demo help you understand the objectives of the system? | Yes (2); Somewhat yes (1); Somewhat no (0); No (−1) |
| Task 2: Installation | Q.5 Is it easy to find the instructions on how to install the system (henceforth referred to as the "installation guide")? The installation guide could be a webpage, a document (.pdf or .doc), or a readme file in the installation directory. | |
| | Q.6 Is it easy to follow the installation instructions to install the prerequisites? | The last response option, "Could not figure it out," was replaced with "This system does not require prerequisites other than Java or Python" for this question. Systems that do not require nonstandard prerequisites received a usability score of 2. |
| | Q.7 Is it easy to follow the installation instructions to install the tool itself? | |
| Task 3: Use | Q.8 Is it easy to find the instructions on how to use the system (henceforth referred to as the "user manual")? The user manual could be a webpage, a document (.pdf or .doc), or a readme file in the installation directory. | |
| | Q.9 Is it easy to follow the instructions in the user manual to use the system to process the medical documents provided? | |
| | Q.10 Is it easy to interpret the results generated by the system? | |
| Overall impression | Q.11 Do you have any suggestions on what the authors of the system can do to make it more usable? | Open-ended |

The response scales are as follows unless otherwise specified: Effortless or nearly effortless; Somewhat easy but there are challenges; Somewhat difficult; Extremely difficult, nearly impossible; Could not figure it out (operationalized as "I was not able to locate it" or "I was not able to get it to work," depending on the context).

§ Open-ended responses were coded as follows: if the evaluator was able to articulate the designed objectives of the system with no complaints, the system received a score of 2; if the evaluator expressed explicit concerns regarding their ability to understand the objectives, the system received a score of 1 or 0, depending on the severity of the issue(s) reported; if the evaluator failed to articulate the designed objectives of the system, the system received a score of −1.
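
For readers tabulating scores, the coding rules above lend themselves to a compact computational form. The following is a minimal Python sketch, not taken from the paper: it encodes the Q.4 response scale from the table and the open-ended coding rules from the § footnote. All names (Q4_SCALE, score_open_ended, the "mild"/"severe" labels) are hypothetical, and the two-level severity split is an assumption, since the footnote says only that the score is 1 or 0 depending on the severity of the issue(s) reported.

```python
# Minimal sketch, not from the paper: hypothetical encodings of the
# scoring rules described in Table 1's footnotes.

# Q.4 response scale, exactly as given in the table.
Q4_SCALE = {
    "Yes": 2,
    "Somewhat yes": 1,
    "Somewhat no": 0,
    "No": -1,
}

def score_open_ended(articulated: bool, concerns: str = "none") -> int:
    """Code an open-ended Q.1 response per the section-mark footnote.

    articulated -- evaluator could state the system's designed objectives.
    concerns    -- "none", "mild", or "severe"; the two-level severity
                   split is an assumption, since the footnote says only
                   that the score is 1 or 0 depending on severity.
    """
    if not articulated:
        return -1   # failed to articulate the designed objectives
    if concerns == "none":
        return 2    # articulated the objectives with no complaints
    return 1 if concerns == "mild" else 0  # explicit concerns reported
```

For example, under these assumptions, score_open_ended(True, "mild") yields 1, matching the footnote's middle case.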