J Am Med Inform Assoc. 2016 Aug 7;24(2):460–468. doi: 10.1093/jamia/ocw104

Table 2. Study quality

| Author, year | Representativeness (sampling) | Selection of comparison group | Comparability of cohorts^a | Outcome follow-up^b |
|---|---|---|---|---|
| **No comparison** | | | | |
| Reichert (2004)^26 | 0 | N/A | N/A | 0 |
| Del Fiol et al. (2006)^29 | 0 | N/A | N/A | N/A |
| Oppenheim et al. (2009)^36 | 0 | N/A | N/A | N/A |
| **Comparison with non-IB resource** | | | | |
| Cimino et al. (2003)^25 | 0 | N/A^c | N/A | N/A |
| Rosenbloom et al. (2005)^27 | 1 | 1 | 1 (RCT) | 1 |
| Chen et al. (2006)^28 | 0 | N/A^c | N/A | N/A |
| Cimino (2007)^32 | 0 | N/A^c | N/A | N/A |
| Hunt et al. (2013)^39 | 0 | N/A^c | N/A | N/A |
| Hyun et al. (2013)^40 | 0 | N/A^c | N/A | N/A |
| Borbolla et al. (2014)^41 | 0 | N/A^c | N/A | N/A |
| **Comparison with alternate IB implementation** | | | | |
| Maviglia et al. (2006)^30 | 0 | 1 | 1 (RCT) | 0 |
| Cimino et al. (2007)^31 | 0 | 1 | 0 | 1 |
| Cimino and Borovtsov (2008)^33 | 0 | 0 | 0 | 1 |
| Del Fiol et al. (2008)^34 | 0 | 1 | 1 (RCT) | 1 |
| Cimino (2009)^35 | 0 | N/A^c | N/A | N/A |
| Del Fiol et al. (2010)^37 | 0 | 1 | 0 | N/A |
| Cimino et al. (2013)^38 | 0 | 0 | 0 | N/A |

N/A = not applicable (no separate comparison group or no follow-up period).

^a Comparability of cohorts could be demonstrated through randomized group assignment or by statistical adjustment for baseline characteristics; no studies reported the latter approach.

^b We also appraised the quality of outcome assessment. All studies reported log file data on infobutton usage, which we coded as an objective, blinded measurement. Several studies also reported self-report measures of impact on patient care and satisfaction (see Table 1).

^c Compared usage of different resources among the same group of potential users, thus providing comparison results without a separate comparison group.