Cochrane Database Syst Rev. 2015 Mar 14;2015(3):CD004749. doi: 10.1002/14651858.CD004749.pub3

Haynes 2006.

Methods Study design: CRCT
Data collection: individual user logins were tracked by the online system
Unit of analysis issue: clusters were assigned at the community (group) level, but primary outcome data were reported for individual participants within each cluster. The authors made this decision based on an analysis of baseline data for the primary outcome measure (number of logins), in which they calculated an intracluster correlation coefficient of ‐0.02 (95% confidence interval ‐0.16 to 0.12) and thus determined that the variation between communities was not important. We agreed that for this outcome, reporting individual login data rather than cumulative logins per cluster was acceptable and did not misrepresent the effect of the intervention
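The intracluster correlation coefficient the authors relied on can be estimated from a one-way ANOVA decomposition of the outcome into between-cluster and within-cluster variance; a negative estimate (as in the trial's reported ‐0.02) arises when between-cluster variance is smaller than expected by chance. The sketch below is illustrative only and uses made-up login counts, not the trial's data:

```python
def icc_oneway(clusters):
    """One-way ANOVA estimate of the intracluster correlation coefficient,
    ICC(1), from a list of clusters, each a list of per-physician values.
    The estimate can be negative when between-cluster variance is smaller
    than chance would predict (as with the trial's reported ICC of -0.02)."""
    all_vals = [v for c in clusters for v in c]
    n = len(all_vals)          # total participants
    k = len(clusters)          # number of clusters
    grand_mean = sum(all_vals) / n

    # Between-cluster and within-cluster sums of squares
    ss_between = sum(len(c) * (sum(c) / len(c) - grand_mean) ** 2
                     for c in clusters)
    ss_within = sum((v - sum(c) / len(c)) ** 2
                    for c in clusters for v in c)

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)

    # Effective average cluster size, adjusted for unequal cluster sizes
    n0 = (n - sum(len(c) ** 2 for c in clusters) / n) / (k - 1)

    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

# Hypothetical monthly login counts for three small communities
example = [[4, 6, 5, 7], [5, 5, 6, 4], [6, 7, 4, 5]]
print(round(icc_oneway(example), 3))  # → -0.231
```

With these invented counts the within-community variation dominates, yielding a negative ICC, the same qualitative situation the authors used to justify analysing individual-level login data.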
Participants Participants: physicians who spent at least 20% of their time working in general practice or internal medicine, or subspecialties of internal medicine; were available for at least 1 year; were registered with the Northern Ontario Virtual Library; were fluent in English; and used a personal email account at least once per month
Total number randomized: 203 (full‐service group: 98; self‐service group: 105)
Clusters: by geographically distinct practice locations referred to as "communities": 10 communities. Full‐service and self‐service groups each had 5 clusters/communities consisting of 3 small and 2 large clusters
Full‐service: 98: 3 small clusters with 15, 18, and 20 physicians; 2 large clusters with 22 and 23 physicians
Self‐service: 105: 3 small clusters with 7, 14, and 17 physicians; 2 large clusters with 28 and 39 physicians
Setting: primary care practices and internal medicine practices in northern Ontario, Canada ‐ an area of approximately 800,000 km² with a population of < 800,000 inhabitants
Country: Canada
Interventions Description:
An electronic database, McMaster PLUS, was added to an existing digital library suite in a regional library system. McMaster PLUS was offered in 2 versions, full‐service and self‐service. Full‐service was the intervention; self‐service was the control. McMaster PLUS was provided to groups of practitioners at practice locations in different geographic areas. The full‐service version included a unique search interface (search engine) to a new database of critically appraised articles; the self‐service version did not. Both the self‐service and full‐service groups had access to the usual digital resources, such as bibliographic databases (Cochrane Database of Systematic Reviews, MEDLINE, CINAHL, Books at Ovid) through the Ovid interface, MD Consult, and Stat!Ref
Comparison: self‐service version of McMaster PLUS
Type of intervention:
Organizational: provision of a new electronic database
Timing: unclear; outcome data for usage were reported by month
Study period: April 2004 to May 2005
Follow‐up: 19 months after the end of the trial
Outcomes Use of McMaster PLUS (measured by logins per month)
Notes  
Risk of bias
Bias Authors' judgement Support for judgement
Random sequence generation (selection bias) Low risk Community clusters (participating hospitals/practice sites) were stratified by "the number of participants in each. Each cluster was then assigned a number to conceal the name of the community...and the 4 largest clusters were rank ordered from largest to smallest. Each cluster was randomised to either Full‐Service or Self‐Service interface based on a table of random numbers...with balancing for each pair of clusters. This process was repeated for the 6 small clusters" (p. 596, col 1, para 3)
Allocation concealment (selection bias) Low risk "During a pre‐randomization baseline period with access only to NOVL [Northern Ontario Virtual Library user data], we assembled trial participants into 10 community clusters by mapping clinical practice locations of PLUS [trial] participants, and grouping them into non‐overlapping clusters with maximized geographic distance between clusters and minimized the variation in numbers of participants in each cluster. Hospital district divisions were consulted about physician practice patterns to make decisions on some cluster designations. ...Since baseline usage patterns showed little inter cluster difference, clusters were ...stratified by the number of participants in each. Each cluster was then assigned a number to conceal the name of the community...and the 4 largest clusters were rank ordered from largest to smallest. Each cluster was randomised to either the Full‐Service or Self‐Service interface based on a table of random numbers reported by Fleiss, with balancing for each pair of clusters. This process was repeated for the 6 smaller clusters" (p. 595‐596, Randomization section)
Baseline characteristics: "Participants were well matched...except that a higher proportion of the participants in the Self‐Service group lived in larger communities (64% vs 46% for the Full‐Service group)" (p. 598, col 1, para 2; and Table 2)
cf. Fleiss J. Statistical Methods for Rates and Proportions, 2nd ed. New York: Wiley, 1981.
Blinding (performance bias and detection bias) 
 All outcomes High risk Detection: high. "All PLUS trial staff except the data analyst were blinded to the allocation of practice communities to Full‐Service or Self‐Service trial interfaces until the time of data analysis" (p. 596, col 1, para 3)
Performance: high. "Although physicians could not be blinded to the intervention, the Full Service and Self‐Service interfaces were similar in appearance and navigation...participants were not told to which group they were assigned. Also, each group's trial period interfaces offered something new compared with the baseline period"
Incomplete outcome data (attrition bias) 
 All outcomes Low risk Full‐service group: 5 left the study (1 retired; 1 lost interest; 3 left the eligible area)
Self‐service group: 4 left the study (3 left the eligible area; 1 lacked computer literacy)
Selective reporting (reporting bias) Low risk Utilization of PLUS was the primary outcome measure; this was measured by using the rate of logins per month per user. A login event was defined as a login followed by any system usage (i.e. if any menu items or links were clicked) (p. 597, col 1, para 1)