Abstract
Randomized clinical trials (RCTs) are the gold standard for assessing causal relationships and the most rigorous design for evaluating interventions. Nurses are conducting RCTs with increasing frequency in critical care areas. Most of these RCTs focus on implementing new protocols of care. Internet-based interventions are being used in other settings, such as with patients and families in home and clinic settings. To our knowledge, however, there are no published reports of Internet-based interventions that target critical care nurses. The challenges of carrying out an RCT in critical care settings by using an Internet-based education intervention for nurses have not been addressed. The purpose of this column is to examine methodological issues associated with nonadherence to Internet-based interventions. We explore both the benefits and challenges of using an Internet-based education intervention in critical care, citing an example of our ongoing study in which an Internet-based education program is the primary intervention. We discuss strategies for assessing and promoting intervention fidelity in such a study and consider issues related to measuring outcomes online.
Internet-Based Education and the PULSE Trial
Computer-based approaches have become an increasingly important part of the education of nurses over the last 2 decades. In particular, staff development educators, who are faced with reduced budgets, few resources, minimal time allotted for education, and a long list of topics to cover, have found that computer-based education programs are feasible, efficient, and cost-effective.1,2 Computer-based education occurs in 3 different formats: (1) CD-ROM, (2) online via an institution’s intranet, or (3) online via the Internet (Internet based). Programs on CD-ROM become outdated quickly, cannot be updated in a uniform manner, and thus must include an expiration date. Programs on an institution’s intranet may have limited access from other computers. Using Internet-based programs allows program developers to update information as needed and allows users access to the program from any computer at any time.
When we designed our ongoing RCT on electrocardiographic (ECG) monitoring, we planned an Internet-based educational intervention for nurses. The Practical Use of the Latest Standards for Electrocardiography (PULSE) trial is a 5-year multisite RCT evaluating the effect of implementing American Heart Association Practice Standards for ECG Monitoring3 on nurses’ knowledge, quality of care, and patient outcomes. The intervention consists of an Internet-based ECG-monitoring education program and strategies to implement and sustain change in practice.
We chose an Internet approach for our study for several reasons. (1) Essential concepts for learning how to interpret ECGs often require repetition. The Internet-based content can be repeated at the discretion of the nurse. (2) Content can be updated as necessary. (3) The high prevalence of home computers allows nurses the flexibility of completing the program at home. (4) It is usually not necessary to complete the program in one sitting. (5) An Internet-based program is under the control of the content experts who developed the program. It avoids reliance upon clinical instructors who may misrepresent or alter content. (6) An Internet-based program ensures that all nurses in a multisite study receive consistent educational content.
The PULSE Internet-based intervention consists of 4 education modules that are modeled on the practice standards3: (1) essentials of ECG monitoring, (2) arrhythmia monitoring, (3) ischemia monitoring, and (4) QT-interval monitoring. Content in the modules is presented using didactic text, images, short videos, links to additional information, and interactive simulations. For example, in one simulation, participants use their mouse to practice placing the monitoring electrodes in the correct place on an image of a torso. When done, they click on a button to see whether they were correct. In the module on QT-interval monitoring, participants move calipers to determine the QT interval, enter the values for the program to calculate the QTc, and then decide whether the value is abnormal.
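The QT-interval simulation described above depends on a rate-corrected QT (QTc). The exact correction used by the PULSE program is not stated here; the sketch below assumes Bazett's formula (QTc = QT/√RR, both intervals in seconds), the most commonly used correction, with commonly cited abnormality thresholds. It is an illustration, not the trial's actual software.

```python
import math

def qtc_bazett(qt_ms: float, rr_ms: float) -> float:
    """Heart-rate-corrected QT interval using Bazett's formula.

    QTc = QT / sqrt(RR), with both intervals converted to seconds;
    the result is returned in milliseconds.
    """
    qt_s = qt_ms / 1000.0
    rr_s = rr_ms / 1000.0
    return (qt_s / math.sqrt(rr_s)) * 1000.0

def qtc_is_prolonged(qtc_ms: float, female: bool) -> bool:
    # Commonly cited thresholds: >470 ms for women, >450 ms for men.
    # Actual cut points in the PULSE modules may differ.
    return qtc_ms > (470.0 if female else 450.0)

# At a heart rate of 60/min (RR = 1000 ms), QTc equals QT.
print(round(qtc_bazett(400, 1000)))  # 400
# At a faster rate (RR = 640 ms), the correction lengthens QTc.
print(round(qtc_bazett(400, 640)))   # 500
```

Note that Bazett's correction is known to overcorrect at high heart rates; a production program might offer alternative formulas.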
Participants must complete a predefined series of steps in a particular order. The steps are as follows: (1) an introduction and online consent to participate in the study, (2) a demographic form, (3) a pretest to assess baseline knowledge, (4) the 4 modules, and (5) a posttest. Nurses access the program through a Web site maintained by an Internet service provider using a unique username and password. At the completion of the entire program and testing, participants receive a gift card and 4 hours of continuing education units (CEUs).
The program automatically documents time spent on each screen by each participant. One can verify the exact time spent on each section by examining tracking reports generated by the program.
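Tracking reports of this kind make "time on intervention" straightforward to compute. The sketch below assumes a hypothetical report layout of (username, screen, seconds) rows; the actual PULSE report format is not published. It totals time per participant and flags totals that fall below a plausibility threshold.

```python
from collections import defaultdict

# Hypothetical tracking-report rows: (username, screen, seconds on screen).
# The real report layout is assumed here for illustration.
rows = [
    ("rn01", "module1/p1", 95), ("rn01", "module1/p2", 120),
    ("rn02", "module1/p1", 3),  ("rn02", "module1/p2", 2),
]

def total_seconds(rows):
    """Sum the time each participant spent across all screens."""
    totals = defaultdict(int)
    for user, _screen, secs in rows:
        totals[user] += secs
    return dict(totals)

def flag_speedy(totals, minimum_seconds):
    """Return participants whose total time is implausibly short."""
    return sorted(u for u, t in totals.items() if t < minimum_seconds)

totals = total_seconds(rows)
print(flag_speedy(totals, minimum_seconds=60))  # ['rn02']
```

The threshold itself would need to be justified empirically, for example from pilot data on how long careful completion of the modules actually takes.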
Intervention Fidelity and Treatment Attrition
Intervention fidelity refers to the faithful delivery of each component of an intervention in the manner in which it was designed.4,5 Assessing intervention fidelity enables more accurate evaluation of the effects of an intervention and facilitates accurate interpretation of postintervention results. Fidelity is typically assessed by appraising participants’ adherence, their resulting level of exposure to the intervention, and the quality of the intervention.5,6 Treatment attrition occurs when participants do not complete the intervention as it was designed.7 Because the quality of delivery is constant in an Internet-based intervention, only the assessment of participants’ adherence and exposure applies to online interventions.
To illustrate compromised intervention fidelity, consider the following phenomenon in the PULSE trial. To ensure that participants receive the entire intervention, they must complete the whole program before they are allowed to take the posttest and receive the gift card and CEUs. The principal investigator (M.F.) receives an e-mail when nurses complete the demographic form, pretest, and posttest. From this information, we are able to calculate the length of time a nurse spends on the intervention. We have observed that, on occasion, nurses simply click through each screen without reading the content, viewing the videos, accessing the links to important information, or practicing using the simulations. They are not experiencing the intervention as it was intended. We have labeled this the “speedy nurse” phenomenon.
When discussing intervention fidelity, it is important to distinguish between the efficacy and the effectiveness of an intervention. Efficacy refers to how well an intervention, if administered in an ideal and pure manner, effects change in the outcome of interest. Effectiveness refers to how well an intervention works in the presence of other, extraneous influences: that is, in the “real world.” Treatment attrition reduces the power of a trial to detect a significant effect of an intervention. Attrition is also an indicator of the gap between the efficacy and the effectiveness of an intervention. Therefore, treatment attrition should be assessed to determine the magnitude of the threat to validity. An appraisal of adherence to the intervention should include an attempt to identify why participants are not complying, so that measures to improve adherence can be instituted.
In RCTs, randomization is done to equalize the groups.8 This step helps to control for any unmeasured confounders that may lead to spurious conclusions. Another hallmark of RCTs is that intention-to-treat analyses are done. In intention-to-treat analyses, participants are evaluated according to the treatment group to which they were assigned, regardless of whether they received the intervention as designed. This analysis provides an unbiased estimate of the effects and is a more conservative approach that helps to assess effectiveness of an intervention and not just its efficacy.
Because failure to consider whether the intervention has been delivered as assigned can lead to incorrect conclusions regarding efficacy, intention-to-treat analysis should be supplemented with additional analyses.2 Various methods can be used to account for treatment attrition in analyses. It should be emphasized that these analyses are supplemental to, and not a replacement for, an intention-to-treat analysis; failure to adhere to an intention-to-treat analysis compromises the rigor of an RCT.
One method to account for treatment attrition is to conduct an as-treated analysis in which subjects are analyzed on the basis of whether they received the intervention. This method is akin to assessing efficacy and must be interpreted cautiously. Moreover, categorizing subjects by whether they received the intervention can be problematic when there is partial receipt of the intervention.
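The contrast between intention-to-treat and as-treated grouping can be made concrete with a small sketch. All arm assignments, receipt flags, and scores below are fabricated for illustration; the point is only that the same data yield different group means depending on how participants are classified.

```python
from statistics import mean

# Fabricated participants: assigned arm, whether the full intervention
# was actually received, and a posttest score.
participants = [
    {"arm": "intervention", "received": True,  "score": 85},
    {"arm": "intervention", "received": True,  "score": 78},
    {"arm": "intervention", "received": False, "score": 55},  # a "speedy" completer
    {"arm": "control",      "received": False, "score": 60},
    {"arm": "control",      "received": False, "score": 58},
]

def itt_means(ps):
    # Intention-to-treat: analyze by assigned arm, regardless of receipt.
    return {arm: mean(p["score"] for p in ps if p["arm"] == arm)
            for arm in ("intervention", "control")}

def as_treated_means(ps):
    # As-treated: analyze by actual receipt; interpret cautiously.
    return {
        "received": mean(p["score"] for p in ps if p["received"]),
        "not_received": mean(p["score"] for p in ps if not p["received"]),
    }

print(itt_means(participants))
print(as_treated_means(participants))
```

In this fabricated example the as-treated contrast looks larger than the intention-to-treat contrast, which is exactly why as-treated results resemble efficacy estimates and must be interpreted cautiously.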
Another method of analysis is to include a covariate that measures the degree of intervention received: that is, a dose effect. This method can be done by including the covariate in the model, whereby the analysis controls for the dose effect. Often this step is done by treating the dose effect as a random coefficient in a mixed-model approach. One may compare the model including the dose effect with the model excluding the dose effect to determine the influence the dose of the intervention has on the results. Finally, identifying characteristics of participants who do not complete the intervention as designed may help to identify subgroups. If the subgroups in which treatment attrition occurs can be identified, conducting separate analyses on them may salvage some experimental comparisons that have been compromised by treatment attrition. It should be emphasized, however, that this comparison no longer has the rigor of an RCT.
In the context of the PULSE trial, in addition to an intention-to-treat analysis, we plan to assess the extent of treatment attrition—namely the speedy nurse phenomenon—by comparing results controlling for treatment attrition with results not controlling for treatment attrition. We will use a measure of time spent on the intervention in our modeling and address to what extent the speedy nurse phenomenon influences the effectiveness of our intervention. If the speedy nurse phenomenon overly influences the results of our study (ie, inhibits the effectiveness), then we must consider how we might improve intervention fidelity or consider whether our approach is truly effective in engaging nurses to complete the intervention, thus improving the quality of ECG monitoring.
Preventing Attrition in Internet-Based Research
A trial should be designed so that the intervention is as easy as possible for participants to adhere to and complete. For example, the educational modules in the PULSE trial are interactive to make learning interesting. Telling potential participants up front how long the educational modules are designed to take may dissuade those who would not take the study seriously enough to complete it properly.
Incentives, such as the gift cards and CEUs in the PULSE trial, can be given to encourage participation. However, incentives can also cause participants to race through the intervention to get to the end to receive the reward. Informing potential participants that the amount of time spent on the intervention will be tracked might help to deter some participants from thwarting the intervention process. In addition, providing CEUs for participants who do not truly go through the learning modules has ethical and educational implications. Researchers must carefully scrutinize the effect of incentives on participation in a study to determine any unanticipated negative effects.
It is necessary to assess how and why there is treatment attrition to improve the fidelity of future interventions. For example, some nurses in the PULSE trial may have been participating in the intervention only because the investigators at their hospital established a competition to determine the unit with the highest participation rate, which may have resulted in the speedy nurse phenomenon.
The PULSE trial is an effectiveness trial—one that is administered in a real-world setting. For these trials, varying levels of intervention implementation may be experienced. Therefore, one must be aware of intervention fidelity both when designing a study and when interpreting the results.
Measuring Outcomes Online
Often, Internet-based interventions also include online assessments of demographic characteristics and outcome measures. Automatic databases can be generated, eliminating the time, cost, and potential errors related to transferring paper-and-pencil forms to databases.9 In addition to potential problems with treatment attrition, measurement attrition, which is the failure to obtain measures as designed, can be a problem in an Internet-based RCT.
In the PULSE trial, nurses complete a 13-item online demographic form and a 20-item online knowledge test that covers essentials of ECG monitoring and arrhythmia, ischemia, and QT-interval monitoring. As was the case with the education intervention, a small minority of participants are not serious about completing the measures. For example, one nurse scored a 12 (possible range, 0–100) on the test, spent a total of 1 minute and 31 seconds to complete the entire test, and answered “A” on all the multiple-choice questions. This score is unlikely to reflect the nurse’s true knowledge and is an example of measurement attrition. Rushing through both the pretest and the posttest reduces the statistical power to detect a significant effect of the intervention.
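Attempts like the one just described, implausibly fast or answered with a single repeated option, can be flagged automatically. The sketch below is a hypothetical screening rule with illustrative thresholds; it is not part of the PULSE software, and flagged attempts would still be retained in an intention-to-treat analysis.

```python
def flag_low_effort(responses, seconds, min_seconds=300):
    """Flag a test attempt that is unlikely to reflect true knowledge.

    Flags attempts completed implausibly fast or answered with a
    single repeated option. The 300-second floor is an illustrative
    assumption, not a validated cutoff.
    """
    too_fast = seconds < min_seconds
    uniform = len(set(responses)) == 1
    return too_fast or uniform

# The case from the trial: 91 seconds, every answer "A".
print(flag_low_effort(["A"] * 20, seconds=91))                      # True
# A varied attempt completed in 25 minutes is not flagged.
print(flag_low_effort(list("ABCDABCDABCDABCDABCD"), seconds=1500))  # False
```

Such flags are best used descriptively, for example to quantify measurement attrition or to trigger a reminder from site investigators, rather than to exclude data.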
Although it is tempting to discard results of participants who do not seriously complete online assessments, one must include these scores and use an intention-to-treat analysis. In the PULSE trial, we periodically discuss issues related to treatment and measurement attrition (ie, speedy nurse phenomenon) with the investigators at each hospital so they can remind nurses about the importance of completing the Internet-based intervention and tests properly. The investigators can emphasize that the objective is to increase their knowledge so they can enhance the quality of their ECG monitoring, with the ultimate goal of improving patient outcomes.
Conclusion
Because our intervention has so far had only a modicum of treatment and measurement attrition, we recommend the Internet as a viable medium to deliver educational interventions to nurses in critical care settings. Researchers must carefully monitor the use of the intervention to enhance fidelity. In addition, valuable data about how effectively the intervention can be implemented in real-world settings may be obtained by analyzing the characteristics of respondents who fail to complete the intervention as designed.
Contributor Information
Marjorie Funk, Yale University School of Nursing, 100 Church St S, PO Box 9740, New Haven, CT 06536 (marjorie.funk@yale.edu).
Leonie Rose, Yale University School of Nursing, New Haven, Connecticut.
Kristopher Fennie, Yale University School of Nursing, New Haven, Connecticut.
REFERENCES
- 1. Benson EP. Online learning: a means to enhance professional development. Crit Care Nurse. 2004;24(1):60–63. http://ccn.aacnjournals.org/cgi/reprint/24/1/60. Accessed April 9, 2010.
- 2. Billings DM, Jeffries PR, Daniels DM, Rowles C, Stone CL, Stephenson E. Developing and using online courses to prepare nurses for employment in critical care. J Nurses Staff Dev. 2006;22(2):87–92. doi:10.1097/00124645-200603000-00008.
- 3. Drew BJ, Califf RM, Funk M, et al. Practice standards for electrocardiographic monitoring in hospital settings: an American Heart Association scientific statement from the Councils on Cardiovascular Nursing, Clinical Cardiology, and Cardiovascular Disease in the Young. Circulation. 2004;110(17):2721–2746. doi:10.1161/01.CIR.0000145144.56673.59.
- 4. Dumas JE, Lynch AM, Laughlin JE, Smith EP, Prinz RJ. Promoting intervention fidelity: conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial. Am J Prev Med. 2001;20(1 suppl):38–47. doi:10.1016/s0749-3797(00)00272-5.
- 5. Santacroce SJ, Maccarelli LM, Grey M. Intervention fidelity. Nurs Res. 2004;53(1):63–66. doi:10.1097/00006199-200401000-00010.
- 6. Lee CY, August GJ, Realmuto GM, Horowitz JL, Bloomquist ML, Klimes-Dougan B. Fidelity at a distance: assessing implementation fidelity of the Early Risers prevention program in a going-to-scale intervention trial. Prev Sci. 2008;9(3):215–229. doi:10.1007/s11121-008-0097-6.
- 7. Hulley SB, Cummings SR, Browner WS, Grady DG, Newman TB. Designing Clinical Research. 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2007.
- 8. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton Mifflin; 2002.
- 9. Murray E, Khadjesari Z, White IR, et al. Methodological challenges in online trials. J Med Internet Res. 2009;11(2):e9. doi:10.2196/jmir.1052.