Author manuscript; available in PMC: 2012 Sep 1.
Published in final edited form as: Nurs Res. 2011 Sep-Oct;60(5):340–347. doi: 10.1097/NNR.0b013e31822cc87d

An Intervention Fidelity Framework for Technology-Based Behavioral Interventions

Annette DeVito Dabbs 1, Mi-Kyung Song 2, Robert Hawkins 3, Jill Aubrecht 4, Karen Kovach 5, Lauren Terhorst 6, Mary Connolly 7, Mary McNulty 8, Judy Callan 9
PMCID: PMC3164967  NIHMSID: NIHMS317190  PMID: 21878796

Abstract

Background

Despite the proliferation of health technologies, descriptions of the unique considerations and practical guidance for evaluating intervention fidelity of technology-based behavioral interventions are lacking.

Objectives

To: (a) discuss how technology-based behavioral interventions challenge conventions about how intervention fidelity is conceptualized and evaluated, (b) propose an intervention fidelity framework that may be more appropriate for technology-based behavioral interventions, and (c) present a plan for operationalizing each concept in the framework using the intervention fidelity monitoring plan for Pocket PATH®, a mobile health technology designed to promote self-care behaviors after lung transplantation, as an exemplar.

Method

The literature related to intervention fidelity and technology acceptance was used to identify the issues that are unique to fidelity of technology-based behavioral interventions and thus important to include in a proposed intervention fidelity framework. An intervention fidelity monitoring plan for technology-based behavioral interventions was developed as an example.

Results

The intervention fidelity monitoring plan was deemed feasible and practical to implement and showed utility in operationalizing the framework's concepts, such as assessing interventionists' delivery and participants' acceptance of the technology-based behavioral intervention.

Discussion

The framework has the potential to guide the development of implementation fidelity monitoring tools for other technology-based behavioral interventions. Further application and testing of this framework will allow for a better understanding of the role that technology acceptance plays in adoption and enactment of the behaviors that technology-based behavioral interventions are intended to promote.

Keywords: treatment fidelity, technology acceptance, behavioral intervention studies

Intervention fidelity is defined as the extent to which an intervention is given as conceived and planned (Dusenbury, Brannigan, Falco, & Hansen, 2003). Technology-based behavioral interventions use information and communication technology applications to promote behavioral outcomes (Poelmans, Wessa, Milis, Bloemen, & Doom, 2008). Despite the proliferation of technology-based behavioral interventions and the growing recognition of the importance of evaluating intervention fidelity, descriptions of the unique considerations and practical guidance for evaluating fidelity of technology-based behavioral interventions are lacking. Several substantial issues contribute to this gap.

First, the term intervention fidelity is neither defined nor applied consistently. Second, the belief that methods of evaluating intervention fidelity can be applied uniformly to all interventions fails to account for the technological elements (i.e., features and interfaces) and theoretical elements of technology-based behavioral interventions. Third, the steps to ensure system quality of the technological application itself (i.e., stability, reliability, functionality, usability) may be confused with strategies to promote intervention fidelity. Fourth, the intended outcomes of technology-based behavioral interventions, such as adoption and enactment of behaviors, may be conflated with components of intervention fidelity. A final consideration is the limited understanding of the social construction of technology (how technology is embedded in its social context), and the role that human factors play in technology acceptance. The latter are perhaps most germane to this discussion, since they determine the degree to which a technology-based behavioral intervention will be adopted.

The purposes of this paper were to: (a) discuss how technology-based behavioral interventions challenge conventions about how intervention fidelity is conceptualized and evaluated, (b) propose an intervention fidelity framework that may be more appropriate for technology-based behavioral interventions, and (c) present a plan for operationalizing each concept in the proposed framework using the intervention fidelity monitoring plan for Pocket PATH®, a mobile health technology designed to promote self-care behaviors after lung transplantation, as an exemplar.

Two bodies of literature were reviewed to explore these unique considerations and inform the development of a technology-specific framework of intervention fidelity: (a) the recent emphasis on evaluating intervention fidelity and its role in drawing conclusions about intervention efficacy and (b) the emergent theoretical model of technology acceptance, which explains variance among intended users in achieving the desired outcomes of technology-based behavioral interventions. As health technologies become more pervasive, this paper provides theoretical and practical guidance to better define, monitor, and quantify intervention fidelity of technology-based interventions.

Intervention Fidelity

Since the early 1990s, there has been a call for a more rigorous, comprehensive approach to intervention fidelity assessment that incorporates multiple informants and multiple methods of measurement and analysis (Song, Happ, & Sandelowski, 2010). According to the revised CONSORT statement for reporting randomized trials (Moher, Schulz, & Altman, 2001), ensuring a reliably delivered intervention is central to the integrity of research findings. However, systematic evaluation of intervention fidelity is often difficult, particularly for complex interventions (Carroll et al., 2007; Santacroce, Maccarelli, & Grey, 2004; Song et al., 2010). Furthermore, the lack of theoretical and practical guidance is regarded as a strong barrier to evaluating intervention fidelity (Perepletchikova, Hilt, Chereji, & Kazdin, 2009).

Defining Intervention Fidelity

Intervention fidelity is defined and applied differently across studies, leading to confusion about its meaning and about strategies to evaluate and enhance it (Song et al., 2010). Terms such as treatment fidelity, intervention fidelity, procedural integrity, and intervention integrity are used interchangeably to mean the degree to which interventions are delivered as conceived and planned (Dumas, Lynch, Laughlin, Phillips, & Prinz, 2001; Dusenbury et al., 2003; Leff, Hoffman, & Gullan, 2009; Stein, Sargent, & Rafaels, 2007). When fidelity is defined with a focus solely on intervention delivery, evaluation of delivery alone is typically the cornerstone and is thought to be a sufficient measure of fidelity, on the assumption that variation in the intervention is introduced only by the interventionist. Measurement of delivery typically includes assessing whether all the intervention components and activities were delivered and implemented in the proper manner. In traditional interventions (e.g., cognitive behavioral therapy or motivational interviewing), the interventionist typically determines delivery (Waltz, Addis, Koerner, & Jacobson, 1993). In technology-based behavioral interventions, however, because interventionists' delivery and participants' receipt are reciprocal, participants' receipt of the intervention determines the completion of the delivery-receipt process. Measurement of fidelity for technology-based behavioral interventions therefore extends beyond delivery to include what happens after the technology is introduced to the participant (delivery) and the participant receives the intervention (receipt).

Other authors have extended the definition of intervention fidelity beyond delivery to include an assessment of other components that influence the fidelity with which an intervention is delivered, such as participant responsiveness and engagement (Carroll et al., 2007; Perepletchikova, Treat, & Kazdin, 2007; Song et al., 2010), or components that enhance trial integrity and replication, such as design (e.g., theoretical framework, length of contact, number of contacts, duration of contact over time) and training (e.g., procedures for training across providers, measurement of skill acquisition and maintenance of skill over time; Bellg et al., 2004; Burgio et al., 2001), although even these components are not defined consistently.

The more comprehensive definition of intervention fidelity by Carroll et al. (2007), "the degree to which the intervention implementation process is an effective realization of the intervention as planned" (p. 1), was used in this study. This broader definition is preferred because it extends beyond mere delivery and receipt and allows for the inclusion of additional components of importance to the fidelity of technology-based behavioral interventions, such as the human factors that account for technology acceptance, known predictors of intention to use and adopt a technology.

Evaluating Intervention Fidelity

The belief that methods for evaluating intervention fidelity can be applied uniformly to all interventions fails to recognize the need for customization. Song et al. (2010) established that evaluating treatment fidelity of complex interventions is challenging because of their dynamic and highly individualized nature, and that fidelity evaluation strategies must be integrated carefully into intervention studies. Because technology-based behavioral interventions are often dynamic (e.g., the intervention relies on an interplay between participant and technological application or interface) and lend themselves to individualization (e.g., applications can be programmed to respond to various participant characteristics or patterns of use), elements associated with such characteristics need to be evaluated accordingly. The effects of technology-based interventions are a function not only of how many elements were delivered (quantity), but also of how elements were delivered (quality) and of the type and quality of interactions between the participant and the technology. However, uncertainty exists about how to account for the use of technology. When the technology is considered an essential and distinct element of the intervention, as is the case for a technology-based behavioral intervention that includes interplay between the participant and the technology (e.g., automated decision-support features react to participants' input to generate prompts recommending that participants perform certain actions, as sketched below), the technological element should be measured and quantified. For other technology-based behavioral interventions, the technology may serve merely as a platform or vehicle and would therefore not be considered a distinct element of the intervention.
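To make the decision-support interplay concrete, such features can be as simple as a threshold rule that reacts to participant-entered data. The following sketch is purely illustrative; the function name, the 10% threshold, and the prompt text are our assumptions, not the actual Pocket PATH® logic:

```python
from typing import Optional

# Illustrative decision-support rule; the threshold and message are
# hypothetical, not the actual Pocket PATH logic.
def spirometry_prompt(fev1_today: float, fev1_baseline: float) -> Optional[str]:
    """Return a prompt when today's FEV1 falls more than 10% below baseline."""
    if fev1_today < 0.90 * fev1_baseline:
        return "Your lung function reading has dropped; contact your transplant coordinator."
    return None

# Example: a reading of 2.1 L against a 2.5 L baseline triggers the prompt.
print(spirometry_prompt(2.1, 2.5))
```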

Confusion Between System Quality and Fidelity

Implementation of an intervention can vary at any stage of the process (e.g., delivery, receipt, acceptance), hence the need to evaluate all components with the potential to introduce variation in its fidelity. System quality, the quality of the technological system used for the intervention (e.g., usability, functionality, reliability), should not vary, having been methodically attended to during the intervention design phase, before the technology-based behavioral intervention is implemented. The methods used to ensure the quality of the technological system prior to its introduction to participants (e.g., applying principles of user-centered design, DeVito Dabbs, Myers, et al., 2009, and following industry standards for reliability and security) are distinct from the methods used to promote intervention fidelity. Because system quality is essential for the usability and functionality of technology-based behavioral interventions and is therefore treated as a constant, it is not an element of intervention fidelity.

Conflating Intended Outcomes and Fidelity

Adoption is the extent to which the individual participant uses the technology-based behavioral intervention. It is akin to terms such as intervention usage, utilization, and intervention dose, and should not be confused with the use of the term to describe diffusion of innovations (how new ideas and technologies spread among groups; Rogers, 1983). Enactment is the extent to which the participant performs the behaviors that the technology-based behavioral intervention is intended to promote (e.g., following an exercise regimen, monitoring health indicators). Because adoption moderates the relationship between the intervention and treatment effects (enactment), it is important to quantify both in order to determine the confidence with which one can conclude that the intended outcomes were indeed due to use of the intervention. Because neither adoption nor enactment measures how well the intervention was delivered as conceived and planned, they are not included as components of intervention fidelity. However, because these concepts are often included in other conceptualizations of treatment fidelity, they are described here to differentiate their roles in the application of the model; a sketch of how adoption might be quantified from device logs follows.
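A minimal sketch of quantifying adoption as dose from device logs. The log format, feature names, and timestamps below are hypothetical, not the actual Pocket PATH® schema:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical device-log rows: (participant, feature, start, end).
log = [
    ("P01", "data_recording", datetime(2011, 5, 1, 8, 0), datetime(2011, 5, 1, 8, 4)),
    ("P01", "trend_graphs",   datetime(2011, 5, 1, 8, 4), datetime(2011, 5, 1, 8, 9)),
    ("P01", "data_recording", datetime(2011, 5, 2, 8, 1), datetime(2011, 5, 2, 8, 3)),
]

# Adoption as dose: per participant and feature, count sessions and total minutes.
dose = defaultdict(lambda: {"sessions": 0, "minutes": 0.0})
for pid, feature, start, end in log:
    dose[(pid, feature)]["sessions"] += 1
    dose[(pid, feature)]["minutes"] += (end - start).total_seconds() / 60

for (pid, feature), stats in sorted(dose.items()):
    print(pid, feature, stats)
```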

Beyond Delivery: Human Factors

Social Construction of Technology

Social construction of technology theorists argue that the ways a technology is used cannot be understood without understanding how that technology is embedded in its social context (Bijker, Hughes, & Pinch, 1987). According to social constructivists, technology cannot be viewed merely in terms of its inputs and outputs, for which one need not understand anything about what goes on inside (Winner, 1993), nor can the definition of technology be reduced to instruments that merely perform functions (Pinch & Bijker, 1987). Furthermore, the human factors literature (Dixon, 1999; Goodhue, 1995) suggests that the meanings people attach to a particular technology and its uses can vary widely. Therefore, it is important to account for human factors (i.e., an individual's perspective on the acceptability of technology) when evaluating intervention fidelity of technology-based behavioral interventions. Perepletchikova and Kazdin (2005) identified the importance of participants' acceptance of an intervention, yet the relevance of technology acceptance theory to intervention fidelity has received little attention, despite its potential to influence the fidelity of a technology-based behavioral intervention and to have profound effects on intended outcomes.

Technology Acceptance and Intention to Use

Measuring fidelity of delivery alone is inadequate for technology-based behavioral interventions because human factors are known to influence and increase variability across all components of implementation fidelity, including technology receipt, acceptance, and intention to use the technology. For technology-based behavioral interventions to be successful in achieving intended behavioral outcomes, it is essential to address the human factors that influence technology acceptance and to include them in the evaluation of intervention fidelity. For the intervention implementation to be realized as planned, the emphasis must extend beyond delivery (Was the intervention delivered as planned?) and receipt (Can people demonstrate how to use a technology-based behavioral intervention?) to acceptance (Do people perceive the technology to be useful and easy to use?) and intention (Do they intend to use a technology-based behavioral intervention?), and ultimately to the outcome (Do they use and understand it in the intended way?). A variety of strategies have been used to evaluate the fidelity of delivery and receipt, but less attention has been paid to technology acceptance and intention, the moderators (variables that affect a relationship) between delivery-receipt and technology adoption and enactment. These moderators are important to evaluate for technology-based behavioral interventions because only a comprehensive evaluation of the fidelity with which an intervention has been implemented and accepted permits a viable assessment of the contribution of the intervention to outcomes such as adoption (actual usage) and enactment (the effect on performance of intended behaviors).

The Technology Acceptance Model (TAM; Davis, 1989) is parsimonious, with concepts that are well grounded and measures that are standardized, reliable, and valid (Poelmans et al., 2008). The primary strength of the TAM is that it was developed specifically to predict and explain human behavior by measuring behavioral beliefs about technology, such as ease of use, usefulness, and intention to use. The TAM posits that perceptions of usefulness (the degree to which a user believes that using the technology will enhance his or her performance) and perceptions of ease of use (the degree to which the user believes that using the technology will be free of effort) have a significant impact on a user's intention to use the technology. In turn, intention to use ultimately predicts actual adoption of a technology (Davis, Bagozzi, & Warshaw, 1989). The TAM is robust; it has been used across different settings and information systems and has demonstrated that an individual's acceptance of a technology is a strong predictor of future adoption. Although the TAM has its roots in theories that identify the characteristics of technology that influence user adoption and determine social behavior, the concept of technology acceptance has received little attention among researchers involved in designing and testing technology-based behavioral interventions.
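In regression form, the TAM's core paths can be summarized as follows. This is a minimal sketch of the commonly used simplified model (the original formulation in Davis et al., 1989, also routes PU and PEU through attitude toward use); the coefficient symbols are ours:

```latex
\begin{aligned}
\mathrm{BI}  &= \beta_{1}\,\mathrm{PU} + \beta_{2}\,\mathrm{PEU} + \varepsilon_{1}
  && \text{(behavioral intention to use)}\\
\mathrm{PU}  &= \gamma\,\mathrm{PEU} + \varepsilon_{2}
  && \text{(ease of use also raises perceived usefulness)}\\
\mathrm{Use} &= \delta\,\mathrm{BI} + \varepsilon_{3}
  && \text{(intention predicts actual adoption)}
\end{aligned}
```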

As Carroll et al. (2007) previously pointed out, the uptake of an intervention (adoption of technology) depends on acceptance by, and acceptability to, those receiving it. Therefore, including measures of technology acceptance in the evaluation of intervention fidelity of technology-based behavioral interventions is crucial because participants must first accept the technology before they will intend to use it (adoption) to help them enact the intended health behaviors. Individuals' perceptions of technology acceptance influence their intention to use the technology. Thus, intention to use the technology moderates the relationship between the fidelity and quality of delivery-receipt and the proximal outcome of adoption (actual usage), which ultimately mediates enactment (performance of the intended health behaviors). Measuring the degree of technology acceptance also allows researchers to differentiate the effect of acceptance from the effects of delivery and receipt on the outcomes of adoption and enactment.

Intervention Fidelity Framework for Technology-Based Behavioral Interventions

Implementation of a technology-based behavioral intervention can vary at any stage of the process (delivery, receipt, acceptance, and intention to use), hence the need to evaluate all potentially variable aspects of its fidelity. The proposed intervention fidelity framework (Figure 1) includes the following concepts: (a) delivery, the extent to which the intervention is delivered as intended; (b) receipt, the extent to which the intervention is received as intended; and (c) technology acceptance, the extent to which the participant has positive perceptions of, attitudes toward, and intention to use a system. The relationships purported in the framework are: (a) intervention fidelity extends beyond delivery to include receipt and technology acceptance (perceived ease of use, perceived usefulness, attitude toward use, and intention to use); (b) there is a reciprocal relationship between delivery and receipt (i.e., qualities of delivery affect receipt and vice versa); and (c) human factors (technology acceptance) moderate the relationship between delivery/receipt and ultimate adoption (use of the technology).

FIGURE 1. Intervention Fidelity Framework for Technology-Based Behavioral Interventions. Note: Shaded areas reflect the components of intervention fidelity.
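Relationship (c) above, that technology acceptance moderates the path from delivery/receipt to adoption, is conventionally tested with an interaction term in a regression model. The sketch below uses simulated data and illustrative variable names; it is not the study's analysis plan:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data; the variable names are illustrative stand-ins for composite
# scores of delivery-receipt fidelity, technology acceptance, and adoption.
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "fidelity": rng.normal(0, 1, n),    # delivery-receipt composite
    "acceptance": rng.normal(0, 1, n),  # TAM composite (PU, PEU, attitude, intention)
})
df["adoption"] = (0.4 * df["fidelity"] + 0.3 * df["acceptance"]
                  + 0.5 * df["fidelity"] * df["acceptance"]  # built-in moderation effect
                  + rng.normal(0, 1, n))

# Moderation appears as a significant fidelity:acceptance interaction term.
model = smf.ols("adoption ~ fidelity * acceptance", data=df).fit()
print(model.params.round(2))
```

A significant interaction coefficient would indicate that the benefit of high-fidelity delivery-receipt for adoption depends on the participant's level of technology acceptance.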

The concepts included in the model of intervention fidelity for technology-based behavioral interventions (delivery, receipt, technology acceptance) are thought to be universal for all technologies, yet the information to monitor and the types of data available are intervention-specific. As an example, the plan for monitoring the intervention fidelity of Pocket PATH® (Personal Assistant for Tracking Health) is described below, based on the concepts in the intervention fidelity framework but customized to the specific intervention.

Evaluating Intervention Fidelity of Pocket PATH®

A multidimensional plan is proposed to evaluate intervention fidelity of the Pocket PATH® intervention as an exemplar of how to apply the proposed framework to other technology-based behavioral interventions. Pocket PATH® is a mobile health application with customized data recording, trending, and decision-support programs to promote active involvement of patients in self-care after lung transplantation (DeVito Dabbs, Dew, et al., 2009). The definitions and measures for monitoring and evaluating each component of the intervention fidelity of the Pocket PATH® intervention are presented in Table 1 and include: (a) evaluation of intervention delivery, using audiotapes of training sessions to assess the interventionist's adherence and real-time observations of training sessions to assess the interventionist's competence; (b) evaluation of participant receipt, using data from device logs to assess appropriateness of screen usage and navigation sequences during training and return demonstrations; and (c) assessment of technology acceptance, using audiotapes and observations of training sessions to assess the level of the participant's engagement, as well as self-reported perceptions of ease of use and usefulness, attitude toward use, and intention to use. Data regarding delivery, receipt, technology acceptance, adoption, and enactment are being collected as part of a randomized controlled trial to evaluate the efficacy of the Pocket PATH® intervention in promoting self-care behaviors. These data will be used to test the relationships purported in the intervention fidelity framework. It is important to note that the concepts of adoption and enactment are included in the table for completeness as outcome measures of intervention effectiveness that are influenced by the components of intervention fidelity, but they are not themselves components of intervention fidelity.

Table 1.

Definitions and Measures for Evaluating Intervention Fidelity of Pocket PATH® Intervention

Delivery: the extent to which the intervention is delivered as intended
  • Content fidelity (quantity). Data source: audiotapes of the Pocket PATH® training session. Item: interventionist's adherence, the percent of prescribed behaviors performed.
  • Process fidelity (quality). Data source: observations of the Pocket PATH® training session. Item: interventionist's competence, quality ratings of skill in performing prescribed behaviors.

Receipt: the extent to which the intervention is received as intended
  • Content fidelity (quantity). Data source: device logs of the Pocket PATH® training session. Item: participant's adherence, the percent of prescribed behaviors demonstrated.
  • Process fidelity (quality). Data source: audiotapes of the Pocket PATH® training session. Item: participant's competence, quality ratings of skill in demonstrating prescribed behaviors.
  • Satisfaction with the delivery-receipt process. Data source: After Scenario Questionnaire. Items ("Overall, I am satisfied with...", rated from strongly agree to strongly disagree): the ease of completing the tasks in this scenario; the amount of time it took to complete the tasks; the amount of support I got to complete the tasks.

Technology Acceptance: the extent to which the participant has positive perceptions of, attitudes toward, and intention to use a system
  • Participant's belief that using the system will enhance performance. Data source: Perceived Usefulness Scale. Items: Using Pocket PATH would enable me to improve performance of tracking my health information; Using Pocket PATH would increase productivity of tracking my health information; Using Pocket PATH would increase effectiveness of tracking my health information; I would find Pocket PATH useful for tracking my health information.
  • Participant's belief that using the system will be free from effort. Data source: Perceived Ease of Use Scale. Items: Learning to operate Pocket PATH would be easy for me; I would find it easy to get Pocket PATH to do what I want it to do; It would be easy for me to become skillful at using Pocket PATH; I would find Pocket PATH easy to use.
  • Attitude, the extent to which the participant views the system positively. Data source: Attitude Toward Use Scale. Items ("All things considered, using Pocket PATH to track my health information is..."): very wise to very foolish; very negative to very positive; very harmful to very beneficial; very good to very bad.
  • Intention, the extent to which the participant intends to use the system. Data source: Intention to Use Scale. Item: I intend to use Pocket PATH to track my health information (strongly agree to strongly disagree).

*Adoption: the extent to which the participant uses the system (dose)
  • Quantity of system usage. Data source: device utilization logs. Items: the types of Pocket PATH features the participant accesses and uses; the frequency and duration with which the participant accesses and uses Pocket PATH features.
  • Quality of system usage. Data sources: device utilization logs and progress notes. Item: the features the participant accesses and uses in relation to his or her condition changes.

*Enactment: the extent to which the participant performs intended behaviors
  • Performing self-monitoring. Data source: device logs. Item: the frequency with which the participant tracks data and reviews logs/graphs.
  • Adhering to the regimen. Data source: Health Habits Survey. Item: percentage of adherence to elements of the medical regimen.
  • Communicating with the clinician about condition changes. Data source: review of progress notes. Item: frequency and appropriateness of participant-initiated communication to the clinician.

Notes. *The concepts of adoption and enactment are not considered to be components of the proposed intervention fidelity framework for technology-based behavioral interventions; they are included here to differentiate them from the fidelity components and to describe how they are measured.
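To illustrate how such a plan might be operationalized in software, Table 1 can be encoded as a data structure that drives fidelity scoring. This is a hypothetical sketch; the dictionary layout, field names, and scoring function are ours, not project artifacts:

```python
# Hypothetical encoding of part of the Table 1 plan; structure and names are
# illustrative only.
FIDELITY_PLAN = {
    "delivery": {
        "content_fidelity": ("audiotapes", "percent of prescribed behaviors performed"),
        "process_fidelity": ("observations", "quality ratings of interventionist skill"),
    },
    "receipt": {
        "content_fidelity": ("device logs", "percent of prescribed behaviors demonstrated"),
        "process_fidelity": ("audiotapes", "quality ratings of participant skill"),
    },
    "technology_acceptance": {
        "perceived_usefulness": ("PU scale", "4 Likert-type items"),
        "perceived_ease_of_use": ("PEU scale", "4 Likert-type items"),
    },
}

def content_fidelity_pct(performed: int, prescribed: int) -> float:
    """Content fidelity: percent of prescribed behaviors actually performed."""
    return 100.0 * performed / prescribed

# Example: an interventionist who performs 18 of 20 prescribed behaviors scores 90%.
print(content_fidelity_pct(18, 20))
```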

Measures of Technology Acceptance

The proposed plan to evaluate the intervention fidelity of Pocket PATH® uses the Perceived Usefulness (PU) and Perceived Ease of Use (PEU) scales to measure the construct of technology acceptance (Davis et al., 1989). Consistent with the developers' recommendations, the two TAM scales are administered after the Pocket PATH® intervention has been delivered and the user has demonstrated receipt, but before users have any significant experience with the Pocket PATH® system. Each scale is self-administered and comprises four items with Likert-type responses ranging from very likely to very unlikely. These scales were selected because, in previous studies, they were validated and found robust for assessing acceptance of technologies across a variety of tasks, were parsimonious yet strongly grounded in existing theory, were easy to administer, and demonstrated desirable psychometric properties (Davis et al., 1989). The TAM scales were internally consistent (Cronbach's alpha coefficients of .90–.92); in confirmatory factor analysis, reliability exceeded .80 for all scales, all factor loadings exceeded .70, and statistically significant relationships in the predicted direction between PEU and PU (p < .001) provided evidence of the scales' construct validity (Morris & Dillon, 1997). PU and PEU were powerful predictors of an individual's intention to use a system, which subsequently predicts the extent to which the participant uses the system (adoption) and performs the behaviors the technology is intended to promote (enactment).
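For readers implementing similar scales, the internal consistency of a short multi-item scale such as PU or PEU can be computed directly. A minimal sketch of the standard Cronbach's alpha formula, assuming item responses are coded numerically in an n-respondents by k-items matrix (the data here are fabricated):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example with fabricated responses to a 4-item, 7-point scale.
rng = np.random.default_rng(1)
latent = rng.normal(4, 1, size=(30, 1))
scores = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(30, 4))), 1, 7)
print(round(cronbach_alpha(scores), 2))
```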

Monitoring and evaluating intervention fidelity of Pocket PATH® is challenging, but it is essential for ensuring that the intervention is delivered consistently, explaining study findings, drawing accurate conclusions about treatment efficacy, increasing internal validity (replication) and external validity (generalizability), and translating interventions into practice. To account for variation in the degree of fidelity across components of the model, data for each component are measured and analyzed.

Discussion

A variety of models and definitions of intervention fidelity have been proposed. Some models of fidelity focus exclusively on interventionists' delivery (Dumas et al., 2001; Santacroce et al., 2004; Stein et al., 2007), whereas other models extend beyond delivery to include moderators of participants' receipt of the intervention (Carroll et al., 2007; Perepletchikova, Treat, & Kazdin, 2007; Song et al., 2010). Still other models (Bellg et al., 2004; Burgio et al., 2001) conflate fidelity with such concepts as interventionist training, study integrity, and intended outcomes. As discussed above, however, these generic models were deemed inadequate for addressing the unique considerations of intervention fidelity for technology-based interventions. None includes an assessment of the human factors that influence technology acceptance, an important yet overlooked dimension of fidelity for technology-based behavioral interventions.

The TAM offers a theoretically grounded approach to the study of the acceptability of technology-based behavioral interventions. An intervention fidelity framework was developed to guide the development of a multicomponent plan to evaluate intervention fidelity for the Pocket PATH® project. The exemplar illustrates the components of the framework, how each is measured, and how the fidelity data will be used to test the relationships purported in the model and to draw conclusions about the consistency, validity, and effectiveness of the Pocket PATH® intervention. The proposed framework has the potential to guide the development of implementation fidelity monitoring tools for other technology-based behavioral interventions. Although the fidelity evaluation of Pocket PATH® is still underway, the measures were deemed feasible and practical to implement and showed utility in assessing interventionists' delivery and participants' acceptance of the technology-based behavioral intervention. Because the proposed framework was derived from long-standing theories of social behavior and technology acceptance, caution is warranted until the relationships purported by the framework are tested empirically. Further application of this framework to the evaluation of intervention fidelity for a variety of technology-based behavioral interventions is warranted; wider use will allow a better understanding of the role that technology acceptance plays in adoption, and thus in enactment of the behaviors that technology-based behavioral interventions are intended to promote.

Acknowledgments

Funding source: NIH, NINR NR010711 (DeVito Dabbs, PI).

Contributor Information

Annette DeVito Dabbs, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania.

Mi-Kyung Song, School of Nursing, University of North Carolina, Chapel Hill, Chapel Hill, North Carolina.

Robert Hawkins, School of Journalism & Mass Communication, University of Wisconsin-Madison, Madison, Wisconsin.

Jill Aubrecht, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania.

Karen Kovach, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania.

Lauren Terhorst, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania.

Mary Connolly, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania.

Mary McNulty, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania.

Judy Callan, School of Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania.

References

  1. Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, Czajkowski S. Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology. 2004;23:443–451. doi: 10.1037/0278-6133.23.5.443.
  2. Bijker W, Hughes TP, Pinch TP. The social construction of technological systems: New directions in the sociology and history of technology. Cambridge, MA: MIT Press; 1987.
  3. Burgio L, Corcoran M, Lichstein KL, Nichols L, Czaja S, Gallagher-Thompson D, Schulz R. Judging outcomes in psychosocial interventions for dementia caregivers: The problem of treatment implementation. The Gerontologist. 2001;41:481–489. doi: 10.1093/geront/41.4.481.
  4. Carroll KM, Nich C, Sifry RI, Nuro KF, Frankforter TL, Ball SA, Rounsaville BJ. A general system for evaluating therapist adherence and competence in psychotherapy research in addictions. Drug and Alcohol Dependence. 2000;57:225–238. doi: 10.1016/s0376-8716(99)00049-6.
  5. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implementation Science. 2007;2:40. doi: 10.1186/1748-5908-2-40.
  6. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. 1989;13:319–340.
  7. Davis FD, Bagozzi RP, Warshaw PR. User acceptance of computer technology: A comparison of two theoretical models. Management Science. 1989;35:982–1003.
  8. DeVito Dabbs AJ, Dew MA, Myers B, Begey A, Hawkins R, Ren D, McCurry KR. Evaluation of a hand-held, computer-based intervention to promote early self-care behaviors after lung transplant. Clinical Transplantation. 2009;23:537–545. doi: 10.1111/j.1399-0012.2009.00992.x.
  9. DeVito Dabbs AJ, Myers B, McCurry K, Dunbar-Jacob J, Hawkins R, Begey A, Dew MA. User-centered design and interactive health technologies for patients. Computers, Informatics, Nursing. 2009;27:175–183. doi: 10.1097/NCN.0b013e31819f7c7c.
  10. Dixon DR. The behavioral side of information technology. International Journal of Medical Informatics. 1999;56:117–123. doi: 10.1016/s1386-5056(99)00037-4.
  11. Dumas JE, Lynch AM, Laughlin JE, Phillips SE, Prinz RJ. Promoting intervention fidelity: Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial. American Journal of Preventive Medicine. 2001;20:S38–S47. doi: 10.1016/s0749-3797(00)00272-5.
  12. Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research. 2003;18:237–256. doi: 10.1093/her/18.2.237.
  13. Goodhue DL. Understanding user evaluations of information systems. Management Science. 1995;41:1827–1844.
  14. Leff SS, Hoffman JA, Gullan RL. Intervention integrity: New paradigms and applications. School Mental Health. 2009;1:103–106. doi: 10.1007/s12310-009-9013-x.
  15. Moher D, Schulz KF, Altman D. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285:1987–1991. doi: 10.1001/jama.285.15.1987.
  16. Morris M, Dillon A. How user perceptions influence software use. IEEE Software. 1997;14:58–65.
  17. Perepletchikova F, Hilt LM, Chereji E, Kazdin AE. Barriers to implementing treatment integrity procedures: Survey of treatment outcome researchers. Journal of Consulting and Clinical Psychology. 2009;77:212–218. doi: 10.1037/a0015232.
  18. Perepletchikova F, Treat TA, Kazdin AE. Treatment integrity in psychotherapy research: Analysis of the studies and examination of the associated factors. Journal of Consulting and Clinical Psychology. 2007;75:829–841. doi: 10.1037/0022-006X.75.6.829.
  19. Perepletchikova F, Kazdin AE. Treatment integrity and therapeutic change: Issues and research recommendations. Clinical Psychology: Science and Practice. 2005;12:365–383.
  20. Pinch T, Bijker WE. The social construction of facts and artifacts: Or how the sociology of science and the sociology of technology might benefit each other. In: Bijker WE, Hughes TP, Pinch TJ, editors. The social construction of technological systems. Cambridge, MA: MIT Press; 1987. pp. 17–50.
  21. Poelmans S, Wessa P, Milis K, Bloemen E, Doom EC. Usability and acceptance of e-learning in statistics education. Proceedings of the International Conference of Education, Research and Innovation; International Association of Technology, Education and Development; 2008. Retrieved from www.iated.org.
  22. Rogers E. Diffusion of innovations. New York, NY: The Free Press; 1983.
  23. Santacroce SJ, Maccarelli LM, Grey M. Intervention fidelity. Nursing Research. 2004;53:63–66. doi: 10.1097/00006199-200401000-00010.
  24. Song MK, Happ MB, Sandelowski M. Development of a tool to assess fidelity to a psycho-educational intervention. Journal of Advanced Nursing. 2010;66:673–682. doi: 10.1111/j.1365-2648.2009.05216.x.
  25. Stein KF, Sargent JT, Rafaels N. Intervention research: Establishing fidelity of the independent variable in nursing clinical trials. Nursing Research. 2007;56:54–62. doi: 10.1097/00006199-200701000-00007.
  26. Waltz J, Addis ME, Koerner K, Jacobson NS. Testing the integrity of a psychotherapy protocol: Assessment of adherence and competence. Journal of Consulting and Clinical Psychology. 1993;61:620–630. doi: 10.1037//0022-006x.61.4.620.
  27. Winner L. Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology, & Human Values. 1993;18:362–378.
