Author manuscript; available in PMC: 2017 Sep 1.
Published in final edited form as: Nurs Res. 2016 Sep-Oct;65(5):397–407. doi: 10.1097/NNR.0000000000000168

Capturing the Central Line Bundle Infection Prevention Interventions: Comparison of Reflective and Composite Modeling Methods

Heather M Gilmartin 1, Karen H Sousa 2, Catherine Battaglia 3
PMCID: PMC5010018  NIHMSID: NIHMS777354  PMID: 27579507

Abstract

Background

The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking.

Objectives

Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite, latent, variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs.

Methods

A secondary data analysis study was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness-Refined study. The sample was randomly split into exploration and validation subsets.

Results

The two CL bundle modeling approaches resulted in adequate fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01), and CLABSIs (reflective = −.28; composite = −.25; p =.01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM.

Discussion

There were few statistical differences, but large conceptual differences, between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, for both models resulted in a good fit to the data. Lessons learned are presented. Comparison of modeling approaches is recommended when variables are being modeled for the first time, or when the directionality of their relationships is ambiguous, to increase transparency and bring confidence to study findings.

Keywords: critical care, epidemiology, factor analysis, iatrogenic infection, statistical models


Healthcare-associated infections (HAI) are the fifth leading cause of death in acute-care hospitals in the United States (Septimus et al., 2014). Central line-associated bloodstream infections (CLABSIs) are among the most common HAI and result in prolonged hospital stays, significant morbidity, and an increase in crude mortality (Siempos, Kopterides, Tsangaris, Dimopoulou, & Armaganidis, 2009). Individual risk factors for CLABSIs have been identified, including the lack of implementation of best practices during central line (CL) insertion, heavy microbial colonization at the insertion site, femoral or internal jugular vein insertion (rather than subclavian vein), prolonged duration of catheterization, and inadequate care/maintenance of the CL after insertion (O’Grady et al., 2011). These factors are considered malleable, and prevention strategies, such as hand hygiene, use of maximal sterile barrier precautions, chlorhexidine gluconate (CHG) for skin antisepsis prior to insertion, and avoidance of the femoral site, have been grouped into a bundle of interventions to target these risk factors (Chopra, Krein, Olmstead, Safdar, & Saint, 2013; Pronovost et al., 2006).

Use of the CL bundle has been associated with a progressive decrease in the prevalence of CLABSIs within hospitals in the U.S. and internationally (Kim, Holtom, & Vigen, 2011; Pronovost et al., 2010; Pronovost, Watson, Goeschel, Hyzy, & Berenholtz, 2015; Render et al., 2011; Sawyer et al., 2010). Despite the broad acceptance and widespread adoption of the CL bundle interventions, recent reports suggest that many programs experience initial success yet are challenged to sustain the improvements (Cardo et al., 2010; Septimus et al., 2014). This has been attributed to the lack of consideration of the role of organizational context on the adoption, adherence to, and long-term sustainability of these programs (Shekelle et al., 2010; Shekelle et al., 2013).

In order to study the influence of organizational context on adherence to the CL bundle interventions and CLABSIs, a method to model the CL bundle interventions within a health systems framework was needed. The creation of the CL bundle measurement model was initially approached using a reflective, latent variable modeling method. Upon consultation with psychometricians, this approach was questioned, and a composite, formative modeling approach was offered as a reasonable alternative. Due to the ongoing debate about the viability of formative versus reflective measurement (Edwards, 2011; Hardin & Marcoulides, 2011), and known challenges in building, testing, and interpreting composite models (Kline, 2011), we determined that an empirical investigation of the disparate approaches using a real-world example would be informative for nurse researchers. In an effort to extend previous methodological work (Coltman, Devinney, Midgley, & Venaik, 2008; Diamantopoulos & Siguaw, 2006; Hagger-Johnson, Batty, Deary, & von Stumm, 2011), we undertook a comparison of the two approaches using the CL bundle interventions as a practical example of the implications of adopting a reflective versus a composite measurement perspective when testing systems-level concepts within a structural equation modeling (SEM) framework.

Conceptual Framework

This study was guided by the Quality Health Outcomes Model (QHOM), a complex, nonlinear conceptual model that addresses the interrelationships among organizational systems, interventions, the client, and outcomes (Mitchell, Ferketich, & Jennings, 1998). The QHOM is an adaptation of the structure-process-outcome model of Donabedian (1980). It is a unique systems model because it challenges the traditional view that interventions directly produce expected outcomes, as adjusted for client characteristics. The QHOM suggests that interventions impact, and are impacted by, both organizational context and client characteristics in producing desired outcomes (Mitchell et al., 1998). The QHOM is intended to be more closely aligned with the dynamic processes of patient care and outcomes than traditional linear models (Mitchell et al., 1998).

Latent Variable Modeling Methods

The development of measurement models for the concepts of interest is the initial step when testing relationships within a SEM approach (Kline, 2011). One of the most important issues when building a measurement model is the distinction between observed variables and latent variables (Raykov & Marcoulides, 2006). Observed variables are the variables that are actually measured or recorded on a sample of subjects. In contrast, latent variables are hypothetically existing constructs of interest, such as intelligence, organizational culture, or depression (Raykov & Marcoulides, 2006). The main characteristic of latent variables is that they cannot be directly measured because they are not directly observable. Observed variables are represented in models as squares or rectangles, while latent variables are represented as circles or ellipses (Raykov & Marcoulides, 2006). Once the measurement models for the latent variables have been identified, the relationships between the variables can be tested using SEM. Graphically, the relationships are depicted as either straight lines with a single-headed arrow that points from the latent variable to the observed variables (Figure 1, Panel A) or correlations with double-headed arrows (Raykov & Marcoulides, 2006).

FIGURE 1. Composite (Panel A) and reflective (Panel B) indicator models for the central line bundle intervention. CHG = 2% chlorhexidine gluconate skin prep; CL = central line; HH = hand hygiene prior to line insertion; MAXB = maximal sterile barrier precautions; OCS = optimal catheter site selected.

Reflective latent variables are the classic representation traditionally used in SEM, and this approach is common in social and organizational research. Reflective latent variables are consistent with factor analysis and item reduction, and are rooted in classical test theory and psychometrics (Diamantopoulos, Riefler, & Roth, 2008). The assumption of reflective models is that the latent variable influences the observed variables and that the observed variables are highly correlated. For example, the scores on an organizational culture survey are influenced by the culture within the organization.
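In equation form, the reflective specification can be summarized as follows; this is a generic sketch of the classical factor-analytic model rather than the exact parameterization used later in this study.

```latex
% Reflective (effect-indicator) model: each observed item x_i
% is determined by the latent variable \eta plus unique error.
x_i = \lambda_i \eta + \varepsilon_i , \quad i = 1, \dots, p,
\qquad \operatorname{Cov}(\eta, \varepsilon_i) = 0 .
```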

In contrast, formative modeling methods assume that the observed variables influence the construct, which summarizes the common variation in the collection of measured items (Edwards, 2011). For this reason, latent variable terminology and methods are not appropriate for formative models, because the unobserved variable is not latent but a composite of the observed variables. A composite modeling approach is appropriate when the observed variables form an exact linear combination of items that are not unified conceptually, and when the observed variables determine the composite construct. The observed variables are not necessarily highly correlated. Composite relationships are graphically represented by arrows that point from the observed variables to the construct (Figure 1, Panel B). For example, job satisfaction may be conceptualized as a composite variable when it is seen as comprising a variety of distinct facets, including satisfaction with one’s work, pay, coworkers, supervisor, and promotion opportunities (MacKenzie, Podsakoff, & Jarvis, 2005). Conceptually, these distinct facets of satisfaction, together, determine a person’s overall level of satisfaction with their job.
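The composite specification reverses the direction of influence; it can be sketched generically as a weighted combination of the observed items, with the disturbance fixed at zero in the pure composite case used later in this paper.

```latex
% Composite (formative) model: the construct \eta is a weighted
% combination of the observed items; \zeta is fixed at 0 for a pure composite.
\eta = \sum_{i=1}^{p} w_i x_i + \zeta , \qquad \zeta \equiv 0 .
```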

Reflective and composite modeling approaches are conceptually, substantively, and psychometrically different (Bollen & Lennox, 1991). Table 1 outlines the statistical assumptions of the reflective and composite approaches that guided the building, testing, and interpretation of the measurement models presented in this paper.

TABLE 1.

Assumptions: Reflective and Composite Measurement Models

Assumption                        Reflective   Composite
Directionality                    LV → items   Items → CV
Items are interchangeable         Yes          No
Positive interitem correlation    Yes          Not necessary
Measurement error                 Yes          No
FA guides item selection          Yes          No
Validity/reliability^a            Yes          No
SEM for model identification^b    No           Yes

Note. Assumptions are summarized from Diamantopoulos et al. (2008) and Kline (2011). CV = composite variable; FA = factor analysis; LV = latent variable; SEM = structural equation model.

^a Testing as an individual measurement model.

^b Inclusion in structural model (≥2 paths) necessary for model identification.

Criteria for Modeling Reflective and Composite Measures

The selection of modeling approaches is challenging because there are no definitive statistical tests that dictate the best modeling approach (Bollen & Bauldry, 2011). Selection is guided by the theoretical and applied understanding of the concept. More recently, a brief guide has been developed to instruct researchers on how to model reflective and composite variables (Grace & Bollen, 2008). Initially, the researcher is guided to consider the directionality of the relationships between the observed variables and the construct of interest. Using the CL bundle interventions as an example, we would ask whether the bundling of interventions through the use of a checklist influences the individual interventions, or whether each item is practiced independently and the bundling is a convenient method to encourage adherence and allow for calculation of a composite variable. In our example, we believe that the individual items are practiced independently, and that the rates of adherence to each intervention can be used to calculate a composite variable, supporting a composite modeling approach.

The second consideration pertains to the interchangeability of the observed variables (Grace & Bollen, 2008). If the measured items are interchangeable, such as with scale questions that measure attitudes or personality, then the indicators can be viewed as redundant—supporting a reflective modeling approach. In the case of the CL bundle items, the individual interventions are not redundant, for each item targets unique risk factors associated with CLABSIs. For instance, the cleaning of the skin with CHG aims to reduce microbial colonization at the insertion site, while avoiding the femoral or internal jugular vein aims to select the optimal site for catheter placement (Chopra et al., 2013).

The third consideration is the expectation of the observed variables to covary. Items would be expected to covary if they are controlled by an underlying factor (Grace & Bollen, 2008). This serves as an additional method of evaluating the directionality of the relationships. Since reflective indicators are under common control by their latent factor, when the latent factor varies, the indicators all vary (Grace & Bollen, 2008). In the case of a composite model, no predictions are made about the correlations among measures.

To use the CL bundle example again, previous research suggests that the interventions are related and, when consistently implemented, can decrease CLABSI incidence (Furuya et al., 2011). What is unclear is whether the relationships are linear, in that the output is directly proportional to the input, or synergistic, in that the total effect is greater than the sum of the individual effects. Collectively, these considerations can guide researchers toward the most appropriate approach to building a measurement model. This does not guarantee that the approach is correct, but it does provide a logical method to develop, justify, and interpret relationships between observed variables and unobserved constructs (Grace & Bollen, 2008).

Reflective and composite approaches are central to a discussion of modeling complex variables using traditional SEM strategies. That being said, other modeling approaches, such as general linear modeling, are available to the researcher and can serve as an alternative when the data are limited (e.g., a small sample size) or when theoretical support for a SEM strategy is lacking.

Latent Variable Modeling versus General Linear Modeling

Latent variable modeling methods are an important statistical approach for nurse researchers to have in their toolbox, for the methods allow multiple predictors to be tested simultaneously within a SEM framework. In addition, reflective measurement models account for measurement error and can test for both direct and indirect relationships between variables (Raykov & Marcoulides, 2006). These methods are able to capture the real-world nature of clinical practice. General linear modeling methods, such as multiple regression or analysis of variance, permit hypothesis testing of a single observed item that can represent the mean of a single item or a collection of items for independent and dependent variables (Tabachnick & Fidell, 2007). These methods, though informative, limit the ability of nurse researchers to examine interacting, synergistic, or complementary systems and ignore potential measurement error (Raykov & Marcoulides, 2006).

Specific advantages of latent variable modeling methods over general linear modeling methods pertinent to our study include the ability to examine both observed and unobserved variables, and the ability to assess the individual contributions and statistical significance of each observed variable on the reflective or composite variable and other variables within the larger structural model (Grace & Bollen, 2008; Tabachnick & Fidell, 2007). If we had selected a general linear modeling approach for our study, we would have had to break down each proposed relationship into multiple specific models, limiting the ability to simultaneously test for interacting relationships between the concepts of interest (Grace & Bollen, 2008). Though there are limitations and challenges in the modeling methods presented in this paper, we aim to compare the two approaches to provide a greater understanding of the methods using a practical example that is pertinent to nursing research.

Methods

Design

This study was a secondary data analysis using latent variable modeling methods within a SEM framework. SEM was chosen as the analytic method due to its ability to estimate and test complex latent variable relationships, as well as the ability to simultaneously study both direct and indirect effects of variables involved in the conceptual model (Raykov & Marcoulides, 2006). Variable selection and the directionality of the tested relationships were guided by a systems theory that describes and explains the phenomena under investigation. Adherence to hand hygiene, maximal sterile barrier precautions, CHG skin prep, and optimal site selection were selected as measures for the CL bundle variable because they all occur at the same time, prior to CL insertion. Survey items from two psychometrically valid instruments that measure unit climate and the work environment were selected as measures of organizational context. CLABSI rates reported to the National Healthcare Safety Network (NHSN) were selected as the outcome variable. Data on client characteristics were not available. Due to this, only partial testing of the relationships proposed in the QHOM was possible.

Population/Sample

The data for this analysis were drawn from an existing dataset collected in the conduct of the Prevention of Nosocomial Infection and Cost-Effectiveness-Refined (PNICER) study (National Institutes of Health, RO1NR010107: P. Stone, Principal Investigator). The PNICER study was a three-year, mixed-method study that surveyed eligible NHSN hospitals on the state of infection prevention in their organizations (Stone et al., 2014). An electronic survey was sent to participating hospital infection prevention departments with a request for a single respondent to provide data on the observed adherence to the CL bundle interventions for the largest intensive care unit (ICU) in each hospital—along with their perception of the organizational context—through instruments that assess the unit climate and work environment. CLABSI rate data for the matched ICUs in the PNICER sample were drawn from the NHSN database.

For this project, data from 614 hospitals were selected to develop and test the measurement models. Due to the large sample size, the dataset was randomly split into development and validation subsets. The PNICER study was approved by the institutional review boards of Columbia University Medical Center, New York University Medical Center, and the RAND Corporation. The Colorado Multiple Institutional Review Board approved this secondary data analysis.
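A split of this kind takes only a few lines of code; the sketch below assumes the hospital-level records are in a pandas DataFrame, and the function name, variable names, and seed are illustrative rather than drawn from the PNICER codebook.

```python
import pandas as pd

def split_sample(df: pd.DataFrame, seed: int = 42):
    """Randomly split a hospital-level DataFrame into two equal halves."""
    shuffled = df.sample(frac=1, random_state=seed).reset_index(drop=True)
    half = len(shuffled) // 2
    development = shuffled.iloc[:half]   # used for exploratory analyses
    validation = shuffled.iloc[half:]    # used for confirmatory analyses
    return development, validation

# Example: development, validation = split_sample(hospitals)  # hospitals: 614 rows
```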

Power Analysis

An a priori power calculation was run to determine the sample size needed to detect differences between models using a power algorithm developed by Preacher and Coffman (2006). The power parameters were set at 0.8, with an alpha of .05 (power = .95) (MacCallum, Browne, & Cai, 2006), a null hypothesis root mean square error of approximation (RMSEA) of 0.05, and an alternative hypothesis RMSEA of 0.45. The minimum required sample size for this analysis was calculated at 312. The final sample of 614 hospitals was deemed adequate to detect statistically significant differences and allowed for the dataset to be split into two samples of equal size (n1 = n2 = 307).
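The Preacher and Coffman (2006) tool implements the MacCallum et al. strategy of comparing the noncentral chi-square distributions implied by the null and alternative RMSEA values. A minimal sketch of that logic is shown below; the degrees of freedom and RMSEA values in the example are hypothetical, since the model df is not reported here.

```python
from scipy.stats import ncx2

def rmsea_power(n: int, df: int, rmsea0: float = 0.05,
                rmsea1: float = 0.08, alpha: float = 0.05) -> float:
    """Power to reject a null RMSEA in favor of a larger alternative RMSEA,
    using noncentral chi-square distributions (MacCallum-style)."""
    ncp0 = (n - 1) * df * rmsea0 ** 2      # noncentrality under the null RMSEA
    ncp1 = (n - 1) * df * rmsea1 ** 2      # noncentrality under the alternative RMSEA
    crit = ncx2.ppf(1 - alpha, df, ncp0)   # rejection threshold under the null
    return float(1 - ncx2.cdf(crit, df, ncp1))

# Example with a hypothetical model df:
# print(rmsea_power(n=307, df=100, rmsea0=0.05, rmsea1=0.08))
```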

Variables

Adherence to CL bundle interventions

Four evidence-based interventions that are performed prior to CL insertion were selected from the PNICER dataset to assess adherence to the CL bundle interventions. The variables included monitoring for hand hygiene at line insertion, using maximal barrier precautions during line insertion, using CHG to prepare the skin prior to line insertion, and selecting the optimal catheter site. Respondents reported aggregated, monthly, unit-based compliance with these interventions on a 6-point scale. Scores ranged from 1 to 6: 1 = all of the time (95–100%); 2 = usually (75–94%); 3 = sometimes (25–74%); 4 = rarely/never (<25%); 5 = we monitor, but don't know the proportion; and 6 = no monitoring. These responses were collapsed to 1 = all of the time (95–100%) and 0 = all else, based on research reporting lower CLABSI rates only when adherence to the CL bundle was high (95–100%) (Furuya et al., 2011). Adherence to the CL bundle interventions has been examined for a relationship with CLABSI events using multivariate analyses (Furuya et al., 2011), but to our knowledge, the CL bundle interventions have not been represented in a measurement model.
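The recoding of the 6-point responses into dichotomous adherence indicators can be expressed compactly; the column names in the sketch below are hypothetical.

```python
import pandas as pd

# Hypothetical adherence columns coded 1-6 as described above.
BUNDLE_ITEMS = ["HH", "CHG", "MAXB", "OCS"]

def dichotomize_adherence(df: pd.DataFrame) -> pd.DataFrame:
    """Recode each item: 1 ('all of the time', 95-100%) -> 1; all other responses -> 0."""
    out = df.copy()
    for item in BUNDLE_ITEMS:
        out[item] = (out[item] == 1).astype(int)
    return out
```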

Organizational context

Items selected to represent organizational context were drawn from two instruments that queried the perception of the unit climate and the work environment. Eighteen items from the Leading a Culture of Quality Instrument (LCQ; Pogorzelska-Maziarz, Nembhard, Schnall, Nelson, & Stone, 2015), and 28 items from the Relational Coordination Survey (RCS; Gilmartin, Pogorzelska-Maziarz, Thompson, & Sousa, 2015) were used. A depiction of the second-order factor model for Organizational Context is available (see Figure, Supplemental Digital Content 1).

Central line-associated bloodstream infection outcomes

The outcome variable was the weighted mean of CLABSI events reported to the NHSN system for adult ICU patients who contracted a CLABSI during inpatient care. A CLABSI event was defined in accordance with NHSN guidelines as a laboratory-confirmed, catheter-related bloodstream infection, with rates expressed per 1,000 line days (Centers for Disease Control, 2016). The variable was included in the model as a single measured item.
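The NHSN metric implies a simple rate calculation, sketched below for a single ICU with illustrative numbers.

```python
def clabsi_rate(infections: int, central_line_days: int) -> float:
    """Laboratory-confirmed CLABSIs per 1,000 central line days (NHSN-style rate)."""
    return 1000 * infections / central_line_days

# Example: an ICU with 3 CLABSIs over 2,500 line days -> 1.2 per 1,000 line days
# clabsi_rate(3, 2500)
```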

Data Analysis

The level of analysis of this study was at the hospital level. Descriptive statistics were used to describe the study variables. Phi coefficients (Graziano & Raulin, 2013) were used to estimate correlations among dichotomized responses to items about adherence to the CL interventions.
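For two items coded 0/1, the phi coefficient is equivalent to the Pearson correlation of the binary codes; a minimal sketch (column names hypothetical):

```python
import numpy as np

def phi_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Phi coefficient for two 0/1 coded items (the Pearson correlation of binary codes)."""
    return float(np.corrcoef(x, y)[0, 1])

# Example: phi_coefficient(df["HH"].to_numpy(), df["CHG"].to_numpy())
```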

To build the reflective model, the factor structure was developed in one half of the sample (n = 307) and validated in the second half (n = 307) using maximum likelihood factor analysis with orthogonal rotation (Varimax) in Mplus, version 7.2. In accordance with testing a reflective model in a SEM framework, uniquenesses were estimated and a single indicator was restricted to 1.0 to define the metric of the latent variable (Muthén & Muthén, 1998–2012). Reliability for an additive scale defined by adherence to the four CL interventions, using dichotomized responses, was estimated using Cronbach’s alpha.
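Cronbach's alpha for the four dichotomized items can be computed directly from the item variances and the variance of the summed scale; a minimal sketch (column names hypothetical):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an items matrix of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: cronbach_alpha(df[["HH", "CHG", "MAXB", "OCS"]].to_numpy())
```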

The CL bundle as a composite model does not undergo factor analysis, for indicators in a composite model do not have conceptual unity (Bollen & Bauldry, 2011). The CL bundle interventions included in the reflective model were also used in the composite model, and within the larger structural model, using the validation dataset (n = 307) with Mplus, version 7.2. To specify the composite model within a SEM framework, the measured items were allowed to correlate freely, the loading of a single indicator was set to 1.0, and the residual variance of the composite variable was fixed at 0 (Muthén & Muthén, 1998–2012).

To investigate the relationships between organizational context, adherence to the CL intervention bundle, and CLABSIs, two SEMs were specified and tested: one with a reflective measurement model for adherence to the CL intervention bundle and one using a composite approach. Though not supported by the QHOM, a direct pathway was set from the CL bundle model to CLABSIs to allow for identification of the composite model (Figure 3). Because this pathway is not theoretically supported by the QHOM, but is required for identification of the model, interpretation of this pathway will be limited.
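As an illustration only, the reflective version of this structural model could be specified outside of Mplus using lavaan-style syntax, for example with the Python semopy package as sketched below. The sketch treats the dichotomous indicators as continuous and uses the package defaults, whereas the study used WLSMV in Mplus, and all variable names (HH, CHG, MAXB, OCS, ctx_score, clabsi_rate) are hypothetical.

```python
import pandas as pd
import semopy

# Reflective CL bundle latent variable embedded in the QHOM structural model.
MODEL_DESC = """
CLBundle =~ HH + CHG + MAXB + OCS
ctx_score ~ CLBundle
clabsi_rate ~ CLBundle + ctx_score
"""

def fit_reflective_qhom(data: pd.DataFrame):
    """Fit the reflective CL bundle structural model (continuous-indicator analogy)."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                              # data: columns named as in MODEL_DESC
    return model.inspect(), semopy.calc_stats(model)   # estimates and fit indices
```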

Parameter estimates for the SEMs were obtained using robust weighted least squares estimation with a diagonal weight matrix, with standard errors and a mean- and variance-adjusted chi-square test statistic that use the full weight matrix (WLSMV). The WLSMV estimator is appropriate for nonnormal, categorical data if the sample size is at least 200 (Ullman, 2007). The fit of the competing models was evaluated by assessing statistics that measure the closeness of fit between the model-estimated population covariance matrix and the sample covariance matrix (Ullman, 2007). The χ2 test statistic was initially used to assess the magnitude of the discrepancy between the sample and the fitted covariance matrices. Additional goodness-of-fit indices, such as the comparative fit index (CFI) and the RMSEA, were used to assess model fit. The Mplus code for the two structural models is available (see Figure, Supplemental Digital Content 2).
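Both approximate fit indices reported in the Results have simple closed forms; the helpers below compute them from the model and baseline chi-square statistics (a generic formulation, not the robust WLSMV implementation in Mplus).

```python
import math

def rmsea(chi2_model: float, df_model: int, n: int) -> float:
    """Root mean square error of approximation (one common sample formula)."""
    return math.sqrt(max(chi2_model - df_model, 0.0) / (df_model * (n - 1)))

def cfi(chi2_model: float, df_model: int,
        chi2_baseline: float, df_baseline: int) -> float:
    """Comparative fit index relative to the independence (baseline) model."""
    d_model = max(chi2_model - df_model, 0.0)
    d_base = max(chi2_baseline - df_baseline, d_model, 0.0)
    return 1.0 if d_base == 0.0 else 1.0 - d_model / d_base
```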

Results

As shown in Table 2, the hospitals in the samples represented a variety of settings, locations, and hospital sizes, and the majority of the ICUs were medical/surgical units. Adherence to the central line bundle interventions, using the original 6-point response options and the transformed values, is shown in Table 3 for the combined samples. Adherence all of the time (95–100%) was highest for use of 2% chlorhexidine gluconate for skin preparation prior to central line insertion.

TABLE 2.

Characteristics of Samples

Characteristic              EFA (N = 307)      CFA (N = 307)
                            M (SD)             M (SD)
Hospital patient days       56,348 (61,463)    61,356 (60,585)
ICU beds (number)           30 (37)            39 (48)
                            n (%)              n (%)
Setting
    Urban 67 (22) 96 (31)
    Suburban 106 (35) 104 (34)
    Rural 131 (43) 106 (35)
Location
    Northeast 61 (20) 65 (21)
    South 102 (33) 102 (33)
    Midwest 96 (31) 82 (27)
    West 45 (15) 54 (18)
    Other (AK, PR, HI) 3 (1) 4 (1)
Beds
    ≤200 158 (52) 142 (48)
    201–500 106 (35) 111 (37)
    501–1,000 32 (10) 41 (14)
    ≥1,001 1 (1) 4 (1)
ICU type
    Medical/surgical 238 (78) 218 (71)
    Medical 37 (12) 41 (13)
    Other 32 (10) 48 (16)

Note. The overall sample of 614 was randomly split into EFA and CFA subsamples for the analyses. AK = Alaska; CFA = confirmatory factor analysis; EFA = exploratory factor analysis; HI = Hawaii; ICU = intensive care unit; PR = Puerto Rico; SD = standard deviation.

TABLE 3.

Adherence to Central Line Bundle Interventions

Adherence                 HH (n = 592)   CHG (n = 613)   OCS (n = 587)   MAXB (n = 610)
                          n (%)          n (%)           n (%)           n (%)
Always^a                  362 (56)       446 (73)        270 (44)        381 (62)
Not always^b              230 (44)       167 (27)        317 (54)        229 (38)
    Usually^c             123 (20)       73 (12)         190 (31)        122 (20)
    Sometimes^d           14 (2)         8 (1)           28 (5)          14 (2)
    Rarely/never^e        5 (<1)         2 (<1)          5 (<1)          3 (<1)
    Monitor/don't know^f  56 (9)         48 (8)          57 (9)          55 (9)
    Do not monitor        32 (5)         36 (6)          37 (6)          35 (6)

Note. N = 614. CHG = 2% chlorhexidine gluconate used for skin preparation; HH = hand hygiene before catheter insertion; MAXB = maximal sterile barrier precaution used; OCS = optimal catheter site selected.

^a 95–100% adherence, scored as 1.
^b ≤94% adherence, scored as 0.
^c 75–94% adherence.
^d 25–74% adherence.
^e <25% adherence.
^f Proportion adherence unknown.

The organizational context measurement model underwent exploratory (EFA) and confirmatory factor analysis (CFA) within a SEM framework, using the development and validation datasets. The results supported organizational context as a second-order factor model using the subconcepts of psychological safety, climate, and leadership from the LCQ and physician, nurse, environmental services, and healthcare administrator relational coordination from the RCS, and their respective measures (Figure 2). The organizational context measurement model had an adequate fit and was in alignment with the theoretical guidance of the QHOM (χ2[980] = 1,680.75, p < .01; CFI = .94; RMSEA = .05).

FIGURE 2. Structural equation model with central line bundle interventions in a composite indicator model (Panel A), and associations with organizational context and CLABSIs. Structural equation model with central line bundle interventions in a reflective indicator model (Panel B), with organizational context and CLABSI outcomes. The WLSMV estimator in Mplus was used. CHG = 2% chlorhexidine gluconate skin prep; CL = central line; CLABSI = central line-associated bloodstream infection; HH = hand hygiene prior to line insertion; MAXB = maximal sterile barrier precautions; ns = not significant (p > .05); OCS = optimal catheter site selected.

Reflective CL Bundle Measurement Model

Interitem analysis

The statistical relationships between the four CL bundle interventions were examined (Table 4). All of the bundle intervention items had statistically significant relationships with all other bundle intervention items (range: .46–.71, p < .01).

TABLE 4.

Correlations Among Central Line Bundle Interventions

Variable 1 2 3 4
1. CHG --- .94 .93 .81
2. HH .69 --- .90 .77
3. MAXB .71 .70 --- .70
4. OCS .49 .53 .46 ---

Note. Correlations below the diagonal are phi coefficients (used for the reflective models); correlations above the diagonal are tetrachoric correlations (used for the composite models). n1 = n2 = 307. CHG = 2% chlorhexidine gluconate used for skin preparation; HH = hand hygiene; MAXB = maximum sterile barrier protection used; OCS = optimal catheter insertion site used. All correlations were significant (p < .01).

Exploratory factor analysis

A maximum likelihood factor analysis was conducted with orthogonal rotation (varimax) in the development dataset (n = 307). The Kaiser-Meyer-Olkin (KMO) measure verified the sampling adequacy for the analysis, KMO = .80 (Tabachnick & Fidell, 2007). Bartlett’s test of sphericity χ2 (6) = 396.29, p < .01 indicated that correlations between items were sufficiently large for factor analysis (Tabachnick & Fidell, 2007). An initial analysis was run to obtain eigenvalues for each component in the data. A single component had an eigenvalue over Kaiser’s criterion of 1.0. This single component explained 66.30% of the variance. The scree plot demonstrated inflexion at one component. Cronbach’s alpha for the scale was .82.
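Comparable sampling-adequacy checks and a one-factor maximum likelihood extraction can be reproduced outside Mplus; the sketch below assumes the open-source factor_analyzer package and hypothetical column names, and, because it treats the items as continuous, it will not match the categorical-data results exactly.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def explore_bundle_factor(items: pd.DataFrame) -> dict:
    """KMO, Bartlett's test, and a one-factor ML solution with varimax rotation."""
    bartlett_chi2, bartlett_p = calculate_bartlett_sphericity(items)
    _, kmo_total = calculate_kmo(items)
    fa = FactorAnalyzer(n_factors=1, method="ml", rotation="varimax")
    fa.fit(items)
    eigenvalues, _ = fa.get_eigenvalues()
    return {"kmo": kmo_total, "bartlett_chi2": bartlett_chi2,
            "bartlett_p": bartlett_p, "eigenvalues": eigenvalues,
            "loadings": fa.loadings_}

# Example: explore_bundle_factor(df[["HH", "CHG", "MAXB", "OCS"]])
```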

Confirmatory factor analysis

The measurement model for the CL bundle interventions identified in the EFA using the development dataset (n = 307) was confirmed in the validation dataset (n = 307). The CFI and RMSEA suggested a good fit for the model (χ2 (2) = 1.97, p < .37; CFI = 1.00; RMSEA = .00). Factor loadings ranged from .78 to .99. Standard errors are available from the authors. No post-hoc model modifications were made.

Reflective CL bundle model in QHOM structural model

Relationships were tested between: (a) the CL bundle using the reflective measurement model and organizational context; (b) the CL bundle using the reflective measurement model and CLABSIs; and (c) organizational context and CLABSIs. The resulting model is shown in Figure 3, Panel A. The model showed good fit (χ2[1,214] = 1,894.76, p < .01; CFI = .95; RMSEA = .04). The χ2 was significant, which can occur with larger sample sizes (Raykov & Marcoulides, 2006). The factor loadings, which indicate the strength of the relationship between the factor and its reflective indicators, were significant (.78–.97). The optimal site selection variable had the lowest factor loading (.78) and highest uniqueness (.64). The relationship between the CL bundle intervention and organizational context reached significance (coefficient = .23, p = .01); the relationship between organizational context and CLABSI outcomes was not significant (coefficient = −.06, p = .48). The relationship between the CL bundle intervention and CLABSIs reached significance (coefficient = −.28, p = .01). The findings indicated partial support of the QHOM.

Composite CL Bundle Measurement Model

Composite CL bundle model in QHOM structural model

The composite CL bundle model, as part of the QHOM structural model, resulted in all relationships among the composite indicators being significant and positive, as indicated by the double-headed arrows next to the CL bundle model (Figure 3, Panel B). Table 4 presents the tetrachoric correlations of the indicators of the CL bundle. The tetrachoric correlation coefficient is similar to the phi coefficient and is the selected method for testing the association between dichotomous variables in Mplus (Muthén & Muthén, 1998–2012). As in the reflective model, relationships were tested between the CL bundle model and organizational context, the CL bundle model and CLABSIs, and organizational context and CLABSIs. The resulting model showed a similarly good fit (χ2[1,208] = 1,899.87, p < .01; CFI = .94; RMSEA = .04), as presented in Figure 3, Panel B. The χ2 was significant, which can occur with larger sample sizes (Raykov & Marcoulides, 2006). The regression weights from the indicators to the bundled intervention variable, which represent the best linear composite of the variables, were not significant (p > .05) after allowing for correlations with the other indicators (CHG = .17, p = .69; HH = .58, p = .26; MAXB = −.21, p = .65; OCS = .55, p = .09). This is in contrast to the reflective approach, where all indicators were significantly associated with the latent variable.
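Mplus estimates tetrachoric correlations by maximum likelihood from the 2 × 2 tables of the dichotomous items. For illustration only, a rough closed-form alternative (the cosine-pi approximation) is sketched below; it is an approximation, not the estimator used in this study, and the item coding is hypothetical.

```python
import math
import numpy as np

def tetrachoric_approx(x: np.ndarray, y: np.ndarray) -> float:
    """Cosine-pi approximation to the tetrachoric correlation of two 0/1 items."""
    a = np.sum((x == 1) & (y == 1))  # both adherent
    b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1))
    d = np.sum((x == 0) & (y == 0))  # both non-adherent
    if b == 0 or c == 0:
        return 1.0                   # degenerate table: treat as perfect association
    odds_ratio = (a * d) / (b * c)
    return math.cos(math.pi / (1 + math.sqrt(odds_ratio)))
```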

Similar to the results of the reflective model, the relationship between the CL bundle interventions and organizational context reached significance (coefficient = .20, p = .01); the relationship between organizational context and CLABSI outcomes was not significant (coefficient = .04, p = .60); and the relationship between the CL bundle intervention variable and CLABSI variable reached significance (coefficient = −.25, p = .01) (Figure 3, Panel B). These findings also indicate partial support of the QHOM. In summary, the reflective and composite modeling approaches for the CL bundle items produced structural models that fit the data, and resulted in partial support of the relationships tested in the QHOM. In addition, the two models produced statistically significant relationships between the CL bundle interventions and organizational context, and the CL bundle interventions and CLABSIs.

Discussion

The purpose of this study was to test the CL bundle interventions in reflective and composite modeling approaches to compare the conceptually disparate modeling methods on a systems-level investigation of the role of organizational context on adherence to the CL bundle interventions and CLABSI outcomes. Both modeling approaches resulted in a similar fit to the data and were able to detect similar relationships within the QHOM, though the proposed theory was not fully supported. In addition, the two modeling approaches resulted in only small differences in the structural path parameters. The primary difference between the models concerned the interpretation of the directionality of the relationships within the CL bundle measurement models.

The reflective model, which was shown through EFA and CFA to be a single-factor model with good internal reliability, resulted in good model fit, and all relationships between the latent variable and the indicators were in the same direction and statistically significant. The results are mathematically interesting, but when examined using clinical guidance and expert opinion (Grace & Bollen, 2008), the results are difficult to interpret. Ideally, the reflective model would have been poorly specified. This would have provided empirical support for a composite modeling approach for the CL bundle items and would have been an excellent example of how the contrasting directional approaches of the two models can influence model fit. The lack of misspecification brings us back to the theoretical grounds upon which we began the study.

The composite model, which was challenging to create and interpret due to a lack of standardized development procedures (Bollen & Bauldry, 2011), fit the larger structural model, but none of the indicators reached statistical significance with the composite variable. Though the findings do not empirically support the CL bundle items as a composite measure, the theoretical support for this approach remains strong. Specifically, the act of bundling the interventions together is theorized to increase compliance with each individual intervention, but bundling does not ensure full compliance (Pronovost et al., 2015). Second, the interventions are not interchangeable, for each acts as a unique contributor to the bundle process. Lastly, the interventions are not expected to covary. This suggests that the CL bundle is not a latent variable, but a convenient composite of unique variables that impact an outcome.

Lessons Learned

Through this study, we have demonstrated how to build and test a reflective and composite model using a practical example pertinent to nursing research. Lessons learned from this study are many. First, the nonsignificant pathways in the composite measurement model could be explained by the high correlations between the observed measures. This can lead to the loadings becoming unstable—similar to multicollinearity in multiple regression—making it difficult to separate the distinct influence of individual observed variables (Edwards, 2011).

The literature proposes two different approaches for dealing with multicollinearity of composite measures. The first is to view observed variables that are highly correlated as near perfect linear combinations that may contain redundant information (Diamantopoulos et al., 2008). Variable elimination, based on the variance inflation factor, is a statistical method to identify a redundant observed variable. A weakness of this approach is that elimination on purely statistical grounds can alter the meaning of the construct (Diamantopoulos et al., 2008). The second approach for overcoming multicollinearity is to combine the composite indicators into an index and use the single-item construct in the analysis (Diamantopoulos et al., 2008). Though appealing, this approach limits the interpretation of the meaning of the joint index, and removes the ability to investigate the individual influences of each indicator (Diamantopoulos et al., 2008).

The second lesson is the finding of consistent parameter estimates for the structural models. The intuitive expectation when using disparate measurement modeling approaches within a SEM is that a misspecified model would result in biased parameter estimates that would lead to poor model fit, which would guide a researcher toward the most appropriate modeling approach (Diamantopoulos et al., 2008). As seen in this study, however, the goodness-of-fit indices failed to detect misspecification of the measurement models. Undetected biases of this kind could lead to incorrect conclusions, especially if a study is not conceptually and theoretically justified prior to testing the relationships empirically. For this reason, fit indices alone are not an appropriate basis for answering a research question. The selection of variables, testing of relationships, and interpretation of the fit indices must be theory driven.

A lack of misspecification when testing reflective versus composite modeling approaches has been previously reported in the literature, when the influence of childhood socioeconomic status was tested within a prospective life course model of health (Hagger-Johnson et al., 2011). The authors found little difference in the model fit statistics or the predicted relationships when testing the two modeling approaches. Similar to our findings, they also reported that pathways that reached significance in the reflective model did not all reach significance in the composite model. The predictive ability and ranking of the indicators within the composite model did permit comparison of their findings to previous research (Hagger-Johnson et al., 2011). Though this was not possible in our study, future research may allow this type of analysis, permitting detection of the individual influence of each intervention in the CL bundle on patient outcomes. This could lead to an unpacking of the bundle and a recommendation to adopt only the most influential practices, versus mandating the entire bundle.

The third lesson is that the relationships within the QHOM were partially supported in this study. The finding of a relationship between the CL bundle interventions and organizational context is a new result and one that requires further study. The inverse relationship between the CL bundle interventions and CLABSIs has previously been reported in the literature (Furuya et al., 2011), but was only entered into the model due to the requirements of a composite modeling approach (Kline, 2011). Future work using the QHOM will require client characteristic data to operationalize the full model and to provide a greater understanding of our findings within a systems framework.

Although the CL bundle interventions were theorized to influence CLABSI outcomes through the concept of organizational context, this has not been empirically studied. Failure to support the theory may be related to multiple factors, including the challenge of using an outcome that is difficult to detect (Rich, Reese, Bol, Gilmartin, & Janosz, 2013) and that requires well-trained personnel to perform surveillance, validation, and reporting (Neidner, 2010); CLABSI reporting has also been suggested to carry a level of surveillance bias, due to lower reimbursement and poor public perception when CLABSI rates are high (Furuya et al., 2011; Haut & Pronovost, 2011; Talbot et al., 2013). In addition, the assessment of the concept of organizational context through self-report surveys completed by one respondent per hospital may have limited the detectability of significant contextual effects on outcomes (Podsakoff & Organ, 1986).

The final lesson is that there is much to be learned from a study that results in nonsignificant findings. Though frustrating for the research team, it encourages reflection—which can move the field forward. One concept that requires greater discussion is the role of bundling of process measures. This study provides preliminary evidence that the bundling of the individual CL infection prevention interventions may have a synergistic influence on CLABSI events. This is supported by the high intercorrelations between the composite indicators (Table 4) and subjective reports from clinical practice that have suggested that the individual CL bundle interventions are more powerful when grouped together into a bundle (Pronovost et al., 2015). This may explain why the relationships within the larger structural model were supported—even though the composite pathways were nonsignificant. The synergistic effect of the CL bundle items on the larger model was captured in the composite variable.

Limitations

The present study is the first to demonstrate the impact of modeling the CL bundle with composite and reflective indicators in a large, national study of ICUs. Other samples, particularly with different indicators for HAI prevention bundles and outcomes, may not produce equivalent results. Other limitations include missing data patterns, which may have distorted some model estimates; however, the WLSMV estimator uses all available data. Additionally, data were not available on client characteristics, which may have influenced the tested relationships. Finally, the data used in this study were not collected for the purpose of testing the relationships within the QHOM. These limitations will be factored into future studies to progress this field of research. Though the use of a single indicator, such as the average of all CL bundle items, is an alternative to creating a composite measurement model, this approach does not provide an opportunity to evaluate the specific contribution of each CL bundle intervention on organizational context and CLABSI outcomes.

Conclusions

In conclusion, we have provided a practical example of the implications of reflective and composite measurement modeling approaches using the CL bundle interventions in a SEM. Our study suggests that for constructs that have not been previously tested in a measurement model or have questionable directionality, nurse researchers should select a modeling approach using theoretical and clinical guidance, then compare the results using opposing methods to bring transparency to their methods and confidence to their study findings. We have presented our lessons learned, expanded the methodological base, and informed nurse researchers about the challenges and benefits of latent variable modeling methods. Replication of our findings using datasets that have similar, if not identical, variables, and with data collected specifically to address this issue may ultimately answer how best to operationalize bundled interventions within a latent variable modeling framework.

Supplementary Material

Supplemental Digital Content 1: Figure depicting the second-order factor model for organizational context (.doc).

Supplemental Digital Content 2: Mplus code for the two structural models (.doc).

Acknowledgments

The authors acknowledge that funding for the PNICER study was provided by the National Institute of Nursing Research (R01NR010107).

The authors also acknowledge that the contents of this manuscript do not represent the views of the Department of Veterans Affairs or the United States Government.

The authors would like to thank Dr. Patricia Stone, PhD, RN, FAAN, Centennial Professor of Health Policy, Columbia University School of Nursing, for use of the PNICER data for secondary analysis, and the infection preventionists who responded to the survey. They also thank the editor and three anonymous reviewers at Nursing Research for invaluable feedback that greatly improved this paper.

Footnotes

The authors have no conflicts of interest to report.

Contributor Information

Heather M. Gilmartin, Denver-Seattle Center of Innovation, Department of Veterans Affairs, Denver VA Medical Center, Denver, CO.

Karen H. Sousa, University of Colorado, College of Nursing.

Catherine Battaglia, Denver-Seattle Center of Innovation, Department of Veterans Affairs, Denver VA Medical Center, Denver, CO.

References

1. Bollen KA, Bauldry S. Three C's in measurement models: Causal indicators, composite indicators, and covariates. Psychological Methods. 2011;16:265–284. doi: 10.1037/a0024448.
2. Bollen KA, Lennox R. Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin. 1991;110:305–314.
3. Cardo D, Dennehy PH, Halverson P, Fishman NO, Kohn M, Murphy CL, Whitley RJ. Moving toward elimination of healthcare-associated infections: A call to action. Infection Control and Hospital Epidemiology. 2010;31:1101–1105. doi: 10.1086/656912.
4. Centers for Disease Control. Bloodstream infection event (central line-associated bloodstream infection and non-central line-associated bloodstream infection). In: Device Associated Module BSI (pp. 1–32). Atlanta, GA: Author; 2016. Retrieved from http://www.cdc.gov/nhsn/PDFs/pscManual/4PSC_CLABScurrent.pdf
5. Chopra V, Krein SL, Olmstead RN, Safdar N, Saint S. Prevention of central line-associated bloodstream infections: A brief review. In: Making health care safer II: An updated critical analysis of the evidence for patient safety practices (Chapter 10). Rockville, MD: Agency for Healthcare Research and Quality; 2013.
6. Coltman T, Devinney TM, Midgley DF, Venaik S. Formative versus reflective measurement models: Two applications of formative measurement. Journal of Business Research. 2008;61:1250–1262.
7. Diamantopoulos A, Siguaw JA. Formative versus reflective indicators in organizational measure development: A comparison and empirical illustration. British Journal of Management. 2006;17:263–282.
8. Diamantopoulos A, Riefler P, Roth KP. Advancing formative measurement models. Journal of Business Research. 2008;61:1203–1218.
9. Donabedian A. Explorations in quality assessment and monitoring: The definition of quality and approaches to its assessment. Vol. 1. Ann Arbor, MI: Health Administration Press; 1980.
10. Edwards JR. The fallacy of formative measurement. Organizational Research Methods. 2011;14:370–388.
11. Furuya EY, Dick A, Perencevich EN, Pogorzelska M, Goldmann D, Stone PW. Central line bundle implementation in US intensive care units and impact on bloodstream infections. PLOS ONE. 2011;6:e15452. doi: 10.1371/journal.pone.0015452.
12. Gilmartin HM, Pogorzelska-Maziarz M, Thompson S, Sousa KH. Confirmation of the validity of the Relational Coordination Survey as a measure of the work environment in a national sample of infection preventionists. Journal of Nursing Measurement. 2015;23:379–392. doi: 10.1891/1061-3749.23.3.379.
13. Grace JB, Bollen KA. Representing general theoretical concepts in structural equation models: The role of composite variables. Environmental and Ecological Statistics. 2008;15:191–213.
14. Graziano AM, Raulin ML. Research methods: A process of inquiry. 8th ed. New York, NY: Pearson; 2013.
15. Hagger-Johnson G, Batty GD, Deary IJ, von Stumm S. Childhood socioeconomic status and adult health: Comparing formative and reflective models in the Aberdeen Children of the 1950s Study (prospective cohort study). Journal of Epidemiology and Community Health. 2011;65:1024–1029. doi: 10.1136/jech.2010.127696.
16. Hardin A, Marcoulides GA. A commentary on the use of formative measurement. Educational and Psychological Measurement. 2011;71:753–764.
17. Haut ER, Pronovost PJ. Surveillance bias in outcomes reporting. JAMA. 2011;305:2462–2463. doi: 10.1001/jama.2011.822.
18. Kim JS, Holtom P, Vigen C. Reduction of catheter-related bloodstream infections through the use of a central venous line bundle: Epidemiologic and economic consequences. American Journal of Infection Control. 2011;39:640–646. doi: 10.1016/j.ajic.2010.11.005.
19. Kline RB. Principles and practices of structural equation modeling. 3rd ed. New York, NY: Guilford Press; 2011.
20. MacCallum RC, Browne MW, Cai L. Testing differences between nested covariance structure models: Power analysis and null hypothesis. Psychological Methods. 2006;11:19–35. doi: 10.1037/1082-989X.11.1.19.
21. MacKenzie SB, Podsakoff PM, Jarvis CB. The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. Journal of Applied Psychology. 2005;90:710–730. doi: 10.1037/0021-9010.90.4.710.
22. Mitchell PH, Ferketich S, Jennings BM. Quality health outcomes model. Image. 1998;30:43–46. doi: 10.1111/j.1547-5069.1998.tb01234.x.
23. Muthén LK, Muthén BO. Mplus user's guide. Los Angeles, CA: Muthén & Muthén; 1998–2012.
24. Neidner MF. The harder you look, the more you find: Catheter-associated bloodstream infection surveillance variability. American Journal of Infection Control. 2010;38:585–595. doi: 10.1016/j.ajic.2010.04.211.
25. O'Grady NP, Alexander M, Burns LA, Dellinger P, Garland J, Heard SO, Healthcare Infection Control Practices Advisory Committee (HICPAC). Guidelines for the prevention of intravascular catheter-related infections, 2011. Atlanta, GA: Centers for Disease Control; 2011. Retrieved from http://www.cdc.gov/hicpac/pdf/guidelines/bsi-guidelines-2011.pdf
26. Podsakoff PM, Organ DW. Self-reports in organizational research: Problems and prospects. Journal of Management. 1986;12:531–544.
27. Pogorzelska-Maziarz M, Nembhard IM, Schnall R, Nelson S, Stone PW. Psychometric evaluation of an instrument for measuring organizational climate for quality: Evidence from a national sample of infection preventionists. American Journal of Medical Quality. 2015. doi: 10.1177/1062860615587322.
28. Preacher KJ, Coffman DL. Computing power and minimum sample size for RMSEA [Computer software]. 2006. Retrieved from http://quantpsy.org
29. Pronovost PJ, Goeschel CA, Colantuoni E, Watson S, Lubomski LH, Berenholtz SM, Needham D. Sustaining reductions in catheter related bloodstream infections in Michigan intensive care units: Observational study. BMJ. 2010;340:c309. doi: 10.1136/bmj.c309.
30. Pronovost PJ, Watson SR, Goeschel CA, Hyzy RC, Berenholtz SM. Sustaining reductions in central line-associated bloodstream infections in Michigan intensive care units: A 10-year analysis. American Journal of Medical Quality. 2015. Advance online publication. doi: 10.1177/1062860614568647.
31. Pronovost P, Needham D, Berenholtz S, Sinopoli D, Chu H, Cosgrove S, Groeschel C. An intervention to decrease catheter-related bloodstream infections in the ICU. New England Journal of Medicine. 2006;355:2725–2732. doi: 10.1056/NEJMoa061115.
32. Raykov T, Marcoulides GA. A first course in structural equation modeling. 2nd ed. Mahwah, NJ: Erlbaum; 2006.
33. Render ML, Hasselbeck R, Freyberg RW, Hofer TP, Sales AE, Almenoff PL. Reduction of central line infections in Veterans Administration intensive care units: An observational cohort using a central infrastructure to support learning and improvement. BMJ Quality & Safety. 2011;20:725–732. doi: 10.1136/bmjqs.2010.048462.
34. Rich KL, Reese SM, Bol KA, Gilmartin HM, Janosz T. Assessment of the quality of publicly reported central-line associated bloodstream infection data in Colorado, 2010. American Journal of Infection Control. 2013;41:874–879. doi: 10.1016/j.ajic.2012.12.014.
35. Sawyer M, Weeks K, Goeschel C, Thompson DA, Berenholtz SM, Marsteller JA, Pronovost PJ. Using evidence, rigorous measurement, and collaboration to eliminate central catheter-associated bloodstream infections. Critical Care Medicine. 2010;38(Suppl 8):S292–S298. doi: 10.1097/CCM.0b013e3181e6a165.
36. Septimus E, Yokoe DS, Weinstein RA, Perl TM, Maragakis LL, Berenholtz SM. Maintaining the momentum of change: The role of the 2014 updates to the Compendium in Preventing Healthcare-Associated Infections. Infection Control and Hospital Epidemiology. 2014;35:460–463. doi: 10.1086/675820.
37. Shekelle PG, Pronovost PJ, Wachter RM, Taylor SL, Dy S, Foy R, Rubenstein L. Assessing the evidence for context-sensitive effectiveness and safety of patient safety practices: Developing criteria. Rockville, MD: Agency for Healthcare Research and Quality; 2010.
38. Shekelle PG, Wachter RM, Pronovost PJ, Schoelles K, McDonald KM, Dy SM, Winters BD. Making health care safer II: An updated critical analysis of the evidence for patient safety practices [Comparative Effectiveness Review No. 211]. Rockville, MD: Agency for Healthcare Research and Quality; 2013 Mar.
39. Siempos II, Kopterides P, Tsangaris I, Dimopoulou I, Armaganidis AE. Impact of catheter-related bloodstream infections on the mortality of critically ill patients: A meta-analysis. Critical Care Medicine. 2009;37:2283–2289. doi: 10.1097/CCM.0b013e3181a02a67.
40. Stone PW, Pogorzelska-Maziarz M, Herzig CTA, Weiner LM, Furuya EY, Dick A, Larson E. State of infection prevention in US hospitals enrolled in the National Health and Safety Network. American Journal of Infection Control. 2014;42:94–99. doi: 10.1016/j.ajic.2013.10.003.
41. Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Boston, MA: Pearson; 2007.
42. Talbot TR, Bratzler DW, Carrico RM, Diekema DJ, Hayden MK, Huang SS, …Fishman NO. Public reporting of health care-associated surveillance data: Recommendations from the Healthcare Infection Control Practices Advisory Committee. Annals of Internal Medicine. 2013;159:631–635. doi: 10.7326/0003-4819-159-9-201311050-00011.
43. Ullman JB. Structural equation modeling. In: Tabachnick BG, Fidell LS, editors. Using multivariate statistics. 5th ed. New York, NY: Pearson; 2007. pp. 676–780.
