Translational Behavioral Medicine. 2018 Nov 29;10(1):213–222. doi: 10.1093/tbm/iby099

Examining the external validity of the CRUZA study, a randomized trial to promote implementation of evidence-based cancer control programs by faith-based organizations

Jennifer Dacey Allen, Rachel C Shelton, Lindsay Kephart, Laura S Tom, Bryan Leyva, Hosffman Ospino, Adolfo G Cuevas
PMCID: PMC7529011  PMID: 30496532

Parishes that agreed to participate in an intervention trial to promote implementation of cancer control programs had organizational characteristics similar to those of parishes that declined participation.

Keywords: Latinos, Faith-based organizations, Cancer screening, Evidence-based interventions, External generalizability, Implementation science

Abstract

The CRUZA trial tested the efficacy of an organizational-level intervention to increase capacity among Catholic parishes to implement evidence-based interventions (EBIs) for cancer control. This paper examines the external generalizability of the CRUZA study findings by comparing characteristics of parishes that agreed to participate in the intervention trial versus those that declined participation. Sixty-five Roman Catholic parishes that offered Spanish-language mass in Massachusetts were invited to complete a four-part survey assessing organization-level characteristics that, based on the Consolidated Framework for Implementation Research (CFIR), may be associated with EBI implementation. Forty-nine parishes (75%) completed the survey and were invited to participate in the CRUZA trial, which randomized parishes to either a “capacity enhancement intervention” or a “standard dissemination” group. Of these 49 parishes, 31 (63%) agreed to participate in the trial, whereas 18 parishes (37%) declined participation. Parishes that participated in the CRUZA intervention trial were similar to those that did not participate with respect to “inner organizational setting” characteristics of the CFIR, including innovation and values fit, implementation climate, and organizational culture. Change commitment, a submeasure of organizational readiness that reflects the shared resolve of organizational members to implement an innovation, was significantly higher among the participating parishes (mean = 3.93, SD = 1.08) as compared to nonparticipating parishes (mean = 3.27, SD = 1.08) (Z = −2.16, p = .03). Parishes that agreed to participate in the CRUZA intervention trial were similar to those that declined participation with regard to organizational characteristics that may predict implementation of EBIs. Pragmatic tools to assess external generalizability in community-based implementation trials and to promote readiness among faith-based organizations to implement EBIs are needed to enhance the reach and impact of public health research.

Clinical Trial information: The CRUZA trial is registered with clinicaltrials.gov (identifier NCT01740219).


Implications

Practice: Practitioners may want to consider assessing change commitment among key organizational members (e.g., pastors, ministry leaders) prior to initiating efforts, so that they can target organizations with favorable characteristics associated with successful adoption and implementation of interventions.

Policy: In order to realize the full impact of evidence-based interventions in community organizations such as faith-based organizations, interventions must be feasible and acceptable to the intended audiences; policies are needed to fully fund and report on characteristics of organizations that are able to commit to and successfully implement these programs.

Research: Our work highlights the need for more detailed reporting of community-based intervention trials at the organizational level and the need to accurately measure organizational characteristics of units or entities that elect to participate in research.

INTRODUCTION

Faith-based organizations (FBOs) have increasingly partnered with public health practitioners to offer health interventions to their congregants [1]. Many faith-based interventions have demonstrated efficacy in reaching medically underserved groups [2] and positively impacting health behaviors, including interventions focused on cancer screening [3–6], physical activity [7, 8], dietary habits [7, 8], and substance use [9]. While these positive findings demonstrate a strong potential for efficacy and impact across interventions offered in FBOs, there has been a call for increased attention to the external validity of research findings [10–12]. According to standards for research [13], tightly controlled efficacy trials should be followed by effectiveness studies to determine the impact of interventions in more diverse “real-world” settings and populations. Ideally, interventions that have demonstrated a positive impact in both efficacy and effectiveness trials (thus becoming evidence-based interventions, or EBIs) should then be disseminated to a broader audience and range of settings to maximize their impact and reach [14, 15]. This latter stage of the research continuum is critical in order to realize the full impact of EBIs.

External validity refers to the extent to which the results of a scientific study (i.e., treatment/intervention outcomes) can be broadly generalized to other settings, samples, or populations [15]. Researchers have traditionally emphasized internal validity in order to make inferences regarding cause-effect or causal relationships, and reliably make claims that positive study results regarding experimental treatments/interventions reflect a true effect [16]. Although a focus on internal validity is one important consideration, many interventions based on internally valid results do not get broadly adopted by other settings, samples, or populations [17]. Understanding factors that influence external validity, such as selection bias, will help advance understanding of how to facilitate more widespread dissemination and adoption of research-tested interventions, particularly in community settings.

To better understand and improve confidence in the external validity of research findings, it is important to be transparent in the research process. Specifically, it is useful to examine how research participants are sampled, how many choose to participate, and to assess whether the characteristics of those that elect to participate in intervention trials are different from those that decline participation. With regards to individual-level interventions in FBOs, this is often achieved by comparing how research participants differ from nonparticipants (e.g., by comparison of factors such as socioeconomic status, race/ethnicity, disease comorbidity, and other individual-level characteristics that might have affected study results). However, in assessing external generalizability of organizational-level interventions in FBOs, organizational-level sampling procedures and organizational-level characteristics—not simply individual-level factors—must be considered. Unfortunately, few intervention studies in FBOs or in other community-based settings have reported detailed information about sampling procedures or organizational response/participation rates [10]. In addition, in our literature review, we could not identify any studies that examine and compare organizational-level characteristics between participating and nonparticipating FBOs associated with a community-based implementation trial.

The 3-year CRUZA trial, funded by the National Cancer Institute, tested the efficacy of an organizational-level intervention to promote uptake of EBIs for cancer control among FBOs serving Latinos (i.e., Roman Catholic parishes that offered at least one Spanish-language mass per week). We focused on Catholic parishes because, at the time of study initiation, a majority of Latinos in the USA identified themselves as Catholic [18]. In preparation for the trial, we identified potentially eligible parishes in the state of Massachusetts and recruited organizational leaders (e.g., pastors, lay ministers) to take part in the baseline organizational survey. Details regarding recruitment for the CRUZA trial and data collection strategies are provided elsewhere [19–21].

Among FBOs that were eligible (n = 65), 75% (n = 49) completed baseline surveys and 63% (n = 31) of those agreed to participate in the 3-year CRUZA trial. Briefly, the n = 31 parishes were randomized in a 2:1 ratio to either a “capacity enhancement intervention” or a “standard dissemination” group. All parishes received a Program Implementation Manual and Toolkit with materials adapted and packaged for parishes with Latino memberships. Materials for these EBIs were based on recommendations in the Guide to Community Preventive Services (the Community Guide) to promote screening for breast, cervical, and colorectal cancer. EBIs (at the individual level) included use of small media, group education, client reminders, reduction of structural barriers to screening, and one-to-one education. Capacity enhancement parishes were offered a menu of organizational capacity-building activities over a 3-month period, including technical assistance by an intervention specialist, assistance with formation of health committees or ministries, facilitation of inter-institutional partnerships, and skill-building workshops. Parishes assigned to “standard dissemination” were offered a one-time consultation from a health education specialist. We found that implementation of EBIs increased substantially among parishes randomized to both the capacity enhancement arm and the standard dissemination arm, suggesting that Catholic parishes may require only low levels of support to carry out programs [20]. In the CRUZA trial, the only statistically significant difference between the two arms was in the implementation of small media, with significantly higher use of small media in the capacity enhancement arm [20].
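For readers less familiar with unequal allocation, the sketch below shows one minimal way a 2:1 randomization of 31 organizational units could be generated in Python. It is purely illustrative; it is not the CRUZA trial's actual randomization procedure, which is described in the design paper [21], and the parish identifiers are hypothetical.

```python
import random

def allocate_2_to_1(units, seed=2012):
    """Randomly split units into two arms in an approximate 2:1 ratio.

    Illustrative sketch only; not the CRUZA randomization procedure.
    """
    rng = random.Random(seed)              # fixed seed so the split is reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    cutoff = round(len(shuffled) * 2 / 3)  # roughly two-thirds to one arm
    return {
        "capacity_enhancement": shuffled[:cutoff],
        "standard_dissemination": shuffled[cutoff:],
    }

# Hypothetical parish identifiers, 31 units as in the CRUZA trial
arms = allocate_2_to_1([f"parish_{i:02d}" for i in range(1, 32)])
print({arm: len(members) for arm, members in arms.items()})  # e.g., 21 vs. 10
```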

The purpose of this paper is to assess the external validity of CRUZA findings by comparing Catholic parishes that chose to participate in the CRUZA trial with those that chose not to participate. Specifically, we compared key organizational characteristics believed to impact readiness to adopt and implement EBIs across trial participants and trial nonparticipants, guided by the Consolidated Framework for Implementation Research (CFIR) [22].

METHOD

Conceptual model

The CRUZA study was guided by the CFIR [22], which posits that factors across multiple levels affect adoption and implementation of innovations in organizational settings. These include internal organizational characteristics and dynamics (“inner setting” factors), which were the focus of this study, as well as factors external to the organization (“outer setting” factors), processes, characteristics of the intervention, and characteristics of interventionists. CFIR posits that within the inner organizational setting, organizations that have leadership support and engagement, sufficient resources for implementation, and access to the knowledge and skills necessary for adoption/implementation of the innovation (i.e., “readiness for implementation”) are more likely to implement innovations. Moreover, organizations that have capacity for and collective receptivity to change (i.e., positive “implementation climate”), and values consistent with the innovation (i.e., conducive “organizational culture”), are more likely to adopt and/or implement innovations [22]. Thus, for successful implementation, an organization needs both infrastructure (i.e., policies, procedures, and resources) and the implementers (people with the expertise who will “champion” the program). In this analysis, we examine characteristics of the inner setting (i.e., organizational readiness, innovation and value fit, implementation climate, and organizational culture) to determine whether parishes that agreed to participate in the CRUZA trial were different from the nonparticipants.

Sample and setting

The Catholic Church in the USA advances its religious mission through well-defined structures such as parishes, dioceses, organizations, hospitals, agencies, and educational institutions functioning as individual FBOs connected to a central organization. From the variety of Catholic FBOs, the CRUZA study engaged parishes. At the time of study initiation, the research team identified 577 Catholic parishes located in Massachusetts [23]. Recruitment efforts are described elsewhere [19]. Briefly, we compiled initial lists of parishes and pastors by searching print and online archives of the four dioceses (administrative units that bring together parishes territorially adjacent to one another) in Massachusetts, reviewing parish websites for availability of Spanish-language mass, and making scripted calls to parishes to verify mailing addresses and pastor names. These lists were subsequently reviewed by diocesan leaders, who noted parish closures and consolidations, a common phenomenon during the time period of data collection [24, 25]. Through this process, we identified 70 potentially eligible parishes.

We mailed recruitment packets to each parish’s pastor. Packets included a project brochure, which described the study’s goals and procedures and provided informed consent information, as well as a letter of support from the local bishop in each diocese. We enclosed a return reply form for pastors to indicate their interest in participating in the study, the name(s) of appropriate parish representative(s) to complete relevant sections of the organizational survey, and preferred mode of contact (phone/in-person). Approximately 2 weeks following the mailing, trained bilingual survey assistants initiated recruitment calls or in-person visits to meet with pastors. After the pastor consented, parishes became participants in the study.

Data collection

Detailed data collection information is available elsewhere [19]. In short, bilingual survey assistants conducted recruitment calls/visits and interviews in 2012. Surveys were administered to pastors or their authorized representatives (e.g., administrative staff, leaders in health committees or ministries or Hispanic ministries) by phone (71%), in-person (24%), or by mail (5%) and took between 20 and 60 minutes.

Measures

Few standardized or validated measures of latent organizational characteristics associated with adoption or implementation of innovations exist [26, 27]. When feasible, we adapted items from existing organizational surveys [19]. Modifications largely involved changing terminology so that it was relevant to an FBO and to health programs and activities (e.g., “Your organization [parish] is expected to carry out this program [health programs or activities]”). Because no existing measures were available, we developed items to assess the structural characteristics of parishes (see below).

Based on our conceptual model, we assessed the following organizational characteristics:

(i) Organizational readiness: 12 items measuring organizational members’ shared resolve to implement a change (change commitment) and collective capability to do so (shared efficacy), adapted from prior research in healthcare settings (e.g., following a description of program requirements: “How confident are you that your parish can carry out program activities”) [28, 29].

(ii) Innovation/values fit: 5 items assessing the extent to which health programs fit with the organization’s overall mission and values (e.g., “Offering health-related activities and programs in parishes is relevant to the mission of the Church”), adapted from measures developed by Belkhodja et al. [30].

(iii) Implementation climate: 7 items assessing the extent to which organizational policies and practices encourage, support, and reward implementation of programs (e.g., “Your parish is expected to offer health activities and programs”), adapted from the work of Weiner and colleagues [31].

(iv) Organizational culture: 7 items, adapted from measures developed by Helfrich et al., assessing the extent to which the organization has an environment of trust, support, flexibility, participative decision-making, and innovation, and has values in place to optimize implementation and facilitate knowledge sharing (e.g., “The pastoral team rewards innovation and creativity to improve parish programming”) [32].

Respondents were asked the extent to which they agreed with statements on a 5-point Likert scale (1 = low agreement, 5 = high agreement). Items were summed for each construct and divided by the total number of items in the scale, with 1 indicating the lowest level of a given factor and 5 indicating the highest. It should be noted that respondents were asked to report their perceptions about these factors within their parish.
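To make the scoring rule above concrete (sum the items within a construct, then divide by the number of items, yielding a score from 1 to 5), here is a minimal Python sketch. The item names and responses are hypothetical and do not reproduce the CRUZA instrument.

```python
import numpy as np

# Hypothetical 1-5 Likert responses from one respondent; item names are
# illustrative only, not the actual CRUZA survey items.
responses = {
    "readiness_1": 4, "readiness_2": 5, "readiness_3": 3,
    "values_fit_1": 5, "values_fit_2": 4,
}

def scale_score(responses, prefix):
    """Average the answered items belonging to one construct (range 1-5)."""
    items = [v for k, v in responses.items() if k.startswith(prefix) and v is not None]
    return float(np.mean(items)) if items else float("nan")

print(scale_score(responses, "readiness_"))   # 4.0
print(scale_score(responses, "values_fit_"))  # 4.5
```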

We also measured parish resources (e.g., monetary collections, volunteerism), leadership and staff characteristics (e.g., number of pastoral and administrative staff, pastor’s educational level, number of Latino pastoral leaders), and existing resources to promote health (e.g., presence of health ministry, money spent on health programs, health-related events). Items to assess these characteristics were developed by the investigator team for this study.

The study was carried out under the approval of the Institutional Review Board at the Harvard School of Public Health and University of Massachusetts, Boston.

Analysis

Our analytic goal was to compare organizational characteristics from our conceptual model (latent constructs from the CFIR), other structural organizational characteristics that could influence implementation (e.g., parish size), and health program offerings within the prior year between parishes that agreed to participate in the CRUZA trial versus those that declined participation. For all variables, responses of “don’t know” or “refused” were coded as missing. Cases with missing values for the latent organizational constructs of interest were excluded from analysis.
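A minimal sketch of this recoding step is shown below, assuming a pandas data frame; the variable name and response codes are hypothetical, not the actual CRUZA data dictionary.

```python
import numpy as np
import pandas as pd

# Hypothetical survey extract; the column name and value labels are
# illustrative, not the CRUZA variable names or codes.
df = pd.DataFrame({
    "parish_id": [1, 2, 3, 4],
    "change_commitment_item1": [4, "don't know", 5, "refused"],
})

# Recode "don't know"/"refused" as missing, then restrict to cases with
# non-missing values on the construct of interest, as described above.
df["change_commitment_item1"] = df["change_commitment_item1"].replace(
    {"don't know": np.nan, "refused": np.nan}
)
analytic = df.dropna(subset=["change_commitment_item1"])
print(analytic)
```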

We first conducted descriptive analyses, including means, standard deviations (SDs), medians, and interquartile ranges for continuous variables. Categorical variables were examined with frequencies. The percentage of missing data was calculated for each variable. Preliminary analyses using Levene’s test [33] determined whether the data met the assumption of equal variance. Because the ordinal organizational characteristics were skewed toward higher values, as assessed by Q–Q plots and the Shapiro–Wilk test [34], group comparisons were made with nonparametric tests, including the Mann–Whitney U-test [35]. Parishes were stratified by participation status and compared using the Kruskal–Wallis one-way analysis of variance test. Due to the small sample, we were not able to conduct multivariate analyses. p-Values were considered statistically significant at the p ≤ .05 level.
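As a rough illustration of this analytic sequence (an equal-variance check, normality checks, and then a nonparametric two-group comparison), the following Python/SciPy sketch uses hypothetical construct scores; it is not the CRUZA analysis code or data.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 construct scores for two groups of parishes.
participants = np.array([4.2, 3.8, 5.0, 3.5, 4.6, 2.9, 4.1, 4.8])
nonparticipants = np.array([3.1, 2.8, 4.0, 3.3, 2.5])

# Equal-variance check (Levene's test) and normality checks (Shapiro-Wilk).
print(stats.levene(participants, nonparticipants))
print(stats.shapiro(participants), stats.shapiro(nonparticipants))

# Because the scores are skewed ordinal data, compare the groups with the
# Mann-Whitney U test rather than a t-test.
u_stat, p_value = stats.mannwhitneyu(participants, nonparticipants, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")

# With two groups, the Kruskal-Wallis test gives an equivalent comparison.
print(stats.kruskal(participants, nonparticipants))
```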

RESULTS

Structural characteristics

Out of a total of n = 65 eligible parishes identified, n = 49 (75%) completed baseline organizational surveys. Of these, 31 (63%) went on to participate in the CRUZA trial (CRUZA participants), whereas just over a third (n = 18; 37%) completed the baseline survey but declined trial participation (nonparticipants).

Among all parishes that participated in the baseline organizational survey (n = 49), the congregation size ranged from 60 to 7,741 (mean = 2,020, SD = 1,829) with anywhere between 1% and 100% of the congregation being Latino (mean = 41%, median = 46%). Parishes offered 1–11 (mean = 2.7, SD = 2.5) Masses in Spanish each week. The parishes had offered Spanish Mass for between 2 and 62 years (median = 18, mean = 20, SD = 14).

In terms of parish resources, weekly collections varied between $0 and $216.67 per member (mean = $11.58, SD = $36.29). The percentage of congregants who regularly volunteered in the parishes ranged from 0% to 66% (median = 4.3%, mean = 8.7%, SD = 14.6%). With regard to leadership and staff characteristics, parishes had an average of six pastoral staff (SD = 8, range = 0–44). Across both full-time and part-time pastoral staff, there were very few Latino priests overall (n = 7; 15%). Priests tended to be well educated, with 78% holding a graduate degree. The average size of parish administrative staff was two full-time (median = 2, SD = 1.6) and two part-time (median = 1, SD = 2.2) individuals. There were relatively few health programs offered in the parishes. Only 16 parishes reported having previously offered health education or health service (e.g., screening, vaccines) programs. The most common types of programs were flu vaccines (n = 4), blood drives (n = 2), and cooking/nutrition classes (n = 2). Only two parishes had offered cancer screening education: breast (n = 1) and prostate (n = 1) (data not shown).

Comparison of structural characteristics of participating versus nonparticipating parishes

There were no significant differences between participating and nonparticipating parishes in terms of structural characteristics, leadership and administrative staff, or health resources (see Table 1).

Table 1. Structural characteristics of parishes by participation status (n = 49)

| Organizational characteristics | CRUZA participants (n = 31): N | Median | Mean | SD (range) | Nonparticipants (n = 18): N | Median | Mean | SD (range) | p-Value |
|---|---|---|---|---|---|---|---|---|---|
| Resources | | | | | | | | | |
|  Congregation size | 30 | 1750 | 2299.4 | 1944.2 (60–7741) | 12 | 675 | 1321.9 | 1324 (70–4000) | .50 |
|  Percent of congregation that is Latino | 29 | 40.0% | 43.4% | 28.9% (1.3%–100%) | 8 | 47.7% | 53.5% | 33% (11.5%–100%) | .70 |
|  Years of Spanish mass offered | 30 | 15.5 | 20 | 14.6 (2–62) | 16 | 20 | 18.8 | 11.6 (3–43) | .31 |
|  Number of weekly Spanish masses | 31 | 2 | 2.7 | 2.3 (1–11) | 17 | 2 | 2.8 | 2.9 (1–10) | .29 |
| Leadership and staff | | | | | | | | | |
|  Full-time paid pastoral staff | 30 | 4 | 6.2 | 8.1 (0–44) | 13 | 2 | 4.7 | 9.3 (0–35) | .59 |
|   % Latino | 29 | 25% | 26.4% | 27.0% (0%–100%) | 11 | 0% | 21.6% | 27.3% (0%–66.7%) | .74 |
|  Part-time paid pastoral staff | 29 | 1 | 3.7 | 4.3 (0–16) | 12 | 0.5 | 2.3 | 3.0 (0–9) | .50 |
|   % Latino | 21 | 20% | 34.0% | 38.5% (0%–100%) | 6 | 52.2% | 47.4% | 41.2% (0%–100%) | .34 |
|  Full-time paid administrative staff | 31 | 1 | 1.6 | 1.6 (0–6) | 13 | 2 | 2.5 | 1.7 (0–6) | .46 |
|   % Latino | 22 | 0% | 12.1% | 19.9% (0%–50%) | 11 | 0% | 23.5% | 36.3% (0%–100%) | .24 |
|  Part-time paid administrative staff | 31 | 1 | 1.7 | 2.2 (0–10) | 13 | 2 | 2.4 | 2.4 (0–9) | .34 |
|   % Latino | 19 | 0% | 33.0% | 40.8% (0%–100%) | 10 | 0% | 35.0% | 47.4% (0%–100%) | .74 |
|  Total full-time staff | 31 | 6 | 7.6 | 8.3 (0–44) | 13 | 5 | 7.2 | 9.4 (1–37) | .85 |
|  Number of Latino priests, n (%) | 29 | | 4 (13.8%) | | 17 | | 3 (17.6%) | | .73 |
|  Percent of priests with grad degree | 30 | | 50.0% | | 16 | | 69.4% | | .25 |

Significance determined by chi-square tests.

Comparison of organizational (inner setting) characteristics from CFIR

There were no significant differences between participating and nonparticipating parishes with regard to prior health programming (see Table 2). Among both parishes that participated in the CRUZA trial and those that completed the organizational survey but declined trial participation, mean organizational readiness scores were high (means = 3.88 and 3.47; SDs = 0.93 and 1.02, respectively), with no significant differences between groups. Among the submeasures of organizational readiness, there was a significant difference between the two groups for change commitment, which was significantly higher among participating parishes (mean = 3.93, SD = 1.08) than among nonparticipating parishes (mean = 3.27, SD = 1.08) (Z = −2.16, p = .03).

Table 2. Parish health programming

| Parish health programming | All (N = 49): N | Frequency (%) | CRUZA participants (n = 31): N | Frequency (%) | Nonparticipants (n = 18): N | Frequency (%) | p-Value |
|---|---|---|---|---|---|---|---|
| Health education and service programs | 46 | 16 (34.8%) | 31 | 11 (35.5%) | 15 | 5 (33.3%) | .89 |
|  Health education programs | 46 | 8 (17.4%) | 31 | 5 (16.2%) | 15 | 3 (20%) | .75 |
|  Health service programs | 46 | 7 (15.2%) | 31 | 7 (22.5%) | 15 | 0 (0.0%) | .05* |
|  “Other” health programs | 46 | 7 (15.2%) | 31 | 5 (16.2%) | 15 | 2 (13.3%) | .80 |
| Cancer-specific programs | 46 | 2 (4.4%) | 31 | 1 (3.2%) | 15 | 1 (6.7%) | .59 |

*Significance was determined by chi-square tests.

Scores on the innovation/values fit scale were high for both participating parishes (mean = 4.50, SD = 0.70) and nonparticipating parishes (mean = 4.19, SD = 0.91), with no significant differences between groups. Implementation climate scores were the lowest of the CFIR constructs, with a mean of 3.06 (SD = 1.17) for participants and 2.52 (SD = 1.19) for nonparticipating parishes, again with no significant differences between groups. Organizational culture scores were very high among participating and nonparticipating parishes (means = 4.42 and 4.26; SDs = 0.67 and 0.85), with no statistically significant differences between groups. There were also no significant differences between the two groups in the submeasures of innovation/values fit, implementation climate, and organizational culture (Table 3).

Table 3. Mann–Whitney U-tests of differences between participating and nonparticipating Latino parishes

| Organizational characteristics (range: 1 = low, 5 = high) | CRUZA participants (N = 30): Mean | SD (range) | Mean rank | Sum of ranks | Nonparticipants (N = 15): Mean | SD (range) | Mean rank | Sum of ranks | Z | p-Value |
|---|---|---|---|---|---|---|---|---|---|---|
| Organizational readiness | 3.88 | 0.93 (1–5) | 24.65 | 739.5 | 3.47 | 1.02 (1–5) | 19.70 | 295.5 | −1.18 | .24 |
|  Change efficacy | 3.86 | 0.92 (1–5) | 24.05 | 721.5 | 3.53 | 1.03 (1–5) | 20.90 | 313.5 | −0.75 | .45 |
|  Change commitment | 3.93 | 1.08 (1–5) | 25.95 | 778.5 | 3.27 | 1.08 (1–5) | 17.10 | 256.5 | −2.16 | .03* |
| Innovation and values fit | 4.50 | 0.70 (2.2–5) | 24.37 | 731.0 | 4.19 | 0.91 (2–5) | 20.27 | 304.0 | −1.02 | .31 |
|  Respondent’s belief in fit | 4.39 | 0.91 (1–5) | 24.45 | 733.5 | 4.04 | 1.0 (2–5) | 20.10 | 301.5 | −1.10 | .27 |
|  Respondent’s perception of parish fit | 4.67 | 0.56 (3–5) | 24.17 | 725.0 | 4.40 | 0.89 (2–5) | 20.67 | 310.0 | −0.96 | .34 |
| Implementation climate | 3.06 | 1.17 (1–5) | 25.00 | 750.0 | 2.52 | 1.19 (1–4.9) | 19.00 | 285.0 | −1.44 | .15 |
|  Expectations | 3.53 | 1.68 (1–5) | 23.95 | 718.5 | 3.20 | 1.80 (1–5) | 21.10 | 316.5 | −0.71 | .48 |
|  Support | 3.16 | 1.43 (1–5) | 25.07 | 752.0 | 2.42 | 1.49 (1–5) | 18.87 | 283.0 | −1.50 | .13 |
|  Recognition | 2.45 | 1.42 (1–5) | 24.45 | 733.5 | 2.00 | 1.57 (1–5) | 20.10 | 301.5 | −1.10 | .27 |
| Organizational culture | 4.42 | 0.67 (2.7–5) | 24.20 | 726.0 | 4.26 | 0.85 (1.6–5) | 20.60 | 309.0 | −0.87 | .39 |
|  Pastoral team culture | 4.67 | 0.50 (3.7–5) | 24.47 | 734.0 | 4.29 | 0.96 (2.3–5) | 20.07 | 301.0 | −1.15 | .25 |
|  Parishioner’s culture | 4.24 | 1.12 (1–5) | 23.75 | 712.5 | 4.23 | 0.99 (1–5) | 21.50 | 322.5 | −0.55 | .59 |

*Significance is at p < .05.

DISCUSSION

In this sample of 49 Catholic parishes, we found very few differences between participating and nonparticipating parishes regarding organizational characteristics hypothesized by the CFIR to influence implementation of innovations [22]. While there were differences between the groups with regard to change commitment, a submeasure of organizational readiness, there were no detectable differences in terms of other constructs in the “inner setting” of the CFIR model or in organizational structural characteristics. These results suggest that in FBOs similar to the Catholic parishes engaged by the CRUZA study, the “shared resolve” of members to implement a change may be an important indicator of willingness to participate in a capacity-enhancement intervention trial. It is important to note that survey respondents were asked to provide their individual perceptions about their organizations; as such, our results may have been skewed by the limited ability of one or two individuals to accurately assess features of their organization. Furthermore, parishes’ participation could also have been related to factors not assessed in this study (e.g., factors or commitments not related to health matters).

Our findings are consistent with the broader literature that suggests that change commitment may be important in understanding successful implementation of EBIs for health promotion in community organizations. For example, a study among nurses in hospitals found that a high level of change commitment was positively associated with compliance with the requirements of a change (e.g., an ongoing organizational change that had an impact on the way they perform their job, like department mergers, new technology, etc.), and even uncommitted individuals were generally willing to comply with changes in their organization [36]. However, research is needed to better understand how leadership shapes change commitment. One study surveyed 343 employees from 30 organizations representing a wide variety of industry sectors in the USA (e.g., technology, building, banking, telecom), and asked a manager to identify a specific change in their work unit that had a significant impact. They found that having a strong transformational leader to navigate change was positively associated with change commitment [37].

Although our research did not suggest that organizational culture or implementation climate was strongly associated with willingness to participate in the trial, this area of research deserves further attention. Studies have shown that organizational culture can, in part, explain differences in the quality of care across organizations and is associated with greater use of quality improvement programs in hospitals and human service organizations [38, 39]. Other research suggests that organizational culture may be an important driver of an employee’s willingness to participate in improvement projects in healthcare settings [40]. This is one of the first studies to examine external validity among Latino-serving FBOs implementing EBIs, and it helps expand understanding of the inner organizational and structural factors potentially associated with willingness to adopt and implement EBIs in this understudied community setting.

While churches have long been viewed as important partners in public health efforts to reach medically underserved populations, little research has systematically examined organizational characteristics of FBOs that could facilitate the uptake and adoption of EBIs. Furthermore, few trials have tested the efficacy of organizational-level interventions to improve the capacity of FBOs to implement EBIs. A review published in 2004 of 27 interventions to promote behavior change (e.g., smoking cessation, dietary changes, physical activity) in community settings found that only 11% of studies reported information on organizational-level characteristics of participating sites, compared to 88% of studies that reported individual participant-level information [41]. Similarly, a review of behavior change interventions in healthcare settings (n = 36) published in 2002 [42] found that only 10 studies (28%) reported data that could be used to assess the representativeness of the study participants, and only four studies (11%) reported procedures for site selection and recruitment and participation rates. This review noted that none of the studies compared the sites or intervention agents that participated with those that declined.

Since these earlier systematic reviews, subsequent reviews (2008–2012) of a variety of community-based interventions suggest similar lack of attention to external validity in their reporting, as depicted in reviews of interventions targeting childhood obesity prevention [43, 44], diabetes [45], health literacy [46], and physical activity [47, 48]. In recent years, frameworks like RE-AIM have helped to place increased attention on the importance of including information about the generalizability of research findings by collecting data on the characteristics of both participants and nonparticipants at the individual and setting/organization levels [49]. Furthermore, there are a number of reporting guidelines for clinical trials, implementation studies, interventions, and observational studies that have been introduced that all acknowledge to some extent the importance of reporting on the external validity and/or generalizability of study findings at the individual and setting levels [50–54]. To help address the relative lack of standards related specifically to external validity, Green and Glasgow (2006) also propose a set of specific criteria and ratings of external validity to be added or used in addition to existing guidelines listed above [16].

This study has limitations. It was conducted with a relatively small sample size. With only 49 units of observation (FBOs) and a ratio of nearly 2:1 between trial participants and nonparticipants, the power to detect differences was limited. We also acknowledge that the measures used in this study have not been validated. A recent systematic review of 76 measures of factors associated with organizational adoption and implementation found limited information about the psychometric properties of measures used in prior studies across a range of disciplines (e.g., community psychology, social work, business, public health, and medicine) and similarly called for increased attention to the reliability and validity of measures [26]. The limitations of the measures are also reflected in the strong ceiling effects (a high proportion of respondents at the maximum score) that we observed on most of the latent organizational characteristics assessed [55]. With the exception of implementation climate, scores were markedly skewed toward higher values, and there was limited variability for some measures. Moreover, we did not assess all of the constructs in the CFIR. It is likely that factors and policies external to the organization also influence willingness to adopt EBIs. For example, the recent scandals regarding sexual abuse by clergy may well have influenced willingness to take on additional programming given existing resources and competing priorities. This suggests that more research is needed to develop and validate measures to assess factors associated with adoption/implementation among community organizations. We also acknowledge the potential for social-desirability bias on the part of survey respondents. Respondents, including pastors, were aware that the CRUZA trial had support from the very top of the organizational hierarchy of Catholic parishes in the state (i.e., the Archbishop). Even if such bias were present, however, we have no reason to believe that it would have been differential between groups. An additional limitation is that we asked individuals to assess and report on organizational characteristics, which reflects only their perceptions about the organization. We also recognize that we focused on one component of the CFIR framework (the inner organizational context); we encourage other researchers to test other constructs and levels of the CFIR framework. Despite these limitations, this study is one of a few community-based intervention trials to compare the organizational characteristics of participating and nonparticipating organizations with regard to latent constructs that may be associated with adoption of innovations. As discussed above, such information is critical for assessing the generalizability (or lack thereof) of study findings.
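As a brief aside, the kind of ceiling-effect check described above amounts to inspecting the share of responses at or near the scale maximum; the sketch below uses hypothetical scores, not the CRUZA data.

```python
import numpy as np

# Hypothetical 1-5 construct scores; not the CRUZA data.
scores = np.array([5.0, 5.0, 4.7, 5.0, 4.3, 5.0, 5.0, 4.9, 5.0, 3.8])

# Proportion of respondents at (or near) the scale maximum; a large share at
# the top signals a ceiling effect and limited variability.
at_max = np.mean(scores == 5.0)
near_max = np.mean(scores >= 4.5)
print(f"At maximum: {at_max:.0%}; at 4.5 or above: {near_max:.0%}")
```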

With this study, we hope to begin to address the gap in knowledge about organizational willingness to adopt and/or implement innovations. FBOs are a particularly important setting for the delivery of interventions to reach underserved populations, and many faith-based leaders feel that health programs are highly consistent with the mission of churches. According to Chaves (2004), nearly one-third of FBOs in the USA were involved in the “physical healing” of their congregants (e.g., anointing the sick with oil, praying for those with illness) [56], but findings from the 2009 National Congregation Study (NCS) suggest that only 10% of congregations actually offered formal health-related programming [57]. Unfortunately, the NCS data were not analyzed by denomination, so we are not able to directly compare our findings with that national sample. However, our finding of high scores for “innovation/values fit” suggests that these Catholic parishes perceive health programming as part of their mission [58]. While FBOs as a whole may have a predilection for health programming, the low rate of actual implementation of health programs suggests that there may be barriers preventing churches from enacting programs. Our prior qualitative research identified a number of obstacles and resource gaps preventing parishes from implementing cancer control EBIs, ranging from inadequate time and knowledge to lack of financial resources and volunteer personnel [58]. These gaps may indicate a need for interventions to enhance the capacity and resources of FBOs to adopt and implement EBIs.

There are important implications for research. First, consistent with other researchers (e.g., Green and Glasgow, 2006), our work highlights the need for more detailed reporting of community-based intervention trials at the organizational level [12, 16]. There has been a lack of attention, transparency, and accountability among researchers in reporting information at the setting level. To make progress in this area, it is important that implementation science parameters pertaining to external validity be included and expected in publications, including the sampling frame at the organizational level as well as organizational characteristics that may influence the willingness and capacity of organizations to implement innovations. Furthermore, consistent with other community-based trials, the ability to detect change over time among FBOs is a challenge, given that the organization is the unit of analysis and samples are often small compared with individual-level studies. It is possible that studies focused on external validity will need to combine findings across studies and settings in order to achieve sample sizes large enough to provide meaningful information about external validity. Finally, researchers may need to balance intervention arms on these organizational characteristics, either through selection or stratification.

The study also has implications for practice. For example, practitioners charged with implementing EBIs in community settings (including FBOs) may want to consider assessing change commitment among key organizational members (e.g., pastors, ministry leaders) prior to initiating efforts. Practitioners could then target organizations with favorable characteristics associated with greater success in adopting and implementing interventions. Alternatively, interventions could be “tailored” to key organizational characteristics; for example, organizations with low levels of change commitment could receive additional support, resources, and capacity building as needed. The latter approach is consistent with an “assets”-based approach that acknowledges that the capacity and resources for implementation of EBIs vary widely across organizations and community settings [58, 59].

Acknowledgements:

This work was supported in part by the National Cancer Institute (U54CA156732, UMASS Boston/Dana-Farber Harvard Cancer Center Comprehensive Cancer Partnership Program), the National Institute on Minority Health and Health Disparities (R21MD005976), and through a cooperative agreement by the Centers for Disease Control and Prevention with the National Cancer Institute (U48DP001946, Massachusetts Cancer Prevention and Control Research Network). The authors gratefully acknowledge the time and insights provided by study participants and our community advisory committee. We also acknowledge the many contributions made by Milagros Abreu, Amanda Bartholomew, Lois Biener, Deb Bowen, Melissa Colon, Daisy Diaz, Karen Emmons, Ericka Gonzalez, Elizabeth Gonzalez Suarez, Elizabeth Harden, Christina Hernandez, Lina Jandorf, Alan Juarez, Ruth Lederman, Laura Linnan, Carol Lowenstein, Hannah Mills, Rosalyn Negron, Aida Palencia, Beninson Peña, John Perez, Luciano Ramos, Lori-Anne Ramsey, Bianka Recinos, Maria Sesma, Sarfaraz Shaikh, Rachel Tsavalakoglou, Emeli Valverde, Bryan Weiner and from the UMASS Boston Center for Survey Research—Scott McInerny, Lee Hargraves, Philip Brenner, George Markos.

Compliance with Ethical Standards

Conflict of Interest: None of the authors have reported conflict of interest.

Authors’ contributions: JDA conceived and designed the study and contributed to writing the manuscript. RS contributed to writing the manuscript and approved final version. LK oversaw and conducted data analysis and contributed to writing the manuscript and approved the final version. BL contributed to writing the manuscript and approved the final version. LST oversaw data collection and contributed to writing the manuscript; she also approved the final version. AC contributed to writing the manuscript and approved the final version. All authors contributed to the writing of this manuscript.

Ethical approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institution and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals.

Informed consent: Informed consent was obtained from all individual participants included in the study.

References

1. Levin J. Partnerships between the faith-based and medical sectors: implications for preventive medicine and public health. Prev Med Rep. 2016;4(C):344–350.
2. Hou SI, Cao X. A systematic review of promising strategies of faith-based cancer education and lifestyle interventions among racial/ethnic minority groups. J Cancer Educ. 2017.
3. Allen JD, Pérez JE, Tom L, Leyva B, Diaz D, Idalí Torres M. A pilot test of a church-based intervention to promote multiple cancer-screening behaviors among Latinas. J Cancer Educ. 2014;29(1):136–143.
4. Studts CR, Tarasenko YN, Schoenberg NE, Shelton BJ, Hatcher-Keller J, Dignan MB. A community-based randomized trial of a faith-placed intervention to reduce cervical cancer burden in Appalachia. Prev Med. 2012;54(6):408–414.
5. Holt CL, Wynn TA, Litaker MS, Southward P, Jeames S, Schulz E. A comparison of a spiritually based and non-spiritually based educational intervention for informed decision making for prostate cancer screening among church-attending African-American men. Urol Nurs. 2009;29(4):249–258.
6. Campbell MK, Hudson MA, Resnicow K, Blakeney N, Paxton A, Baskin M. Church-based health promotion interventions: evidence and lessons learned. Annu Rev Public Health. 2007;28:213–234.
7. Lancaster KJ, Carter-Edwards L, Grilo S, Shen C, Schoenthaler AM. Obesity interventions in African American faith-based organizations: a systematic review. Obes Rev. 2014;15(suppl 4):159–176.
8. Kinney K, Serrano E, Hosig K, et al. Faith-based nutrition and physical activity interventions: a systematic review of the literature with future recommendations. J Nutr Educ Behav. 2017;49(7):S36.
9. Francis SA, Liverpool J. A review of faith-based HIV prevention programs. J Relig Health. 2009;48(1):6–15.
10. Leviton LC. Generalizing about public health interventions: a mixed-methods approach to external validity. Annu Rev Public Health. 2017;38:371–391.
11. Steckler A, McLeroy KR. The importance of external validity. Am J Public Health. 2008;98(1):9–10.
12. Glasgow RE, Green LW, Klesges LM, et al. External validity: we need to do more. Ann Behav Med. 2006;31(2):105–108.
13. Flay BR, Biglan A, Boruch RF, et al. Standards of evidence: criteria for efficacy, effectiveness and dissemination. Prev Sci. 2005;6(3):151–175.
14. Bowen DJ, Sorensen G, Weiner BJ, Campbell M, Emmons K, Melvin C. Dissemination research in cancer control: where are we and where should we go? Cancer Causes Control. 2009;20(4):473–485.
15. Gottfredson DC, Cook TD, Gardner FE, et al. Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: next generation. Prev Sci. 2015;16(7):893–926.
16. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29(1):126–153.
17. Brownson RC, Colditz GA, Proctor EK. Dissemination and implementation research in health: translating science to practice. New York, NY: Oxford University Press; 2012.
18. Funk C, Martínez J. The shifting religious identity of Latinos in the United States: nearly one-in-four Latinos are former Catholics. Pew Research Center’s Religion & Public Life Project website; 2014. http://www.pewforum.org/files/2014/05/Latinos-and-Religion-05-06-full-report-final.pdf.
19. Allen JD, Tom LS, Leyva B, et al. Recruiting and surveying Catholic parishes for cancer control initiatives: lessons learned from the CRUZA implementation study. Health Promot Pract. 2015;16(5):667–676.
20. Allen JD, Torres MI, Tom LS, Leyva B, Galeas AV, Ospino H. Dissemination of evidence-based cancer control interventions among Catholic faith-based organizations: results from the CRUZA randomized trial. Implement Sci. 2016;11(1):74.
21. Allen JD, Torres MI, Tom LS, et al. Enhancing organizational capacity to provide cancer control programs among Latino churches: design and baseline findings of the CRUZA Study. BMC Health Serv Res. 2015;15(1):147.
22. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
23. Kenedy P. The official Catholic directory. New York: PJ Kenedy & Sons; 2012.
24. Rzeznik T. No closure: Catholic practice and Boston’s parish shutdowns. J Am Hist. 2012;98(4):1224–1225.
25. Zech CE, Miller RJ. Listening to the people of God: closing, rebuilding, and revitalizing parishes. Mahwah, NJ: Paulist Press; 2008.
26. Allen JD, Towne SD Jr, Maxwell AE, et al. Measures of organizational characteristics associated with adoption and/or implementation of innovations: a systematic review. BMC Health Serv Res. 2017;17(1):591.
27. Clinton-McHarg T, Yoong SL, Tzelepis F, et al. Psychometric properties of implementation measures for public health and community settings and mapping of constructs against the Consolidated Framework for Implementation Research: a systematic review. Implement Sci. 2016;11(1):148.
28. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):67.
29. Weiner BJ, Amick H, Lee SY. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008;65(4):379–436.
30. Belkhodja O, Amara N, Landry R, Ouimet M. The extent and organizational determinants of research utilization in Canadian health services organizations. Sci Commun. 2007;28(3):377–417.
31. Weiner BJ, Belden CM, Bergmire DM, Johnston M. The meaning and measurement of implementation climate. Implement Sci. 2011;6(1):78.
32. Helfrich CD, Li YF, Mohr DC, Meterko M, Sales AE. Assessing an organizational culture instrument based on the Competing Values Framework: exploratory and confirmatory factor analyses. Implement Sci. 2007;2(1):13.
33. Brown MB, Forsythe AB. Robust tests for the equality of variances. J Am Stat Assoc. 1974;69(346):364–367.
34. Ghasemi A, Zahediasl S. Normality tests for statistical analysis: a guide for non-statisticians. Int J Endocrinol Metab. 2012;10(2):486–489.
35. Nachar N. The Mann-Whitney U: a test for assessing whether two independent samples come from the same distribution. Tutor Quant Methods Psychol. 2008;4(1):3–20.
36. Herscovitch L, Meyer JP. Commitment to organizational change: extension of a three-component model. J Appl Psychol. 2002;87(3):474–487.
37. Herold DM, Fedor DB, Caldwell S, Liu Y. The effects of transformational and change leadership on employees’ commitment to a change: a multilevel study. J Appl Psychol. 2008;93(2):346–357.
38. Hemmelgarn AL, Glisson C, James LR. Organizational culture and climate: implications for services and interventions research. Clin Psychol Sci Pract. 2006;13(1):73–89.
39. Tyagi RK, Cook L, Olson J, Belohlav J. Healthcare technologies, quality improvement programs and hospital organizational culture in Canadian hospitals. BMC Health Serv Res. 2013;13(1):413.
40. Lam M, Robertson D. Organizational culture, tenure, and willingness to participate in continuous improvement projects in healthcare. Qual Manag J. 2012;19(3):7–15.
41. Dzewaltowski DA, Estabrooks PA, Klesges LM, Bull S, Glasgow RE. Behavior change intervention research in community settings: how generalizable are the results? Health Promot Int. 2004;19(2):235–245.
42. Glasgow RE, Bull SS, Gillette C, Klesges LM, Dzewaltowski DA. Behavior change intervention research in healthcare settings: a review of recent reports with emphasis on external validity. Am J Prev Med. 2002;23(1):62–69.
43. Klesges LM, Dzewaltowski DA, Glasgow RE. Review of external validity reporting in childhood obesity prevention research. Am J Prev Med. 2008;34(3):216–223.
44. Klesges LM, Williams NA, Davis KS, Buscemi J, Kitzmann KM. External validity reporting in behavioral treatment of childhood obesity: a systematic review. Am J Prev Med. 2012;42(2):185–192.
45. Laws RA, St George AB, Rychetnik L, Bauman AE. Diabetes prevention research: a systematic review of external validity in lifestyle interventions. Am J Prev Med. 2012;43(2):205–214.
46. Allen K, Zoellner J, Motley M, Estabrooks PA. Understanding the internal and external validity of health literacy interventions: a systematic literature review using the RE-AIM framework. J Health Commun. 2011;16(suppl 3):55–72.
47. White SM, McAuley E, Estabrooks PA, Courneya KS. Translating physical activity interventions for breast cancer survivors into practice: an evaluation of randomized controlled trials. Ann Behav Med. 2009;37(1):10–19.
48. McMahon S, Fleury J. External validity of physical activity interventions for community-dwelling older adults with fall risk: a quantitative systematic literature review. J Adv Nurs. 2012;68(10):2140–2154.
49. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–1327.
50. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495–1499.
51. Moher D, Hopewell S, Schulz KF, et al.; Consolidated Standards of Reporting Trials Group. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol. 2010;63(8):e1–37.
52. Des Jarlais DC, Lyles C, Crepaz N; TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94(3):361–366.
53. Neta G, Glasgow RE, Carpenter CR, et al. A framework for enhancing the value of research for dissemination and implementation. Am J Public Health. 2015;105(1):49–57.
54. Pinnock H, Barwick M, Carpenter CR, et al.; StaRI Group. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356(8096):i6795.
55. Aron A, Coups E, Aron EN. Statistics for the behavioral and social sciences: Pearson new international edition: a brief course. Upper Saddle River, NJ: Pearson Higher Ed; 2013.
56. Chaves M. Congregations in America. Cambridge, MA: Harvard University Press; 2004.
57. Trinitapoli J, Ellison CG, Boardman JD. US religious congregations and the sponsorship of health-related programs. Soc Sci Med. 2009;68(12):2231–2239.
58. Leyva B, Allen JD, Ospino H, et al. Enhancing capacity among faith-based organizations to implement evidence-based cancer control programs: a community-engaged approach. Transl Behav Med. 2017;7(3):517–528.
59. Leeman J, Calancie L, Kegler MC, et al. Developing theory to guide building practitioners’ capacity to implement evidence-based interventions. Health Educ Behav. 2017;44(1):59–69.
