Abstract
Multi-sector partnerships are central to efforts to improve population health, but they are often not as fully developed or as well positioned to advance health and equity in their communities as is commonly believed. Measuring the collaborative work these partnerships undertake is therefore important for documenting the inputs, processes, and outcomes that evolve as partners work towards their goals, which ultimately creates a greater sense of shared accountability. In this study we present the development and validation of the Assessment for Advancing Community Transformation (AACT), a new tool designed to measure readiness to advance health and health equity. Development of the AACT included initial item pool creation, external evaluation by five subject matter experts, and pilot testing (including user feedback surveys) among 103 individuals. Validation of the AACT was performed using a series of confirmatory factor analyses on an expanded dataset representing 352 individuals from 49 multi-sector collaboratives across the United States. The results indicate that the items in the AACT align to the six domains created during the scale development process and that the tool demonstrates desirable measurement characteristics for use in research, evaluation, and practice.
Keywords: confirmatory factor analysis, health, health equity, multi-sector collaboratives, readiness, validation
Introduction
Multi-sector partnerships are central to efforts to improve population health. The impetus behind them is clear: long-term collaborative work conducted well by partners from multiple health and social service sectors, industries, and agencies can achieve public health-related goals and outcomes better than work undertaken alone or in silos (Bryson et al., 2015; Lasker et al., 2001). Accordingly, policy makers and program funders have increasingly called for collaboratives, in some cases requiring their creation as a condition of funding for programs designed to address various social determinants of health (Alley et al., 2016; Daniel et al., 2018). Over the past several decades, the number of multi-sector collaboratives working to address public health issues has grown, and they will likely remain key players as larger health system reforms continue and the evidence base for broader cross-sector alignment between healthcare, public health, and social services grows (Erickson et al., 2017; Fichtenberg et al., 2020; Lanford et al., 2022).
Several key factors appear to contribute to strong multi-sector partnerships. These include a common vision for the partnership's goals and outcomes; influential champions; effective leadership, particularly the ability to deal with conflict; a membership of diverse partners committed to the cause; strong infrastructure; communication among partners and transparency in decision-making; sustainable financing; and awareness of external and contextual factors (Erickson et al., 2017; Hajjar et al., 2020; Towe et al., 2016; Woulfe et al., 2010). At the same time, multi-sector partnerships are shaped by a range of internal and external dynamics, including changes in leadership, infrastructure, funding, policy, and momentum (Erickson et al., 2017; Fawcett et al., 2010; Woulfe et al., 2010). These influences affect groups differently during different phases of development and in many cases do not evolve along a linear path (Dickson et al., 2020). For example, many early-stage partnerships have not yet established stable infrastructures or the authority to lead health improvement; these efforts can benefit from building vision and shared values among multi-sector partners. Similarly, middle-stage partnerships tend to struggle with measuring progress and can benefit from highlighting early wins, while later-stage partnerships do not gain as much from these same processes (Erickson et al., 2017).
Measuring collaborative efforts towards advancing health and health equity is important for several reasons. At a basic level, collaboratives themselves have identified measurement as an important factor influencing the success of the partnership (Erickson et al., 2017). Research has also shown that multi-sector partnerships are more effective when they develop feedback systems that measure and document the inputs, processes, and outcomes that evolve as they work towards their goals, which ultimately creates a greater sense of shared accountability (Bryson et al., 2006; Romzek et al., 2014). Assuming that collaboratives should be working synergistically (Lasker et al., 2001), measuring the extent to which partners are working together along varied dimensions may bring to light the ways in which they are working ineffectively, or may highlight partners who feel excluded from the process. When potential areas for improvement such as these are measured and identified, technical assistance providers, funders, and evaluators can tailor their approaches to help improve the ways in which the partnership functions. From an implementation science perspective, these types of "support system activities" have been identified as critical for identifying gaps and needs and for building a shared understanding and common language among multiple stakeholders (Scaccia et al., 2015).
Unfortunately, while many tools exist for evaluating collaborative efforts within the health sector alone, only a small number of measures are specific to multi-sector collaborative efforts focused on improving health and health equity (Brewster et al., 2019; Castañeda et al., 2012), and those that do exist are limited for several reasons. For example, the R = MC2 tool and the Tri-Ethnic Community Readiness Model focus on implementation or readiness of collaboratives to take on a specific issue, such as domestic violence, HIV/AIDS, or drug and alcohol use. While useful, such measurements assume that multi-sector collaboratives are not working to address multiple issues at once, or that collaboratives do not re-align their efforts as environmental, policy, or programmatic changes occur. For example, the onset of the COVID-19 pandemic required rapid changes and re-alignment in the shared purpose and governance of cross-sector health and social services (Landers et al., 2020). Instruments that take a broader approach to the outcomes that multi-sector collaboratives are trying to achieve or improve are therefore needed. Additionally, existing instruments such as the CSAP Prevention Platform and the Community Key Leader Survey are lengthy (∼50 questions), and instruments such as the Minnesota Institute of Public Health's Community Readiness Survey require random samples of community members and four to six weeks to complete. Such instruments may increase respondent burden, carry high administration costs, and dissuade multi-sector collaboratives from engaging in the important process of measuring and documenting their partnerships.
Considering the important need to assess progress towards advancing health and health equity among multi-sector partnerships, alongside the limitations of existing instruments described above, we developed a new measure: the Assessment for Advancing Community Transformation (AACT). In this paper, we report on the development and validation of the instrument. The AACT allows groups to directly collect and use data about their multi-sector collaborative by creating a shared point of reference for where partners are in their stage of development across multiple factors. It was designed to put data in the hands of communities as they reflect, prioritize, and align actions based on a shared understanding and agreement of where they are now and where they want to go. Additionally, the AACT can be used to design support from technical assistance providers, evaluators, and others based on communities' stages of development and to guide communities to appropriate and targeted technical assistance.
Methods
Development of the AACT was carried out in three broad stages following general methodological guidelines for scale development procedures (Boateng et al., 2018; Carpenter, 2018; DeVellis & Thorpe, 2021). Across all stages, the creation of the instrument was driven by six professionals from three public health institutes with deep experience and leadership in community-level health improvement processes. A seventh person coordinated across all three organizations to convene and lead the process, and an eighth individual with subject matter expertise in scale development and validation guided the technical components necessary to create the AACT.
Initial Item Development
Stage one of the scale development consisted of creating the initial pool of items for the instrument, grouping the items into relevant domains, and determining the procedures necessary for respondents to complete the instrument. Prior to constructing items, a targeted literature review was conducted for two purposes: first, to understand the key factors influencing successful collaboration among multi-sector partnerships working to advance health and health equity; and second, to understand the strengths and limitations of currently existing measures in this area of evaluation and program planning. Findings of the literature review were used to strategically create the item wording and to group items into domains informed by the evidence base.
External Review by Subject Matter Experts
Stage two of the scale development was review by subject matter experts external to the group creating the AACT. Across a three-month period, five external reviewers with practice- and research-based expertise in community change processes and assessment design provided feedback on the initial pool of AACT domains and items. Three subject matter experts reviewed the entire tool, while two equity experts focused specifically on the "advance equity" domain. The reviewers were asked to respond to a set of five open-ended questions regarding the overall utility of the instrument, the clarity of the instructions and items, and the ability of the instrument to measure its intended constructs. Reviewers were also given the opportunity to provide any comments they felt the AACT development team would find useful. Feedback from the external reviewers was used to revise the AACT before moving into the final stage of the scale development process.
Community Field Test
Stage three of the scale development was a community field test, conducted as a pilot of the AACT to examine preliminary response patterns across items in the tool, to determine the average amount of time taken to complete the instrument, and to solicit feedback from real users. Collectively, these findings offered the development team the opportunity to make any final adjustments to the AACT prior to continued data collection. In total, 103 respondents from nine multi-sector collaboratives across the United States agreed to participate in the community field test, and 96 respondents completed the accompanying field test questionnaire about the AACT experience (response rate = 93%). The nine collaboratives represented a combination of rural, suburban, and urban collaboratives involved in cohort learning or receiving direct technical assistance or coaching from two of the three public health institutes involved in creating the AACT. These collaboratives were typically composed of public health departments, human service and housing agencies, community-based nonprofits, healthcare representatives, community members, and local funders collectively focused on improving community-level health outcomes. We intentionally sought out coalitions representing a diversity of geographies and community sizes (mostly rural areas and small cities) working at differing stages of development in their health improvement efforts.
Scale Validation
Measurement characteristics of the AACT were examined using descriptive techniques and a factor analytic approach. First, for each item in the instrument we examined descriptive statistics including the mean, median, range, and percentage of responses within each response option. The goal of the descriptive analysis was to examine the distribution of item-level responses and to determine whether any response options were unused or overused. Second, confirmatory factor analysis (CFA) was used to evaluate construct validity by determining whether the six domains originally created in the scale development process aligned to a six-factor model, indicating that the items in each domain were conceptually grouping as intended (Boateng et al., 2018). Because our goal was to test an a priori factor structure of items created during the scale development stage (described above), we utilized confirmatory rather than exploratory factor analysis. Similar to other confirmatory factor analytic studies (Konkolÿ Thege et al., 2017), we also considered two plausible alternative models. The first was a unidimensional model, in which all 22 items were allowed to load onto a single factor. The second was a second-order model, in which the six factors aligned to the AACT domains were all allowed to load onto a global "AACT" factor. All models were estimated using the lavaan package in R version 3.6.0 (R Core Team, 2019; Rosseel, 2012). Because a key interest of the analysis was to evaluate factor loadings for each item, all latent factors were set to have a mean of zero and variance of one, rather than identifying the model by constraining the first factor loading of each latent variable to one (the lavaan default). Regarding model building, we first estimated the unidimensional model, followed by the six-factor model, and concluded with the second-order model.
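For readers wishing to reproduce this modeling approach, the sketch below shows how the three competing models could be specified in lavaan. This is an illustration under stated assumptions rather than the authors' actual code: the item names (v1–v22, ordered as in Table 2), the data frame name (dat), and the item-to-domain groupings are all inferred from the published tables.

```r
library(lavaan)

# Hypothetical lavaan syntax for the three competing models. Item names
# (v1-v22) follow the order of Table 2; domain groupings are inferred.
one_factor <- '
  aact =~ v1 + v2 + v3 + v4 + v5 + v6 + v7 + v8 + v9 + v10 + v11 +
          v12 + v13 + v14 + v15 + v16 + v17 + v18 + v19 + v20 + v21 + v22
'

six_factor <- '
  collaboration   =~ v1 + v2 + v3 + v4
  communication   =~ v5 + v6 + v7
  advance_equity  =~ v8 + v9 + v10
  plan_for_action =~ v11 + v12 + v13 + v14
  measure_improve =~ v15 + v16 + v17 + v18
  sustainability  =~ v19 + v20 + v21 + v22
'

second_order <- paste(six_factor, '
  AACT =~ collaboration + communication + advance_equity +
          plan_for_action + measure_improve + sustainability
')

# std.lv = TRUE fixes each latent variance to one rather than fixing the
# first loading, matching the identification strategy described above;
# estimator = "MLM" requests Satorra-Bentler robust (scaled) statistics,
# per the multivariate normality discussion below.
fit1   <- cfa(one_factor,   data = dat, std.lv = TRUE, estimator = "MLM")
fit6   <- cfa(six_factor,   data = dat, std.lv = TRUE, estimator = "MLM")
fit2nd <- cfa(second_order, data = dat, std.lv = TRUE, estimator = "MLM")
```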
All three CFAs were evaluated using the criteria outlined by Brown and Moore (2012). Overall goodness of fit was evaluated using standard model fit indices as suggested by Kline (2011) and Schreiber et al. (2006): the model chi-square (χ²) statistic, where a probability greater than 0.05 indicates acceptable model fit; the Tucker-Lewis Index (TLI) and Comparative Fit Index (CFI), where values greater than 0.95 indicate acceptable model fit; and the Root Mean Square Error of Approximation (RMSEA), where values less than 0.06 indicate acceptable model fit. We also compared model fit across the three CFAs by evaluating changes in the Akaike information criterion (AIC) and Bayesian information criterion (BIC), where smaller values indicate better model fit. To evaluate the presence or absence of localized areas of strain in the CFA solutions, we examined correlations of the residuals, specifically looking for absolute values larger than 0.10 as defined by Kline (2011, p. 202). We also examined the size and statistical significance of the model parameter estimates. We evaluated the standardized loadings for each item to ensure that all loadings were bounded between -1 and 1; a loading outside those bounds would be indicative of a Heywood case. We considered factor loadings of 0.30 or higher to be indicative of a meaningful relationship between the latent factor and the item. Furthermore, we evaluated the R2 values for each item in the instrument, where values of at least 0.5 were considered ideal. All three models were estimated with no convergence issues. Prior to estimating each model, we also evaluated the following characteristics of the data associated with our sample, as described below: the sample size, multivariate normality, multicollinearity, and the extent of missing data.
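Assuming the fitted lavaan objects from the earlier sketch, these criteria could be checked as follows (again a sketch, not the authors' code; shown for the hypothetical six-factor fit, fit6):

```r
# Scaled (Satorra-Bentler) fit indices, plus AIC/BIC for model comparison.
fitMeasures(fit6, c("chisq.scaled", "df.scaled", "pvalue.scaled",
                    "tli.scaled", "cfi.scaled", "rmsea.scaled",
                    "aic", "bic"))

# Localized strain: flag residual correlations with absolute value > 0.10
# (Kline, 2011, p. 202).
res_cor <- residuals(fit6, type = "cor")$cov
which(abs(res_cor) > 0.10 & lower.tri(res_cor), arr.ind = TRUE)

# Standardized loadings (Heywood check: all |loadings| <= 1) and item R2 values.
standardizedSolution(fit6)
lavInspect(fit6, "rsquare")
```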
Due to the onset of the COVID-19 pandemic, data collection in the validation stage of the study was limited to 352 completed AACT instruments from 49 multi-sector collaboratives across the United States. The validation sample included the original data collected from the community field test (N = 103 individuals from nine collaboratives) supplemented with completed AACT instruments from 249 individuals representing an additional 40 collaboratives. Minimum sample size recommendations for CFA are vague, somewhat conflicting, and dependent on many factors (Knekta et al., 2019; Mundfrom et al., 2005; Schmitt, 2011). Simulation studies indicate that one of the important drivers of the required minimum sample size in CFA is the strength of the standardized loadings: as loading strength increases, required sample sizes decrease (Boomsma & Hoogland, 2001; Ondé & Alvarado, 2018). In conditions similar to the findings of our study (for example, six lower-level factors with standardized loadings above 0.8), minimum sample sizes between 200 and 300 were adequate for recovering true model parameters (Koran, 2020). Additionally, the size of our sample, while small, aligns with other studies reporting the development of new instruments. For example, in a content analysis of 600 articles reporting the development of new scales, Carpenter (2018) found that ∼43% of the studies used sample sizes of 300 or smaller.
CFA assumes multivariate normality (MVN) of the data used to estimate the model's parameters. To evaluate this assumption, we first examined univariate normality via skewness and kurtosis values for each item, all of which were within the generally accepted boundaries of -2 to 2 (Byrne, 2010; George & Mallery, 2010). We then tested for MVN using Mardia's test, which failed, indicating that our data were not multivariate normal. In CFA, violations of the MVN assumption lead to underestimated standard errors, which inflate the Type I error rate (Curran et al., 1996). Therefore, to reduce the likelihood of finding a statistically significant effect where none truly exists, we utilized Satorra-Bentler robust standard errors, which apply a scaling correction factor to the standard error derived from a multivariate kurtosis estimate (Satorra & Bentler, 1988, 1994). The Satorra-Bentler approach was selected because the more commonly utilized bootstrapped approaches for non-normal data require much larger samples than were available in the current study, ideally 500–1000 or more (Nevitt & Hancock, 2004). In addition to the MVN assumption, CFA also assumes no multicollinearity among the items in the model. To evaluate this assumption, we examined the inter-item correlation matrix and variance inflation factors, finding that all correlations exceeded the generally accepted minimum of 0.3 and that all variance inflation factors were below the generally accepted cutoff of 10. Finally, regarding the extent of missing data, all AACT instruments were completed in their entirety; there was therefore no missing data in the current study.
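These pre-estimation checks could be sketched as follows, using the same assumed item names and data frame; psych::mardia is one of several available implementations of Mardia's test:

```r
library(psych)

items <- dat[paste0("v", 1:22)]  # assumed item columns

# Univariate normality: skewness and kurtosis per item (target range: -2 to 2).
apply(items, 2, skew)
apply(items, 2, kurtosi)

# Mardia's test of multivariate normality (skewness and kurtosis components).
mardia(items, plot = FALSE)

# Multicollinearity screen: inter-item correlations and variance inflation
# factors, with each VIF computed by regressing one item on all the others.
cor(items)
sapply(seq_along(items), function(j) {
  r2 <- summary(lm(items[[j]] ~ ., data = items[-j]))$r.squared
  1 / (1 - r2)  # VIF for item j
})
```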
Results
Initial Item Development
An initial set of 22 items was developed across eight domains: collaboration, communication, advance equity, meaningful engagement, plan for action, measure to improve, sustainability, and systems approach. The "collaboration" domain was designed to assess the extent to which partners from multiple sectors work together to build trust among the group, develop leadership and decision-making guidelines, and agree on a vision and direction for the collaborative. The "communication" domain was designed to measure the ways in which the collaborative communicates internally and externally as well as how the partners communicate and deal with conflict. The "advance equity" domain was designed to evaluate the extent to which the partners identify and understand inequities in their community, partner with people most affected by poor outcomes and injustice, and develop strategies to address root causes of inequities. The "meaningful engagement" domain was designed to measure the ways in which collaboratives engaged the individuals most affected by their work. The "plan for action" domain was created to understand the degree to which the collaborative understands the community's needs and assets, as well as its ability to select and design strategies for change. The "measure to improve" domain was designed to assess the extent to which the partners measure the impact they are making and focus on continuous improvement, in addition to spreading knowledge about their work to help others. The "sustainability" domain was created to measure how far along the collaborative is towards establishing various deliberate processes that set partners up for long-term success. Finally, the "systems approach" domain was designed to evaluate the extent to which collaboratives worked towards holistic, systems-level solutions rather than siloed or disconnected initiatives.
Across all domains, items are scored by respondents using a four-stage reporting system, with movement across the stages representing a greater level of development on any given item. The first stage, "not yet started," allows respondents to document areas in which they believe the multi-sector collaborative has not begun work towards a given item (scored a value of 1). Within each of the subsequent stages (starting: scores of 2, 3, and 4; gaining skill: scores of 5, 6, and 7; and sustaining: scores of 8, 9, and 10) there are three possible scores, designed to give respondents the flexibility to report whether they believe the collaborative is beginning to work in that stage, clearly working in that stage, or starting to move into the next stage but not quite there yet. To complete the AACT, the collaborative first determines who will complete the assessment. Each participating member then individually scores each item after reading all items and reflecting on their collaborative. All members then come together to discuss the responses as a group and try to reach consensus on a score for each item. After generating agreed-upon scores, the collaborative has the opportunity to plan six-month and 12-month improvements for each item by documenting what will be needed to achieve those goals.
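To make the scoring bands concrete, a minimal helper (purely illustrative, not part of the instrument) that maps a 1–10 item score to its developmental stage might look like:

```r
# Map a 1-10 AACT item score to its developmental stage, following the
# bands described above: 1; 2-4; 5-7; 8-10.
aact_stage <- function(score) {
  stopifnot(score %in% 1:10)
  if (score == 1) "not yet started"
  else if (score <= 4) "starting"
  else if (score <= 7) "gaining skill"
  else "sustaining"
}

aact_stage(6)   # "gaining skill"
aact_stage(8)   # "sustaining"
```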
External Review by Subject Matter Experts
Based on their subject matter expertise and input, the meaningful engagement domain and systems approach domain were both removed from the instrument and items in these two domains were re-assigned to better fitting domains as determined by the external experts. Across the entire AACT instrument, item and domain descriptions were slightly reworded for easier reading comprehension, scoring instructions were modified to be clearer, and the number of items in each domain was balanced so that domains had either three or four items each. This revision resulted in the final AACT instrument, consisting of six domains and 22 items.
Community Field Test
Analysis of the 103 completed AACT instruments from the community field test indicated that the items were performing well. Across all items, each response category was populated to varying degrees, indicating that the score options from 1 to 10 were adequate to capture real-world developmental stages (that is, no response options were left unused). Moreover, response patterns from the field test indicated relatively normal distributions for most items. Correlation analysis indicated that all items correlated well with each other (Pearson's r range = 0.42–0.91) and no negative correlations were found among the study items. On average, the AACT was completed in 28 minutes (median completion time = 25 minutes), with individual completion times ranging from two to 120 minutes.
Results of the field test questionnaire, displayed in Table 1, indicated a very positive experience regarding completion of the AACT. For example, more than 90% of respondents either agreed or strongly agreed that the AACT scoring was intuitive and easy to use, and that the language used in the AACT was easy to understand. Additionally, more than 85% of respondents agreed or strongly agreed that the AACT helped them better understand and identify areas for improvement and that the tool would be helpful to the work of their collaboration or partnership. Respondents also gave high ratings regarding the extent to which the AACT helped them to assess each domain. Specifically, across all domains 90% or more of respondents agreed or strongly agreed that the AACT helped them to assess their collaboration in that domain, with responses particularly high for the collaboration domain. Finally, ∼75% of respondents agreed or strongly agreed that completing the AACT was a useful exercise and that they would recommend use of the AACT to others.
Table 1.
Responses to Community Field Test of the AACT.
Question/rating statement | Strongly agree, % | Agree, % | Neutral, % | Disagree, % | Strongly disagree, % |
---|---|---|---|---|---|
The instructions provided with the AACT provided clear guidance on how to use the tool. | 30.21 | 54.17 | 7.29 | 7.29 | 1.04 |
The language used in the AACT was easy to understand. | 36.46 | 56.25 | 6.25 | 0.00 | 1.04 |
The scoring was intuitive and easy to use. | 32.29 | 58.33 | 6.25 | 2.08 | 1.04 |
Completing the AACT assessment was a useful exercise for me. | 26.04 | 48.96 | 22.92 | 2.08 | 0.00 |
The AACT tool helps me assess each theme. Please indicate your response for each theme: | | | | | |
Collaboration | 37.50 | 57.29 | 3.13 | 2.08 | 0.00 |
Communication | 29.17 | 64.58 | 4.17 | 2.08 | 0.00 |
Advance equity | 26.04 | 66.67 | 6.25 | 0.00 | 1.04 |
Plan for action | 25.00 | 65.63 | 7.29 | 1.04 | 1.04 |
Measure to improve | 28.13 | 62.50 | 7.29 | 2.08 | 0.00 |
Sustainability | 28.13 | 62.50 | 6.25 | 3.13 | 0.00 |
The tool helped me better understand and identify areas for improvement. | 33.33 | 54.17 | 10.42 | 2.08 | 0.00 |
The tool will be helpful to the work of our collaboration/partnership. | 36.46 | 52.08 | 9.38 | 2.08 | 0.00 |
I would recommend use of this tool to others. | 32.29 | 43.75 | 21.88 | 2.08 | 0.00 |
Note. N = 96. Cell values represent percentage of responses in each category. Rows may not total to 100% due to rounding.
Scale Validation Results
Item-level descriptive statistics are displayed in Table 2. All of the item-level response options appeared to be appropriately utilized: none were empty, and none appeared to be underutilized or overutilized. The mean item response ranged from 5.16 to 6.84, the median item response ranged from 5.00 to 7.00, and the standard deviation of the items ranged from 2.17 to 2.93. Looking across all 22 items, collaboratives on average tended to rate their partnerships in the "gaining skill" stage (responses of 5, 6, or 7). There were four items where more than 11% of respondents evaluated their collaborative in the "not yet started" stage: item 19, plan for sustainability (11.65%); item 6, deal with conflict (11.65%); item 22, build and maintain momentum (13.92%); and item 21, focus on and advocate for policy (15.34%). At the opposite end of the response options, there were two items where more than 11% of respondents evaluated their collaborative in the highest, "sustaining" stage: item 12, set goals based on community assets and needs (11.08%), and item 11, identify community needs and assets (12.22%). One interesting pattern across the developmental stages emerged: in both the "starting" and "gaining skill" stages, responses tended to cluster at the highest response option (scores of 4 and 7, respectively), whereas in the "sustaining" stage responses tended to cluster at the lowest response option (score of 8) or were more uniformly distributed across the response options compared to the other two stages.
Table 2.
AACT Item-Level Descriptive Statistics.
Item | Mean | Median | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1. Work with partners from different sectors | 6.84 | 7.00 | 2.17 | 2.27 | 2.27 | 4.55 | 5.11 | 8.81 | 16.48 | 17.05 | 20.45 | 12.50 | 10.51 | |
2. Strengthen collaboration | 6.19 | 6.00 | 2.25 | 2.27 | 2.84 | 6.25 | 16.76 | 9.38 | 13.07 | 18.75 | 15.06 | 7.95 | 7.67 | |
3. Develop leadership and decision-making guidelines | 5.49 | 5.00 | 2.35 | 5.40 | 3.41 | 10.80 | 21.88 | 9.66 | 11.08 | 15.63 | 11.36 | 5.68 | 5.11 | |
4. Agree on vision and direction | 6.20 | 7.00 | 2.37 | 3.41 | 3.13 | 6.82 | 15.91 | 8.81 | 9.66 | 19.32 | 16.19 | 7.95 | 8.81 | |
5. Communicate within the collaboration (internal) | 6.49 | 7.00 | 2.18 | 2.56 | 1.99 | 4.55 | 9.94 | 12.50 | 14.49 | 20.17 | 14.49 | 11.08 | 8.24 | |
6. Deal with conflict | 5.37 | 6.00 | 2.55 | 11.65 | 4.83 | 7.39 | 12.22 | 13.07 | 12.78 | 17.90 | 9.94 | 4.26 | 5.97 | |
7. Communicate with external stakeholders (external) | 5.74 | 6.00 | 2.35 | 5.40 | 3.69 | 7.95 | 15.91 | 12.78 | 12.22 | 17.61 | 11.93 | 7.39 | 5.11 | |
8. Identify and understand inequities in our community and work | 6.06 | 6.50 | 2.34 | 3.13 | 5.97 | 7.39 | 9.94 | 10.80 | 12.78 | 23.58 | 11.36 | 7.95 | 7.10 | |
9. Partner with people most affected by poor outcomes and injustice | 5.45 | 5.00 | 2.60 | 8.24 | 7.67 | 8.81 | 16.48 | 9.09 | 8.52 | 15.34 | 13.07 | 7.39 | 5.40 | |
10. Develop strategies to address root causes of inequities | 5.16 | 5.00 | 2.59 | 10.51 | 6.82 | 9.94 | 19.89 | 7.95 | 9.09 | 13.07 | 12.22 | 5.97 | 4.55 | |
11. Identify community needs and assets | 6.56 | 7.00 | 2.38 | 3.69 | 0.85 | 8.52 | 7.95 | 10.80 | 11.08 | 19.60 | 13.92 | 11.36 | 12.22 | |
12. Set goals based on community assets and needs | 6.39 | 7.00 | 2.58 | 5.97 | 2.56 | 6.82 | 10.51 | 11.08 | 7.39 | 15.06 | 16.76 | 12.78 | 11.08 | |
13. Understand what drives health | 6.52 | 7.00 | 2.25 | 2.84 | 2.84 | 4.83 | 9.66 | 11.36 | 9.94 | 22.44 | 15.63 | 12.78 | 7.67 | |
14. Select and design strategies for change | 6.14 | 7.00 | 2.51 | 6.53 | 1.99 | 7.95 | 12.78 | 9.66 | 9.94 | 13.64 | 18.75 | 12.50 | 6.25 | |
15. Measure impact | 5.62 | 6.00 | 2.70 | 10.80 | 4.83 | 6.82 | 17.33 | 7.10 | 6.82 | 17.33 | 12.78 | 10.23 | 5.97 | |
16. Focus on continuous improvement | 5.79 | 6.00 | 2.67 | 9.38 | 4.55 | 6.53 | 15.06 | 8.81 | 7.95 | 18.18 | 13.07 | 7.95 | 8.52 | |
17. Spread knowledge about our work to help others | 6.01 | 6.00 | 2.53 | 6.25 | 4.26 | 7.95 | 10.23 | 12.22 | 9.94 | 19.32 | 11.65 | 9.94 | 8.24 | |
18. Expand effective strategies to improve outcomes for more people | 5.73 | 6.00 | 2.48 | 6.53 | 6.53 | 8.24 | 10.23 | 12.22 | 9.94 | 21.88 | 10.51 | 9.09 | 4.83 | |
19. Plan for sustainability | 5.66 | 6.00 | 2.85 | 11.65 | 7.10 | 7.67 | 8.52 | 10.51 | 10.80 | 13.35 | 11.36 | 8.81 | 10.23 | |
20. Diversify resources | 5.46 | 5.00 | 2.67 | 7.95 | 7.39 | 9.09 | 17.90 | 11.36 | 7.67 | 11.08 | 11.65 | 7.95 | 7.95 | |
21. Focus on and advocate for policy | 5.24 | 5.00 | 2.81 | 15.34 | 5.68 | 7.10 | 17.90 | 5.40 | 9.09 | 13.35 | 10.80 | 10.51 | 4.83 | |
22. Build and maintain momentum | 5.72 | 6.00 | 2.93 | 13.92 | 5.97 | 4.83 | 10.51 | 11.08 | 5.40 | 14.77 | 13.64 | 9.38 | 10.51 |
Note. N = 352. Columns 1–10 show the frequency (%) of each response option; options correspond to developmental stages: 1 = not yet started; 2–4 = starting; 5–7 = gaining skill; 8–10 = sustaining.
Standardized factor loadings for the three CFAs are displayed in Table 3. Regarding the unidimensional model, all item loadings were above the meaningful threshold of 0.3 and statistically significant at p < .001. In fact, item loadings were particularly strong, ranging from 0.72 (item 1, work with partners from different sectors) to 0.90 (item 18, expand effective strategies to improve outcomes for more people). All R2 values for the unidimensional model also crossed the meaningful threshold of 0.50, with values ranging from 0.52 (item 1) to 0.81 (item 18). Despite these strong loadings, overall model-data fit for the unidimensional CFA was inadequate (χ²(209) = 958.31, p < 0.001; TLI = 0.87; CFI = 0.91; RMSEA = 0.10).
Table 3.
Standardized Factor Loadings from 3 Confirmatory Factor Analyses of the AACT.
Item | Factor | Unidimensional | Six-factor model | Second-order model |
---|---|---|---|---|
1. Work with partners from different sectors | Coll. | 0.72 | 0.77 | 0.77 |
2. Strengthen collaboration | Coll. | 0.81 | 0.87 | 0.87 |
3. Develop leadership and decision-making guidelines | Coll. | 0.83 | 0.88 | 0.88 |
4. Agree on vision and direction | Coll. | 0.85 | 0.89 | 0.89 |
5. Communicate within the collaboration (internal) | Comm. | 0.78 | 0.85 | 0.83 |
6. Deal with conflict | Comm. | 0.79 | 0.84 | 0.84 |
7. Communicate with external stakeholders (external) | Comm. | 0.84 | 0.88 | 0.89 |
8. Identify and understand inequities in our community and work | Adv. Eq. | 0.77 | 0.85 | 0.85 |
9. Partner with people most affected by poor outcomes and injustice | Adv. Eq. | 0.77 | 0.87 | 0.87 |
10. Develop strategies to address root causes of inequities | Adv. Eq. | 0.79 | 0.90 | 0.90 |
11. Identify community needs and assets | Pln. Act. | 0.83 | 0.88 | 0.88 |
12. Set goals based on community assets and needs | Pln. Act. | 0.85 | 0.89 | 0.90 |
13. Understand what drives health | Pln. Act. | 0.78 | 0.84 | 0.83 |
14. Select and design strategies for change | Pln. Act. | 0.83 | 0.86 | 0.86 |
15. Measure impact | Msr. Imp. | 0.85 | 0.89 | 0.89 |
16. Focus on continuous improvement | Msr. Imp. | 0.88 | 0.93 | 0.93 |
17. Spread knowledge about our work to help others | Msr. Imp. | 0.85 | 0.88 | 0.88 |
18. Expand effective strategies to improve outcomes for more people | Msr. Imp. | 0.90 | 0.91 | 0.91 |
19. Plan for sustainability | Sust. | 0.89 | 0.93 | 0.93 |
20. Diversify resources | Sust. | 0.85 | 0.91 | 0.90 |
21. Focus on and advocate for policy | Sust. | 0.80 | 0.84 | 0.84 |
22. Build and maintain momentum | Sust. | 0.84 | 0.85 | 0.86 |
Note. N = 352. The Factor column indicates each item's assigned domain; six-factor and second-order loadings are the item's loading on that factor. Coll.: collaboration; Comm.: communication; Adv. Eq.: advance equity; Pln. Act.: plan for action; Msr. Imp.: measure to improve; Sust.: sustainability.
Regarding the six-factor model, all factor loadings were above the meaningful threshold of 0.3 and statistically significant at p < .001. Similar to the unidimensional model, all factor loadings were particularly strong, ranging from 0.77 (item 1, work with partners from different sectors) to 0.93 (item 16, focus on continuous improvement, and item 19, plan for sustainability). Moving from the unidimensional model to the six-factor model, item loadings generally remained the same or slightly increased. R2 values for the six-factor model ranged from 0.59 (item 1) to 0.87 (item 19), and in general increased for each item relative to the unidimensional model. Except for the model chi-square test, fit indices for the six-factor model were ideal (χ²(194) = 393.84, p < 0.001; TLI = 0.97; CFI = 0.99; RMSEA = 0.05). Correlations among the six factors, corresponding to the six AACT domains, were high and ranged from 0.82 to 0.95.
Regarding the second-order model, all factor loadings were above the meaningful threshold of 0.3 and statistically significant at p < .001, and all R2 values were above 0.50. Moving from the six-factor model to the second-order model, factor loadings for all items remained essentially identical and within the same overall range (0.77–0.93). Similarly, the R2 values generally remained the same or slightly increased. The loadings of the six domain-level factors on the higher-order AACT factor were as follows: collaboration = 0.93, communication = 0.94, advance equity = 0.88, plan for action = 0.94, measure to improve = 0.95, sustainability = 0.94. Model fit indices for the second-order model were mixed, with only the TLI and CFI statistics meeting the acceptable threshold (χ²(203) = 463.58, p < 0.001; TLI = 0.96; CFI = 0.97; RMSEA = 0.06).
Looking across the three CFAs, comparative fit indices were as follows: for the unidimensional model, AIC = 28,649 and BIC = 28,812; for the six-factor model, AIC = 27,897 and BIC = 28,125; and for the second-order model, AIC = 27,974 and BIC = 28,167. Ultimately, we retained the six-factor model aligned to the six AACT domains as the final model. Previous methodological research advises that when correlations among latent factors are high, as was the case in the six-factor model, second-order models may be preferable because they are more parsimonious, eliminating the correlations among the latent factors (Byrne, 2005). However, we retained the six-factor model over the second-order model given its better model fit indices and incrementally better AIC and BIC statistics. Moreover, the factor loadings and R2 values of the two models were virtually identical, and the six-factor model conceptually aligns with the way in which the AACT was originally developed.
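Continuing the hypothetical lavaan sketch from the Methods, this comparison can be summarized in a few lines; because the second-order model is nested within the six-factor model, a scaled chi-square difference test can supplement the AIC/BIC comparison:

```r
# Relative fit across the three models (smaller AIC/BIC indicates better fit).
sapply(list(unidimensional = fit1, six_factor = fit6, second_order = fit2nd),
       fitMeasures, fit.measures = c("aic", "bic"))

# Scaled chi-square difference test for the nested six-factor vs.
# second-order comparison.
lavTestLRT(fit6, fit2nd)
```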
Discussion
The purpose of this article was to describe the development and validation of a new measure to assess readiness to advance health and health equity among multi-sector partnerships: the Assessment for Advancing Community Transformation (AACT). Development of the AACT followed a three-stage process. First, an initial item pool was created following a literature review to understand factors influencing successful collaboration among multi-sector partners. Second, five external subject matter experts reviewed the instrument for clarity and content, which motivated minor revisions to the instrument's domains, items, and instructions. Third, the revised instrument was piloted with 103 individuals from nine collaboratives across the United States, 96 of whom completed a user feedback survey. Completion of the three stages resulted in a usable set of 22 items across six domains, with an average completion time of 28 minutes. Additionally, many users reported high levels of satisfaction with the process of completing the AACT, with the majority reporting that it was useful to their collaboration and that they would recommend the AACT to others.
Following the development of the AACT, three confirmatory factor analyses were estimated using responses from 352 individuals across 49 U.S. collaboratives to test the construct validity of the tool. All of the factor analyses yielded particularly strong factor loadings and R2 values for all items. Model fit indices indicated that a unidimensional model for the AACT did not fit the data well. Further comparison of fit indices indicated that a six-factor model, with items aligned to the domains originally created by the scale developers, fit the data best, although correlations among the six domains were relatively high. Taken together, the results of the validation process point towards the AACT as a psychometrically sound instrument with desirable measurement characteristics.
Considering the intentional design of the AACT and the results of the development and validation process, we believe this new tool offers several advantages for multi-sector collaboratives. The AACT is designed to indicate stage of development across a complement of factors that communities typically address in their efforts to advance health and equity. The six domains are aligned with four distinct developmental stages that can change over time as the community context or ecological environment changes. Additionally, the AACT is designed as a self-assessment that does not require administration, analysis, or interpretation by external experts. It is a platform for data collection by and for collaboratives and thus democratizes data for community action. Because initial scoring is done independently by individuals in the collaborative, the intention is that groups then come together in deliberation to establish an overall score and to prioritize focus areas and action steps, a component that we believe to be one of the key and unique features of the AACT process. Perhaps most importantly, the AACT contains items designed to measure the extent to which collaboratives are working towards achieving health equity, a measurement domain missing from currently existing instruments. A growing number of frameworks for improving population health place health equity at the core of all efforts (Peterson et al., 2021), which may make the AACT particularly advantageous for collaboratives working to improve health-related outcomes.
The measurement characteristics of the AACT also have important implications for evaluators and researchers studying multi-sector collaboratives. For evaluators, the AACT can be used to formally document a collaborative's developmental stage, which may be particularly useful for early-stage collaboratives to know before they spend vital time and resources moving in directions that may not be necessary. Because of the intentional design of the AACT, many of its domains align with constructs found to strongly influence collaboratives' success in improving health-related outcomes (Calancie et al., 2021). Therefore, for researchers, domain-level scores measured by the AACT may serve as useful mediators or moderators in studies that analyze factors influencing outcomes achieved by multi-sector partnerships.
There are limitations to our study that should be acknowledged. Although in line with other studies developing new instruments, the sample size in our analyses was relatively small and based on a convenience sample of individuals working with collaboratives in the U.S. Therefore, as utilization of the AACT increases, it would be helpful to collect additional responses and analyze the measurement characteristics in separate samples, an approach that has been useful in previous psychometric studies where the development sample size was small (Attell et al., 2020). Similarly, the AACT was written in English and therefore may not perform well for collaboratives with individuals who speak other languages or for multi-sector collaboratives working outside of the U.S. Finally, our analyses focused substantively on assessing construct validity by testing the factor structure of the AACT. Future studies could further examine the validity of the instrument by correlating AACT scores with other previously established measures. Along those lines, it would also be useful to examine whether the factor structure of the AACT found in the current study is replicable in new samples from other multi-sector collaboratives.
Given the successful measurement properties of the AACT described in this study, the scale developers have launched an online website for multi-sector collaborators, researchers, and evaluators interested in using the tool. The website consists of instructions and guides for using the instrument, a data entry platform to aid in administering the AACT in the community, and tools to help interpret scale scores. Following completion of the AACT in the online platform, targeted resources for multi-sector collaboratives looking to improve low-scoring domains can also be obtained. Those interested can access the AACT platform at this link: https://bit.ly/AACT-TOOL.
Footnotes
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Robert Wood Johnson Foundation (75425).
ORCID iD
Brandon K. Attell https://orcid.org/0000-0002-4370-3924
References
- Alley D. E., Asomugha C. N., Conway P. H., Sanghavi D. M. (2016). Accountable health communities—addressing social needs through Medicare and Medicaid. New England Journal of Medicine, 374(1), 8–11. 10.1056/NEJMp1512532
- Attell B. K., Cappelli C., Manteuffel B., Li H. (2020). Measuring functional impairment in children and adolescents: Psychometric properties of the Columbia Impairment Scale (CIS). Evaluation & the Health Professions, 43(1), 3–15. 10.1177/0163278718775797
- Boateng G. O., Neilands T. B., Frongillo E. A., Melgar-Quiñonez H. R., Young S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 6.
- Boomsma A., Hoogland J. J. (2001). The robustness of LISREL modeling revisited.
- Brewster A. L., Tan A. X., Yuan C. T. (2019). Development and application of a survey instrument to measure collaboration among health care and social services organizations. Health Services Research, 54(6), 1246–1254. 10.1111/1475-6773.13206
- Brown T., Moore M. (2012). Confirmatory factor analysis. In Handbook of structural equation modeling (pp. 361–379).
- Bryson J. M., Crosby B. C., Stone M. M. (2006). The design and implementation of cross-sector collaborations: Propositions from the literature. Public Administration Review, 66(s1), 44–55. 10.1111/j.1540-6210.2006.00665.x
- Bryson J. M., Crosby B. C., Stone M. M. (2015). Designing and implementing cross-sector collaborations: Needed and challenging. Public Administration Review, 75(5), 647–663. 10.1111/puar.12432
- Byrne B. M. (2005). Factor analytic models: Viewing the structure of an assessment instrument from three perspectives. Journal of Personality Assessment, 85(1), 17–32. 10.1207/s15327752jpa8501_02
- Byrne B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming. Routledge.
- Calancie L., Frerichs L., Davis M. M., Sullivan E., White A. M., Cilenti D., Corbie-Smith G., Hassmiller Lich K. (2021). Consolidated framework for collaboration research derived from a systematic review of theories, models, frameworks and principles for cross-sector collaboration. PLOS ONE, 16(1), Article e0244501. 10.1371/journal.pone.0244501
- Carpenter S. (2018). Ten steps in scale development and reporting: A guide for researchers. Communication Methods and Measures, 12(1), 25–44. 10.1080/19312458.2017.1396583
- Castañeda S. F., Holscher J., Mumman M. K., Salgado H., Keir K. B., Foster-Fishman P. G., Talavera G. A. (2012). Dimensions of community and organizational readiness for change. Progress in Community Health Partnerships: Research, Education, and Action, 6(2), 219–226. 10.1353/cpr.2012.0016
- Curran P. J., West S. G., Finch J. F. (1996). The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological Methods, 1(1), 16–29. 10.1037/1082-989X.1.1.16
- Daniel H., Bornstein S. S., Kane G. C. (2018). Addressing social determinants to improve patient care and promote health equity: An American College of Physicians position paper. Annals of Internal Medicine, 168(8), 577–578. 10.7326/M17-2441
- DeVellis R., Thorpe C. (2021). Scale development: Theory and applications. Sage.
- Dickson E., Magarati M., Boursaw B., Oetzel J., Devia C., Ortiz K., Wallerstein N. (2020). Characteristics and practices within research partnerships for health and social equity. Nursing Research, 69(1), 51–61. 10.1097/NNR.0000000000000399
- Erickson J., Milstein B., Schafer L., Pritchard K. E., Levitz C., Miller C., Cheadle A. (2017). A pulse check on multi-sector partnerships. The Rippel Foundation. http://wp.riefmedia.com/wp3/wp-content/uploads/2017/03/2016-Pulse-Check-Narrative-Final.pdf
- Fawcett S., Schultz J., Watson-Thompson J., Fox M., Bremby R. (2010). Building multisectoral partnerships for population health and health equity. Preventing Chronic Disease, 7(6), A118.
- Fichtenberg C., Delva J., Minyard K., Gottlieb L. M. (2020). Health and human services integration: Generating sustained health and equity improvements. Health Affairs, 39(4), 567–573. 10.1377/hlthaff.2019.01594
- George D., Mallery M. (2010). SPSS for Windows step by step: A simple guide and reference. Pearson.
- Hajjar L., Cook B. S., Domlyn A., Ray K. A., Laird D., Wandersman A. (2020). Readiness and relationships are crucial for coalitions and collaboratives: Concepts and evaluation tools. New Directions for Evaluation, 2020(165), 103–122. 10.1002/ev.20399
- Kline R. (2011). Principles and practice of structural equation modeling. Guilford Press.
- Knekta E., Runyon C., Eddy S. (2019). One size doesn't fit all: Using factor analysis to gather validity evidence when using surveys in your research. CBE—Life Sciences Education, 18(1), Article rm1. 10.1187/cbe.18-04-0064
- Konkolÿ Thege B., Ham E., Ball L. C. (2017). A factor analytic investigation of the person-in-recovery and provider versions of the revised Recovery Self-Assessment (RSA-R). Evaluation & the Health Professions, 40(4), 505–516. 10.1177/0163278716674247
- Koran J. (2020). Indicators per factor in confirmatory factor analysis: More is not always better. Structural Equation Modeling: A Multidisciplinary Journal, 27(5), 765–772. 10.1080/10705511.2019.1706527
- Landers G., Minyard K., Lanford D., Heishman H. (2020). A theory of change for aligning health care, public health, and social services in the time of COVID-19. American Journal of Public Health, 110(S2), S178–S180. 10.2105/AJPH.2020.305821
- Lanford D., Petiwala A., Landers G., Minyard K. (2022). Aligning healthcare, public health and social services: A scoping review of the role of purpose, governance, finance and data. Health & Social Care in the Community, 30(2), 432–447. 10.1111/hsc.13374
- Lasker R. D., Weiss E. S., Miller R. (2001). Partnership synergy: A practical framework for studying and strengthening the collaborative advantage. The Milbank Quarterly, 79(2), 179–205. 10.1111/1468-0009.00203
- Mundfrom D. J., Shaw D. G., Ke T. L. (2005). Minimum sample size recommendations for conducting factor analyses. International Journal of Testing, 5(2), 159–168. 10.1207/s15327574ijt0502_4
- Nevitt J., Hancock G. R. (2004). Evaluating small sample approaches for model test statistics in structural equation modeling. Multivariate Behavioral Research, 39(3), 439–478. 10.1207/S15327906MBR3903_3
- Ondé D., Alvarado J. M. (2018). Scale validation conducting confirmatory factor analysis: A Monte Carlo simulation study with LISREL. Frontiers in Psychology, 9, Article 751. 10.3389/fpsyg.2018.00751
- Peterson A., Charles V., Yeung D., Coyle K. (2021). The health equity framework: A science- and justice-based model for public health researchers and practitioners. Health Promotion Practice, 22(6), 741–746. 10.1177/1524839920950730
- R Core Team. (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/
- Romzek B., LeRoux K., Johnston J., Kempf R. J., Piatak J. S. (2014). Informal accountability in multisector service delivery collaborations. Journal of Public Administration Research and Theory, 24(4), 813–842. 10.1093/jopart/mut027
- Rosseel Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. 10.18637/jss.v048.i02
- Satorra A., Bentler P. M. (1988). Scaling corrections for chi-square statistics in covariance structure analysis. Proceedings of the Business and Economic Statistics Section of the American Statistical Association.
- Satorra A., Bentler P. M. (1994). Corrections to test statistics and standard errors in covariance structure analysis. In Latent variable analysis: Applications to developmental research (pp. 399–419). Sage.
- Scaccia J. P., Cook B. S., Lamont A., Wandersman A., Castellow J., Katz J., Beidas R. S. (2015). A practical implementation science heuristic for organizational readiness: R = MC2. Journal of Community Psychology, 43(4), 484–501. 10.1002/jcop.21698
- Schmitt T. A. (2011). Current methodological considerations in exploratory and confirmatory factor analysis. Journal of Psychoeducational Assessment, 29(4), 304–321. 10.1177/0734282911406653
- Schreiber J. B., Nora A., Stage F. K., Barlow E. A., King J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–338. 10.3200/JOER.99.6.323-338
- Towe V. L., Leviton L., Chandra A., Sloan J. C., Tait M., Orleans T. (2016). Cross-sector collaborations and partnerships: Essential ingredients to help shape health and well-being. Health Affairs, 35(11), 1964–1969. 10.1377/hlthaff.2016.0604
- Woulfe J., Oliver T. R., Zahner S. J., Siemering K. Q. (2010). Multisector partnerships in population health improvement. 7(6), Article 7.