Abstract
Rationale, Aims and Objectives
The validation study of the Patient Assessment of Chronic Illness Care (PACIC) questionnaire suggested a 5-factor structure determined a priori, but subsequent analyses have questioned the validity of the original factor structure. This study analyzed the factor structure of the PACIC using a large and diverse patient sample and evaluated the identified factors through the lens of recent transformational initiatives in primary care.
Methods
Convenience samples of adults completed surveys in waiting rooms during clinic visits. Primary care patients with 1 or more chronic illnesses who provided complete PACIC responses at baseline from 39 clinics (n=1,567) and at follow-up from 36 clinics (n=1,536) participated. Exploratory and confirmatory factor analyses were conducted on baseline and follow-up patient questionnaire data from a cluster randomized controlled trial. Identified factors were evaluated in terms of item loadings, content, reliability, and the extent to which items reflected advances in the delivery of chronic illness care.
Results
Analyses supported the use of the PACIC summary score. Although a 5-factor model was retained, factor loadings were different from the original PACIC validation study. All factors had sufficient reliability, but findings suggested potential revisions to enhance the factor structure.
Conclusions
It may be time to revise the PACIC to enhance the stability of the subscales (factors) and better reflect recent transformations in the delivery of chronic illness care.
Keywords: primary care, chronic illness, patient care experience, questionnaire, factor analysis
INTRODUCTION
Assessment of patient care experiences is increasingly recognized as essential for understanding and improving the quality of healthcare and patient outcomes [1, 2]. Unlike traditional patient satisfaction questionnaires, which ask general questions about patients’ satisfaction with their physician or health care and use more subjective multiple-choice responses [3–7], care experience questionnaires directly capture objective health care experience by asking patients to quantify the extent to which they received specific, recommended care [3–7]. Research has found that patient care experience questionnaires produce greater variability in responses and are more strongly associated with quality indicators and performance measures [3–7]. As a result, patient care experience questionnaires are recommended to guide quality improvement initiatives [8, 9].
One of the most widely used measures of patient care experiences is the Patient Assessment of Chronic Illness Care (PACIC). Developed in 2005, the PACIC captures patients’ experience of receiving recommended care considered optimal for chronic illnesses such as diabetes [10], as suggested by the Chronic Care Model [CCM; 11, 12]. The original 20-item questionnaire consists of 5 subscales [10]. Although prior research indicates that the PACIC summary score is suitable for use with patients with a variety of chronic illnesses, several studies have raised questions about the validity of the subscales, suggesting that it may not be appropriate to use them separately or interpret them as suggested by the developers [13, 14]. Prior analyses of the PACIC factor structure have used a variety of analytic methods with samples varying greatly in origin, size, and clinical characteristics; those that have employed both exploratory and confirmatory factor analysis are limited in that they have typically used the same or split samples of patients assessed at a single time point [10, 13–24].
Despite concerns about its measurement properties, the PACIC has been used in over 80 research publications and translated into over 11 languages. The PACIC has good face validity with clinical stakeholders and is sensitive to change [19, 25]. Also, as evidenced by numerous queries and requests for technical assistance received by PACIC developers, clinician and health care system stakeholders are increasingly interested in using the PACIC to support quality improvement activities, indicating that it is more practical to use than similar instruments that are longer or that require licensing fees. The PACIC’s utility as a research tool and quality improvement instrument, however, is dependent upon the stability and validity of its constructs or subscales [14]. Healthcare systems must be able to trust that the subscale scores can be used to reliably detect gaps in chronic illness care and to guide specific quality improvement efforts in response.
Furthermore, there are additional concerns that the PACIC does not capture recent advances in our understanding of how to optimize the delivery of chronic illness care in primary care settings. Since the PACIC’s development in 2005, primary care services have undergone significant redesign in many health care systems, driven in part by the widespread implementation of Patient-Centered Medical Home (PCMH) principles [26] and by the widespread adoption and meaningful use of electronic health records [EHRs; 27]. The passage of the Affordable Care Act in 2010 accelerated these changes [28]. For example, when the PACIC was originally developed [10], EHRs had not been widely adopted, and technological advances such as patient portals and mobile health apps simply did not exist. The developers therefore did not incorporate into the PACIC the Clinical Information Systems domain, or other domains of the CCM, that were considered not to be apparent to patients. This most likely has contributed to our team’s recent finding that patient experiences as measured by the PACIC are not predicted by clinician and staff assessments of how they deliver chronic illness care, as measured by the Assessment of Chronic Illness Care (ACIC) survey [29].
The purpose of this study was to conduct a more robust analysis of the factor structure of the PACIC by taking advantage of its administration at two points in time across a large number of primary care practices. We evaluated the identified factors through the lens of the transformative effects of the PCMH on primary care, make recommendations on how best to use and interpret scores from the current PACIC, and propose revisions to the PACIC to improve its utility for health services research and quality improvement.
METHODS
Study Design, Participants, and Data Collection
We used baseline and 12- to 15-month follow-up patient questionnaire data collected during a cluster randomized controlled trial testing the effectiveness of a practice facilitation intervention to implement the CCM and improve the quality of diabetes care. The trial design and details about the intervention have been previously reported [30, 31]. A total of 40 small primary care clinics in South Texas serving a diverse patient population were recruited to participate in the study. As part of the intervention, patient care experiences were measured before and after the intervention using the PACIC survey, and baseline results were provided to the clinicians and staff of each practice.
For baseline and follow-up data collection, independent convenience samples of adults completed surveys in waiting rooms during clinic visits. We asked each person to indicate the purpose of their visit to ensure they were clinic patients. The surveys were anonymous and were accompanied by an IRB-approved “Information Sheet” that described the study and participants’ rights as research subjects. Because the majority of clinics served communities with substantial Hispanic populations, both English-language and Spanish-language versions of the survey were made available. This study was approved by the University of Texas Health Science Center at San Antonio’s Institutional Review Board.
One clinic withdrew from the study before baseline data collection was completed. Two additional clinics dropped out before or soon after the initial intervention started [31]. A fourth clinic completed the intervention but did not participate in the follow-up patient data collection. Thus, 2,493 baseline questionnaires were collected from 39 clinics, and 1,968 follow-up questionnaires were collected from 36 clinics. For this analysis, we excluded 79 baseline respondents and 43 follow-up respondents who reported they were only accompanying another person (e.g., a child) who had a clinic appointment. After further excluding surveys with missing data, responses from 1,567 baseline surveys and 1,536 follow-up surveys were used for the present analysis.
Measures
Patient Assessment of Chronic Illness Care (PACIC)
The PACIC was developed to measure the extent to which patients with chronic illness receive clinical care that is patient-centered, proactive, and planned, and that includes collaborative goal-setting, problem-solving, and follow-up support [10]. The 20-item survey consists of 5 subscales representing components of the CCM as experienced by patients: 1) Patient Activation; 2) Delivery System Design/Decision Support; 3) Goal Setting; 4) Problem-solving/Contextual Counseling; and 5) Follow-up/Coordination. Patients rate each item on a 5-point scale (1 to 5) indicating how often they experienced the recommended components when they received care for their chronic conditions over the past 6 months. For this study, the stem was modified to explicitly instruct patients to rate their experiences when receiving care in “this clinic.” Ratings were averaged to yield subscale scores and a summary score, with higher scores indicating patients’ perception that their care is more consistent with the CCM.
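For illustration only (not part of the original study), the scoring rule just described can be expressed as a short sketch; the item-to-subscale mapping follows the original five subscales, and the `pacic_1`–`pacic_20` column names are hypothetical:

```python
import pandas as pd

# Hypothetical layout: one row per respondent, columns pacic_1 ... pacic_20 holding 1-5 ratings.
# The PACIC items are assumed here to be scored in the same direction (no reverse coding).
ORIGINAL_SUBSCALES = {
    "patient_activation": [1, 2, 3],
    "delivery_system_design_decision_support": [4, 5, 6],
    "goal_setting": [7, 8, 9, 10, 11],
    "problem_solving_contextual_counseling": [12, 13, 14, 15],
    "follow_up_coordination": [16, 17, 18, 19, 20],
}

def score_pacic(responses: pd.DataFrame) -> pd.DataFrame:
    """Average item ratings into the five subscale scores and an overall summary score."""
    scores = pd.DataFrame(index=responses.index)
    for subscale, item_numbers in ORIGINAL_SUBSCALES.items():
        cols = [f"pacic_{i}" for i in item_numbers]
        scores[subscale] = responses[cols].mean(axis=1)
    scores["pacic_summary"] = responses[[f"pacic_{i}" for i in range(1, 21)]].mean(axis=1)
    return scores
```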
Statistical Analysis
We used an exploratory factor analytic approach on the baseline data and then, based on those results, conducted confirmatory factor analysis (CFA) on the follow-up data, comparing our results to the original validation study [10] and to another large factor analytic study of the PACIC conducted in the United States [14]. For the exploratory factor analysis (EFA), we used the Comprehensive Exploratory Factor Analysis software [32]. We tested 1-, 2-, 3-, 4-, and 5-factor models using oblique rotation (quartimax) and, as this was exploratory, allowed all items to load on each factor. Model fit was evaluated with the chi-square tests of perfect and close fit, the root mean square error of approximation (RMSEA), and the expected cross validation index (ECVI). The test of perfect fit tests the null hypothesis that the specified model fits perfectly; a significant result indicates that the model differs significantly from a perfectly fitting model. Because perfect fit is almost never attainable, the test of close fit instead tests the null hypothesis that the model fits the data closely. Lower RMSEA values indicate better fit, with values below .08 or .10 considered acceptable, depending on sample size [33]. The ECVI indicates how well a model would fit if specified in another sample, with lower values indicating better fit. Reliability was examined with Cronbach’s alpha for each subscale, and we tested whether deleting individual items increased reliability in order to inform potential revisions.
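The EFA itself was run in the CEFA program; as a hedged illustration of the reliability checks described above (not the authors’ code), Cronbach’s alpha and alpha-if-item-deleted can be computed directly from an item-response matrix:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score).
    Assumes complete cases (apply listwise deletion beforehand)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Recompute alpha with each item removed, to flag items whose deletion raises reliability."""
    return pd.Series({col: cronbach_alpha(items.drop(columns=col)) for col in items.columns})

# An oblique five-factor EFA broadly comparable in spirit (not identical) to the CEFA analysis
# could be fit with the factor_analyzer package, e.g.:
#   from factor_analyzer import FactorAnalyzer
#   FactorAnalyzer(n_factors=5, rotation="oblimin").fit(items)
# For reference, RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))).
```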
For the CFA, we used LISREL 8.8 to conduct the analyses. Based on the EFA (reported below), a five-factor model was tested. In addition to the five-factor model, we also tested a bifactor model, which included the same five factors plus a general factor that accounted for the relationships among the factors in place of factor correlations [34]. The bifactor model can support the use of total scores as well as subscale scores. The models from the exploratory analyses were fit to a polychoric correlation matrix using diagonally weighted least squares estimation. For these analyses, we evaluated model fit using the tests of perfect and close fit, the RMSEA, the ECVI, the comparative fit index (CFI), and the root mean square residual (RMSR). The CFI compares the fit of the specified model against a null (non-fitting) model and a perfectly fitting model, with values over .94 considered acceptable. The RMSR is the mean of the residuals for the correlation matrix, with values under .05 generally considered acceptable. For the CFA sample, we also examined Cronbach’s alpha for each subscale and whether deleting each item increased reliability, using SPSS to conduct these analyses.
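As an illustrative sketch only (the published models were fit in LISREL 8.8 with diagonally weighted least squares on a polychoric matrix), the five-factor structure carried into the CFA could be specified with lavaan-style syntax in the Python semopy package; the `pacic_*` variable names and the default estimator here are assumptions, not the authors’ setup:

```python
import pandas as pd
import semopy

# Five-factor structure carried forward from the EFA (item 5 dropped); F1-F5 as described in the text.
FIVE_FACTOR_MODEL = """
F1 =~ pacic_1 + pacic_2 + pacic_3
F2 =~ pacic_4 + pacic_6 + pacic_7 + pacic_8 + pacic_9
F3 =~ pacic_10 + pacic_16 + pacic_17 + pacic_18
F4 =~ pacic_11 + pacic_12 + pacic_13 + pacic_14 + pacic_15
F5 =~ pacic_19 + pacic_20
"""

def fit_five_factor_cfa(data: pd.DataFrame) -> pd.DataFrame:
    """Fit the five-factor CFA and return a table of fit statistics (chi-square, RMSEA, CFI, ...)."""
    model = semopy.Model(FIVE_FACTOR_MODEL)
    model.fit(data)  # semopy's default estimator; the published analysis used DWLS in LISREL
    return semopy.calc_stats(model)
```

A bifactor variant of this sketch would add a general factor on which all 19 items load, with the five specific factors kept orthogonal to it, which is what allows a total score and subscale scores to be supported simultaneously.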
RESULTS
Descriptive statistics are reported in Table 1. As would be expected for South Texas, 48% of the participants at baseline and 54% at follow-up were Hispanic. A substantial minority at baseline (33%) and follow-up (30%) reported a diagnosis of diabetes.
Table 1.
Demographics of the two samples.
| | Exploratory Factor Analysis (Baseline) Sample (n=1,567) | Confirmatory Factor Analysis (Follow-up) Sample (n=1,536) |
|---|---|---|
| Age, years | 52.16 (SD 16.03) | 50.60 (SD 16.54) |
| Female | 62.7% (982) | 63.0% (968) |
| Education, Bachelor’s degree or higher | 28.3% (443) | 30.3% (465) |
| Race/Ethnicity | | |
| Non-Hispanic White | 40.4% (633) | 36.1% (600) |
| Black/African American | 5.9% (92) | 4.0% (61) |
| Hispanic | 48.2% (756) | 53.8% (826) |
| Native American | 2.0% (32) | 1.4% (22) |
| Other | 5.4% (85) | 3.9% (60) |
| Diabetes | 32.7% (502) | 29.7% (456) |
| Fair or poor self-rated health | 27.2% (426) | 25.1% (386) |
EFA
Based on the fit indices (Table 2), the 1-, 2-, and 3-factor models did not fit the data well: all RMSEAs were over .08, and the ECVIs were higher than for the 4- and 5-factor models. Although the tests of perfect and close fit were significant for all models, the test of perfect fit can be misleading with large sample sizes [35]. The 4-factor model provided a good fit to the data, but the 5-factor model was the only model with an RMSEA confidence interval entirely below .08, and its ECVI was substantially lower than that of the 4-factor model. Hence, the 5-factor model was retained. Each of the five factors had high reliability: Factors 1 (0.870), 3 (0.887), and 5 (0.871) had Cronbach’s alphas above 0.80 but below 0.90, and Factors 2 (0.905) and 4 (0.907) had alphas above 0.90. Deleting an item increased reliability only for Factor 3, when item 16 was deleted (new alpha 0.895).
Table 2.
Model fit indices for the exploratory and confirmatory factor analyses.
| | EFA: 1 factor | EFA: 2 factor | EFA: 3 factor | EFA: 4 factor | EFA: 5 factor | CFA: 5 factor | CFA: bifactor |
|---|---|---|---|---|---|---|---|
| Chi-square (df) | 4904.223 (170) | 2672.233 (151) | 1871.649 (133) | 1174.196 (116) | 791.763 (100) | 961.71 (142) | 840.91 (133) |
| Test of perfect fit (p) | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
| Test of close fit (p) | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | 0.099 |
| RMSEA (95% CI) | 0.133 (0.130, 0.137) | 0.103 (0.100, 0.107) | 0.091 (0.088, 0.095) | 0.076 (0.072, 0.080) | 0.066 (0.062, 0.071) | 0.055 (0.052, 0.058) | 0.053 (0.049, 0.056) |
| ECVI | 3.183 | 1.782 | 1.294 | 0.870 | 0.646 | 0.55 | 0.50 |
| CFI | | | | | | 0.99 | 1.00 |
| RMSR | | | | | | 0.028 | 0.032 |

EFA = exploratory factor analysis; CFA = confirmatory factor analysis; df = degrees of freedom; RMSEA = root mean square error of approximation; CI = confidence interval; ECVI = expected cross validation index; CFI = comparative fit index; RMSR = root mean square residual.
Although a 5-factor model was retained (Table 3), the factor loadings differed from the original 5-factor structure in the PACIC validation study [10]. The first three items loaded together on Factor 1 (F1), consistent with the original Patient Activation subscale. The five items of Factor 2 (F2) included two items from the original Delivery System Design/Decision Support subscale (#4 and #6) and three items from the original Goal Setting subscale (#7, #8, and #9). The four items of Factor 3 (F3) came from the original Goal Setting subscale (#10) and the original Follow-up/Coordination subscale (#16, #17, and #18). The five items of Factor 4 (F4) came from the original Goal Setting subscale (#11) and the original Problem-solving/Contextual Counseling subscale (#12, #13, #14, and #15). Two items (#19 and #20) from the original Follow-up/Coordination subscale loaded by themselves on Factor 5 (F5). Item 5 did not load sufficiently on any factor and was not included in the confirmatory analyses. Items 11 and 16 also had relatively low factor loadings, but unlike item 5, each clearly loaded on a specific factor and so was retained for the confirmatory analyses.
Table 3.
Factor loadings for the five factor model from the exploratory analyses with subscales from the original validation study.
| Item | Original Subscale | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| 1 | Patient Activation | −0.02 | 0.05 | 0.79 | 0.07 | 0.00 |
| 2 | Patient Activation | 0.01 | 0.06 | 0.84 | −0.04 | 0.05 |
| 3 | Patient Activation | 0.24 | −0.09 | 0.52 | 0.08 | 0.09 |
| 4 | Delivery System Design/Decision Support | −0.06 | 0.11 | 0.08 | −0.03 | 0.71 |
| 5 | Delivery System Design/Decision Support | 0.31 | −0.20 | 0.24 | 0.11 | 0.22 |
| 6 | Delivery System Design/Decision Support | 0.10 | −0.14 | 0.08 | 0.14 | 0.65 |
| 7 | Goal Setting | 0.06 | −0.03 | 0.12 | 0.02 | 0.75 |
| 8 | Goal Setting | 0.02 | 0.05 | −0.03 | 0.04 | 0.82 |
| 9 | Goal Setting | 0.12 | 0.23 | −0.06 | −0.03 | 0.57 |
| 10 | Goal Setting | 0.04 | 0.69 | 0.09 | 0.00 | 0.10 |
| 11 | Goal Setting | 0.53 | 0.18 | 0.15 | −0.03 | −0.02 |
| 12 | Problem-solving/Contextual Counseling | 0.72 | 0.00 | 0.13 | 0.04 | −0.06 |
| 13 | Problem-solving/Contextual Counseling | 0.85 | 0.01 | −0.02 | 0.02 | 0.05 |
| 14 | Problem-solving/Contextual Counseling | 0.65 | 0.11 | −0.03 | 0.02 | 0.20 |
| 15 | Problem-solving/Contextual Counseling | 0.57 | 0.07 | 0.04 | 0.19 | 0.07 |
| 16 | Follow-up/Coordination | 0.18 | 0.34 | 0.01 | 0.20 | 0.12 |
| 17 | Follow-up/Coordination | 0.11 | 0.79 | 0.02 | 0.05 | 0.00 |
| 18 | Follow-up/Coordination | −0.04 | 0.73 | 0.04 | 0.14 | 0.03 |
| 19 | Follow-up/Coordination | −0.03 | 0.06 | −0.02 | 0.85 | 0.05 |
| 20 | Follow-up/Coordination | 0.07 | 0.03 | 0.07 | 0.77 | −0.03 |

Note: the numbered factor columns above do not follow the F1–F5 labels used in the text; in the text, factors are renumbered by item content (F1 = items 1–3; F2 = items 4, 6, 7–9; F3 = items 10, 16–18; F4 = items 11–15; F5 = items 19–20).
CFA
We tested the five-factor model from the EFA using the follow-up assessment sample. Fit indices for the five-factor and bifactor models are reported in Table 2. As would be expected for a model with more parameters, the bifactor model fit better than the five-factor model based on the test of close fit, the RMSEA, the CFI, and the ECVI. However, the fit indices for the five-factor model were still acceptable, and the RMSR was lower for the five-factor model than for the bifactor model. Factor loadings for the five-factor model were all acceptable (Table 4). For the bifactor model, loadings on the general factor were high and loadings for F1, F3, and F5 were acceptable; the loadings for F2 and F4, however, were relatively lower. The new PACIC subscales had sufficient reliability (F1 0.870, F2 0.916, F3 0.887, F4 0.910, and F5 0.871). In the follow-up sample, deleting an item increased alpha only for Factor 3 (removing item 16; new alpha 0.897) and Factor 1 (removing item 3; new alpha 0.884).
Table 4.
Factor loadings for the confirmatory factor analysis.
| Item | Factor | 5-factor model loading | Bifactor: general factor | Bifactor: specific factor |
|---|---|---|---|---|
| 1 | F1 | 0.89 | 0.74 | 0.53 |
| 2 | F1 | 0.90 | 0.75 | 0.60 |
| 3 | F1 | 0.91 | 0.78 | 0.30 |
| 4 | F2 | 0.84 | 0.80 | 0.24 |
| 6 | F2 | 0.88 | 0.83 | 0.33 |
| 7 | F2 | 0.94 | 0.89 | 0.29 |
| 8 | F2 | 0.91 | 0.86 | 0.29 |
| 9 | F2 | 0.86 | 0.82 | 0.19 |
| 10 | F3 | 0.90 | 0.81 | 0.37 |
| 11 | F4 | 0.81 | 0.78 | 0.21 |
| 12 | F4 | 0.85 | 0.80 | 0.36 |
| 13 | F4 | 0.92 | 0.87 | 0.37 |
| 14 | F4 | 0.93 | 0.89 | 0.26 |
| 15 | F4 | 0.91 | 0.88 | 0.16 |
| 16 | F3 | 0.88 | 0.82 | 0.18 |
| 17 | F3 | 0.93 | 0.84 | 0.47 |
| 18 | F3 | 0.89 | 0.80 | 0.42 |
| 19 | F5 | 0.94 | 0.84 | 0.43 |
| 20 | F5 | 0.94 | 0.84 | 0.41 |
DISCUSSION
This study confirmed that the PACIC measures five distinct factors, although they did not precisely match those developed in the original validation study (see Table 5). Results also indicated an overall patient experience of chronic illness care factor, supporting the use of the PACIC total score. Of the 5 identified factors, only F1 was identical to one of those created a priori and confirmed by the developers (Items 1–3, originally named “Patient Activation”). Based upon the content of these items, we believe they more appropriately reflect Patient Engagement in that they focus on soliciting patient input and preferences (Table 5). Our F4 was also generally consistent with the original Problem Solving/Contextual Counseling subscale, capturing all 4 of its original items, as well as 1 additional item from the original Goal Setting subscale. We believe, however, that “Collaborative Care Planning” more accurately reflects the content of these items.
Table 5.
Original PACIC items and comparison of the factor loadings identified by Glasgow et al. [10], Fan et al. [14], and Noël et al. (the present study).
| Item | Original item | Original subscale (5-factor solution) | Subscales from Fan et al., 2014 (4-factor solution) | Proposed subscales (5-factor solution) |
|---|---|---|---|---|
| 1 | Asked for my ideas when we made a treatment plan | Patient Activation | Collaboration | Patient Engagement |
| 2 | Given choices about treatment to think about | Patient Activation | Collaboration | Patient Engagement |
| 3 | Asked to talk about any problems with my medicines or their effects | Patient Activation | Collaboration | Patient Engagement |
| 4 | Given a written list of things I should do to improve my health | Delivery System Design/Decision Support | Goal Setting/Action Plans | Patient Activation |
| 5 | Satisfied that my care was well organized | Delivery System Design/Decision Support | Evaluation of Services | Deleted |
| 6 | Shown how what I did to take care of myself influenced my condition | Delivery System Design/Decision Support | Goal Setting/Action Plans | Patient Activation |
| 7 | Asked to talk about my goals in caring for my condition | Goal Setting | Goal Setting/Action Plans | Patient Activation |
| 8 | Helped to set specific goals to improve my eating or exercise | Goal Setting | Goal Setting/Action Plans | Patient Activation |
| 9 | Given a copy of my treatment plan | Goal Setting | Goal Setting/Action Plans | Patient Activation |
| 10 | Encouraged to go to a specific group or class to help me cope with my chronic condition | Goal Setting | Social Network | Linkages to Self-Care Support |
| 11 | Asked questions, either directly or on a survey, about my health | Goal Setting | Evaluation of Services | Collaborative Care Planning |
| 12 | Sure that my doctor or nurse thought about my values, beliefs and traditions when they recommended treatments to me | Problem-solving/Contextual Counseling | Evaluation of Services | Collaborative Care Planning |
| 13 | Helped make a treatment plan that I could carry out in my daily life | Problem-solving/Contextual Counseling | Evaluation of Services | Collaborative Care Planning |
| 14 | Helped to plan ahead so I could take care of my condition even in hard times | Problem-solving/Contextual Counseling | Evaluation of Services | Collaborative Care Planning |
| 15 | Asked how my chronic condition affects my life | Problem-solving/Contextual Counseling | Evaluation of Services | Collaborative Care Planning |
| 16 | Contacted after a visit to see how things were going | Follow-up/Coordination | Evaluation of Services | Linkages to Self-Care Support |
| 17 | Encouraged to attend programs in the community that could help me | Follow-up/Coordination | Social Network | Linkages to Self-Care Support |
| 18 | Referred to a dietician, health educator, or counselor | Follow-up/Coordination | Social Network | Linkages to Self-Care Support |
| 19 | Told how my visits with other types of doctors, like an eye doctor or surgeon, helped my treatment | Follow-up/Coordination | Evaluation of Services | Care Coordination |
| 20 | Asked how my visits with other doctors were going | Follow-up/Coordination | Evaluation of Services | Care Coordination |
The items from the other 3 original subscales, however, loaded differently in our analysis. For example, 3 of the 5 items from the original Goal Setting subscale and 2 of the 3 items from the original Delivery System Design/Decision Support subscale loaded on our F2. These items more accurately reflect activating processes, and we believe Patient Activation is an appropriate name for this factor. In addition, the 5 original Follow-up/Coordination subscale items loaded onto two different factors in our analysis (F3 and F5), suggesting that they capture two distinct constructs. Our F3 captured 3 items that focus primarily on non-physician referrals and support within the health care system or community for patient self-management. Although F3 also captured a fourth item (#16) assessing generic follow-up, its relatively low loading suggests that this item could be deleted or revised. We therefore suggest Linkages to Self-Care Support as an appropriate name for this factor. In contrast, F5 captured 2 of the original Follow-up/Coordination items that focus on referrals and care provided by other physicians; we believe Care Coordination is an appropriate name for this factor. Item 5 was not included in the CFA due to insufficient factor loadings. This is not surprising, given that item 5 is the only PACIC item that asks about satisfaction.
There have been at least 12 previously published studies exploring the factor structure of the PACIC (Table 6, supplementary file). The results have been quite mixed, most likely due to variations in analytic methods. The developers used CFA only to validate the mapping of the proposed items onto 5 domains determined a priori [10]. Some subsequent analyses have found a 4- or 5-factor structure using factor analysis [14–17, 23]. Others, however, have found that 1- or 2-factor models fit the data better using principal components analysis (PCA) [13, 20, 21, 24]. In factor analysis, factors are conceptualized as “real-world” entities, and the analysis examines shared variance. In contrast, PCA treats components as geometrical abstractions, tries to combine variables into the smallest number of subsets, which may not map easily onto real-world phenomena, and examines observed variance [36]. It is therefore not surprising that such different methods have produced disparate results.
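To make the distinction concrete, a small illustrative sketch (synthetic data, unrelated to any of the cited studies) contrasting the two approaches on the same response matrix:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))  # stand-in for a 20-item response matrix

# PCA decomposes the total observed variance into orthogonal components.
pca = PCA(n_components=5).fit(X)

# Common factor analysis models only the shared variance, leaving each item's
# unique variance in a per-item noise term, so loadings differ from PCA components.
fa = FactorAnalysis(n_components=5).fit(X)

print(pca.components_.shape)     # (5, 20)
print(fa.components_.shape)      # (5, 20), estimated under the common-factor model
print(fa.noise_variance_.shape)  # (20,) item-unique variances, which PCA does not model
```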
Furthermore, the study populations used in prior factor analytic studies have varied greatly, with sample sizes ranging from 100 to over 3,300 [10, 13–24]. Four of the studies used data reported by patients with type 2 diabetes [13, 14, 16, 23], while the others used individuals with other chronic illnesses or non-disease-specific groups. These studies used patient responses captured at a single point in time, using the same or split samples to conduct both EFA and CFA; our analysis used two different samples collected at two different points in time for the EFA and CFA. Prior PACIC studies have also relied primarily on homogeneous non-US populations [15, 17, 19–24] or US samples that were predominantly non-Hispanic White [10, 13, 14]. One US study factor analyzed data from 100 Spanish-speaking patients with diabetes [16]. Given that the US healthcare system differs vastly from those of other countries and that there are significant health inequities, it is understandable that results may differ between US and non-US studies and by the race/ethnicity of the US samples.
Our study represents the largest US patient sample to date (combined n=3,103), the majority of whom were racial or ethnic minorities. The use of patient responses collected at two separate points in time was another strength of our study. Modification of the PACIC stem in our study may have contributed to subtle differences in the factor loadings. Because the original stem was worded in a way that might prompt patients to consider multiple sources of health care when responding, we altered the wording to prompt patients to rate their experiences when receiving care in “this clinic.” Although this might account for differences from other studies, the stem and other aspects of the PACIC have been modified in other studies with no clear effect on results [13, 17, 20, 23].
Furthermore, the 5-factor solution we found is strikingly comparable to the 4-factor solution identified by Fan et al. [14], which was based on similar methods applied to 1,544 of 1,934 PACIC responses from US patients identified from a diabetes registry maintained by 34 Midwestern primary care clinics belonging to a practice-based research network (Table 5). Although this suggests relative consistency in the underlying factor structure of the PACIC, our findings indicate possible revisions that might further enhance the PACIC’s utility. Two of the subscales (Patient Engagement, Care Coordination) had a very small number of items; most experts suggest that a minimum of 3 items is needed for a factor to be stable, and more are preferable [37, 38]. Therefore, a revision that adds items to these subscales would be advisable so that they can be assessed separately to better inform clinical practice.
The results also suggest that certain items could be deleted (item 5) or revised (item 11) and that two of the factors, Patient Activation (F2) and Collaborative Care Planning (F4), have some redundant items that could be eliminated. Additionally, the original Follow-up/Coordination subscale appeared to comprise two separate constructs, linkages to self-care support and care coordination, suggesting the potential to develop two separate subscales from this factor. Removing item 16 from Factor 3 consistently increased alpha for that factor. In view of its face content and the results of the EFA, item 16 likely measures a separate but related construct: active follow-up by the health care team between clinic visits, a concept that is distinct from the coordination function provided by primary care. A revised version of the PACIC could add items similar to item 16 and create a new subscale called “Proactive Follow-up.” Active follow-up between primary care appointments has recently been found to distinguish high-quality primary care, suggesting that this is an important domain to expand and capture in future assessments of patient care experience [39].
As previously noted, when the developers of the PACIC proposed candidate items most likely to reflect the care experiences of the average patient [10], EHRs had not been widely adopted and technological advances such as patient portals simply did not exist. As a result, the Clinical Information Systems domain of the CCM is not represented in the PACIC. New PACIC items assessing the availability of EHR portals that allow patients to access their medical records, or of home health monitoring devices that transmit data to their providers, are therefore needed to capture patients’ access to health information technologies that are increasingly being incorporated into chronic illness care. Although the CCM was one of the first models to suggest the potential value of team-based care within its Delivery System Design domain, no PACIC items assess the extent to which patients experience team-based care; this could be another important construct to include in a PACIC revision.
Similarly, social and environmental influences have increasingly been recognized over the past decade as important determinants of health [40]. The Institute of Medicine recently proposed domains and measures of social determinants for EHRs to meet recommendations for meaningful use [41]. The PACIC, however, does not capture the extent to which health care providers assess problems in patients’ homes or neighborhoods that may affect their ability to self-manage. Furthermore, the PACIC does not appear to capture care practices important for optimizing the care of patients with multiple or complex chronic conditions [42]. A revision that incorporates these constructs, as well as the 6 items recently proposed to capture patients’ receipt of 5A behavioral advice [15], as optional PACIC modules would provide the greatest utility for quality improvement and health services research.
Although we confirmed that the total scale is interpretable, our findings support a new interpretation of the subscales and underlying constructs measured by the PACIC, one that is more consistent with our current understanding of what primary care can and should be capable of from the patient’s point of view. While some have suggested that a universally applicable factor structure for the PACIC may not exist [14], we believe it is time to revise the PACIC to enhance its performance and to better reflect chronic illness care delivery as it has evolved over the past decade. A revised PACIC would provide researchers, clinicians, and health care systems with a tool that can be trusted to assess specific components of optimal chronic illness care. The results of this study suggest some potential starting points for those revisions.
Supplementary Material
Acknowledgments
We would like to acknowledge the thoughtful feedback from the developers of the PACIC, including Russell Glasgow, Edward Wagner, Judith Schaefer, and Robert Reid, and other investigators at Group Health Cooperative, including Brian Austin, Karin Johnson, Leah Tuzzio, Dona Cutsogeorge, and Katie Coleman. This study was based on data collected through a grant from the National Institute of Diabetes and Digestive and Kidney Diseases (R18 DK075692); the trial follows the Consolidated Standards of Reporting Trials guidelines and is registered per International Committee of Medical Journal Editors guidelines (Clinical Trial Registration Number NCT00482768). This work was also supported with resources and the use of facilities at the Audie L. Murphy Veterans Hospital, Veterans Health Administration, Department of Veterans Affairs. The views expressed in this article are those of the author(s) and do not necessarily represent the views of the Department of Veterans Affairs. We would like to express our appreciation to the physicians and office staff in the South Texas Ambulatory Research Network (STARNet) for their participation in this study.
Contributor Information
Polly Hitchcock Noël, Email: noelp@uthsca.edu, University of Texas Health Science Center at San Antonio, South Texas Veterans Health Care System, 7703 Floyd Curl Drive, San Antonio, TX 78229, 210-394-0360.
Salene Jones, Email: jones.s@ghc.org, MacColl Center for Healthcare Innovation, Group Health Research Institute, Group Health Cooperative, 1730 Minor Avenue, Suite 1600, Seattle, WA 98101 206-287-2704.
Michael L. Parchman, Email: parchman.m@ghc.org, MacColl Center for Healthcare Innovation, Group Health Research Institute, Group Health Cooperative, 1730 Minor Avenue, Suite 1600, Seattle, WA 98101, 206-287-2704.
References
1. Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. New England Journal of Medicine. 2013;368(3):201–3. doi: 10.1056/NEJMp1211775.
2. Browne K, Roseman D, Shaller D, Edgman-Levitan S. Analysis & commentary. Measuring patient experience as a strategy for improving primary care. Health Affairs (Millwood). 2010;29(5):921–5. doi: 10.1377/hlthaff.2010.0238.
3. Cleary PD, Edgman-Levitan S. Health care quality. Incorporating consumer perspectives. JAMA. 1997;278:1608–1612.
4. Jenkinson C, Coulter A, Bruster S, Richards N, Chandola T. Patients’ experiences and satisfaction with health care: results of a questionnaire study of specific aspects of care. Quality and Safety in Health Care. 2002;11:335–339. doi: 10.1136/qhc.11.4.335.
5. Salisbury C, Wallace M, Montgomery AA. Patients' experience and satisfaction in primary care: secondary analysis using multilevel modelling. BMJ. 2010;341:c5004. doi: 10.1136/bmj.c5004.
6. Danielsen K, Bjertnaes OA, Garratt A, Forland O, Iversen HH, Hunskaar S. The association between demographic factors, user reported experiences and user satisfaction: results from three casualty clinics in Norway. BMC Family Practice. 2010;11:73. doi: 10.1186/1471-2296-11-73.
7. Arain M, Nichol J, Campbell M. Patients’ experience and satisfaction with GP led walk-in centres in the UK; a cross sectional study. BMC Health Services Research. 2013;13:142. doi: 10.1186/1472-6963-13-142.
8. NCQA. NCQA’s New Distinction in Patient Experience Reporting. Available at: http://www.ncqa.org/PublicationsProducts/OtherProducts/PatientExperienceReporting.aspx [last accessed 30 December 2015].
9. AHRQ. The CAHPS Improvement Guide. Available at: https://www.cahps.ahrq.gov/quality-improvement/improvement-guide/improvement-guide.html [last accessed 30 December 2015].
10. Glasgow RE, Wagner EH, Schaefer J, Mahoney LD, Reid RJ, Greene SM. Development and validation of the Patient Assessment of Chronic Illness Care (PACIC). Medical Care. 2005;43:436–444. doi: 10.1097/01.mlr.0000160375.47920.8c.
11. Wagner EH, Austin BT, Davis C, Hindmarsh M, Schaefer J, Bonomi A. Improving chronic illness care: translating evidence into action. Health Affairs (Millwood). 2001;20:64–78. doi: 10.1377/hlthaff.20.6.64.
12. Wagner EH, Groves T. Care for chronic diseases. BMJ. 2002;325:913–914. doi: 10.1136/bmj.325.7370.913.
13. Gugiu PC, Coryn CL, Applegate B. Structure and measurement properties of the Patient Assessment of Chronic Illness Care instrument. Journal of Evaluation in Clinical Practice. 2010;16:509–516. doi: 10.1111/j.1365-2753.2009.01151.x.
14. Fan J, McCoy RG, Ziegenfuss JY, Smith SA, Borah BJ, Deming JR, Montori VM, Shah ND. Evaluating the structure of the Patient Assessment of Chronic Illness Care (PACIC) survey from the patient’s perspective. Annals of Behavioral Medicine. 2015;49(1):104–11. doi: 10.1007/s12160-014-9638-3.
15. Rosemann T, Laux G, Droesemeyer S, Gensichen J, Szecsenyi J. Evaluation of a culturally adapted German version of the patient assessment of chronic illness care (PACIC 5A) questionnaire in a sample of osteoarthritis patients. Journal of Evaluation in Clinical Practice. 2007;13:806–813. doi: 10.1111/j.1365-2753.2007.00786.x.
16. Aragones A, Schaefer EW, Stevens D, Gourevitch MN, Glasgow RE, Shah NR. Validation of the Spanish translation of the Patient Assessment of Chronic Illness Care (PACIC) survey. Preventing Chronic Disease. 2008;5(4):1–10.
17. Wensing M, van Lieshout J, Jung HP, Hermsen J, Rosemann T. The Patients Assessment Chronic Illness Care (PACIC) questionnaire in the Netherlands: a validation study in rural general practice. BMC Health Services Research. 2008;8:182. doi: 10.1186/1472-6963-8-182.
18. Gugiu PC, Coryn C, Clark R, Kuehn A. Development and evaluation of the short version of the patient assessment of chronic illness care instrument. Chronic Illness. 2009;5(4):268–276. doi: 10.1177/1742395309348072.
19. Cramm JM, Nieboer AP. Factorial validation of the patient assessment of chronic illness care (PACIC) and PACIC short version (PACIC-S) among cardiovascular disease patients in the Netherlands. Health and Quality of Life Outcomes. 2012;10:104. doi: 10.1186/1477-7525-10-104.
20. McIntosh CN. Examining the factorial validity of selected modules from the Canadian survey of experiences with primary health care. Report from Statistics Canada, Health Information and Research Division; 2008.
21. Taggart J, Chan B, Jayasinghe UW, Dip BC, Proudfoot JM, Crookes P, Beilby J, Black D, Harris MF. Patients Assessment of Chronic Illness Care (PACIC) in two Australian studies: structure and utility. Journal of Evaluation in Clinical Practice. 2011;17:215–221. doi: 10.1111/j.1365-2753.2010.01423.x.
22. Spicer J, Budge C, Carryer J. Taking the PACIC back to basics: the structure of the Patient Assessment of Chronic Illness Care. Journal of Evaluation in Clinical Practice. 2010;18(2):307–312. doi: 10.1111/j.1365-2753.2010.01568.x.
23. Maindal HT, Sokolowski I, Vedsted P. Adaptation, data quality and confirmatory factor analysis of the Danish version of the PACIC questionnaire. European Journal of Public Health. 2010;22(1):31–36. doi: 10.1093/eurpub/ckq188.
24. Gensichen J, Serras A, Paulitsch MA, Rosemann T, König J, Gerlach FM, Petersen JJ. The Patient Assessment of Chronic Illness Care questionnaire: evaluation in patients with mental disorders in primary care. Community Mental Health Journal. 2011;47(4):447–53. doi: 10.1007/s10597-010-9340-2.
25. Wagner EH, Ludman EF, Aiello Bowles EJ, Penfold R, Reid RJ, Rutter CM, Chubak J, McCorkle R. Nurse navigators in early cancer care: a randomized, controlled trial. Journal of Clinical Oncology. 2014;32(1):12–8. doi: 10.1200/JCO.2013.51.7359.
26. Rittenhouse DR, Shortell SM. The patient-centered medical home: will it stand the test of health reform? JAMA. 2009;301(19):2038–2040. doi: 10.1001/jama.2009.691.
27. Hogan SO, Kissam SM. Measuring meaningful use. Health Affairs (Millwood). 2010;29(4):601–606. doi: 10.1377/hlthaff.2009.1023.
28. Davis K, Abrams M, Stremikis K. How the Affordable Care Act will strengthen the nation’s primary care foundation. Journal of General Internal Medicine. 2011;26(10):1201–1203. doi: 10.1007/s11606-011-1720-y.
29. Noël PH, Parchman ML, Palmer RF, Romero RL, Leykum LK, Lanham HJ, Zeber JE, Bowers KW. Alignment of patient and primary care practice member perspectives of chronic illness care: a cross-sectional analysis. BMC Family Practice. 2014;15:57. doi: 10.1186/1471-2296-15-57.
30. Parchman ML, Pugh JA, Culler SD, Noël PH, Arar NH, Romero RL, Palmer PF. A group randomized trial of a complexity-based organizational intervention to improve risk factors for diabetes complications in primary care. Implementation Science. 2008;3:15. doi: 10.1186/1748-5908-3-15.
31. Parchman ML, Noël PH, Culler SD, Lanham HJ, Leykum LK, Romero RL, Palmer RF. A randomized trial of practice facilitation to improve the delivery of chronic illness care in primary care: initial and sustained effects. Implementation Science. 2013;8(1):93. doi: 10.1186/1748-5908-8-93.
32. Browne MW, Cudeck R, Tatineni K, Mels G. CEFA: Comprehensive Exploratory Factor Analysis, Version 3.04; March 2010.
33. Chen F, Curran PJ, Bollen KA, Kirby J, Paxton P. An empirical evaluation of the use of fixed cutoff points in RMSEA test statistic in structural equation models. Sociological Methods and Research. 2008;36(4):462–494. doi: 10.1177/0049124108314720.
34. Holzinger KJ, Swineford F. The bifactor method. Psychometrika. 1937;2:41–54.
35. Hu LT, Bentler PM. Structural equation modeling: concepts, issues and applications. Thousand Oaks, CA: Sage Publications; 1995.
36. Bryant FB, Yarnold PR. Principal-components analysis and exploratory and confirmatory factor analysis. In: Grimm LG, Yarnold PR, editors. Reading and understanding multivariate statistics. Washington, DC: American Psychological Association; 1995. pp. 99–136.
37. Anderson TW, Rubin H. Statistical inference in factor analysis. In: Neyman J, editor. Proceedings of the Third Berkeley Symposium. Vol. 5. Berkeley, CA: University of California Press; 1956. pp. 111–150.
38. Nunnally JC. Psychometric theory. New York: McGraw-Hill; 1967.
39. Bodenheimer T, Ghorob A, Willard-Grace R, Grumbach K. The 10 building blocks of high-performing primary care. Annals of Family Medicine. 2014;12(2):166–171. doi: 10.1370/afm.1616.
40. Braveman PA, Egerter SA, Mockenhaupt RE. Broadening the focus: the need to address social determinants of health. American Journal of Preventive Medicine. 2011;20(1S1):S4–S18. doi: 10.1016/j.amepre.2010.10.002.
41. IOM. Capturing social and behavioral domains and measures in electronic health records: Phase 2. Washington, DC: Institute of Medicine of the National Academies, National Academies Press; 2014.
42. Noël PH, Parchman ML, Williams JW Jr, Cornell JE, Shuko L, Zeber JE, Kazis LE, Lee AF, Pugh JA. The challenges of multimorbidity from the patient perspective. Journal of General Internal Medicine. 2007;22(Suppl 3):419–24. doi: 10.1007/s11606-007-0308-z.