Abstract
Objective
Patient engagement (PE), or a patient’s participation in their healthcare, is an important component of comprehensive healthcare delivery, yet there is no existing, publicly available measurement tool to assess PE capacity and behaviours. We sought to develop a survey to measure PE capacity and behaviours for use in ambulatory healthcare clinics.
Design
Measure development and psychometric evaluation.
Setting and participants
A total of 1180 adults in the USA from 2022 to 2024: 1050 individuals, recruited nationally via social media across three separate samples, who indicated they had seen a healthcare provider in the prior 12 months; 8 patient advisors and healthcare providers recruited from a large Midwestern US academic medical centre; and 122 patients recruited from five participating ambulatory clinics in the Midwestern USA.
Methods
An initial survey was developed based on a concept mapping approach with a Project Advisory Board composed of patients, researchers and clinicians. Social media was then used to recruit 540 participants nationally (Sample 1) to complete the initial, 101-item version of the survey to generate data for factor analysis. We conducted exploratory and confirmatory factor analyses to assess model and item fit to inform item reduction, and subsequently conducted cognitive interviews with eight additional participants (patient advisors and providers; Sample 2), who read survey items aloud, shared their thoughts and selected a response. The survey was revised and shortened based on these results. Next, a test–retest survey, also administered nationally via another round of social media recruitment, was administered two times to a separate sample (n=155; Sample 3), 2 weeks apart. We further revised the survey to remove items with low temporal stability based on these results. For clinic administration, research staff approached patients (n=122; Sample 4) in waiting rooms in one of five ambulatory clinics to complete the survey electronically or on paper to determine feasibility of in-clinic survey completion. We engaged in further item reduction based on provider feedback about survey length and fielded a final revised and shortened survey nationally via a final round of social media recruitment (n=355; Sample 5) to obtain psychometric data on this final version.
Primary and secondary outcome measures
Cronbach’s alphas, intraclass correlations (ICCs), Comparative Fit Index (CFI), root mean square error of approximation (RMSEA), standardised root mean squared residual (SRMR).
Results
The final PE Capacity Survey (PECS) includes six domains across two scales: ‘engagement behaviours’ (ie, preparing for appointments, ensuring understanding, adhering to care) and ‘engagement capacity’ (ie, healthcare navigation resources, resilience, relationship with provider). The PECS is 18 questions, can be completed during a clinic visit in less than 10 minutes, and produces scores which demonstrate acceptable internal consistency reliability (α=0.72 engagement behaviours, 0.76 engagement capacity), indicating items are measuring the same overarching construct. The scales also had high test–retest reliability (ICC=0.82 behaviours, 0.86 capacity), indicating stability of response over time, and expected dimensionality with high fit indices for the final scales (behaviours: CFI=0.97; RMSEA=0.07; SRMR=0.05; capacity: CFI=0.99; RMSEA=0.06; SRMR=0.06), indicating initial evidence of construct validity.
Conclusions
The PECS is the first known measure to assess patients’ capacity for engagement and represents a step towards informing interventions and care plans that acknowledge a patient’s engagement capacity and support engagement behaviours. Future work should validate the measure in other languages and patient populations and assess its criterion-related validity against patient outcomes.
Keywords: Primary Care, Surveys and Questionnaires, Patient Participation
STRENGTHS AND LIMITATIONS OF THIS STUDY
We included patient advisors in concept mapping and initial phases of measure development, which helped ensure the patient’s voice was reflected in this co-constructed measure.
This study used a thorough scale development process, with multiple samples, to ensure development of a robust measure that accounts for both engagement behaviours and patients’ capacity for engagement.
We tested the feasibility of deploying the survey in the clinic setting, which yielded initial data on implementation considerations.
We were unable to examine the extent to which the measure may need adaptations for other languages and specialised patient populations (eg, older adults, paediatrics).
Introduction
Patient engagement (PE), or a patient’s participation in their healthcare, enhances team-based care, which is viewed as a critical element of high-performing healthcare organisations.1 The PE strategies often linked to successful patient care include approaches such as shared decision-making, group visits, motivational interviewing, patient portals, personalised communication, culturally appropriate care, language services and patient education.2–10 Healthcare management research has found that in ambulatory healthcare organisations, specific management practices are linked to increased PE—specifically, using chronic care management processes, conducting routine medical and social risk patient screenings and focusing on multiple dissemination pathways for evidence-based practices.11 However, even in organisations with these management practices in place, not all patients meaningfully participate in PE strategies, and it is currently unknown what capabilities are needed to do so, limiting the ability of clinics to fully develop PE-focused interventions that are effective for their patient panels. This deficit exists in large part due to the inability to measure factors that influence a patient’s PE behaviours and ability to engage. Highlighting this gap, a recent literature review of PE measures concluded: ‘There is no tool co-constructed with patients from development to validation, which can be used to assess the main concepts and dimensions of PE in care at the same time’.12
The lack of this type of tool is due in part to the multifaceted nature of PE, including the specific actions that patients can take to be engaged in their health and healthcare such as being involved in care tasks and decision making, as well as the thoughts, feelings and actions that are present before, during and after patients take such actions.13–16 Available PE measures are proprietary, focus on single domains of PE, or identify only engagement behaviours (EBs) and not patients’ capacity to engage.17–22 A recent scoping review of PE intervention studies found that more than 20 different variables were used to assess capacity for engagement, with formal measures developed for only a subset of these factors.23
The lack of a multifaceted, publicly available measure of PE is evident in health policy efforts to improve healthcare quality. For instance, the Centers for Medicare & Medicaid Services (CMS) Total Performance Score, which serves as the basis for value-based purchasing reimbursements, includes a ‘person and community engagement’ domain. However, the measures within that domain do not point to deficits in engagement efforts or suggest how to improve a clinic’s score. Additionally, advancing clinic performance on PE was proposed in a 2020 National Academy of Medicine (NAM) discussion paper describing patient and family engagement in care (PFEC) as an essential element of health equity,24 suggesting that health disparities cannot be reduced without PE throughout the healthcare process. The NAM framework for PFEC details the structures, skill and awareness building, and connections that healthcare organisations need in order to enhance engagement. The framework identifies outputs (better engagement, decisions, processes and experiences) that lead to increased health equity. What remains missing is the ability to measure change in these outputs using standardised PE measurement tools.12
Our study team filled this gap by developing a measure of a patient’s engagement capacity (EC), based on the EC Framework (ECF).25 The ECF suggests that participation in healthcare is influenced by factors associated with capacity to engage (elements of the person, their environment) and EBs. Underlying this reciprocal relationship are a patient’s resources, willingness, self-efficacy and capabilities related to participating in their healthcare. This model focuses on capacity as a precursor to the behaviour of engaging because it can allow ambulatory healthcare clinics to better identify and implement mechanisms through which they can intervene to increase overall EBs. To further understand how to assess these components, we began by conceptualising PE using group concept mapping—an inclusive, participatory, collaborative and inductive social science process,15 26 which yielded a description of PE including 47 elements within five areas: Access (eg, transportation, cost), External resources (eg, patient portals, educational materials), Attitudes and behaviours (eg, self-efficacy, resiliency), Internal resources (eg, literacy, support systems) and Relationship with provider (eg, trust, rapport).27 28 We have published these findings in prior work.27 We now present the next step in the process of PE capacity measure development—constructing scales of EB and EC guided by the PE concept map, and accumulating evidence of their reliability and validity. Finally, we present a mapping of this newly developed PE capacity measure to the ECF factors described in earlier work, as well as applications of the new measure in healthcare settings.
Methods
This work follows the recommendations by Streiner and Kottner cited in the EQUATOR (Enhancing the QUAlity and Transparency Of Health Research) network (see online supplemental part 1 for checklist).29
Patient and public involvement
The project was guided by a Project Advisory Board inclusive of five patients and seven clinicians and researchers. We recruited patient advisors from our organisation’s existing Patient and Family Advisory Council, which is a robust programme that recruits patients to serve in a variety of capacities across the organisation. We sought advisors who varied in age and experience with the healthcare system. The role of this board is described in the subsequent subsections.
Survey development
Using our PE concept map described in Di Tosto et al27 (which was developed from the ECF)25 as the starting point, we worked with our Project Advisory Board to develop survey items aligned with the 47 elements in the concept map. We developed a list of the 47 elements and held a series of group working meetings where we searched for appropriate existing scales that mapped to each element. Online supplemental part 2 lists the 21 scales that were ultimately identified, with the specific subscales and/or items used and their validity information included in online supplemental part 3. For the remaining elements on the concept map, where we were unable to locate an appropriate existing scale, we developed items in line with item development best practices,30 using theory (eg, the EB Framework).31 32 To avoid an extremely lengthy survey, we selected only a subset of each existing scale’s items, with a goal of including at least three items per scale. We chose to include the items considered most closely related to the concept of interest, in alignment with our conceptual model. When needed, we made minor adaptations to wording or scaling to ensure consistency (eg, changing a response scale from 0–4 to 1–5; changing ‘at this house’ to ‘at your house’). These decisions were made in a series of group working meetings with several members of the research team. These meetings also involved the codevelopment of the items that we created, with several rounds of collaborative review and revision of the survey by multiple team members.
The resulting first version of the PE Capacity Survey (PECS) was a 101-item measure with items on a 5-point Likert scale. Items fell primarily into one of two categories: actions patients take to engage, and factors that, from the patient’s perspective, facilitate PE. This aligned with our conceptual model, and we thus developed two separate but related scales in the PECS: EBs and EC. To define EBs, we used the EB Framework’s definition of EBs as ‘measurable actions that individuals and/or their caregivers must perform in order to maximally benefit from the health care available to them’.32 We defined EC as the ability to experience a feeling of engagement or participation based on an individual’s resources, willingness, self-efficacy and capabilities.25
Participants and data collection
We used multiple independent samples in a robust scale development process33 with a total of 1180 participants. We began with exploratory factor analysis (EFA; n=260; Sample 1a) and confirmatory factor analysis (CFA; n=280; Sample 1b), followed by cognitive interviews34 (n=8; Sample 2). We then recruited an additional set of participants to conduct test–retest reliability analyses (n=155; Sample 3). We subsequently collected data from patients in a clinic environment to determine survey feasibility in this setting (n=122; Sample 4). Lastly, we collected a final round of data to develop a shorter version of the survey (n=355; Sample 5). While it is possible an individual may have been recruited for >1 of these activities by chance, this is unlikely given our various recruitment methods. There is no known overlap and to the best of our knowledge, the sample represents 1180 distinct individuals.
Sample 1a and 1b: factor structure sample
For this sample, we recruited study participants in June 2022 via social media advertising (paid ads on Facebook and Instagram) targeted to individuals >18 years of age who resided in the USA. A US$20 gift card was offered for completion of the study and the survey was administered via Qualtrics.35 To proceed, participants were required to agree to the consent; to answer ‘yes’ to three screening questions: (1) Are you 18 or older?, (2) Do you live in the USA? and (3) Have you seen a doctor or other healthcare provider (eg, nurse practitioner, physician assistant) in the past year (either in person or via telehealth)?; and to pass a CAPTCHA test (to reduce fraud/bots). For this activity, we enrolled participants consecutively, irrespective of their characteristics (as long as they passed the screening questions and CAPTCHA test), with a goal of obtaining at least 500 complete responses, in accordance with a recommended minimum sample size of 200 for factor analysis.36 A sample of 500 would allow us to split the sample and exceed the minimum of 200 for the EFA and a separate 200 for the CFA, while leaving room for missing data and for the removal of responses flagged by Qualtrics as suspected bots or duplicates. Next, we conducted an EFA on a random half of the sample (Sample 1a) and engaged in item reduction according to the results (described further in the Analyses section). We conducted a CFA on the second half of the sample (Sample 1b) to test model fit (described further in the Analyses section).
Sample 2: cognitive interviews
We recruited five patient advisors (who served on our Project Advisory Board) and a convenience sample of three clinicians in October 2022 to participate in cognitive interviews via an email from the principal investigator (PI). Patient Advisors were selected for these interviews because in addition to meeting the survey inclusion criteria outlined above, their roles as advisors for the academic medical centre (AMC) made them comfortable sharing their opinions with study investigators, a critical element for successful cognitive interviews. Interviews were conducted via Zoom with one interviewer and one notetaker. Interviews were audio recorded and transcribed. Following the think-aloud technique with concurrent probing,37 the interviewer displayed survey items on the screen and asked participants to read the item and response options aloud and then provide reactions. Additionally, participants were asked to select a response to each item. This step included all remaining survey items (58 items) after the EFAs and CFAs from the initial factor structure sample. The notetaker indicated any comments and suggested wording revisions. Participants received a US$25 gift card in appreciation for their time.
Sample 3: test–retest reliability
Recruitment for a test–retest reliability assessment of the survey took place during December 2022. It leveraged a new round of social media advertising with the same inclusion criteria adopted for the factor structure sample (Sample 1). The survey was closed after the receipt of 300 complete responses (Time 1), to exceed recommendations for a minimum of 100 participants for test–retest analyses,38 as we expected some drop-off from Time 1 to Time 2. Approximately 2 weeks later, the same participants were invited via email to respond to the survey a second time (Time 2). Participants received a US$10 gift card each time they completed the survey.39 40
Sample 4: clinic sample
In February and March of 2023, we collected data from participants at five clinics in the Midwest: two family medicine clinics, an outpatient endocrinology specialty clinic affiliated with a large AMC and two community health centre clinics providing primary, behavioural, gender-affirming and HIV care. This convenience sample was selected to provide a diverse participant sample seeking care in a range of contexts (table 1 presents the demographics of this sample—diverse in race, income, education and insurance). Each clinic was visited at least two times by a study team member. The PI contacted clinic managers and lead physicians to explain the study and survey and to arrange a time for survey administration. Front desk staff were encouraged to let patients know of study team presence. Once a patient checked in, we approached them to ask for their participation. Participants were offered the option of a paper survey, or to use their phones to take the Qualtrics survey (by scanning a QR code). Participants received a US$10 gift card in appreciation for their time.
Table 1. Dates and demographics of all samples used for quantitative analyses (N=1172*).
| Variable | Sample 1a: factor structure sample (EFA) (n=260) | Sample 1b: factor structure sample (CFA) (n=280) | Sample 3: test–retest sample (n=155) | Sample 4: clinic feasibility sample (n=122) | Sample 5: EC short form development sample (n=355) |
|---|---|---|---|---|---|
| Dates of data collection | July 2022 | July 2022 | December 2022 | February–March 2023 | March–April 2024 |
| Survey completion, %† | 90.6 | 100.0 | 95.1 | 94.6 | |
| Gender identity, n (%) | |||||
| Male | 43 (16.5) | 52 (18.6) | 53 (34.2) | 31 (25.4) | 125 (35.2) |
| Female | 200 (76.9) | 203 (72.5) | 91 (58.7) | 67 (54.9) | 196 (55.2) |
| Other | 4 (1.5) | 2 (0.7) | 2 (1.3) | 2 (1.6) | 3 (0.8) |
| Age, n (%) | |||||
| 18–49 | 126 (48.5) | 141 (50.4) | 113 (72.9) | 80 (65.6) | 203 (57.2) |
| 50–74 | 121 (46.5) | 130 (46.4) | 40 (25.8) | 32 (26.2) | 127 (35.8) |
| 75+ | 8 (3.1) | 7 (2.5) | 1 (0.6) | 5 (4.1) | 5 (1.4) |
| Race, n (%) | |||||
| White | 198 (76.2) | 225 (80.4) | 92 (59.4) | 62 (50.8) | 268 (75.5) |
| Black | 24 (9.2) | 29 (10.4) | 39 (25.2) | 35 (28.7) | 38 (10.7) |
| AI/AN, NH/OPI | 4 (1.5) | 1 (0.4) | 1 (0.6) | 1 (0.8) | 0 (0.0) |
| Asian | 15 (5.8) | 12 (4.3) | 8 (5.2) | 5 (4.1) | 16 (4.5) |
| Multiple/Other | 14 (5.4) | 11 (3.9) | 13 (8.4) | 12 (9.8) | 11 (3.1) |
| Ethnicity, n (%) | |||||
| Not Latinx | 240 (92.3) | 263 (93.9) | 134 (86.5) | 103 (84.4) | 323 (91.0) |
| Latinx | 15 (5.8) | 15 (5.4) | 19 (12.3) | 7 (5.7) | 11 (3.1) |
| Education, n (%) | |||||
| High school or less | 11 (4.2) | 17 (6.1) | 5 (3.2) | 23 (18.9) | 15 (4.2) |
| Some college | 43 (16.5) | 46 (16.4) | 23 (14.8) | 28 (23.0) | 40 (11.3) |
| Associate degree | 24 (9.2) | 19 (6.8) | 16 (10.3) | 10 (8.2) | 29 (8.2) |
| Bachelor’s degree | 94 (36.2) | 103 (36.8) | 55 (35.5) | 34 (27.9) | 114 (32.1) |
| Graduate or professional degree | 83 (31.9) | 93 (33.2) | 54 (34.8) | 21 (17.2) | 137 (38.6) |
| Income, n (%) | |||||
| US$0–34 999 | 65 (25.0) | 57 (20.4) | 32 (20.6) | 43 (35.2) | 62 (17.5) |
| US$35 000–74 999 | 75 (28.8) | 82 (29.3) | 35 (22.6) | 34 (27.9) | 76 (21.4) |
| US$75 000 or more | 101 (38.8) | 115 (41.1) | 81 (52.3) | 23 (18.9) | 188 (53.0) |
| Insurance coverage, n (%) | |||||
| Yes | 251 (96.5) | 272 (97.1) | 151 (97.4) | 109 (89.3) | 327 (92.1) |
| No | 4 (1.5) | 4 (1.4) | 2 (1.3) | 4 (3.3) | 6 (1.7) |
| Internet access at home, n (%) | |||||
| Yes | 254 (97.7) | 275 (98.2) | 153 (98.7) | 112 (91.8) | 329 (92.7) |
| No | 1 (0.4) | 1 (0.4) | 0 (0.0) | 2 (1.6) | 7 (2.0) |
*This table excludes cognitive interview participants in Sample 2 (n=8). Percentages do not add up to 100% due to missing data; more detailed demographics, including information on missing data, are provided in the online supplemental material.
†Percentage of participants who began the survey who completed it.
AI, American Indian; AN, Alaska Native; CFA, confirmatory factor analysis; EC, engagement capacity; EFA, exploratory factor analysis; NH, Native Hawaiian; OPI, Other Pacific Islander.
Sample 5: EC short form sample
We recruited individuals for this sample from March to April 2024 via social media advertising with the same inclusion criteria and recruitment procedures adopted for the factor structure sample (Sample 1). We conducted CFA analyses on this sample.
Analyses
Data screening and cleaning
For the survey samples, we screened out suspicious responses (eg, bots, likely duplicates) as flagged by Qualtrics’ fraud detection features (failed CAPTCHA, and/or reCaptcha Score <0.5, and/or RelevantID Duplicate Score >75, and/or RelevantID Fraud Score >30).41 Except for demographic questions, all survey items were on 5-point Likert scales, and we reverse coded items when indicated. We computed means for each domain and the overall scales, with possible scores ranging from 1 (lowest score) to 5 (highest score). Missing data were handled using listwise deletion.
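As a concrete illustration of this scoring scheme, the sketch below uses hypothetical item names and responses (not study data) to reverse code a negatively worded 5-point item, compute a domain mean on the 1–5 range, and apply listwise deletion:

```python
import pandas as pd

# Hypothetical 5-point Likert responses for one three-item domain;
# "b3_rev" stands in for a negatively worded item that needs reverse coding.
df = pd.DataFrame({
    "b1": [4, 5, 3],
    "b2": [5, 4, 2],
    "b3_rev": [1, 2, 4],
})

# Reverse code on a 1-5 scale: recoded value = 6 - original value.
df["b3"] = 6 - df["b3_rev"]

# Domain score = mean of the domain's items, from 1 (lowest) to 5 (highest).
df["domain_score"] = df[["b1", "b2", "b3"]].mean(axis=1)

# Listwise deletion, as in the analyses: drop respondents missing any item.
complete = df.dropna()
print(complete["domain_score"].round(2).tolist())  # → [4.67, 4.33, 2.33]
```

The same pattern extends to scale-level scores by averaging across all of a scale's items.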
Exploratory factor analyses
We conducted an EFA using SPSS V.28 (ref 42) for the EB measure and FACTOR V.12.01.02 (refs 43–45) for the initial EC measure. Polychoric correlations were used, with unweighted least squares extraction and Promax (EB) or Promin (ref 46) (EC) rotation. We conducted iterative EFAs, first without restricting the number of factors and conducting a parallel analysis (in RStudio V.2022.02.3 (ref 47) for EB and in FACTOR for EC). We then restricted the number of factors in a follow-up EFA, in correspondence with the parallel analysis results. In subsequent follow-up EFAs, items with low communalities (<0.3) and/or low factor loadings (<0.4) were removed, until all remaining items met these criteria.
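Parallel analysis retains only factors whose observed eigenvalues exceed those expected from random data of the same dimensions. The sketch below is a simplified Pearson-correlation version of the idea (the study used polychoric correlations in specialised software), demonstrated on simulated one-factor data:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: count factors whose observed eigenvalue
    exceeds the mean eigenvalue of random data with the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_mean = np.zeros(p)
    for _ in range(n_iter):
        noise = rng.standard_normal((n, p))
        rand_mean += np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    rand_mean /= n_iter
    return int(np.sum(obs > rand_mean))

# Simulated data (not study data): 300 respondents, 6 items all driven by a
# single latent factor, so the analysis should suggest one factor.
rng = np.random.default_rng(42)
latent = rng.standard_normal((300, 1))
items = 0.8 * latent + 0.6 * rng.standard_normal((300, 6))
n_factors = parallel_analysis(items)
print(n_factors)  # → 1
```

This illustrates why parallel analysis can suggest fewer factors than an unrestricted EFA: trailing eigenvalues above 1 are often no larger than what random data would produce.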
Confirmatory factor analyses
We conducted CFAs in LISREL V.12 (ref 48) using polychoric correlations and diagonally weighted least squares estimation for all scales. Fitted models were derived from the EFA results. We examined item loadings to confirm that all items had a standardised factor loading >0.4, in addition to examining fit indices including the Comparative Fit Index (CFI), root mean square error of approximation (RMSEA), Non-Normed Fit Index (NNFI) and standardised root mean square residual (SRMR).
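The RMSEA and CFI reported throughout can be computed from a model's chi-square statistic via the standard formulas. The helper functions below are a sketch applied to made-up values (not the study's actual chi-squares or baseline models, which the paper does not report in full):

```python
import math

def rmsea(chi2, df, n):
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))); values around
    0.06-0.08 or lower are conventionally read as acceptable fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_model, df_model, chi2_base, df_base):
    """CFI = 1 - model misfit / baseline (null model) misfit."""
    d_model = max(chi2_model - df_model, 0.0)
    d_base = max(chi2_base - df_base, d_model)
    return 1.0 if d_base == 0 else 1.0 - d_model / d_base

# Hypothetical values for illustration only.
print(round(rmsea(120.0, 60, 300), 3))       # → 0.058
print(round(cfi(120.0, 60, 2000.0, 78), 3))  # → 0.969
```

Because both indices adjust the chi-square for model complexity (df) and, for RMSEA, sample size, they are less sensitive than the raw chi-square to the large-sample significance seen in the Results.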
Cognitive interviews
We discussed and analysed notes to identify common themes across responses about the survey overall and for individual items. The frequency and consistency of comments on individual items were considered.
Test–retest reliability
We computed polychoric correlations between Time 1 and Time 2 data on each item, and intraclass correlations (ICCs) using two-way mixed effects models at the domain and scale level.
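A common single-measure, consistency form of the two-way mixed effects ICC is ICC(3,1) = (MS_rows − MS_error)/(MS_rows + (k−1)·MS_error). The sketch below assumes this form (the paper does not specify which ICC variant was used) and computes it from raw sums of squares:

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    `scores` is an (n_subjects, k_occasions) array, eg Time 1 vs Time 2."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between occasions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Toy data (not study data): Time 2 equals Time 1 plus a constant shift, so
# consistency is perfect and ICC(3,1) = 1 despite the systematic change.
print(icc_3_1([[1, 2], [2, 3], [3, 4], [4, 5]]))  # → 1.0
```

A consistency ICC, as shown, ignores uniform shifts between timepoints; an absolute-agreement variant would penalise them.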
Item reduction
As the resulting survey was judged too long to be feasible even after reduction via factor analysis (40 items for EC and 18 for EB), we engaged in item reduction on both the EB and EC scales before collecting data from the clinic feasibility sample. We selected items for elimination by retaining at least three items per domain, balancing items with the highest factor loadings and those with moderate-to-high test–retest reliability correlations, avoiding construct deficiency (eg, avoiding retaining only trust items in the ‘relationship with provider’ domain) and keeping domains of similar length (as we had no theoretical reason to believe that some domains were more important than others). This resulted in a 27-item version of the survey (18 items for EC, 9 for EB). After receiving informal feedback from multiple clinicians that the survey was still too long for their patients, we conducted a second round of item reduction on the EC scale following these same procedures (at this point, the EB scale could not be reduced further as it was already at the minimum of three items per domain), resulting in an 18-item measure (9 items for EC, 9 items for EB). We tested the shorter versions with additional CFAs using the previously described CFA methods.
Internal consistency reliability
We computed Cronbach’s alpha for both the final EB and EC scales using SPSS V.28 (ref 42).
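Cronbach's alpha follows directly from item and total-score variances: α = k/(k−1)·(1 − Σ var(itemᵢ)/var(total)). A minimal sketch with made-up responses:

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_var_sum = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical responses from four participants to a three-item scale; the
# items are perfectly correlated, so alpha reaches its maximum of 1.
print(round(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]), 3))  # → 1.0
```

With real, imperfectly correlated items the item variances grow relative to the total-score variance, pulling alpha below 1, as in the 0.72 and 0.76 values reported in the Results.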
Demographics
The survey asked participants their race, ethnicity, age, gender identity, insurance type, income, education and whether they had internet access at home. We computed descriptive statistics using these data.
Feasibility
For the clinic sample (Sample 4), we used Qualtrics’ duration metadata (for only those who took the electronic survey). To adjust for outliers (due to some participants having the survey open on their device for several days), we examined median duration.
Results
Participant characteristics
Timing and demographics for each sample are in table 1. Each group had a diversity of age, race, income levels, education and insurance types. Overall, 93.5% of participants who began the PECS completed it, with specific completion rates for each sample reported in table 1.
Factor analysis (Sample 1)
EFA (Sample 1a)
Engagement behaviours: Of the 260 participants from the EFA subsample, 239 completed all EB items. Initial results suggested a five-factor solution explaining 73.4% of the variance. However, parallel analysis suggested a three-factor solution. Consequently, we conducted two successive EFAs, first constraining to three factors, and then subsequently eliminating items with low communalities (<0.3) and/or low item loadings (<0.4), until all remaining items demonstrated sufficiently high communalities and loadings onto a factor. The final EFA solution comprised three factors and 18 items, accounting for 66.0% of the variance. All factor loadings, ranging from 0.44 to 0.89 in the final model, were statistically significant. Factors were labelled based on the content of the items: preparation for appointments, ensuring understanding/asking questions and adhering to care (table 2).
Table 2. Rotated EFA factor loadings for Engagement Behaviours Scale (final EFA).
| Factor | |||
|---|---|---|---|
| Item | 1 | 2 | 3 |
| B1. Awareness_of_Treatment_Options_1 | 0.90 | – | – |
| B2. Awareness_of_Treatment_Options_2 | 0.89 | – | – |
| B3. Awareness_of_Treatment_Options_3 | 0.79 | – | – |
| B4. Awareness_of_Treatment_Plan_1 | 0.81 | – | – |
| B5. Awareness_of_Treatment_Plan_2 | 0.75 | – | – |
| B6. Communicate_Symptoms_to_Provider_1 | 0.46 | – | – |
| B7. Communicate_Symptoms_to_Provider_2 | – | – | – |
| B8. Patient_Advocates_for_Self_1 | 0.61 | – | – |
| B9. Patient_Advocates_for_Self_2 | 0.68 | – | – |
| B10. Patient_Advocates_for_Self_3 | 0.65 | – | – |
| B11. Patient_Asks_Questions_1 | 0.81 | – | – |
| B12. Patient_Comes_to_Appointment_Prepared_1 | – | – | 0.74 |
| B13. Patient_Comes_to_Appointment_Prepared_2 | – | – | 0.75 |
| B14. Patient_Comes_to_Appointment_Prepared_3 | – | – | 0.70 |
| B15. Patient_Participates_in_Health_Maintenance_1 | – | – | – |
| B16. Patient_Participates_in_Health_Maintenance_2 | – | – | – |
| B17. Patient_Participates_in_Health_Maintenance_3 | – | 0.77 | – |
| B18. Patient_Participates_in_Health_Maintenance_4 | – | 0.78 | – |
| B19. Patient_s_Positive_Attitude_4 | – | 0.74 | – |
| B20. Patient_s_Positive_Attitude_5 | – | 0.56 | – |
| B21. Patient_s_Positive_Attitude_6 | – | 0.44 | – |
EFA, exploratory factor analysis.
Engagement capacity: Three items correlated >0.90 with other items and were removed before proceeding due to concerns about redundancy. A total of 239 participants were included in the EC EFA. Initially, 18 items did not load onto any of the factors and were removed. We then reran the model as a 58-item EFA. The results of this analysis indicated four factors. However, several items had low communalities (<0.3) and/or low item loadings (<0.4). Therefore, we conducted successive EFAs where we iteratively removed items until all had sufficiently high communalities and factor loadings. The final EFA solution consisted of four factors and 43 items, accounting for 67.1% of the variance. Factor loadings ranged from 0.45 to 0.96 and were all significant. Factors were labelled based on the content of their items: healthcare navigation resources, support and resiliency, relationship with provider and willingness to participate in care (table 3).
Table 3. Rotated EFA factor loadings for initial engagement capacity scale (final EFA).
| Factor | Factor | ||||||||
|---|---|---|---|---|---|---|---|---|---|
| Item | 1 | 2 | 3 | 4 | Item | 1 | 2 | 3 | 4 |
| C1. AbleToContactMyProvidersOutsideOffice1 | – | – | – | – | C41. PtFeelsRespected1 | – | – | – | – |
| C2. AbleToContactMyProvidersOutsideOffice2 | – | – | – | – | C42. PtFeelsRespected2 | – | 0.80 | – | – |
| C3. AbleToContactMyProvidersOutsideOffice3 | – | – | – | – | C43. PatientPortal1 | – | – | 0.60 | – |
| C4. AbleToGetAppt1 | – | – | – | – | C44. PatientsLanguage1 | – | – | – | – |
| C5. AbleToGetAppt2 | – | – | – | – | C45. PosAttitude1RCed | 0.87 | – | – | – |
| C6. Anxiety1RCed | – | – | – | 0.84 | C46. PosAttitude2RCed | 0.84 | – | – | – |
| C7. Anxiety2RCed | – | – | – | – | C47. PosAttitude3RCed | 0.78 | – | – | – |
| C8. ApptReminders1 | – | – | 0.45 | – | C48. SelfEfficacy1 | – | – | – | – |
| C9. ConfidenceInCare1 | – | – | – | – | C49. SelfEfficacy2 | – | – | – | – |
| C10. ConfidenceInCare2 | – | 0.54 | – | – | C50. SelfEfficacy3 | – | – | – | – |
| C11. CulturalApprCare1 | – | 0.75 | – | – | C51. ProvAdvocates1 | – | 0.91 | – | – |
| C12. CulturalApprCare2 | – | 0.73 | – | – | C52. ProviderListens1 | – | 0.93 | – | – |
| C13. CulturalApprCare3 | – | – | – | – | C53. ProximityToHC1 | – | – | – | – |
| C14. EduMaterials1 | – | – | 0.58 | – | C54. RapportProvider1 | – | 0.93 | – | – |
| C15. EduMaterials2 | – | – | – | – | C55. RapportProvider2 | – | 0.85 | – | – |
| C16. EduMaterials3 | – | – | 0.58 | – | C56. Resiliency_1 | – | – | – | 0.78 |
| C17. EmpathyCompass1 | – | 0.91 | – | – | C57. Resiliency_2 | – | – | – | 0.80 |
| C18. EmpathyCompass2 | – | 0.96 | – | – | C58. Resiliency_3 | – | – | – | 0.85 |
| C19. EmpathyCompass3 | – | 0.90 | – | – | C59. SharedDM1 | – | 0.75 | – | – |
| C20. EmpathyCompass4 | – | – | – | – | C60. SharedDM2 | – | 0.82 | – | – |
| C21. Empowered1 | – | – | – | – | C61.Stress1RCed | – | – | – | 0.76 |
| C22. Empowered2 | – | – | – | – | C62. Stress2 | – | – | – | |
| C23. Empowered3 | – | – | – | – | C63. Stress3 | – | – | – | 0.77 |
| C24. Empowered4 | – | – | – | 0.54 | C64. Stress4RCed | – | – | – | 0.83 |
| C25. Empowered5 | – | – | – | – | C65. SupportSystem1 | – | – | – | – |
| C26. Empowered6 | – | – | – | – | C66. SupportSystem2RCed | – | – | – | 0.69 |
| C27. FeelingSupported1 | – | 0.75 | – | – | C67. Transportation1RCed | – | – | – | – |
| C28. HealthLiteracy1 | – | – | – | – | C68. Trust1RCed | – | 0.65 | – | – |
| C29. HealthLiteracy2 | – | – | 0.84 | – | C69. Trust2 | – | 0.93 | – | – |
| C30. HealthLiteracy3 | – | – | 0.88 | – | C70. Trust3 | – | 0.81 | – | – |
| C31. HealthLiteracy4 | – | – | – | – | C71. Trust4 | – | 0.83 | – | – |
| C32. HealthStatus1 | – | – | – | – | C72. UnderstandingHealthcareSystem1 | – | – | – | – |
| C33. HealthStatus2 | – | – | – | – | C73. UnderstandingHealthcareSystem2 | – | – | 0.54 | – |
| C34. HealthStatus3 | – | – | – | 0.73 | C74. UnderstandingHealthcareSystem3 | – | – | 0.57 | – |
| C35. HealthStatus4 | – | – | – | 0.74 | C75. UnderstandingHealthcareSystem4 | – | – | – | – |
| C36. Included1 | – | – | – | – | C76. UnderstandingHealthcareSystem5 | – | – | – | – |
| C37. Included2 | – | 0.85 | – | – | C77. Willingness1 | – | – | – | – |
| C38. Included3 | – | 0.80 | – | – | C78. Willingness2 | – | – | – | – |
| C39. InsuranceCosts1RCed | – | – | – | – | C79. Willingness3 | – | – | – | – |
| C40. LengthOfVisit1 | – | 0.83 | – | – | | | | | |
EFA, exploratory factor analysis.
CFA (Sample 1b)
Engagement behaviours: Of 280 participants in this subsample, 241 completed all the EB items from the final EFA and were included in this CFA. We ran a three-factor CFA corresponding to the final EFA results. We included only 17 of the 18 items, as two items had a high intercorrelation (>0.83), which caused computational issues in the model. Results indicated good fit: χ²=763.78, df=116, p<0.01, CFI=0.93, RMSEA=0.09, SRMR=0.07; factor loadings ranged from 0.57 to 0.89.
Engagement capacity: Of the 280 participants in this subsample, 239 completed all EC items from the final EFA and were included in this first EC CFA. We ran a four-factor, 43-item CFA corresponding to the final EFA results. Results indicated that fit was sufficient to carry forward: χ²=4200.32, df=854, p<0.01, CFI=1.00, RMSEA=0.13, SRMR=0.06; factor loadings ranged from 0.57 to 0.96.
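The RMSEA values above can be sanity-checked from the reported χ², df and sample size. The sketch below uses the conventional maximum-likelihood point estimate; models fit with categorical (DWLS) estimation, as used here, can report scaled values that differ from this formula, so not every RMSEA in the paper will reproduce exactly.

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Conventional RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Engagement capacity CFA reported above: chi2=4200.32, df=854, n=239
print(round(rmsea(4200.32, 854, 239), 2))  # 0.13, matching the EC value reported above
```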
Cognitive interviews (Sample 2)
Most items were considered acceptable as expressed; only 11 items received suggested wording changes or concerns from >2 participants. For example, multiple participants found the term ‘negotiate’ inappropriate in the item ‘I always negotiate a treatment plan with my healthcare provider’ and suggested changing this phrase to ‘come to an agreement on’. Multiple participants also indicated issues with the ‘appropriate type of care’ wording in the item ‘I know what the appropriate type of care I need to seek is [e.g., doctor’s office, urgent care, emergency room, etc.]’. Per their recommendation, we changed this item to ‘I know the right place to go when I need to get care [for example, doctor’s office, urgent care, emergency room, etc.]’. Minor wording changes to other items were also made for clarification (these changes are all detailed in online supplemental part 3). Overall comments focused on the length of the survey, the practicalities of administering it when a patient arrives for an appointment, and the survey layout. For example, some participants did not feel the need for frequent reminders about the topic in each survey section.
Test–retest reliability (Sample 3)
Engagement behaviours
At the item level, all items had moderate to high test–retest correlations. For the final version of the scale, test–retest results revealed a high positive correlation between participants’ responses at Time 1 and Time 2 on the overall EB scale (ICC=0.82 (95% CI=0.68 to 0.88)) and for each of the domains’ scales (ICCs=0.62 to 0.81), indicating test–retest reliability.
Engagement capacity
At the item level, only one item had a low pre–post item-level correlation (ρ=0.28). This item was therefore removed before proceeding to further scale refinement. The overall test–retest correlation for the final version of the EC scale was high (scale ICC=0.86 (95% CI=0.79 to 0.90)). Additionally, correlations were high at the domain levels (ICCs=0.66 to 0.86).
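Test–retest ICCs of this kind are computed from the paired Time 1/Time 2 scores. The paper does not state which ICC form was used; the sketch below implements ICC(2,1) (two-way random effects, single measurement, absolute agreement) in plain Python as one plausible choice, with hypothetical data.

```python
def icc_2_1(time1, time2):
    """ICC(2,1): two-way random effects, single measurement, absolute agreement.

    time1 and time2 are lists of scale scores for the same subjects at the
    two administrations (k=2 'raters' in the Shrout & Fleiss framework).
    """
    n, k = len(time1), 2
    scores = list(zip(time1, time2))
    grand = (sum(time1) + sum(time2)) / (n * k)
    row_means = [(a + b) / k for a, b in scores]       # per-subject means
    col_means = [sum(time1) / n, sum(time2) / n]       # per-timepoint means
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for pair in scores for x in pair)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                            # between-subjects mean square
    msc = ss_cols / (k - 1)                            # between-timepoints mean square
    mse = ss_err / ((n - 1) * (k - 1))                 # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical scores: identical ranking with a constant +1 shift at Time 2
print(round(icc_2_1([1, 2, 3, 4], [2, 3, 4, 5]), 3))  # 0.769
```

The constant shift between administrations lowers absolute-agreement ICC below 1.0 even though the rank order is preserved, which is why ICC, not a Pearson correlation, is the usual test–retest statistic.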
Item reduction round 1
We removed items based on the decision rules described in the Methods section. Additionally, we excluded the domain of ‘willingness to engage’ from the EC measure, due to its limited number of items (only three) compared with the more substantial number in the other domains. We saw no theoretical reason to merit the other factors being over-represented relative to this domain, and tests of the model as a three-factor model with these items included indicated poor fit.
Reduced scale CFAs
The subsequent CFA of this shorter version included nine items (three domains) for EBs and 18 items (three domains) for EC. This revealed a good fit for both the EBs (n=241; χ²=106.69, df=24, p<0.01; CFI=0.97; RMSEA=0.07; SRMR=0.05; factor loadings: 0.62–0.91; figure 1, panel a) and EC (n=239, χ²=435.62, df=132, p<0.01; CFI=0.98; RMSEA=0.09; SRMR=0.05; factor loadings: 0.59–0.91).
Figure 1. Confirmatory factor analysis models with standardised loadings for final engagement behaviours (panel a) and engagement capacity (panel b) scales.
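The degrees of freedom reported for these CFAs can be cross-checked by counting model parameters. Assuming simple structure (each item loads on exactly one factor), correlated factors and factor variances fixed to 1, a minimal sketch of the count is:

```python
def cfa_df(p, m):
    """Degrees of freedom for a simple-structure CFA with p items and m
    correlated factors (factor variances fixed to 1, no cross-loadings).

    Observed moments: p(p+1)/2 variances/covariances.
    Free parameters: p loadings + p uniquenesses + m(m-1)/2 factor covariances.
    """
    observed = p * (p + 1) // 2
    free = p + p + m * (m - 1) // 2
    return observed - free

print(cfa_df(9, 3))   # 24  (reduced EB scale: 9 items, 3 factors)
print(cfa_df(18, 3))  # 132 (reduced EC scale: 18 items, 3 factors)
```

The same count also reproduces the dfs of the earlier full-length models (17 items/3 factors gives 116; 43 items/4 factors gives 854), consistent with the identification constraints assumed here.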

Feasibility (Sample 4)
122 surveys were collected across the clinics (n=99, 81.1% electronic; n=23, 18.9% paper). The median duration for electronic survey completion in this setting with the 27-item reduced scale version (9 EB items, 18 EC items) was 480.00 s (M=2170.38, SD=117 310.43, minimum=170.00, maximum=113 450.00; 25th percentile=337.50, 75th percentile=1096.00; all values in seconds), corresponding to a median of 8 min. Those who were older, non-white, Latinx, covered by Medicaid or Medicare insurance, or less educated had higher odds of requesting a paper survey as compared with completing the survey electronically on their device (online supplemental part 3).
Item reduction round 2 (Sample 5)
Despite the 8 min duration uncovered in feasibility testing, several clinicians provided feedback that they felt the PECS survey was still too long for their patient population. As a result, we engaged in a second round of item reduction for the EC scale (the EB scale was already at the minimum length of three items per domain required for computing internal consistency reliability). We removed items based on the decision rules described in the Methods section, striving for an EC scale of the minimum possible length (nine items, three items per domain).
Reduced EC scale CFA
The subsequent CFA for the shortened EC scale included nine items (three domains). This revealed a good fit for the shortened EC scale (n=340; χ²=50.52, df=24, p<0.001; CFI=0.99; RMSEA=0.06; SRMR=0.06; factor loadings: 0.37–0.90; figure 1, panel b).
Feasibility of final reduced scale (Sample 5)
Given the item reductions that took place after this round of data collection, we reran the survey duration analysis for Sample 5 on this final, 18-item (9 EB items, 9 EC items) measure. With this version, for those in this sample who completed the survey (n=336) we found a median duration of 419.00 s (M=582.58, SD=851.68, minimum=146.00, maximum=10 271.00; 25th percentile=316.00, 75th percentile=560.75; all values in seconds), corresponding to a median of 7 min.
Internal consistency reliability (Sample 5)
Engagement behaviours
Cronbach’s alpha was acceptable in this sample for the final EBs scale (α=0.72 (95% CI=0.70 to 0.75)).
Engagement capacity
Cronbach’s alpha was acceptable in this sample for the final EC scale (α=0.76 (95% CI=0.74 to 0.78)).
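Cronbach's alpha for scales like these is computed directly from item-level responses. A minimal plain-Python sketch, using hypothetical data (the paper's values came from the full Sample 5 responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha.  items = list of k lists, one per item, each holding
    that item's scores across the n respondents.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical 3-item scale answered identically by 4 respondents:
# perfectly consistent items give alpha = 1.0
print(round(cronbach_alpha([[1, 2, 4, 5], [1, 2, 4, 5], [1, 2, 4, 5]]), 3))  # 1.0
```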
Discussion
The PECS comprises two scales: EBs (nine items, three domains: preparing for appointments, ensuring understanding, and adhering to care) and EC (nine items, three domains: healthcare navigation resources, resilience, relationship with provider). The scores show adequate internal consistency and test–retest reliability, as well as initial evidence of construct validity. The scale is feasible for patients to complete at the point of care in less than 10 min. The final validated survey, with scoring instructions, is in online supplemental part 4. The Flesch-Kincaid score of this final measure is 58.1 (8th grade reading level).
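Readability figures of this kind come from the standard Flesch formulas once word, sentence and syllable counts are in hand. The sketch below shows the two formulas with illustrative counts only (not the PECS text itself); syllable counting in practice requires a dictionary or heuristic, which is omitted here.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher = easier (60-70 is roughly 8th-9th grade)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: approximate US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Illustrative counts: 100 words, 5 sentences, 150 syllables
print(round(flesch_reading_ease(100, 5, 150), 1))   # 59.6
print(round(flesch_kincaid_grade(100, 5, 150), 1))  # 9.9
```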
The PECS aligns with existing frameworks related to patients engaging in their care, including the ECF (see table 4), which guided the development of this measure. In addition, Kimerling offers four domains in which a patient’s self-efficacy may influence their predisposition to PE: self-management, collaborative communication, health information use and healthcare navigation.49 While this framework does not propose a measurement tool, one could consider self-management and health information use as EBs, and collaborative communication and healthcare navigation as components of capacity to engage. Similarly, the PECS links assessment of specific behaviours, such as completing recommended screening, with measures of thoughts and feelings, such as feelings of anxiety, as well as with health system-related characteristics, such as a provider who listens, as suggested by Graffigna et al.20
Table 4. Mapping of Engagement Capacity Framework (ECF) model to PECS domains.
| ECF major concept | ECF underlying concept | PECS domain |
|---|---|---|
| Behaviour | Willingness | Prepares for appointment |
| | | Ensures understanding |
| | | Adheres to treatment |
| Person | Self-efficacy | Support and resilience |
| | Capabilities | Healthcare navigation |
| Environment | Resources | Relationship with provider |
| | | Healthcare navigation |
PECS, Patient Engagement Capacity Survey.
Building on this previous foundation of PE frameworks, the PECS offers a tool for ambulatory care clinics to measure and address engagement at various levels. At the patient level, this survey could potentially help identify an individual’s strengths and weaknesses related to PE, guide conversations during an encounter and inform which resources to provide. For example, a patient may need additional support in finding high-quality educational materials about their condition, which a healthcare organisation could then provide; alternatively, the PECS may indicate that the patient has difficulty asking for support, so the healthcare organisation can proactively provide social support resources. When considered alongside other screenings administered in a healthcare setting, such as non-medical, health-related social needs screening or depression screening, the PECS may provide a more complete view of the patient from which to offer support to them as a partner in their healthcare.
At the provider level, a provider could potentially examine scores across their panel to get feedback on patients’ perceptions of the patient–provider relationship, identifying areas where they can offer additional support. For example, examining provider-level scores on shared decision-making may enable a provider to assess how supported patients feel working with them and continue or adjust their approach according to the results. A provider-level look at scores related to ensuring understanding, for example, might encourage a provider to focus on using a teach-back method.50 While patient perceptions of their relationship with a provider can be important for shaping efforts to better support patients in engaging in their care, clinics could consider how to present PECS scores on this domain to individual providers to avoid potentially straining patient–provider relationships.
If aggregated to the clinic level, the PECS could potentially guide workflows and processes and assist with resource targeting. This activity may help identify gaps in resources offered to patients and possible quality improvement interventions.51 For example, if health literacy is identified as a significant challenge clinic-wide, clinic managers could seek to provide more tailored educational materials.52 Similarly, provider communication training could be implemented to better support PE if scores on the relationship-with-provider domain are low.53
Limitations and future work
Development and validation took place using US samples and in English; generalisability to other countries and languages is unknown. In addition, future work should validate the PECS against similar constructs (eg, patient activation)54 and assess criterion-related validity against existing outcome measures (eg, medication adherence). Third, this study only examined the PECS at the patient level; explorations of its validity at provider and clinic levels are crucial next steps, as different interventions may be required at each level.55 Additionally, EC and behaviours are likely to change over time, suggesting the PECS may need to be administered more than once. Thresholds for engaged versus unengaged patients should also be validated. In addition, our age distribution skewed younger across all study activities, particularly in the test–retest and short-form development activities. Thus, future research should examine whether adaptations are needed for specialised patient populations, such as older adults or children, who were not included in the survey development. Similarly, the survey could be tested in acute or inpatient settings. Lastly, while we found in-clinic survey completion to be feasible, and a strength of this study is the wide variety of clinics recruited, which allowed us to test feasibility in different contexts, workflows and patient populations, implementation should be formally tested and alternative methods (eg, deploying the survey on a patient portal) could be examined.
Conclusions
This study fills a critical gap in understanding and addressing PE. The PECS, which includes six domains across the two scales of ‘engagement behaviours’ (understanding, adherence, preparation) and ‘engagement capacity’ (healthcare navigation resources, support and resilience, relationship with provider), allows assessment of both capacity and behaviour elements of engagement in under 10 min. This psychometrically sound survey was developed with significant input from our advisory team, composed of patients and providers, throughout the entire process. Importantly, the PECS is now available in the public domain at no cost to healthcare organisations wishing to assess PE.
Supplementary material
Acknowledgements
We would like to acknowledge Dr Tim Huerta for assistance with initial study conceptualisation, Natasha Kurien and Lauren Phelps for project assistance, the Ohio State University Center for Clinical and Translational Science for recruitment support, the advice and feedback from our Project Advisory Committee (Dr Maryanna Klatt, Ms Sharon Cross, Dr Cortney Forward, Dr Kristen Rundell, Judy Yesso, Emily Little, Kurt Morris, Dr Mary Jo Welker, Dr Samilia Obeng-Gyasi, Julia Skapik, Dr Nwando Olayiwola and David Bracket), the participants in this study, and the participating clinics.
Footnotes
Funding: This work was supported by the National Institutes of Health/National Institute of Aging (Grant #R01AG056469).
Prepub: Prepublication history and additional supplemental material for this paper are available online. To view these files, please visit the journal online (https://doi.org/10.1136/bmjopen-2024-091620).
Provenance and peer review: Not commissioned; externally peer reviewed.
Patient consent for publication: Not applicable.
Ethics approval: This study was approved by The Ohio State University IRB (# 2018H0073). Participants provided consent at the beginning of the survey or were verbally consented for cognitive interviews.
Data availability free text: Data are available from the corresponding author upon reasonable request.
Patient and public involvement: Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.
Data availability statement
Data are available upon reasonable request.
References
- 1.Bodenheimer T, Ghorob A, Willard-Grace R, et al. The 10 building blocks of high-performing primary care. Ann Fam Med. 2014;12:166–71. doi: 10.1370/afm.1616. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Barry MJ, Edgman-Levitan S. Shared decision making--pinnacle of patient-centered care. N Engl J Med. 2012;366:780–1. doi: 10.1056/NEJMp1109283. [DOI] [PubMed] [Google Scholar]
- 3.Laurance J, Henderson S, Howitt PJ, et al. Patient Engagement: Four Case Studies That Highlight The Potential For Improved Health Outcomes And Reduced Costs. Health Aff (Millwood) 2014;33:1627–34. doi: 10.1377/hlthaff.2014.0375. [DOI] [PubMed] [Google Scholar]
- 4.Miller WR, Rollnick S. Motivational Interviewing: Helping People Change. 3rd edn. Guilford Press; 2013. [Google Scholar]
- 5.Sepucha KR, Simmons LH, Barry MJ, et al. Ten Years, Forty Decision Aids, And Thousands Of Patient Uses: Shared Decision Making At Massachusetts General Hospital. Health Aff (Millwood) 2016;35:630–6. doi: 10.1377/hlthaff.2015.1376. [DOI] [PubMed] [Google Scholar]
- 6.Dixon A, Hibbard J, Tusler M. How do People with Different Levels of Activation Self-Manage their Chronic Conditions? The Patient: Patient-Centered Outcomes Research . 2009;2:257–68. doi: 10.2165/11313790-000000000-00000. [DOI] [PubMed] [Google Scholar]
- 7.Alston C, Elwyn G, Fowler F, et al. Shared Decision-Making Strategies for Best Care: Patient Decision Aids. NAM Perspectives . 2014;4 doi: 10.31478/201409f. [DOI] [Google Scholar]
- 8.Ivey SL, Shortell SM, Rodriguez HP, et al. Patient Engagement in ACO Practices and Patient-reported Outcomes Among Adults With Co-occurring Chronic Disease and Mental Health Conditions. Med Care. 2018;56:551–6. doi: 10.1097/MLR.0000000000000927. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Davidson KW, Mangione CM, Barry MJ, et al. Collaboration and Shared Decision-Making Between Patients and Clinicians in Preventive Health Care Decisions and US Preventive Services Task Force Recommendations. JAMA. 2022;327:1171. doi: 10.1001/jama.2022.3267. [DOI] [PubMed] [Google Scholar]
- 10.Elwyn G, Dehlendorf C, Epstein RM, et al. Shared decision making and motivational interviewing: achieving patient-centered care across the spectrum of health care problems. Ann Fam Med. 2014;12:270–5. doi: 10.1370/afm.1615. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Miller-Rosales C, Lewis VA, Shortell SM, et al. Adoption of Patient Engagement Strategies by Physician Practices in the United States. Med Care. 2022;60:691–9. doi: 10.1097/MLR.0000000000001748. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Clavel N, Paquette J, Dumez V, et al. Patient engagement in care: A scoping review of recently validated tools assessing patients’ and healthcare professionals’ preferences and experience. Health Expect. 2021;24:1924–35. doi: 10.1111/hex.13344. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Barello S, Triberti S, Graffigna G, et al. eHealth for Patient Engagement: A Systematic Review. Front Psychol. 2015;6:2013. doi: 10.3389/fpsyg.2015.02013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Barello S, Graffigna G, Vegni E, et al. The Challenges of Conceptualizing Patient Engagement in Health Care: A Lexicographic Literature Review. J Particip Med. 2014 [Google Scholar]
- 15.Nease RF, Frazee SG, Zarin L, et al. Choice Architecture Is A Better Strategy Than Engaging Patients To Spur Behavior Change. Health Aff (Millwood) 2013;32:242–9. doi: 10.1377/hlthaff.2012.1075. [DOI] [PubMed] [Google Scholar]
- 16.Drenkard K, Swartwout E, Deyo P, et al. Interactive Care Model: A Framework for More Fully Engaging People in Their Healthcare. JONA: The Journal of Nursing Administration. 2015;45:503–10. doi: 10.1097/NNA.0000000000000242. [DOI] [PubMed] [Google Scholar]
- 17.Hibbard JH, Stockard J, Mahoney ER, et al. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res. 2004;39:1005–26. doi: 10.1111/j.1475-6773.2004.00269.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Lorig K, Ritter PL, Ory MG, et al. Effectiveness of a generic chronic disease self-management program for people with type 2 diabetes: a translation study. Diabetes Educ. 2013;39:655–63. doi: 10.1177/0145721713492567. [DOI] [PubMed] [Google Scholar]
- 19.Duke CC, Lynch WD, Smith B, et al. Validity of a New Patient Engagement Measure: The Altarum Consumer Engagement (ACE) Measure. Patient. 2015;8:559–68. doi: 10.1007/s40271-015-0131-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Graffigna G, Barello S, Bonanomi A, et al. Measuring patient engagement: development and psychometric properties of the Patient Health Engagement (PHE) Scale. Front Psychol. 2015;6:274. doi: 10.3389/fpsyg.2015.00274. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Kimerling R, Zulman DM, Lewis ET, et al. Clinical Validity of the PROMIS Healthcare Engagement 8-Item Short Form. J Gen Intern Med. 2023;38:2021–9. doi: 10.1007/s11606-022-07992-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Schalet BD, Reise SP, Zulman DM, et al. Psychometric evaluation of a patient-reported item bank for healthcare engagement. Qual Life Res. 2021;30:2363–74. doi: 10.1007/s11136-021-02824-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Tobiano G, Jerofke‐Owen T, Marshall AP. Promoting patient engagement: a scoping review of actions that align with the interactive care model. Scandinavian Caring Sciences . 2021;35:722–41. doi: 10.1111/scs.12914. [DOI] [PubMed] [Google Scholar]
- 24.Simon M, Baur C, Guastello S, et al. Patient and Family Engaged Care: An Essential Element of Health Equity. NAM Perspect . 2020;2020 doi: 10.31478/202007a. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Sieck CJ, Walker DM, Retchin S, et al. The patient engagement capacity model: what factors determine a patient’s ability to engage. NEJM Catal. 2019;5 [Google Scholar]
- 26.Duea SR, Zimmerman EB, Vaughn LM, et al. A Guide to Selecting Participatory Research Methods Based on Project and Partnership Goals. J Particip Res Methods . 2022;3 doi: 10.35844/001c.32605. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Di Tosto G, Hefner JL, Walker DM, et al. Development of a conceptual model of the capacity for patients to engage in their health care: a group concept mapping study. BMC Health Serv Res. 2023;23:846. doi: 10.1186/s12913-023-09785-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Kane M, Rosas S. Conversations about Group Concept Mapping: Applications, Examples and Enhancements. SAGE; 2018. [Google Scholar]
- 29.Streiner DL, Kottner J. Recommendations for reporting the results of studies of instrument and scale development and testing. J Adv Nurs. 2014;70:1970–9. doi: 10.1111/jan.12402. [DOI] [PubMed] [Google Scholar]
- 30.DeVellis RF. Scale Development: Theory and Applications. 3rd edn. SAGE; 2012. [Google Scholar]
- 31.Gruman J, Rovner MH, French ME, et al. From patient education to patient engagement: implications for the field of patient education. Patient Educ Couns. 2010;78:350–6. doi: 10.1016/j.pec.2010.02.002. [DOI] [PubMed] [Google Scholar]
- 32.GW Cancer Center: Prepared Patient. Patient engagement: engagement behavior framework. n.d. https://preparedpatient.org/engagement-behavior-framework/ Available.
- 33.Hinkin TR. A Brief Tutorial on the Development of Measures for Use in Survey Questionnaires. Organ Res Methods. 1998;1:104–21. doi: 10.1177/109442819800100106. [DOI] [Google Scholar]
- 34.Ryan K, Gannon-Slater N, Culbertson MJ. Improving Survey Methods With Cognitive Interviews in Small- and Medium-Scale Evaluations. American Journal of Evaluation. 2012;33:414–30. doi: 10.1177/1098214012441499. [DOI] [Google Scholar]
- 35.Qualtrics . Provo, Utah, USA: 2005. Version 2022.https://www.qualtrics.com Available. [Google Scholar]
- 36.Bandalos DL. Relative Performance of Categorical Diagonally Weighted Least Squares and Robust Maximum Likelihood Estimation. Structural Equation Modeling: A Multidisciplinary Journal . 2014;21:102–16. doi: 10.1080/10705511.2014.859510. [DOI] [Google Scholar]
- 37.Willis GB. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Sage Publications; 2005. [Google Scholar]
- 38.Kennedy I. Sample Size Determination in Test-Retest and Cronbach Alpha Reliability Estimates. Middle East Res J Humanities Soc Sci . 2021;1:16–24. doi: 10.36348/merjhss.2021.v01i01.003. [DOI] [Google Scholar]
- 39.Deyo RA, Diehr P, Patrick DL. Reproducibility and responsiveness of health status measures. Statistics and strategies for evaluation. Control Clin Trials. 1991;12:142S–158S. doi: 10.1016/s0197-2456(05)80019-4. [DOI] [PubMed] [Google Scholar]
- 40.Park MS, Kang KJ, Jang SJ, et al. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review. Int J Nurs Stud. 2018;79:58–69. doi: 10.1016/j.ijnurstu.2017.11.003. [DOI] [PubMed] [Google Scholar]
- 41.Qualtrics. Fraud detection. https://www.qualtrics.com/support/survey-platform/survey-module/survey-checker/fraud-detection/ Accessed 7 Apr 2025.
- 42.IBM SPSS Statistics for Macintosh.Version 28.0. Armonk, NY: IBM Corp; 2021. [Google Scholar]
- 43.Ferrando PJ, Lorenzo-Seva U. Program FACTOR at 10: Origins, development and future directions. Psicothema. 2017;29:236–40. doi: 10.7334/psicothema2016.304. [DOI] [PubMed] [Google Scholar]
- 44.Lorenzo-Seva U, Ferrando PJ. FACTOR 9.2: A Comprehensive Program for Fitting Exploratory and Semiconfirmatory Factor Analysis and IRT Models. Appl Psychol Meas. 2013;37:497–8. doi: 10.1177/0146621613487794. [DOI] [Google Scholar]
- 45.Lorenzo-Seva U, Ferrando PJ. FACTOR: a computer program to fit the exploratory factor analysis model. Behav Res Methods. 2006;38:88–91. doi: 10.3758/bf03192753. [DOI] [PubMed] [Google Scholar]
- 46.Lorenzo-Seva U. Promin: A Method for Oblique Factor Rotation. Multivariate Behav Res. 1999;34:347–65. doi: 10.1207/S15327906MBR3403_3. [DOI] [Google Scholar]
- 47.Boston, MA: RStudio, PBC; 2020. RStudio: integrated development for R.https://rstudio.com Available. [Google Scholar]
- 48.Jöreskog KG, Sörbom D. LISREL 12 for windows. 2022.
- 49.Kimerling R, Lewis ET, Javier SJ. A Behavioral Framework for Patient Engagement. Med Care. 2020;58:161–8. doi: 10.1097/MLR.0000000000001240. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Agency for Healthcare Research and Quality Use the teach-back method: tool #5. 2020. https://www.ahrq.gov/health-literacy/improve/precautions/tool5.html Available.
- 51.Irwin R, Stokes T, Marshall T. Practice-level quality improvement interventions in primary care: a review of systematic reviews. Prim Health Care Res Dev. 2015;16:556–77. doi: 10.1017/S1463423615000274. [DOI] [PubMed] [Google Scholar]
- 52.Mbanda N, Dada S, Bastable K, et al. A scoping review of the use of visual aids in health education materials for persons with low-literacy levels. Patient Educ Couns. 2021;104:998–1017. doi: 10.1016/j.pec.2020.11.034. [DOI] [PubMed] [Google Scholar]
- 53.Patel MR, Smith A, Leo H, et al. Improving Patient-Provider Communication and Therapeutic Practice Through Better Integration of Electronic Health Records in the Exam Room: A Pilot Study. Health Educ Behav. 2019;46:484–93. doi: 10.1177/1090198118796879. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Hibbard JH, Mahoney ER, Stockard J, et al. Development and testing of a short form of the patient activation measure. Health Serv Res. 2005;40:1918–30. doi: 10.1111/j.1475-6773.2005.00438.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Sieck CJ, Hefner JL, Walker DM, et al. The role of health care organizations in patient engagement: Mechanisms to support a strong relationship between patients and clinicians. Health Care Manage Rev. 2023;48:23–31. doi: 10.1097/HMR.0000000000000346. [DOI] [PMC free article] [PubMed] [Google Scholar]
