Abstract
Objective:
To develop and evaluate the psychometric properties of a family caregiver-reported survey that assesses family-centeredness of care in the context of pediatric emergency department (ED) encounters.
Methods:
We created a caregiver-reported scale, incorporated content expert feedback, and iteratively revised it based on cognitive interviews with caregivers. We then field tested the scale in a survey with caregivers. We dichotomized items using top-box scoring and obtained a summary score per respondent. Using a sample of 191 caregivers recruited from nine EDs, we analyzed internal consistency reliability, dimensionality via item response theory modeling, and convergent validity with the ED Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey.
Results:
Feedback from the nine experts led us to remove four items. We conducted 16 cognitive interviews and revised the survey in four rounds. An 11-item survey was field tested. The mean (SD) respondent 11-item summary score was 77.2 (26.6). We removed two items given inconsistent response patterns, poor variability, and poor internal consistency; their removal increased coefficient alpha from 0.85 to 0.88 for the final scale. A multidimensional model fit the data best, but factor scores correlated strongly with summary scores, suggesting the latter are sufficient for quality improvement and future research. Regarding convergent validity, the adjusted partial correlation between our scale’s nine-item summary score and the ED CAHPS summary score was 0.75 (95% CI 0.67–0.81).
Conclusions:
Psychometric analyses demonstrated strong item performance, reliability, and convergent validity for the nine-item scale. This survey can be used to assess family-centered care in the ED for research and quality improvement purposes.
Keywords: Pediatrics, Emergency Department, Surveys and Questionnaires, Patient Reported Outcome Measures, Psychometric
INTRODUCTION
The importance of family-centered care, which places the family as the focus and unit of care,1 is widely recognized among researchers, national health professional associations, and public health policy organizations.2,3 Evidence demonstrates that family-centered care is associated with improvements in care quality, trust, medical goal achievement, and patient and family satisfaction, as well as reductions in patient and family anxiety and costs.1,4,5 Key attributes of family-centered care include collaboration among patients, families, and health care providers; empowerment; respect; communication; individualized care; and support.1,6 Challenges to successful implementation of family-centered care include family and care team stress and competing demands, power imbalances, and organizational limitations.7,8 These challenges are especially evident in the emergency department (ED) setting, where family caregivers are navigating a new or serious illness in a new setting with unfamiliar care providers.2
To improve the delivery of family-centered care, the healthcare field needs evaluation mechanisms to assess whether care is family-centered. While the Consumer Assessment of Healthcare Providers and Systems (CAHPS)9 surveys measure patient experience, there is no national standard to measure patient- and family-centered care.10 For this reason, the National Quality Forum identified “measurement of person-centered care” as one of three priority initiatives.10 No published family-centered care measure was designed for pediatric ED encounters. Existing family-centered care measures assess the physician’s approach based on multiple encounters over time (e.g., “In the last 12 months…”)11 or include items that assume established relationships (e.g., “My doctor and I have been through a lot together”).12 Such measures, designed for the primary care or clinic setting, are not appropriate for a single ED encounter. Other family-centered care measures were designed for patients with specific chronic diseases.13 Furthermore, the related surveys in the literature assess patient-centered care and rely on patient report.14 Our team therefore sought to address this gap by developing a caregiver-reported measure applicable to pediatric ED visits. This research focused on developing and evaluating the psychometric properties of a caregiver-reported survey to assess family-centeredness of care in the context of pediatric ED encounters.
METHODS
We used standard procedures for instrument development15 (Figure 1) to draft the ED Family Centered Care Experience (ED-FACCE) survey, a caregiver-reported measure of family-centered care provided in the ED, to be used for research and quality improvement purposes. We solicited and incorporated feedback on the preliminary instrument from content experts. The instrument was then iteratively revised through cognitive interviews with caregivers, whereby caregivers completed the instrument and then participated in a semi-structured interview. Finally, we field tested the instrument with caregivers and analyzed the scale’s psychometric properties. We determined the reading grade level of the instrument using the Flesch-Kincaid scale (Microsoft, Redmond, WA) throughout development. We included the entire text of the instrument (i.e., instructions, items, response options) when assessing reading level. The University of California Institutional Review Board approved this study.
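For readers unfamiliar with the metric, the conventional Flesch-Kincaid grade-level formula is shown below; we relied on the Microsoft Word implementation rather than computing the statistic by hand, so this is provided only for reference.

$$\text{Grade level} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59$$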
Figure 1. Overview of Instrument Development and Evaluation Process.

The Preliminary Instrument
The initial instrument draft was informed by Little et al.,14 who developed a survey to assess patient-centered care using the patient as the rater, which is more accurate at predicting outcomes than using a provider or research observer.16,17 Our instrument was developed to assess the care provided by physicians, because prior research suggests that physicians’ family-centeredness of care in the ED can be improved.18 To adapt the Little et al. survey to assess family-centered care for pediatric ED encounters from the caregiver’s perspective, we identified content domains based on relevant published research and policy statements.1,6,19,20 Individual items within these domains were adapted or written de novo using published guidelines for survey development.21 Additional items were also adapted from the 20-item Measure of Processes of Care (MPOC-20),13 a survey used to measure caregivers’ perceptions of family-centered behaviors of healthcare professionals for children with disabilities. The resulting initial instrument draft had 17 items in four domains: (1) listening to and respecting experiences and preferences, (2) sharing information, (3) providing support, and (4) collaborating and encouraging participation.
Consistent with the CAHPS9 surveys, the response format for the preliminary instrument items was a four-point scale (never, sometimes, usually, always) for rating the frequency of experiences with doctors (e.g., “During this emergency department visit, how often did your child’s doctors encourage you to ask questions?”).
Content Expert Assessment
We convened a panel of nine family-centered care experts, including three researchers, one leader of a family-centered care organization, one bioethicist and family-centered care author, one hospital-based family engagement director, two hospital-based family engagement program managers, and one pediatric emergency medicine chief, to evaluate content validity and potential augmentation of the item pool. We asked panelists to identify missing items or domains and to provide feedback on instrument wording and duplicative items. We also asked panelists to rate each item’s relevance to assessing family-centeredness of care (coded as not relevant = 1, somewhat relevant = 2, quite relevant = 3, highly relevant = 4)22. For each item, we computed the item-level content validity index (I-CVI), which was the number of experts giving a rating of 3 or 4 divided by the total number of experts.22 Items with I-CVI below the thresholds established by Lynn23 (e.g., I-CVI <0.778 with 9–10 experts) were flagged as needing to be revised or removed. We calculated the scale-level content validity index (S-CVI/Ave) as the average I-CVI value.24
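To make the content validity calculations concrete, a minimal R sketch is shown below; the `ratings` matrix is a hypothetical stand-in for the nine experts’ relevance ratings of the 17 preliminary items, not the study data.

```r
# Hypothetical relevance ratings: 17 items (rows) by 9 experts (columns),
# coded 1 = not relevant ... 4 = highly relevant
set.seed(1)
ratings <- matrix(sample(1:4, 17 * 9, replace = TRUE, prob = c(.05, .15, .4, .4)),
                  nrow = 17, ncol = 9)

# I-CVI: proportion of experts rating the item 3 ("quite relevant") or 4 ("highly relevant")
i_cvi <- rowMeans(ratings >= 3)

# S-CVI/Ave: the average of the item-level indices
s_cvi_ave <- mean(i_cvi)

# Flag items below Lynn's threshold for panels of 9-10 experts
flagged_items <- which(i_cvi < 0.778)
```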
Pre-Testing: Cognitive Interviews
Participants:
We conducted one-on-one cognitive interviews with English-speaking, adult parents or guardians who were at their child’s bedside during an eligible encounter in one of nine northern California EDs. Eligible encounters were for children aged 0–18 years, identified by care team providers (e.g., physicians and nurses). We used convenience sampling to recruit participants to complete the instrument within 30 days of the ED encounter. The instrument was designed to be self-administered (electronically or on paper) or interviewer-administered (face-to-face). Participants anonymously completed the instrument and then participated in an interview within three weeks.
Data Collection:
We used two main cognitive interview techniques: think-aloud interviewing and probing.25 A researcher trained in cognitive interviewing (J.L.R.) asked the participants to ‘think aloud’ as they completed the instrument, whereby the caregiver read each question out loud and described the thought process used to arrive at their answer. Next, the interviewer used the probing method with a semi-structured question guide to assess participants’ comprehension, retrieval, judgment, and response processes for each question, as well as usability. Interviews were conducted in person, by videoconference, or by phone. Participants received a $50 gift card. Interviews were audio recorded, transcribed, and reviewed for accuracy by the interviewer. The interviewer maintained field notes and prepared a written summary of each interview, including potential problems identified during the interview. Summaries were combined into a report that collated the identified problems under each individual item.
Analysis:
Data were analyzed concurrently with data collection, using deductive and inductive strategies. We used an initial codebook of a priori codes developed from the question-and-answer model from cognitive psychology.26 Three researchers with qualitative expertise (J.L.R., S.L.P., H.M.Y.) reviewed completed instruments and transcriptions. We began by independently memo-writing and coding the first three transcripts with the a priori codes. We also identified emergent codes. Our team met to compare codes and discuss discrepancies to ensure consensus on application of codes, refine code dimensions, add new codes, and develop tentative categories. We reviewed and discussed the report of written summaries, deleting, adding, or modifying items to improve face validity, understandability, and usability. The revised instrument and codebook were used for subsequent interviews and transcripts. We revisited earlier transcripts as new codes were identified. This process was repeated for every three to five transcripts until no further areas for instrument improvement were identified.
Field Testing
Participants and Data Collection:
Caregivers meeting the same eligibility criteria as in the cognitive interviews completed the ED-FACCE instrument. One caregiver per patient was eligible to complete the instrument within 30 days of the ED visit. Each participant received a $25 gift card. We documented the number of participants invited to take the survey to monitor the proportion of respondents among all approached individuals. Participants were additionally invited via flyer advertisements.
To examine similarity between our family-centeredness measure and a theoretically related outcome, we used 15 questions from the ED CAHPS Survey27 that solicit information on going to the ED, family-provider communication, and overall experience. Questions about being discharged home and nurse-specific questions were omitted. Demographic questions addressed participants’ age, gender, race, ethnicity, education, child’s age, child’s ED use, relationship to the child, amount of time spent at the ED bedside, and child’s ED disposition.
Item and Scale Scoring:
We assigned a numeric value to each item response to develop item raw scores (i.e., No = 0, Yes, somewhat = 1, Yes, definitely = 2 for the nine positively worded items, and the reverse for the two negatively worded items). These item raw scores were used in item analyses, but all subsequent analyses were conducted with “top-box” scoring (i.e., “1” for the most favorable response, “0” otherwise). We used top-box scoring to highlight the fact that family-centered care should result in caregivers reporting definitively positive experiences rather than experiences that are somewhat positive.28 CAHPS research has also demonstrated that top-box scoring, compared to linear means, has higher hospital-level reliability29 and greater patient and family understandability.30
We examined response patterns across both the raw scores and the top-box binary indicators to identify inconsistent responses and item missingness. Summary top-box scores for each respondent were obtained by computing the proportion of top-box scores among the answered items and multiplying it by 100. We used a linear regression model to examine whether mean summary scores varied by mode of administration (in person, paper, electronic).
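A minimal R sketch of this scoring is shown below; the data frame `resp`, the item names, and the `admin_mode` variable are hypothetical stand-ins for the field-test data rather than the study dataset.

```r
# Hypothetical item raw scores (0/1/2), one row per respondent, with the two
# negatively worded items already reverse-coded so that 2 is the most favorable response
set.seed(2)
resp <- as.data.frame(matrix(sample(0:2, 191 * 11, replace = TRUE,
                                    prob = c(.1, .2, .7)), ncol = 11))
names(resp) <- paste0("item_", 1:11)

# Top-box scoring: 1 for the most favorable response, 0 otherwise
top_box <- as.data.frame(lapply(resp, function(x) as.integer(x == 2)))

# Respondent-level summary score: percent of answered items receiving a top-box response
summary_score <- rowMeans(top_box, na.rm = TRUE) * 100

# Compare mean summary scores by mode of administration (hypothetical factor)
admin_mode <- factor(sample(c("in person", "paper", "electronic"), 191, replace = TRUE))
summary(lm(summary_score ~ admin_mode))
```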
Reliability:
Internal consistency of the scale was assessed using coefficients alpha (tau-equivalent reliability) and omega (congeneric reliability).31,32
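The text does not name the software used for reliability estimation; the sketch below assumes the psych R package and uses a simulated respondent-by-item matrix (`items`) purely for illustration.

```r
# A minimal sketch, assuming the psych package; 'items' is a simulated
# stand-in for the respondent-by-item score matrix
library(psych)

set.seed(3)
trait <- rnorm(191)
items <- as.data.frame(sapply(1:9, function(j) as.integer(trait + rnorm(191) > -0.5)))

alpha_out <- psych::alpha(items)                              # tau-equivalent reliability
omega_out <- psych::omega(items, nfactors = 1, plot = FALSE)  # congeneric reliability

alpha_out$total$raw_alpha   # coefficient alpha
omega_out$omega.tot         # coefficient omega (total)
```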
Item selection and assessment of scale dimensionality:
After removing items with low variability and/or internal consistency, we assessed dimensionality using item response theory (IRT) modeling, including models with item difficulty/location parameters (Rasch model), item discrimination (2-parameter logistic or 2PL model), and multiple dimensions/factors. Relative model fit was evaluated using Akaike and Bayes Information Criteria (AIC and BIC, respectively). We also considered root mean square error of approximation (RMSEA) as an absolute measure of model fit. Models were fit using the mirt R package.33,34
The aim of the dimensionality analysis was to identify the best-fitting model in order to summarize the underlying structure of the instrument and provide evidence of construct validity. We balanced this aim against the need for a scoring process that would be operationally interpretable, favoring parsimony. We therefore compared factor scores with simple summary scores; strong positive correlations would justify the use of summary scores in reporting.
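The sketch below illustrates how such a model comparison can be set up with the mirt package; the simulated `top_box` matrix stands in for the field-test data, and the commented two-factor specification is only an example of mirt.model syntax (items 1–6 and 7–9, matching the structure reported in the Results), not the exact code used in the study.

```r
# A minimal sketch using the mirt package; 'top_box' is simulated for illustration
library(mirt)

set.seed(4)
trait   <- rnorm(191)
top_box <- as.data.frame(sapply(1:9, function(j) as.integer(trait + rnorm(191) > -0.5)))

rasch_fit <- mirt(top_box, 1, itemtype = "Rasch", verbose = FALSE)  # common discrimination
twopl_fit <- mirt(top_box, 1, itemtype = "2PL",   verbose = FALSE)  # item-specific discrimination

# Relative fit (AIC/BIC) and absolute fit (RMSEA from the M2 statistic)
anova(rasch_fit, twopl_fit)
M2(twopl_fit)

# A confirmatory two-factor 2PL could be specified with mirt.model syntax, e.g.:
# spec <- mirt.model("F1 = 1-6
#                     F2 = 7-9
#                     COV = F1*F2")
# mirt(top_box, spec, itemtype = "2PL")

# Compare model-based factor scores with simple summary scores
cor(fscores(twopl_fit), rowMeans(top_box) * 100)
```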
Convergent Validity:
We estimated the partial correlation between the ED-FACCE summary top-box score and the 15-item ED CAHPS summary top-box score, adjusting for family caregiver demographics, relying on the Fisher Z transformation to calculate 95% CI.35
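A minimal R sketch of this calculation is shown below, using simulated stand-ins for the two summary scores and the demographic covariates; correlating residuals after regressing each score on the covariates is one standard way to obtain the adjusted partial correlation, and the Fisher Z standard error uses n minus the number of covariates minus 3.

```r
# Simulated stand-ins for the summary scores and demographic covariates
set.seed(5)
n     <- 191
demo  <- data.frame(age = rnorm(n), educ = rnorm(n))   # hypothetical covariates
facce <- 50 + 8 * demo$age + rnorm(n, sd = 20)
cahps <- 45 + 8 * demo$age + 0.6 * (facce - 50) + rnorm(n, sd = 15)

# Partial correlation: correlate the residuals after adjusting both scores for covariates
r <- cor(resid(lm(facce ~ ., data = demo)), resid(lm(cahps ~ ., data = demo)))

# Fisher Z transformation and 95% CI back-transformed to the correlation scale
k  <- ncol(demo)                  # number of adjustment covariates
z  <- atanh(r)
se <- 1 / sqrt(n - k - 3)
ci <- tanh(z + c(-1.96, 1.96) * se)
```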
RESULTS
Content Expert Assessment
Content Validity:
The 17-item preliminary instrument had a reading grade level of 6.7. Expert assessment I-CVI values for each item ranged from 0.44 to 1. The S-CVI/Ave was 0.83. No experts identified missing domains. Three experts each suggested adding one item, but no suggestion was made by multiple experts. We judged the suggested items to be duplicative of existing items and therefore did not incorporate them, but we adopted minor wording suggestions.
Four items were removed based on perceptions by experts that they were confusing, not relevant, or duplicative. These items included three of the four items with an I-CVI below the 0.778 threshold plus one item with an I-CVI of 0.778. The one remaining item rated with low relevance (I-CVI of 0.667) was revised. Following these changes, the resulting instrument had 13 items and a reading grade level of 7.0.
Pre-Testing: Cognitive Interviews
Sixteen interviews were conducted between August and November 2020: nine in person, two by telephone, and five by videoconference. Ten caregivers completed a self-administered instrument; six caregivers completed an interviewer-administered instrument. Twelve of the participants were female; four were male. Participant ages ranged from 23 to 53 years (mean=37.4). Ten participants were White; two were Black; two were Asian; and two described their race as “other.” Four participants were Hispanic or Latinx. Highest education attained among the participants ranged from less than high school to more than a 4-year degree; the modal category (n=5) was having a high school degree or equivalent credential. Four of the children had a chronic medical condition. Seven of the children were discharged home from the ED; four were admitted locally; and five were transferred to another hospital.
The 13-item instrument was revised in four rounds. Four categories of instrument problems were identified as presented in Appendix 1. In brief, participants expressed concerns regarding the phrasing of the questions, duplicative constructs, unclear terms, and a missing construct. As a result of the cognitive testing, 10 items were revised, three were deleted, and one novel item was added. Following these revisions, no further problems with the instrument were identified by participants. The reading grade level was 6.9.
Field Testing
Field testing of the 11-item instrument (Appendix 2) was conducted from March to August 2021. Among the participants directly invited to take the ED-FACCE survey, the response rate was 92.7% (190/205). One participant completed the survey via flyer advertisement. Among the 191 respondents, the mode of administration was 54 (28.3%) in person, 60 (31.4%) paper, and 77 (40.3%) electronic. Table 1 shows respondent characteristics. Most participants (70%) were mothers of the child; 23% were fathers; 6% were grandparents; and 2% were other types of guardians. Compared to the overall representation of California residents,36 the participants in this study had a greater proportion of individuals reporting Black or African American race, a greater proportion of Native Hawaiian or Other Pacific Islander race, a smaller proportion of Hispanic or Latinx ethnicity, a greater proportion of high school graduates or higher, and a smaller proportion of those with a bachelor’s degree or higher. Most participants (88%) completed the survey about their child’s ED visit in an urban level I trauma center with a 121-bed quaternary care children’s hospital. This teaching hospital is a referral center for children across a 33-county region covering 65,000 square miles. It receives pediatric transfers from ~30 hospitals in the region; 12% of participants completed the survey about a visit in one of those referring ED sites.
Table 1.
Field Testing Participant Characteristics
| About the Participant | N (%) |
|---|---|
| Age, in years | |
| ≤ 29 | 57 (29.8) |
| 30–39 | 88 (46.1) |
| 40–49 | 31 (16.2) |
| 50 + | 15 (7.8) |
| Gender | |
| Female | 143 (74.9) |
| Male | 47 (24.6) |
| Non-binary | 1 (0.5) |
| Race | |
| White | 100 (52.6) |
| Black or African American | 23 (12.1) |
| Asian | 20 (10.5) |
| Native Hawaiian or Other Pacific Islander | 4 (2.1) |
| American Indian or Alaska Native | 3 (1.6) |
| Other | 40 (21.0) |
| Ethnicity | |
| Hispanic or Latinx | 50 (26.2) |
| Not Hispanic or Latinx | 141 (73.8) |
| Highest Education Attained | |
| Some high school, but did not graduate | 5 (2.6) |
| High school graduate or GED | 106 (56.1) |
| Some college or 2-year degree | 26 (13.8) |
| 4-year college graduate | 38 (20.1) |
| More than 4-year college degree | 14 (7.4) |
| About the Participant’s Child | |
| Child’s Age, in years | |
| < 1 | 27 (14.1) |
| 1–5 | 92 (48.2) |
| 6–12 | 52 (27.2) |
| 13–18 | 20 (10.5) |
| Relationship to the Child | |
| Mother | 134 (70.2) |
| Father | 43 (22.5) |
| Grandmother | 9 (4.7) |
| Grandfather | 2 (1.0) |
| Other | 3 (1.6) |
| Number of ED visits for the Child in Last 6 Months | |
| 1 time | 145 (75.9) |
| 2 times | 31 (16.2) |
| 3 times | 13 (6.8) |
| 4 times | 2 (1.0) |
| 5 or more times | 0 (0.0) |
| About the Emergency Department Visit | |
| Main Reason for the Child’s ED visit | |
| An accident or injury | 114 (59.7) |
| A new health problem | 53 (27.8) |
| An ongoing health condition or concern | 24 (12.6) |
| Child Arrived at the ED by Ambulance | |
| Yes, by ambulance | 115 (60.2) |
| Amount of Time Spent at the Child’s ED Bedside | |
| A little of the time | 0 (0.0) |
| Some of the time | 3 (1.6) |
| Most of the time | 21 (11.0) |
| All or nearly all of the time | 167 (87.4) |
| Child’s ED Disposition | |
| Discharged home | 94 (49.2) |
| Transferred to a different hospital | 23 (12.0) |
| Admitted for hospitalization | 74 (38.7) |
GED – general equivalency diploma.
ED – emergency department.
Summary Score:
The mean (SD) ED-FACCE 11-item summary score was 77.2 (26.6); the median (25–75% IQR) was 81.8 (54.5–100). In a linear regression model, the summary score did not vary by the mode of administration (p=0.50). There were no missing responses for any item from any respondent.
Response Patterns and Item Variability:
Four of 191 respondents selected “Yes, definitely” for all 11 items. This pattern suggests inconsistent responding for these participants, at least with the last two items, since the first nine items were positively worded attributes and the last two were negatively worded. Figure 2 shows the mean (95% CI) top-box score for each item, which ranged from 61.2 to 95.3%. Two “low-variability” items had top-box scores greater than 95% (items 10 and 11).
Figure 2. Variability of Item Top-Box Scores.

Top-box scores were calculated as the proportion of respondents selecting the most favorable response option among those answering the item.
Reliability:
Coefficients alpha and omega for the ED-FACCE scale (raw scores, all 11 items) were both 0.85. Items 10 and 11 demonstrated corrected item-total correlations (a measure of item discrimination) of 0.08 and 0.14, respectively. Restricting the scale to items 1 through 9, the corrected item-total correlations ranged from 0.46 to 0.79, with seven items above 0.60. Deleting any of these nine items would not have improved reliability; however, deleting either item 10 or 11 would increase alpha and omega to 0.86. For the nine-item subset with the original raw scoring, coefficients alpha and omega were 0.88 and 0.89, respectively; with top-box scoring, both were 0.90, further supporting the use of top-box scoring.
Item Selection:
We removed items 10 and 11 given their poor variability and adverse effects on internal consistency reliability. Removing these two items also addressed the unbalanced inclusion of two negatively worded items in a scale of 11 total items. Items 1 through 9 were used as a scale for the subsequent analyses. Table 2 presents the text of the final ED-FACCE survey items. Table 3 contains item analysis results.
Table 2.
Final Survey Instructions and Items
| Mark one answer for each question. Answer the questions about your child’s recent EMERGENCY DEPARTMENT visit. If your child recently visited more than one emergency department, answer the questions about only one of those visits. Do not include any other visits in your answers. | |
| During your child’s recent EMERGENCY DEPARTMENT visit… | |
| Item 1 | Did your child’s doctors seem interested in your worries about your child’s problem? |
| Item 2 | Did you feel comfortable sharing your ideas with your child’s doctors? |
| Item 3 | Did your child’s doctors focus on what matters to you the most? |
| Item 4 | Did your child’s doctors clearly explain to you their ideas about your child’s medical care? |
| Item 5 | Did your child’s doctors give you information in ways that you understood? |
| Item 6 | Did your child’s doctors encourage you to ask questions? |
| Item 7 | Did your child’s doctors involve you in making decisions about your child’s medical care? |
| Item 8 | Did your child’s doctors propose next steps for your child’s medical care that were acceptable to you? |
| Item 9 | Did your child’s doctors address your child’s needs? |
Response options for each item: Yes, definitely; Yes, somewhat; No. The contents of the instrument are distributed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
Table 3.
Item Analysis and Model Results
| Factor Labels and Items | | M | SD | AID | F1 | F2 | A1 | A2 | B |
|---|---|---|---|---|---|---|---|---|---|
| Communication/Interaction | | | | | | | | | |
| 1 | interested in your worries | 0.73 | 0.44 | 0.88 | 0.89 | −0.01 | 2.99 | | 2.13 |
| 2 | comfortable sharing your ideas | 0.61 | 0.49 | 0.87 | 0.95 | 0.03 | 7.03 | | 2.30 |
| 3 | focus on what matters | 0.66 | 0.48 | 0.89 | 0.85 | −0.02 | 2.52 | | 1.26 |
| 4 | clearly explain their ideas | 0.70 | 0.46 | 0.88 | 0.77 | 0.15 | 3.35 | | 1.94 |
| 5 | give information you understood | 0.64 | 0.48 | 0.87 | 0.80 | 0.21 | 8.75 | | 3.43 |
| 6 | encourage you to ask questions | 0.70 | 0.46 | 0.88 | 0.78 | 0.17 | 3.84 | | 2.18 |
| Decision-Making/Delivery of Medical Care | | | | | | | | | |
| 7 | involve you in making decisions | 0.79 | 0.41 | 0.88 | 0.24 | 0.74 | | 5.24 | 4.27 |
| 8 | propose next steps for care | 0.85 | 0.36 | 0.89 | 0.07 | 0.90 | | 4.73 | 5.18 |
| 9 | address your child’s needs | 0.91 | 0.29 | 0.90 | 0.00 | 0.88 | | 3.03 | 4.64 |
M – proportion of respondents selecting the highest rating category “yes, definitely.”
SD – standard deviation.
AID – alpha-if-item-deleted; what coefficient alpha would be for the scale if the item were deleted.
F1 and F2 – exploratory 2PL IRT loadings on factors 1 and 2.
A1 and A2 – discrimination parameters from the final confirmatory multidimensional 2PL model; blank cells indicate the item was not specified to load on that factor.
B – item location parameter.
Item Response Theory:
An exploratory 2PL model (all items free to load on 2 factors, promax rotation) produced factors correlating at 0.77 and sums of squared loadings of 4.32 and 2.22. A trend was evident in the factor loadings, with the first six items loading more on one factor and the last three items loading on the other (Table 3). A multidimensional confirmatory 2PL model (AIC 1285, BIC 1346) included loadings as specified in the exploratory model, and a bifactor model (AIC 1293, BIC 1381) included a general factor and two specific factors following the same structure. The multidimensional 2PL produced the best fit, with lower AIC and BIC than any other model, including the unidimensional Rasch (AIC 1316, BIC 1348) and 2PL (AIC 1300, BIC 1359) models. All models had RMSEA below 0.05. Item parameters for the final multidimensional 2PL model are given in Table 3. Factor scores from the multidimensional 2PL correlated at 0.98 and 0.96 with summary scores.
Convergent Validity:
The mean (SD) ED CAHPS summary score for each respondent was 68.3 (25.3); the median (25–75% IQR) was 70 (50–90). The adjusted partial correlation between the nine-item ED-FACCE summary score and the 15-item ED CAHPS summary score was 0.75 (95% CI 0.67–0.81).
DISCUSSION
This study developed and evaluated the ED-FACCE survey, which measures family-centered care provided by physicians during pediatric ED encounters. The multi-phased development and evaluation process resulted in a survey with sufficient evidence supporting its reliability and validity for research and quality improvement purposes. The usability of the instrument is supported by positive feedback from cognitive interviews as well as the lack of missing values during field testing. Psychometric analyses demonstrated good item performance, reliability, and convergent validity of a nine-item scale.
Content validity is deeply connected to the definition of the measured construct. Unfortunately, no consensus definition exists for family-centered care.4,6,37,38 We began each cognitive interview by reading a family-centered care definition that used the four attributes described in the concept analysis by Coyne et al.1 This definition stated that the core concepts of family-centered care include: (1) Listening to and Respecting Experiences and Preferences, (2) Sharing Information, (3) Providing Support, and (4) Collaborating and Encouraging Participation. Should others seek to measure a differently defined construct, our present instrument might not be applicable. Of note, although the adjusted partial correlation between our scale’s nine-item summary score and the ED CAHPS summary score supports convergent validity, it also suggests that the two scales capture related but distinct information.
Family-centered care has been the gold standard approach to pediatric care delivery for decades.39 Although less recognized, there is growing emphasis on an alternative concept: child-centered care.1,40 The foundation of child-centered care is that the child’s perspective is heard and valued.1 Future iterations of the ED-FACCE tool can incorporate child-centered care attributes, while acknowledging that a construct that values the child’s perspective and participation in decision-making would lack applicability for neonatal patients.
Future research should include testing versions of the ED-FACCE survey in languages other than English. Limited English-proficiency parents are more likely to report low satisfaction, poor comprehension, and non-family-centered care.41,42 Their children are twice as likely to experience adverse events while hospitalized.43 A language appropriate ED-FACCE survey is needed for research and quality improvement efforts targeting family-centered care for limited English-proficiency families.
IRT analyses identified that the ED-FACCE survey consists of two factors, which we labeled as (1) quality of communication and (2) clinical decision-making. Consistent with existing literature, these two topics encompass various attributes used to describe family-centered care.4,6,37,38 These two dimensions also align with the four domains of the initial instrument draft. Future confirmatory factor analyses are needed and can begin by considering the multidimensional model. Although the multidimensional 2PL model provided the best fit, factor scores correlated strongly with summary scores, so we recommend the simpler summary scores for routine use.
This study has several limitations, to be addressed in future studies. The ED-FACCE survey was designed to measure physicians’ family-centeredness of care. It does not assess care delivered by other provider groups (e.g., nurses, nurse practitioners, physician assistants) or providers more broadly. Future research should adapt and evaluate the ED-FACCE survey for other types of providers, such as nurses. While the instrument measures family-centered care provided by physicians, respondents might not have known which ED providers were physicians. Additionally, the care provided by other health care professionals might have influenced their reporting about their child’s physicians. Furthermore, our field testing participants largely responded about their experiences in the same ED, which was an urban level I trauma center and teaching hospital. Most field testing participants also reported being in the ED for a new injury or health problem; few respondents accompanied a child presenting to the ED for an ongoing health condition or concern. While the sample of field testing participants was not representative of California residents, this study included a diverse sample. Additional psychometric analyses are also needed of administration mode effects, which were only explored here at the scale level, as well as measurement invariance across participant subgroups, including for individual items. In addition, recruitment and testing occurred during the COVID-19 pandemic; participants’ varied experiences with the pandemic may have impacted their responses. Given the targeted nature of the questions, we do not expect this impact to be substantial. Finally, we did not readminister the final nine items on their own, and the presence versus absence of items 10 and 11 could potentially influence responses to items 1 through 9.
Despite these limitations, this study provides validity evidence supporting use of the ED-FACCE to assess family-centeredness of care for pediatric ED encounters. Future research should explore additional psychometric analyses, including with larger and more diverse samples. Future research should also test alternative iterations of the ED-FACCE survey, such as versions in additional languages and surveys measuring family-centered care provided by other provider groups and in other settings.
Funding:
This work was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health (K23HD101550 to Dr. Rosenthal). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Appendix 1. Results from Cognitive Interviews
Four categories of instrument problems were identified as presented below. Following four rounds of revisions, no further problems with the instrument were identified by participants.
Lack of Applicability:
Many participants stated that the phrasing of the questions and answer choices was inapplicable to their ED visit. Specifically, starting each item with the phrase “How often did your child’s doctors…” was perceived to be more fitting for an encounter with a longer duration, such as a hospitalization. A brief ED encounter did not provide respondents with adequate experience to answer about frequency. We therefore incorporated multiple participants’ suggestion to rephrase all items to start with “Did your child’s doctors…”. Accordingly, the answer choices (“Never/Sometimes/Usually/Always”) were changed to “Yes, definitely/Yes, somewhat/No.”
Another applicability problem was raised for the item about the doctor listening to the parent’s ideas about the child’s medical care. This item assumed that parents had ideas about their child’s care; however, cognitive interviews revealed that parents often have questions and concerns instead. Some participants shared that a more relevant construct would measure the doctor’s willingness to listen to the parent’s input. Accordingly, this item was modified to ask if the parent felt comfortable sharing their ideas with the doctor.
Cognitive testing also identified applicability issues with the item asking “…did your child’s doctors seem interested in what you wanted to know?” Some participants shared that they did not want to know anything, but rather wanted something done. We deleted this item given this problem plus concerns that it was duplicative of more relevant items.
Duplicative Construct:
Some participants stated that the item asking “…did your child’s doctors clearly explain to you what your child’s problem was?” was duplicative of the item about the doctor giving information in understandable ways. The former item was deleted, because the latter item was perceived to be more comprehensive and therefore useful.
Multiple participants struggled to discriminate between the item about the doctor addressing the parent’s needs and the item about the doctor addressing the child’s needs. Participants did not understand how those two items differed, because the parent’s need was that their child’s needs be met. Upon reviewing the completed instruments, we found that all respondents answered these two items similarly, supporting that these items measured the same construct. We thus deleted the item about the doctor addressing the parent’s needs.
Unclear Terms:
Comprehension problems emerged during cognitive testing. Participants failed to understand the term “next steps” in the item asking “…did your child’s doctors clearly explain to you the next steps for your child’s medical care?” Some respondents misinterpreted this item as referencing disposition plans following the ED visit. The phrase “next steps for” was changed to “ideas about” to capture ED care plans rather than post-ED care plans.
Another item identified to have an unclear term was the item asking “…did your child’s doctors treat you as an individual?” The word “individual” was perceived by many participants to be vague. Multiple participants suggested rephrasing the item to ask about being treated with respect or disrespect.
Missing Construct:
Almost every participant stated that the instrument adequately and comprehensively assessed all meaningful family-centered care constructs. One participant explained how she experienced sex-based discrimination, whereby the doctor spoke only to her husband despite the fact that she was asking all the questions. Another participant stated that doctors sometimes prejudge caregivers based on their appearance, and this discrimination can impact the care received. We therefore added an item about experiencing unfair treatment.
Appendix 2. Instrument for Field Testing
Mark one answer for each question. Answer the questions about your child’s recent EMERGENCY DEPARTMENT visit. If your child recently visited more than one emergency department, answer the questions about only one of those visits. Do not include any other visits in your answers.
During your child’s recent EMERGENCY DEPARTMENT visit…
- Did your child’s doctors seem interested in your worries about your child’s problem?
- Yes, definitely
- Yes, somewhat
- No
- Did you feel comfortable sharing your ideas with your child’s doctors?
- Yes, definitely
- Yes, somewhat
- No
- Did your child’s doctors focus on what matters to you the most?
- Yes, definitely
- Yes, somewhat
- No
- Did your child’s doctors clearly explain to you their ideas about your child’s medical care?
- Yes, definitely
- Yes, somewhat
- No
- Did your child’s doctors give you information in ways that you understood?
- Yes, definitely
- Yes, somewhat
- No
- Did your child’s doctors encourage you to ask questions?
- Yes, definitely
- Yes, somewhat
- No
- Did your child’s doctors involve you in making decisions about your child’s medical care?
- Yes, definitely
- Yes, somewhat
- No
- Did your child’s doctors propose next steps for your child’s medical care that were acceptable to you?
- Yes, definitely
- Yes, somewhat
- No
- Did your child’s doctors address your child’s needs?
- Yes, definitely
- Yes, somewhat
- No
- Did your child’s doctors treat you with disrespect?
- Yes, definitely
- Yes, somewhat
- No
- Did you experience unfair treatment from your child’s doctors based on who you are (such as your race, gender, age)?
- Yes, definitely
- Yes, somewhat
- No
Footnotes
Declaration of interest: None
REFERENCES
- 1. Coyne I, Holmström I, Söderbäck M. Centeredness in healthcare: a concept synthesis of family-centered care, person-centered care and child-centered care. Journal of Pediatric Nursing. 2018;42:45–56.
- 2. Dudley N, Ackerman A, Brown KM, Snow SK, Medicine AAoPCoPE, Committee ENAP. Patient- and family-centered care of children in the emergency department. Pediatrics. 2015;135(1):e255–e272.
- 3. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. U.S. Department of Health and Human Services, National Academy of Sciences, Washington, DC; 2001.
- 4. Kuo DZ, Houtrow AJ, Arango P, Kuhlthau KA, Simmons JM, Neff JM. Family-centered care: current applications and future directions in pediatric health care. Maternal and Child Health Journal. 2012;16(2):297–305.
- 5. Goldfarb MJ, Bibas L, Bartlett V, Jones H, Khan N. Outcomes of patient- and family-centered care interventions in the ICU: a systematic review and meta-analysis. Critical Care Medicine. 2017;45(10):1751–1761.
- 6. Committee on Hospital Care and Institute for Patient- and Family-Centered Care. Patient- and family-centered care and the pediatrician’s role. Pediatrics. 2012;129(2):394.
- 7. Mirlashari J, Brown H, Fomani FK, de Salaberry J, Zadeh TK, Khoshkhou F. The challenges of implementing family-centered care in NICU from the perspectives of physicians and nurses. Journal of Pediatric Nursing. 2020;50:e91–e98.
- 8. Oude Maatman SM, Bohlin K, Lilliesköld S, et al. Factors influencing implementation of family-centered care in a neonatal intensive care unit. Frontiers in Pediatrics. 2020;8:222.
- 9. Agency for Healthcare Research and Quality. CAHPS Patient Experience Surveys and Guidance. Available at: https://www.ahrq.gov/cahps/surveys-guidance/index.html. Published 2021. Accessed December 9, 2021.
- 10. National Quality Forum Leadership Consortium. National Quality Forum Leadership Consortium 2022 Priorities for Action. Available at: https://www.qualityforum.org/Publications/2021/12/2022_NQF_Priorities_for_Action.aspx. Published 2021. Accessed March 2, 2022.
- 11. Lindly OJ, Geldhof GJ, Acock AC, Sakuma K-LK, Zuckerman KE, Thorburn S. Family-centered care measurement and associations with unmet health care need among US children. Academic Pediatrics. 2017;17(6):656–664.
- 12. Etz RS, Zyzanski SJ, Gonzalez MM, Reves SR, O’Neal JP, Stange KC. A new comprehensive measure of high-value aspects of primary care. The Annals of Family Medicine. 2019;17(3):221–230.
- 13. King S, King G, Rosenbaum P. Evaluating health service delivery to children with chronic conditions and their families: development of a refined measure of processes of care (MPOC-20). Children’s Health Care. 2004;33(1):35–57.
- 14. Little P, Everitt H, Williamson I, et al. Observational study of effect of patient centredness and positive approach on outcomes of general practice consultations. BMJ. 2001;323(7318):908–911.
- 15. Frost MH, Reeve BB, Liepa AM, Stauffer JW, Hays RD, Group MFPROCM. What is sufficient evidence for the reliability and validity of patient-reported outcome measures? Value in Health. 2007;10:S94–S105.
- 16. Hudon C, Fortin M, Haggerty JL, Lambert M, Poitras M-E. Measuring patients’ perceptions of patient-centered care: a systematic review of tools for family medicine. The Annals of Family Medicine. 2011;9(2):155–164.
- 17. Stewart M, Brown JB, Donner A, et al. The impact of patient-centered care on outcomes. Journal of Family Practice. 2000;49(9):796–804.
- 18. Rosenthal JL, Li S-TT, Hernandez L, Alvarez M, Rehm RS, Okumura MJ. Familial caregiver and physician perceptions of the family-physician interactions during interfacility transfers. Hospital Pediatrics. 2017;7(6):344–351.
- 19. Byczkowski TL, Gillespie GL, Kennebeck SS, Fitzgerald MR, Downing KA, Alessandrini EA. Family-centered pediatric emergency care: a framework for measuring what parents want and value. Academic Pediatrics. 2016;16(4):327–335.
- 20. Klassen A, Dix D, Cano S, Papsdorf M, Sung L, Klaassen R. Evaluating family-centred service in paediatric oncology with the measure of processes of care (MPOC-20). Child: Care, Health and Development. 2009;35(1):16–22.
- 21. Bradburn NM, Sudman S, Wansink B. Asking questions: the definitive guide to questionnaire design--for market research, political polls, and social and health questionnaires. John Wiley & Sons; 2004.
- 22. Davis LL. Instrument review: getting the most from a panel of experts. Applied Nursing Research. 1992;5(4):194–197.
- 23. Lynn MR. Determination and quantification of content validity. Nursing Research. 1986;35(6):382–385.
- 24. Polit DF, Beck CT. The content validity index: are you sure you know what’s being reported? Critique and recommendations. Research in Nursing & Health. 2006;29(5):489–497.
- 25. Collins D. Pretesting survey instruments: an overview of cognitive methods. Quality of Life Research. 2003;12(3):229–238.
- 26. Tourangeau R. Cognitive sciences and survey methods. In: Cognitive aspects of survey methodology: building a bridge between disciplines. Washington, DC: National Academy Press; 1984.
- 27. Weinick RM, Becker K, Parast L, et al. Emergency department patient experience of care survey: development and field test. Rand Health Quarterly. 2014;4(3).
- 28. Toomey SL, Elliott MN, Zaslavsky AM, et al. Variation in family experience of pediatric inpatient care as measured by Child HCAHPS. Pediatrics. 2017;139(4).
- 29. Toomey SL, Zaslavsky AM, Elliott MN, et al. The development of a pediatric inpatient experience of care measure: Child HCAHPS. Pediatrics. 2015;136(2):360–369.
- 30. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Medical Care Research and Review. 2010;67(1):27–37.
- 31. McNeish D. Thanks coefficient alpha, we’ll take it from here. Psychological Methods. 2018;23(3):412–433.
- 32. McDonald RP. Test theory: a unified treatment. New York, NY: Lawrence Erlbaum Associates, Inc; 1999.
- 33. Chalmers RP. mirt: a multidimensional item response theory package for the R environment. Journal of Statistical Software. 2012;48:1–29.
- 34. R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/. Published 2017. Accessed June 16, 2022.
- 35. Hotelling H. New light on the correlation coefficient and its transforms. Journal of the Royal Statistical Society Series B (Methodological). 1953;15(2):193–232.
- 36. United States Census Bureau. QuickFacts: California. Available at: https://www.census.gov/quickfacts/CA. Published 2021. Accessed June 3, 2022.
- 37. Kokorelias KM, Gignac MA, Naglie G, Cameron JI. Towards a universal model of family centered care: a scoping review. BMC Health Services Research. 2019;19(1):1–11.
- 38. Lor M, Crooks N, Tluczek A. A proposed model of person-, family-, and culture-centered nursing care. Nursing Outlook. 2016;64(4):352–366.
- 39. Jolley J, Shields L. The evolution of family-centered care. Journal of Pediatric Nursing. 2009;24(2):164–170.
- 40. Carter B, Ford K. Researching children’s health experiences: the place for participatory, child-centered, arts-based approaches. Research in Nursing & Health. 2013;36(1):95–107.
- 41. Seltz LB, Zimmer L, Ochoa-Nunez L, Rustici M, Bryant L, Fox D. Latino families’ experiences with family-centered rounds at an academic children’s hospital. Academic Pediatrics. 2011;11(5):432–438.
- 42. Eneriz-Wiemer M, Sanders LM, Barr DA, Mendoza FS. Parental limited English proficiency and health outcomes for children with special health care needs: a systematic review. Academic Pediatrics. 2014;14(2):128–136.
- 43. Khan A, Yin HS, Brach C, et al. Association between parent comfort with English and adverse events among hospitalized children. JAMA Pediatrics. 2020;174(12):e203215.
