Author manuscript; available in PMC: 2013 Nov 1.
Published in final edited form as: Med Care. 2012 Nov;50(Suppl):S11–S19. doi: 10.1097/MLR.0b013e3182610a50

Development and Evaluation of CAHPS® Questions to Assess the Impact of Health Information Technology on Patient Experiences with Ambulatory Care

D Keith McInnes 1, Julie A Brown 2, Ron D Hays 3, Patricia Gallagher 4, James D Ralston 5, Mildred Hugh 6, Michael Kanter 7, Carl A Serrato 8, Carol Cosenza 9, John Halamka 10, Lin Ding 11, Paul D Cleary 12
PMCID: PMC3525454  NIHMSID: NIHMS388625  PMID: 23064271

Abstract

Background

Little is known about whether health information technology (HIT) affects patient experiences with health care.

Objective

To develop HIT questions that assess patients' care experiences not evaluated by existing ambulatory CAHPS measures.

Research Design

We reviewed published articles and conducted focus groups and cognitive testing to develop survey questions. We collected data, by mail and the internet, in late 2009 and 2010 from patients of 69 physicians at an academic medical center and two regional integrated delivery systems. We evaluated questions and scales about HIT using factor analysis, item-scale correlations, and reliability (internal consistency and physician-level) estimates.

Results

We found support for three HIT composites: doctor use of computer (2 items), e-mail (2 items), and helpfulness of provider’s website (4 items). Corrected item-scale correlations were 0.37 for the two doctor use of computer items and 0.71 for the two e-mail items, and ranged from 0.50 to 0.60 for the provider’s website items. Cronbach’s alpha was high for e-mail (0.83) and provider’s website (0.75), but only 0.54 for doctor use of computer. As few as 50 responses per physician would yield reliability of 0.70 for e-mail and provider’s website. Two HIT composites, doctor use of computer (p<0.001) and provider’s website (p=0.02), were independent predictors of overall ratings of doctors.

Conclusions

New CAHPS HIT items were identified that measure aspects of patient experiences not assessed by the CAHPS C&G 1.0 survey.

Keywords: CAHPS®, health information technology, personal health records, patient experiences of care

INTRODUCTION

Health care organizations have been slow to adopt information technologies.1,2 Recent studies show that only about a quarter of physicians use electronic medical records (EMRs) in ambulatory settings.3 New models of care and federal programs are likely to increase the use of health information technology (HIT).4,5 Physicians may use computers to record patient information, review test results, or e-prescribe. Patients may use an electronic personal health record (PHR) for electronic messaging with providers, viewing laboratory results, and refilling prescriptions.6

PHRs are generally viewed positively by patients.7–9 PHRs improve patient-physician communication,10,11 and can foster trust in, and partnership with, doctors.12 Viewing medical records can help prepare patients for clinical appointments13 and increase their confidence in dealing with health conditions.12,14 HIT may also have disadvantages. In one study,15 patients felt that physician use of computers during the office visit depersonalized the encounter, although this was not found in another study.16

In this study, we developed and evaluated questions that could be added to the Consumer Assessment of Health Plans and Systems (CAHPS®) survey17–21 to assess ambulatory patient experiences with HIT. The CAHPS Clinician and Group Survey 1.0 (CAHPS C&G 1.0 survey) (see https://www.cahps.ahrq.gov/content/ncbd/CG/NCBD_CG_Intro.asp) assesses patients’ experiences in ambulatory settings, but it has no HIT questions. Following CAHPS procedures,22 we used focus groups, in-depth interviews, and field testing to draft survey questions and evaluate whether they elicit information about patient health-care experiences not captured by the CAHPS C&G 1.0 survey.

METHODS

Item Development

We followed CAHPS item development principles and procedures.22,23 We started with a literature review.7,10 We found no articles describing the development of survey questions about how HIT affects patients’ experience of ambulatory care.24 We conducted 3 focus groups, with a total of 21 patients, in organizations that used health information technologies, such as EMRs and PHRs. Two were conducted at a medical center in Boston, MA, that has a well-developed PHR, and one at a Secaucus, NJ, health plan whose providers use personal digital assistants and e-prescribing software. Patients said that PHRs allowed greater engagement in their health care and improved communication with their doctors. They expressed interest in expanded PHR functions, such as seeing their doctor’s progress notes. One patient concern was that eye contact with doctors might decrease if doctors are distracted by information on their computer screen.

We identified several HIT-related issues not assessed by the CAHPS C&G 1.0 survey, such as patient access to their electronic medical record, physician use of a computer during patient visits, e-prescribing, and patients e-mailing with their physician. We developed draft questions and then conducted two phases of cognitive interviewing, with a total of 17 patients, in Boston, Los Angeles, and Palo Alto, following previously used procedures.23,25

We also conducted semi-structured telephone interviews with 5 HIT leaders from organizations that had integrated PHRs and 3 health policy leaders, about HIT issues that should be covered by the new items. Towards the end of item development, we convened a technical expert panel of 20 health informatics and policy leaders from organizations representing health care delivery, informatics policy, patient advocacy, government, survey research and academia. Panelists provided advice on the survey items, pilot testing, and ways to encourage the adoption of the items after testing was completed.

The instrument included 42 items from the CAHPS C&G 1.0 survey, comprising 25 questions about experiences with care, 4 questions about eligibility and physician relationship, 1 global rating of care, and 12 questions about respondent characteristics. We identified patients with chronic conditions by asking about health care visits in the past 12 months for the same condition and about taking medications for at least 3 months. The instrument also contained 35 new HIT items, including 6 open-ended items (see HIT Items, Supplemental Digital Content 1). Nineteen of those questions were factual (e.g., ever used e-mail to request prescription refills) and 10 were reports about experiences (e.g., was the physician’s use of a computer helpful to you). The final survey instrument with the HIT items had a total of 77 items that asked about ambulatory care experiences in the previous 12 months. We focused our analyses on the 10 HIT items that indicate care quality and are likely to be meaningful for a health care provider, and did not focus on screening questions. Question Q18, for example, asks whether the patient has e-mailed their doctor’s office in the past year. It does not assess the quality of interactions but rather identifies respondents for whom a question about getting answers to e-mail applies.

We hypothesized that those 10 HIT items would form two composites – helpfulness of HIT and e-mailing the doctor’s office – based on focus group findings that participants perceived HIT to be an efficient way to get information and to communicate with their health care providers, and that patients liked using e-mail to communicate with their providers. Based on the content of the 25 CAHPS C&G 1.0 survey items, we hypothesized that they would form four composites: access to care; doctor communication; office staff; and shared decision-making.

Sites

The 3 field test sites had well-developed integrated PHRs and represented different geographic and socio-demographic characteristics (Table 1). Beth Israel Deaconess Medical Center (BIDMC) is an academic medical center in Boston with 72 ambulatory care practices. The BIDMC PHR, called PatientSite,26,27 provides patients access to problem lists, medications, allergies, visits, laboratory results, diagnostic test results, microbiology results, secure messaging, appointment making, prescription renewal, and specialist referral. Approximately 200 physicians and 40,000 patients use PatientSite every month.

TABLE 1.

Characteristics of Study Respondents

Variable BIDMC (N=1164) GHC (N=1649) KPSC (N=1930)

N Percent N Percent N Percent
Age 18–34 yrs 100 9 123 8 128 7
35–44 yrs 165 14 153 9 174 9
45–54 yrs 279 24 263 16 332 18
55–64 yrs 351 30 508 31 536 29
65–74 yrs 198 17 380 23 428 23
75 or older 71 6 222 14 256 14
Gender Female 665 57 1013 62 1061 57
Education Less than HS 5 1 27 2 24 1
HS graduates 48 4 152 9 195 11
Some College 175 16 425 26 678 37
4-yr college graduate 292 27 364 22 400 22
More than 4-yr college 576 53 673 41 556 30
Race/ethnicity Hispanic 12 1 37 2 144 8
White (not Hispanic) 997 92 1381 85 1506 82
Black (not Hispanic) 26 2 42 3 50 3
Asian (not Hispanic) 42 4 121 7 109 6
Other (not Hispanic) 6 1 47 3 23 1
Health Poor 20 2 29 2 38 2
Fair 99 9 153 9 234 13
Good 338 31 591 36 633 34
Very Good 440 40 623 38 667 36
Excellent 205 19 246 15 286 15
Chronic Condition Yes 948 81 1403 85 1560 84

Notes. BIDMC=Beth Israel Deaconess Medical Center; GHC=Group Health Cooperative; KPSC=Kaiser Permanente Southern California; HS=high school; yr=year; yrs=years.

Other refers to American Indian, Alaska Native, Native Hawaiian, or Other Pacific Islander.

N’s vary because of item non-response.

Group Health Cooperative (GHC) is an integrated delivery system serving Washington State and northern Idaho, with more than 350,000 members. GHC has an integrated PHR, called MyGroupHealth, that allows patients to exchange secure electronic messages with their clinicians; access portions of their EHR, including laboratory data, problem lists, medications, allergy history, and prior immunizations; obtain after-visit summaries; search the Healthwise® health and drug-reference library; order medication refills; and schedule office appointments.7,28 As of July 2010, 62% of the more than 350,000 adult GHC enrollees were registered to use MyGroupHealth.

Kaiser Permanente Southern California Region (KPSC) is a not-for-profit health delivery system with 3.2 million members and 13 medical centers. KPSC offers members a PHR called My Health Manager, which includes scheduling appointments, e-mailing clinicians, reviewing past visit information, viewing lab test results, and ordering prescriptions.29,30 As of March 2009, approximately 30% of the 3.2 million Southern California members were registered to use My Health Manager.

Sample and Survey Administration

At each site we oversampled patients who were more frequent users of the PHR because they would be more likely to use the PHR features that our HIT items asked about. At BIDMC we first selected all physicians from BIDMC-owned practices who had at least 175 adult patients who had made at least 1 visit to their physician and had logged onto PatientSite at least once between February 15, 2009 and February 14, 2010. This resulted in a sample of 30 physicians. Eleven were excluded because they did not want their patients surveyed. The remaining 19 physicians included 5 specialists and 14 primary care physicians (PCPs). In the second stage we stratified each physician’s patients based on the number of times the patient had logged into PatientSite. The “high use” stratum comprised patients who were at or above the median number of log-ins for that physician’s panel, and the “low use” stratum comprised patients below the median. We randomly selected 83 high-use patients and 42 low-use patients for a total of 125 patients per physician, for a sample of 2375. Of these, 33 had deactivated PHR accounts, were duplicate records, or were staff members, and 13 were excluded at the request of their physician, leaving a total of 2329 who were surveyed. BIDMC administered an internet survey in May and June, 2010. An electronic reminder was sent to non-responders after 2 weeks and a second reminder was sent to non-responders 2 weeks after that.

At GHC we selected the 9 GHC-owned clinics in western Washington State with the most racially/ethnically diverse patients. A random sample of 20 physicians was selected. Adult patients of these physicians were eligible for the study if they had had a visit with their physician between February 1, 2009 and November 30, 2009, and had used MyGroupHealth at least twice in the past 12 months. Patients were excluded if they were currently involved in another GHC study, or if they had a diagnosis of dementia or psychosis. Similar to procedures at BIDMC, 83 high-use patients and 42 low-use patients were randomly selected from each physician for a total of 2500 who were sent surveys. GHC mailed the surveys between January and March, 2010. Two weeks following the initial mailing, non-respondents were mailed a reminder letter. Those not responding three weeks after the initial mailing were mailed a second survey.

At KPSC the study was conducted at two medical centers in San Diego and Woodland Hills, California. These two sites were selected because many (about 30%) of their members are users of My Health Manager. We selected the 30 primary care physicians with the largest numbers of patients using My Health Manager. From each physician’s practice we selected a random sample of 120 adult patients who used My Health Manager, who had made at least one office visit with their doctor between January and July, 2009, and who had sent their physician an e-mail during that period. We surveyed 3,600 members (1,800 from each medical center).

The KPSC survey was an internet survey with a mail survey follow-up. It was conducted between November 2009 and January 2010. After the initial e-mail, participants received a reminder e-mail 12 days later, and a second reminder e-mail 6 days after that. Mail surveys were sent to participants who did not respond to the two e-mails. This allowed us to test the mode most commonly used to administer the survey (mail) with a new mode for CAHPS (internet). Previous studies have found that these two modes yield comparable results.31–34 BIDMC and GHC fielded the same questionnaire, with most items having a 4-point response scale (never, sometimes, usually, always). KPSC has traditionally used CAHPS surveys with a 6-point response scale (never, almost never, sometimes, usually, almost always, always). To make results from this survey comparable to their other surveys, KPSC used the 6-point response scale for most items, including the HIT items. The IRBs at BIDMC, GHC, KPSC, Yale, RAND, and Veterans Affairs approved this study.

Analyses

We considered surveys complete if 50% or more of applicable items were answered. We calculated response rate using the American Association for Public Opinion Research (AAPOR) definition of response rate 1: the number of completed interviews divided by the sum of all interviews (complete and partial), non-interviews (refusals, break-offs, non-contacts, and others), and all cases of unknown eligibility.35 We analyzed data from both completed and partially completed surveys. Item non-response was calculated based on the number of patients for whom the question was appropriate, based on responses to screener questions. Similarly, the percent of “yes” responses is based solely on those who responded to the item. Items can be found online at the CAHPS website, http://www.cahps.ahrq.gov/Surveys-Guidance/Item-Sets/~/media/Files/SurveyDocuments/CG/12%20Month/Get_Surveys/1357a_Adult_Supp_Eng_11.pdf.
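As an illustration, AAPOR response rate 1 can be expressed as a small function. This is a sketch: `aapor_rr1` is a hypothetical helper name, and the counts below are illustrative placeholders (chosen to sum to a 2,329-person sample), not the study's actual case dispositions.

```python
def aapor_rr1(completes, partials, refusals, non_contacts, others,
              unknown_eligibility):
    # RR1: completed interviews divided by all interviews,
    # non-interviews, and cases of unknown eligibility.
    denominator = (completes + partials + refusals + non_contacts
                   + others + unknown_eligibility)
    return completes / denominator

# Illustrative counts only, not the paper's actual dispositions:
rate = aapor_rr1(completes=1115, partials=49, refusals=300,
                 non_contacts=700, others=50, unknown_eligibility=115)
print(f"RR1 = {rate:.0%}")
```

Because partial completes appear in the denominator but not the numerator, RR1 is the most conservative of the AAPOR response rates.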

To assess the appropriate grouping of items we used exploratory factor analysis, first with the CAHPS C&G 1.0 survey items and the HIT items combined. Oblique factor rotations (Promax) were performed. Then we conducted separate factor analyses of the CAHPS C&G 1.0 survey items and the HIT items. The item “Get appointment when using email or website” was not included in the factor analysis because it had a negative physician-level reliability estimate. We examined 3- and 4-factor solutions for both the CAHPS C&G 1.0 items and the HIT items and, based on eigenvalues and patterns of loadings, decided that the response patterns were best described by a total of 6 factors. Separate factor analyses by site yielded generally similar results. We imputed missing data in factor analyses using SAS PROC MI (SAS Version 9.2). We estimated item-scale correlations and the internal consistency reliability (Cronbach’s alpha) of the multi-item composites.36 We used the same imputed value for each missing case. Although this decreases the variance of the variables for which cases are imputed and thus can increase the correlations, it is preferable to estimating correlations using listwise or pairwise deletion. To assess the impact of missing data we analyzed item-scale correlations with and without the use of imputed data. Results were similar, thus we report on the full-sample analyses.
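The two scale statistics used here, Cronbach's alpha and corrected item-scale correlations, can be sketched as follows. This is a minimal NumPy illustration of the standard formulas, not the SAS code the authors used.

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents x k_items) array of scored responses.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_scale_corr(items, i):
    # Correlate item i with the sum of the REMAINING items, which
    # corrects for the item's overlap with the scale total score.
    items = np.asarray(items, dtype=float)
    rest = np.delete(items, i, axis=1).sum(axis=1)
    return np.corrcoef(items[:, i], rest)[0, 1]
```

With perfectly parallel items both statistics equal 1.0; real composites like those in Table 3 fall well below that.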

For each item and composite we estimated physician-level reliability, the corresponding intra-class correlation coefficient, and the number of respondents needed to achieve a reliability of 0.70.37–40 Item-scale correlations with the total score, corrected for item overlap, were computed. In analyses of physician-level reliability, for some items only data from BIDMC and GHC were included because the KPSC questionnaire used a 6-point response scale for those items. We evaluated the associations between the composite scores and overall rating of the doctor by first evaluating bivariate significance between each of the 7 composites and the rating of doctor, and then by fitting a single ordinary least squares multivariable linear regression model in which the independent variables were the 7 composites and the dependent variable was the rating of doctor. We considered P<0.05 to be statistically significant.
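Physician-level reliability of a mean score follows the Spearman-Brown formula, so the number of respondents needed per physician can be solved in closed form. A minimal sketch under that assumption (the published Ns were presumably computed from unrounded ICCs, so the rounded ICCs in Table 4 reproduce them only approximately):

```python
import math

def reliability(icc, n):
    # Spearman-Brown: reliability of a physician's mean score
    # based on n respondents with intraclass correlation icc.
    return n * icc / (1 + (n - 1) * icc)

def n_for_reliability(icc, target=0.70):
    # Invert Spearman-Brown: respondents needed per physician
    # to reach the target reliability.
    return math.ceil(target * (1 - icc) / (icc * (1 - target)))
```

For example, with the e-mail composite's rounded ICC of 0.08, the formula gives 27 respondents, in the neighborhood of the 30 reported in Table 4.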

RESULTS

There were 1164 respondents at BIDMC, 1649 at GHC, and 1930 at KPSC. Of these, 1115, 1631, and 1896, respectively, returned a completed survey. Response rates were 48% at BIDMC, 65% at GHC, and 53% at KPSC. The characteristics of respondents are shown in Table 1. Most respondents were between 35 and 74 years of age (79%–89%) and white non-Hispanic (82%–92%). Chronic health conditions were reported by 81% to 85% of the respondents.

Use of HIT Functions

Over 60% of participants at each site had e-mailed their doctor’s office with a medical question in the last 12 months (Table 2; Q18). At each site 40% or more had received an e-mail reminder about tests or treatments needed (Q21); of those who received such a reminder, about 80% made an appointment for the test or treatment mentioned in the e-mail (Q21a). About half of respondents (44% to 58%) used e-mail or a website to request a prescription refill (Q36).

TABLE 2.

Item Non-response and Endorsement Rates for Screening and Factual Questions

Item Number Item BIDMC (N=1164) GHC (N=1649) KPSC (N=1930)
For each site, the three columns are: Patients question applies to#; % non-response among applicable patients; % Yes among responding patients
q5 Need care right away 1164 2 44 1649 2 54 1930 2 57
q7 Make appt for routine care 1164 2 91 1649 7 79 1930 2 81
q13 Complete med. history form on web (HIT) 1164 2 3 1649 11 25 1930 2 3
q14 Phone Dr office during regular hours 1164 3 44 1649 7 42 1930 3 45
q16 Phone Dr office after regular hours 1164 3 11 1649 7 8 1930 2 8
q18 E-mail Dr office medical question (HIT) 1164 3 61 1649 8 74 1930 2 64
q21 Dr office e-mail treatment reminder (HIT) 1164 3 56 1649 8 48 1930 3 41
q21a Made appt after e-mail reminder (HIT) 666 6 81 864 17 81 828 8 80
q60 Talk Dr regarding health concerns 1164 5 94 1649 3 96 1930 4 96
q30 Dr said >1 choice for treatment 1164 7 73 1649 4 67 1930 4 65
q33 Dr order a tests for you 1164 6 93 1649 3 90 1930 3 95
q36 Use e-mail/web to ask Dr refill Rx (HIT) 1164 5 49 1649 3 58 1930 2 44
q37 Use e-mail/web to ask Dr new Rx (HIT) 1164 6 15 1649 3 21 1930 2 17
q38 Dr use computer during visit (HIT) 1164 7 80 1649 11 95 1930 3 95
q40 Dr use computer to show info (HIT) 945 9 46 1572 13 71 1836 4 59
q47 Look for test results on web (HIT) 834 7 96 1566 11 97 1895 9 97
q52 Look for list of Rx meds on web (HIT) 515 12 81 1192 15 90 450 1 78
q56 Look at visit notes from Dr office (HIT) 163 38 93 1334 2 94 1590 23 96

Notes. BIDMC=Beth Israel Deaconess Medical Center; GHC=Group Health Cooperative; KPSC=Kaiser Permanente Southern California; Appt=appointment; med=medical; Dr=doctor; Rx=prescription; info=information. HIT indicates new health information technology item.

Seven items that had a “don’t know” response option are not included in the table due to the large percentage that either selected that category or did not respond to the item. In addition, one factual question, about how physicians made visit notes available to patients, had four options with “mark one or more” instructions and could not be displayed in this table format. The 6 open-ended questions are also not included in the table.

# Because of skip patterns, not all items are answered by every respondent.

Physician use of computers during the office visit (Q38) was common, ranging from 80% to 95%. For respondents whose physician’s office put laboratory or other test results on a website, over 96% reported looking for those results on the website (Q47). When it was possible to see prescription medications on a website, 78% or more of respondents reported looking at the list on the website in the past 12 months (Q52). For respondents who had summary visit notes available, over 90% looked at them (Q56). Seven items that had a “don’t know” response option are not shown in Table 2 due to the large percentage that either selected that category or did not respond to the item.

Item-scale correlations (Table 3) supported the four hypothesized composites: access to care; doctor communication; office staff; and shared decision making. The item-scale correlations and factor analyses suggested three HIT composites: 1) doctor use of computer; 2) e-mail; and 3) provider’s website. The doctor use of computer items each had item-scale correlations of 0.37, while each of the e-mail composite items had correlations of 0.71. The provider’s website item-scale correlations ranged from 0.50 to 0.60. The three items shown at the bottom of Table 3 did not have interpretable patterns of correlations with the 7 composites and thus were not included in any of the composites. The e-mail composite correlated well with access to care (0.60) and doctor communication (0.55); the provider’s website composite correlated most highly with access to care (0.52) and doctor communication (0.52), while doctor use of computer correlated most highly with doctor communication (0.42) (data not shown).

TABLE 3.

Item-Scale Correlations for Reporting Items, Using Pooled Sample, Imputed Data (n = 4743)

Item Number Item Composites
Access Communication Staff SDM Computer E-mail Website
q6 Get care right away 0.66* 0.43 0.29 0.22 0.19 0.40 0.35
q8 Get appt. for routine care 0.73* 0.43 0.38 0.20 0.21 0.41 0.39
q9 Make appt. for routine care 0.70* 0.45 0.40 0.23 0.23 0.44 0.42
q12 Get appt. when use e-mail/website (HIT) 0.69* 0.35 0.29 0.24 0.18 0.48 0.37
q15 Get info during regular hours 0.57* 0.42 0.37 0.24 0.21 0.52 0.37
q17 Get info after regular hours 0.53* 0.43 0.29 0.18 0.17 0.51 0.45
q22 See Dr within 15 minutes 0.43* 0.32 0.32 0.16 0.16 0.32 0.30
q58 Dr explain 0.49 0.81* 0.32 0.40 0.36 0.48 0.46
q59 Dr listen 0.46 0.85* 0.32 0.43 0.38 0.48 0.44
q61 Dr give easy instructions 0.52 0.80* 0.34 0.42 0.36 0.51 0.47
q62 Dr know med history 0.44 0.71* 0.31 0.38 0.38 0.45 0.44
q63 Dr respect 0.45 0.81* 0.33 0.42 0.34 0.46 0.41
q64 Dr spend enough time 0.48 0.73* 0.36 0.36 0.33 0.42 0.44
q65 Office staff helpful 0.45 0.36 0.74* 0.13 0.22 0.30 0.42
q66 Office staff courteous 0.40 0.37 0.74* 0.15 0.20 0.29 0.39
q31 Dr talk pros/cons of trmt. choices 0.22 0.42 0.16 0.35* 0.21 0.23 0.22
q32 Dr ask your treatment preference 0.25 0.37 0.11 0.35* 0.22 0.21 0.22
q42 Dr computer use helpful to you (HIT) 0.27 0.42 0.23 0.23 0.37* 0.27 0.32
q43 Dr computer use easier to talk (HIT) 0.17 0.28 0.14 0.20 0.37* 0.18 0.21
q19 Get info when e-mailed Dr office (HIT) 0.58 0.48 0.31 0.23 0.23 0.71* 0.40
q20 Quex answered when e-mail Dr office (HIT) 0.54 0.53 0.28 0.26 0.27 0.71* 0.42
q48 Web test results easy to find (HIT) 0.32 0.32 0.29 0.16 0.21 0.32 0.55*
q49 Results on web as soon as needed (HIT) 0.40 0.36 0.34 0.19 0.23 0.34 0.60*
q50 Results on web easy understand (HIT) 0.39 0.38 0.32 0.21 0.26 0.30 0.56*
q57 Visit notes easy to understand (HIT) 0.47 0.53 0.38 0.23 0.27 0.41 0.50*
q13a Dr up-to-date re med hist. 0.43 0.60 0.32 0.33 0.35 0.46 0.42
q34 Office follows up test results 0.38 0.36 0.26 0.22 0.22 0.31 0.29
q53 List Rx meds on web up-to-date (HIT) 0.33 0.37 0.28 0.13 0.26 0.30 0.44

Notes. * Item-scale correlation, corrected for item overlap with the scale total score. HIT indicates new health information technology items. Composites: Access=Access to Care; Communication=Doctor Communication; Staff=Office Staff; SDM=Shared Decision Making; Computer=Doctor Use of Computer; E-mail=Questions Answered by E-mail; Website=Provider’s Website. Dr=doctor; appt=appointment; info=information; trmt=treatment; quex=questions; Rx=prescription.

Reliability of Composites and Items

Coefficient alpha was 0.83 for the e-mail composite, 0.75 for provider’s website, and 0.54 for doctor use of computer. Alphas for the other CAHPS composites were 0.85 for access to care, 0.92 for doctor communication, 0.85 for office staff, and 0.47 for shared decision making (data not shown).

Twelve individual items and five composites had physician-level reliability of 0.70 or greater (Table 4). The sample size needed to achieve reliability of 0.70 ranged from 289 responses for the item about prescription medication list being up-to-date on the website, to 15 responses for seeing the doctor within 15 minutes of appointment time. For composites, required sample sizes ranged from 162 for doctor use of computer to 11 for doctor communication. Two HIT composites, e-mail and provider’s website, achieved reliability of 0.70 or greater. They required 30 and 47 responses, respectively, to achieve that level of reliability. We considered Q53 for the provider’s website composite, but it had poor physician-level reliability (N=289 respondents to achieve an R of 0.70), and as shown in Table 3, a lower correlation (0.44) with the other items in the scale.

TABLE 4.

Physician Level Reliability (n=69 physicians)

Items and Composites Average n per doctor Reliability ICC N needed for R=0.7
Access to Care
q6* Get care right away 37 0.55 0.03 69
q8* Get appt for routine care 58 0.63 0.04 80
q9* Make appt for routine care 57 0.67 0.05 64
q12* Get appt when use e-mail/website (HIT) 23
q15* Get info during regular hours 29 0.80 0.09 17
q22* See Dr within 15 minutes 70 0.91 0.22 15
Doctor Communication
q58 Dr explain 66 0.79 0.05 42
q59 Dr listen 66 0.78 0.05 44
q61 Dr give easy instructions 63 0.75 0.04 49
q62 Dr knows medical history 66 0.80 0.05 38
q63 Dr respect 66 0.78 0.05 43
q64 Dr spend enough time 66 0.78 0.05 43
Office Staff
q65 Office staff helpful 64 0.78 0.05 43
q66 Office staff courteous 64 0.77 0.05 45
Shared Decision Making
q31 Dr talk pros/cons of trmt. choices 44 0.32 0.01 218
q32 Dr ask which trmt. you thought best 43 0.36 0.01 177
Doctor Use of Computer
q42 Dr use of computer helpful to you (HIT) 58 0.41 0.01 194
q43 Dr use of computer easier for you to talk (HIT) 58 0.43 0.01 181
E-mail
q19* Get info when e-mail Dr office (HIT) 46 0.82 0.11 23
q20* Quex answered when e-mail Dr office (HIT) 46 0.67 0.05 54
Provider’s Website
q48* Web test results easy to find (HIT) 53 0.60 0.04 83
q49* Results on web as soon as needed (HIT) 53 0.55 0.03 103
q50* Results on web easy to understand (HIT) 53 0.63 0.04 73
q57* Visit notes easy to understand (HIT) 34 0.38 0.02 131
COMPOSITES
Access to Care* 71 0.85 0.13 30
Doctor Communication 67 0.93 0.17 11
Office Staff 64 0.80 0.06 37
Shared Decision Making 44 0.40 0.01 153
Doctor Use of Computer (HIT-C) 58 0.46 0.01 162
E-mail* (HIT-C) 46 0.78 0.08 30
Provider’s Website* (HIT-C) 59 0.75 0.07 47
Items not included in composites
q13a* Dr up-to-date about medical history 13 0.33 0.01 59
q34* Dr office followed up with test results 62 0.82 0.11 31
q53* List of Rx meds on website up-to-date (HIT) 33 0.21 0.01 289

Notes. * Only responses from BIDMC and GHC were included (number of doctors=39) because KPSC used different response options for these items and composites. The physician-level reliability estimate for q12 was negative, so its reliability, ICC, and N are not shown. q17 was excluded because of the low number of respondents.

ICC=intra class correlation; R=reliability; appt.=appointment; trmt.=treatment; Rx=prescription; meds=medications.

HIT indicates new health information technology item. HIT-C indicates new health information technology composite.

Association between Composites and Global Rating of Doctor

The doctor communication composite was the strongest predictor of rating of doctor (β=0.56, p<0.001) (Table 5). Two of the HIT composites, doctor use of computer (β=0.08, p<0.001) and provider’s website (β=0.05, p=0.02), were also statistically significant independent predictors of the overall rating of the doctor.

TABLE 5.

Overall Rating of Doctor Regressed on Composites; Bivariate and Multivariable Results Using Standardized Regression Coefficients.

Composite Bivariate Results Multivariable Results

Estimate P value Estimate P value
Access to care 0.328 <0.001 0.045 0.054
Doctor Communication 0.641 <0.001 0.557 <0.0001
Office Staff 0.226 <0.001 0.032 0.129
Shared Decision Making 0.299 <0.001 0.016 0.442
Doctor Use of Computer (HIT-C) 0.244 <0.001 0.081 <0.0001
E-mail (HIT-C) 0.352 <0.001 0.034 0.134
Provider’s Website (HIT-C) 0.288 <0.001 0.047 0.023

R2 = 0.43

HIT-C indicates new health information technology composite.
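Standardized (beta) coefficients like those in Table 5 come from an OLS fit in which every predictor and the outcome are z-scored. A minimal sketch of that computation, not the authors' actual code:

```python
import numpy as np

def standardized_ols(X, y):
    # z-score every predictor column and the outcome, then fit OLS;
    # the resulting slopes are standardized (beta) weights.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    design = np.column_stack([np.ones(len(yz)), Xz])
    beta, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return beta[1:]  # drop the intercept, which is ~0 after z-scoring
```

Because all variables share a common (unit-variance) scale, the betas can be compared directly, which is why doctor communication (0.557) stands out as the strongest predictor.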

DISCUSSION

The CAHPS C&G 1.0 survey is used to measure patient experiences with ambulatory care, but it does not include HIT questions. In this study we developed HIT items and assessed their psychometric properties. The resulting items, and the three composites they form, assess patient experiences when their doctor (or the doctor’s office) uses HIT, as well as patients’ direct interactions with HIT. Our findings are timely given the growing interest in using PHRs to achieve improved health outcomes.28–30,41 Recent randomized trials have shown that interventions that include patient-clinician secure messaging, a feature of most PHRs, are associated with improved chronic disease care in diabetes,42 hypertension,43 and depression.44 The doctor use of computer composite did not have good physician-level reliability, perhaps because the phrase “doctor use of computer,” used in the questions, is too broad. Patients observing their physician entering data into the computer (e.g., history of a problem) may find it tedious and unhelpful, whereas patients whose doctors show them x-rays or other images on the computer may perceive the use as beneficial. Additional efforts to refine these items are needed.

Seven items had a “don’t know” response option. These items can provide information about a feature that may not be well publicized or understood, or that is rarely used. The “ability to make appointments via e-mail or a website” was one such item. At one site, 45% responded “don’t know,” possibly indicating patients had never tried to make an appointment online. Some items may be hard for patients to answer because physicians may complete tasks on the computer without telling the patient, e.g. when doctors look up test results on the computer during the patient encounter. To keep surveys brief, it may be appropriate to drop items in which many patients are unlikely to know whether a feature is available or would have a hard time judging if it is being used.

This study has several potential limitations. The study sites had relatively advanced HIT systems, so our findings may not apply to organizations with less advanced systems. Respondents were included because they used a PHR; in addition, they were well educated and predominantly white, so our findings may not generalize to patients with different characteristics. Our response rates were comparable to those of other patient surveys, but non-respondents may have less experience using PHRs and less interest in HIT; such patients may rate their experiences differently than patients with more HIT experience. Our analyses are based on pooled internet and paper questionnaire responses, which may have introduced biases, although studies have found no statistically significant differences between internet and mail administration in scale scores.31,32,45,46

The response rates at the three sites ranged from 48% to 65%. We considered a survey complete if 50% or more of the items were answered, so response rates for many individual questions were lower. Thus, the data presented may not generalize to all patients, although these response rates are comparable to those of other CAHPS survey administrations.38 Furthermore, any bias due to non-response is more likely to affect the distribution of responses than measures of association, a major focus of these analyses.

Provider organizations can use these items (see guidance at: http://www.cahps.ahrq.gov/Surveys-Guidance/Item-Sets/~/media/Files/SurveyDocuments/CG/12%20Month/Get_Surveys/1357a_Adult_Supp_Eng_11.pdf) to evaluate their PHRs. Similarly, they can assess whether their clinicians and staff are using e-mail in ways patients perceive as valuable. The survey can also be used to identify areas for quality improvement. Interest in patient experiences with HIT is likely to grow as more providers adopt EMRs and PHRs. Interest in patient-centered medical homes (PCMHs) is also likely to increase EMR and PHR adoption, because HIT is a core element of PCMHs.5 The new items developed in this study may help organizations evaluate whether their adoption of HIT improves the patient experience.

Supplementary Material

1

Acknowledgments

We thank Margaret Jeddry for sampling assistance, Chong Kim and Chris Palma for their comments on an earlier draft, and Linda Wehnes for her contributions to survey administration.

Supported by cooperative agreements from AHRQ (#U18HS016978 and U18 HS016980). McInnes was also supported by VA Career Development Award (CDA #09-016) and VA QUERI Programs, and Hays by grants from NIA (P30AG021684) and NCMHD (2P20MD000182).

Contributor Information

D. Keith McInnes, VA QUERI Program and Center for Health Quality, Outcomes, and Economic Research, ENRM VA Medical Center, Bedford, MA; Department of Health Care Policy, Boston University School of Public Health, Boston, MA.

Julie A. Brown, RAND Corporation, Santa Monica, CA.

Ron D. Hays, University of California Los Angeles, Department of Medicine; RAND Corporation, Santa Monica, CA.

Patricia Gallagher, Center for Survey Research, University of Massachusetts, Boston, MA.

James D. Ralston, Group Health Research Institute, Seattle, WA.

Mildred Hugh, Regulatory Relations and Performance Assessment, Southern California Permanente Medical Group, Kaiser Permanente, Pasadena, CA.

Michael Kanter, Southern California Permanente Medical Group, Kaiser Permanente, Pasadena, CA.

Carl A. Serrato, Kaiser Foundation Health Plan, Inc., Oakland, CA.

Carol Cosenza, Center for Survey Research, University of Massachusetts, Boston, MA.

John Halamka, Beth Israel Deaconess Medical Center, Boston, MA.

Lin Ding, Department of Health Care Policy, Harvard Medical School, Boston, MA.

Paul D. Cleary, Yale School of Public Health; Yale School of Medicine, New Haven, CT.

References

  • 1. Seidman J, Eytan T. Helping Patients Plug In: Lessons in the Adoption of Online Consumer Tools. Oakland, CA: California HealthCare Foundation; 2008.
  • 2. Simon SR, Soran CS, Kaushal R, et al. Physicians’ use of key functions in electronic health records from 2005 to 2007: a statewide survey. J Am Med Inform Assoc. 2009 Jul-Aug;16(4):465–470. doi: 10.1197/jamia.M3081.
  • 3. DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in ambulatory care--a national survey of physicians. N Engl J Med. 2008 Jul 3;359(1):50–60. doi: 10.1056/NEJMsa0802005.
  • 4. Blumenthal D. Launching HITECH. N Engl J Med. 2010 Feb 4;362(5):382–385. doi: 10.1056/NEJMp0912825.
  • 5. Landon BE, Gill JM, Antonelli RC, Rich EC. Prospects for rebuilding primary care using the patient-centered medical home. Health Aff (Millwood). 2010 May;29(5):827–834. doi: 10.1377/hlthaff.2010.0016.
  • 6. Tang PC, Lee TH. Your doctor’s office or the Internet? Two paths to personal health records. N Engl J Med. 2009 Mar 26;360(13):1276–1278. doi: 10.1056/NEJMp0810264.
  • 7. Ralston JD, Carrell D, Reid R, Anderson M, Moran M, Hereford J. Patient web services integrated with a shared medical record: patient use and satisfaction. J Am Med Inform Assoc. 2007 Nov-Dec;14(6):798–806. doi: 10.1197/jamia.M2302.
  • 8. Ralston JD, Hereford J, Carrell D. Use and satisfaction of a patient Web portal with a shared medical record between patients and providers. AMIA Annu Symp Proc. 2006:1070.
  • 9. Wagner PJ, Howard SM, Bentley DR, Seol YH, Sodomka P. Incorporating patient perspectives into the personal health record: implications for care and caring. Perspect Health Inf Manag. 2010;7:1e.
  • 10. Hassol A, Walker JM, Kidder D, et al. Patient experiences and attitudes about access to a patient electronic health care record and linked web messaging. J Am Med Inform Assoc. 2004 Nov-Dec;11(6):505–513. doi: 10.1197/jamia.M1593.
  • 11. Walker J, Ahern DK, Le LX, Delbanco T. Insights for internists: “I want the computer to know who I am”. J Gen Intern Med. 2009 Jun;24(6):727–732. doi: 10.1007/s11606-009-0973-1.
  • 12. Fisher B, Bhavnani V, Winfield M. How patients use access to their full health records: a qualitative study of patients in general practice. J R Soc Med. 2009 Dec;102(12):539–544. doi: 10.1258/jrsm.2009.090328.
  • 13. Wald JS, Grant RW, Schnipper JL, et al. Survey analysis of patient experience using a practice-linked PHR for type 2 diabetes mellitus. AMIA Annu Symp Proc. 2009;2009:678–682.
  • 14. Lafky DB, Horan TA. Health Status and Prospective PHR Use. AMIA Annu Symp Proc. 2008:1016.
  • 15. Rouf E, Whittle J, Lu N, Schwartz MD. Computers in the exam room: differences in physician-patient interaction may be due to physician experience. J Gen Intern Med. 2007 Jan;22(1):43–48. doi: 10.1007/s11606-007-0112-9.
  • 16. Nagy VT, Kanter MH. Implementing the electronic medical record in the exam room: the effect on physician-patient communication and patient satisfaction. Perm J. 2007 Spring;11(2):21–24. doi: 10.7812/tpp/06-118.
  • 17. Agency for Healthcare Research and Quality. CAHPS Survey Products. 2010. https://www.cahps.ahrq.gov/content/products/Prod_Intro.asp?p=102&s=2. Accessed May 10, 2011.
  • 18. Keller S, O’Malley AJ, Hays RD, et al. Methods used to streamline the CAHPS Hospital Survey. Health Serv Res. 2005 Dec;40(6 Pt 2):2057–2077. doi: 10.1111/j.1475-6773.2005.00478.x.
  • 19. Goldstein E, Cleary PD, Langwell KM, Zaslavsky AM, Heller A. Medicare Managed Care CAHPS: A Tool for Performance Improvement. Health Care Financ Rev. 2001;22(3):101–107.
  • 20. Martino SC, Elliott MN, Cleary PD, et al. Psychometric properties of an instrument to assess Medicare beneficiaries’ prescription drug plan experiences. Health Care Financ Rev. 2009 Spring;30(3):41–53.
  • 21. Weidmer-Ocampo B, Johansson P, Dalpoas D, Wharton D, Darby C, Hays RD. Adapting CAHPS for an American Indian population. J Health Care Poor Underserved. 2009 Aug;20(3):695–712. doi: 10.1353/hpu.0.0166.
  • 22. Goldstein E, Farquhar M, Crofton C, Darby C, Garfinkel S. Measuring hospital care from the patients’ perspective: an overview of the CAHPS Hospital Survey development process. Health Serv Res. 2005 Dec;40(6 Pt 2):1977–1995. doi: 10.1111/j.1475-6773.2005.00477.x.
  • 23. Levine RE, Fowler FJ Jr, Brown JA. Role of cognitive testing in the development of the CAHPS Hospital Survey. Health Serv Res. 2005 Dec;40(6 Pt 2):2037–2056. doi: 10.1111/j.1475-6773.2005.00472.x.
  • 24. Katz SJ, Nissan N, Moyer CA. Crossing the digital divide: evaluating online communication between patients and their providers. Am J Manag Care. 2004 Sep;10(9):593–598.
  • 25. Fongwa MN, Setodji CM, Paz SH, Morales LS, Steers NW, Hays RD. Readability and missing data rates in CAHPS 2.0 Medicare survey in African American and White Medicare respondents. Health Outcomes Research in Medicine. 2010;1(1):e39–e49. doi: 10.1016/j.ehrm.2010.03.001.
  • 26. Halamka JD, Mandl KD, Tang PC. Early experiences with personal health records. J Am Med Inform Assoc. 2008 Jan-Feb;15(1):1–7. doi: 10.1197/jamia.M2562.
  • 27. Weingart SN, Rind D, Tofias Z, Sands DZ. Who uses the patient internet portal? The PatientSite experience. J Am Med Inform Assoc. 2006 Jan-Feb;13(1):91–95. doi: 10.1197/jamia.M1833.
  • 28. Ralston JD, Coleman K, Reid RJ, Handley MR, Larson EB. Patient experience should be part of meaningful-use criteria. Health Aff (Millwood). 2010 Apr;29(4):607–613. doi: 10.1377/hlthaff.2010.0113.
  • 29. Silvestre AL, Sue VM, Allen JY. If you build it, will they come? The Kaiser Permanente model of online health care. Health Aff (Millwood). 2009 Mar-Apr;28(2):334–344. doi: 10.1377/hlthaff.28.2.334.
  • 30. Zhou YY, Kanter MH, Wang JJ, Garrido T. Improved quality at Kaiser Permanente through e-mail between physicians and patients. Health Aff (Millwood). 2010 Jul;29(7):1370–1375. doi: 10.1377/hlthaff.2010.0048.
  • 31. Vallejo MA, Mananes G, Isabel Comeche MA, Diaz MI. Comparison between administration via Internet and paper-and-pencil administration of two clinical instruments: SCL-90-R and GHQ-28. J Behav Ther Exp Psychiatry. 2008 Sep;39(3):201–208. doi: 10.1016/j.jbtep.2007.04.001.
  • 32. Basnov M, Kongsved SM, Bech P, Hjollund NH. Reliability of short form-36 in an Internet- and a pen-and-paper version. Inform Health Soc Care. 2009 Jan;34(1):53–58. doi: 10.1080/17538150902779527.
  • 33. Richter JG, Becker A, Koch T, et al. Self-assessments of patients via Tablet PC in routine patient care: comparison with standardised paper questionnaires. Ann Rheum Dis. 2008 Dec;67(12):1739–1741. doi: 10.1136/ard.2008.090209.
  • 34. Spek V, Nyklicek I, Cuijpers P, Pop V. Internet administration of the Edinburgh Depression Scale. J Affect Disord. 2008 Mar;106(3):301–305. doi: 10.1016/j.jad.2007.07.003.
  • 35. American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 6th ed. AAPOR; 2009.
  • 36. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
  • 37. Hargraves JL, Hays RD, Cleary PD. Psychometric properties of the Consumer Assessment of Health Plans Study (CAHPS) 2.0 adult core survey. Health Serv Res. 2003 Dec;38(6 Pt 1):1509–1527. doi: 10.1111/j.1475-6773.2003.00190.x.
  • 38. Solomon LS, Hays RD, Zaslavsky AM, Ding L, Cleary PD. Psychometric properties of a group-level Consumer Assessment of Health Plans Study (CAHPS) instrument. Med Care. 2005 Jan;43(1):53–60.
  • 39. Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York: McGraw-Hill; 1994.
  • 40. Shortell SM, Rousseau DM, Gillies RR, Devers KJ, Simons TL. Organizational assessment in intensive care units (ICUs): construct development, reliability, and validity of the ICU nurse-physician questionnaire. Med Care. 1991 Aug;29(8):709–726. doi: 10.1097/00005650-199108000-00004.
  • 41. Chen C, Garrido T, Chock D, Okawa G, Liang L. The Kaiser Permanente Electronic Health Record: transforming and streamlining modalities of care. Health Aff (Millwood). 2009 Mar-Apr;28(2):323–333. doi: 10.1377/hlthaff.28.2.323.
  • 42. Ralston JD, Hirsch IB, Hoath J, Mullen M, Cheadle A, Goldberg HI. Web-based collaborative care for type 2 diabetes: a pilot randomized trial. Diabetes Care. 2009 Feb;32(2):234–239. doi: 10.2337/dc08-1220.
  • 43. Green BB, Cook AJ, Ralston JD, et al. Effectiveness of home blood pressure monitoring, Web communication, and pharmacist care on hypertension control: a randomized controlled trial. JAMA. 2008 Jun 25;299(24):2857–2867. doi: 10.1001/jama.299.24.2857.
  • 44. Simon GE, Ralston JD, Savarino J, Pabiniak C, Wentzel C, Operskalski BH. Randomized Trial of Depression Follow-Up Care by Online Messaging. J Gen Intern Med. 2011 Mar 8. doi: 10.1007/s11606-011-1679-8.
  • 45. Couper M. Designing Effective Web Surveys. Cambridge, UK: Cambridge University Press; 2008.
  • 46. Brown JA. Effect of a post-paid incentive in a patient experience of care survey. American Association for Public Opinion Research 66th Annual Conference; May 12–15, 2011; Phoenix, AZ.
