Author manuscript; available in PMC 2013 Jun 3.
Published in final edited form as: J Ambul Care Manage. 2012 Apr-Jun;35(2):138–148. doi: 10.1097/JAC.0b013e31824a58e9

Patient experience of care in the safety net: Current efforts and challenges

Katharine E Zuckerman 1, Alicia Wong 2, Stephanie Teleki 3, Susan Edgman-Levitan 2
PMCID: PMC3670776  NIHMSID: NIHMS465433  PMID: 22415288

Abstract

Measuring the patient’s experience of care (PEC) fosters the delivery of patient-centered services and increases health care quality. Most pay-for-performance and public reporting programs focus on care provided to insured populations, excluding the uninsured. Using qualitative research methods, we interviewed leaders of California safety-net practices to assess how they measure PEC and the measurement barriers they encounter. Most had unmet needs for assistance with data collection and quality improvement strategies for their patient population. Tailored measurement and quality improvement resources, coupled with policy mandates to give all patients a voice, would improve the quality of patient-centered care in safety-net organizations.

Keywords: Safety-net, primary care, patient experience of care, quality of care, quality improvement

Introduction

Providing patient-centered care has emerged as a central element in the effort to improve the quality of healthcare in the United States (US). In its 2001 report, Crossing the Quality Chasm, the Institute of Medicine defined patient-centered care, or care which encompasses “qualities of compassion, empathy, and responsiveness to the needs, values, and expressed preferences of the individual patient,” as one of its six aims to improve healthcare quality. (Committee on Quality Health Care in America, Institute of Medicine, 2001) Gathering data on patient experience of care (PEC) is an important way to assess whether care is patient-centered. Collecting PEC feedback has been cited by the National Committee for Quality Assurance and other quality measurement organizations as a centerpiece of the Patient-Centered Medical Home, a “model of care that strengthens the clinician-patient relationship by replacing episodic care with coordinated care and a long-term healing relationship.” (National Committee for Quality Assurance, 2011; Patient Centered Primary Care Collaborative)

PEC feedback is most commonly gathered in consumer surveys that focus on aspects of the care experience that are most important to patients and their families, such as access to care, communication with providers, and helpfulness of office staff. PEC surveys, as opposed to patient satisfaction surveys, are designed to allow patients to report on the presence or absence of specific aspects of the care process or clinician and staff behaviors, and they are typically less biased and more actionable than assessments of satisfaction. (Cleary, Edgman-Levitan, McMullen, & Delbanco, 1992; Zapka et al., 1995) These data are an important source of information for the development of quality improvement (QI) initiatives.

Significant resources have been invested to design and implement surveys to evaluate PEC in various healthcare settings. For example, the federal Agency for Healthcare Research and Quality and the Centers for Medicare and Medicaid Services (CMS) support the Consumer Assessment of Healthcare Providers and Systems (CAHPS) consortium surveys, which are some of the most widely used, standardized PEC instruments currently available to the public. Health care providers and organizations use CAHPS surveys to ask consumers and patients to report on and evaluate their experiences with health care. CAHPS surveys address topics such as the communication skills of providers and the accessibility of services. (CAHPS: Consumer Assessment of Healthcare Providers and Systems homepage, 2009; Giordano, Elliott, Goldstein, Lehrman, & Spencer, 2010; Jha, Orav, Zheng, & Epstein, 2008) Similarly, in 1999 the Health Resources and Services Administration used existing patient surveys to inform the development of a valid and reliable patient survey for patients in health centers. (Health Resources and Services Administration, U.S. Department of Health and Human Services) In the private sector, companies such as Press Ganey, Quality Data Management, and Avatar have developed patient satisfaction surveys that are widely used. (Avatar International LLC; Press Ganey; Quality Data Management)

To date, however, most PEC data collection has been funded by health plans, insurers, and CMS, and therefore focuses on the Medicare and commercially insured populations. As a result, very little is known about the experience of care among patients who are served by the “safety net,” the system that delivers “a significant level of health care to uninsured, Medicaid, or other vulnerable patients.” (Institute of Medicine, 2000) Although Federally Qualified Health Centers (FQHCs), which provide care to safety-net patients, have a government mandate to collect patient feedback, (Department of Health and Human Services, 2004) they receive no support for this collection in the form of promotion of standardized instruments, funding, or uniform reporting standards. In the inpatient setting, CAHPS surveys are required by CMS for hospital care (using the CAHPS Hospital Survey Module). Additionally, some state Medicaid programs use the CAHPS Clinician & Group Survey to monitor PEC in ambulatory settings. However, there are no national mandates or guidelines for the use of CAHPS among ambulatory providers or safety-net settings, and to date no CAHPS modules have been specifically developed for safety-net populations. (CAHPS: Consumer Assessment of Healthcare Providers and Systems homepage, 2009; Giordano et al., 2010; Jha et al., 2008)

To address this gap in measurement of PEC, we conducted a qualitative study of safety-net health organizations in California. With over six million uninsured or publicly insured individuals of varying socioeconomic, cultural, racial, and ethnic backgrounds, the state of California is home to a significant number of safety-net provider organizations.(Fronstin, 2008) This type of evaluation provided important insights into the needs and challenges that safety-net providers face in collecting and using PEC data. Our primary study questions were: (1) How do safety-net organizations measure PEC? (2) How do they use PEC data to improve healthcare quality? (3) What specific barriers exist to the collection and use of PEC in the safety net?

Methods

Sample and Data Collection

We used a qualitative research design based on semi-structured interviews. This research strategy is useful for gaining a deeper understanding of the information and for generating hypotheses in areas where literature is limited.(Miles & Huberman, 1994; C. Pope & Mays, 1995; C. Pope, Ziebland, & Mays, 2000)

Semi-Structured Interviews

Researchers from the John D. Stoeckle Center for Primary Care Innovation at Massachusetts General Hospital, and from RAND, a non-profit research center based in Santa Monica, California, conducted interviews with 34 representatives of 27 safety-net organizations in California. These individuals were key decision makers in collecting and evaluating PEC data in their organizations. This project was approved by the Institutional Review Boards of Partners HealthCare and RAND.

Advisory Panel

We created an advisory panel of healthcare leaders in the California safety net to help identify the interview sample and guide the research agenda. The advisory panel consisted of safety-net organization executives and clinical leaders representing the California HealthCare Foundation, the California Health Care Safety Net Institute, and the California Primary Care Association. The advisory panel met in person or via telephone three times during the study period.

Sampling procedures

As no comprehensive list of safety-net clinics in California was available, we recruited interview subjects from lists of clinics provided by the safety-net organizations that served on our advisory panel. From this list, we purposefully selected a sample that maximized variation in clinic type, size, geographic location, and characteristics of the patient population, including age, gender, race, and ethnicity. Purposeful sampling is a standard qualitative technique used when goals of the research are to achieve representativeness while accurately capturing the heterogeneity of a population of interest.(Maxwell, 2005)

We called or emailed potential interviewees and asked them to participate in the study or to recommend another staff member at their organization who was knowledgeable about PEC data collection and related QI activities. If an organization had more than one appropriate individual willing to participate, we conducted a group interview. We made multiple contact attempts to minimize non-response bias.

Development of Interview Guide

The advisory panel helped construct the interview guide, which contained the following domains: (1) current PEC data collection and analysis practices (2) PEC-related QI initiatives and resources, (3) barriers to PEC data collection, and (4) familiarity with the CAHPS surveys, widely-distributed, public-domain surveys assessing PEC. The interview guide was largely based on open-ended questions designed to establish each organization’s data collection practices and to elicit barriers to data collection. Copies of the interview guide are available from the authors upon request.

Interview process

Subjects were called to complete the interview at a previously agreed-upon time; up to three rescheduling attempts were made if a subject was not available. Verbal consent was obtained. Each interview lasted approximately one hour, and interviewees received a $50 Amazon.com gift card for their participation. All interviews were conducted between September 2008 and March 2009.

Analysis of Interview Transcripts

Interview transcripts were analyzed using Framework Analysis, an analytic strategy used for applied or policy-relevant qualitative research, in which preexisting ideas inform the initial analysis. (C. Pope et al., 2000) To develop our initial coding scheme, we used a review of the medical literature to develop a framework of domains and themes, which we reviewed with each member of the research team and members of our advisory panel. We then tested the framework by having two team members compare coding for two transcripts using NVivo 8 (QSR International, Victoria, Australia). Differences were resolved through discussion with a third researcher. Thereafter, two members of the research team independently coded each interview transcript for major themes; any coding discrepancies were resolved by mutual discussion and/or consultation with a third researcher. During coding of the transcripts, emerging themes were added to the framework until we reached thematic saturation. After coding was complete, the coding themes were reviewed. We noted major patterns across transcripts, as well as significant variation between transcripts. Finally, the most frequent domains and sub-domains were summarized in tabular form.

Analysis of PEC Survey Instruments

During the interviews, we requested copies of the patient experience survey instruments from participating organizations. Survey sources (Table 2) and content (Figure 2) were coded using QSR NVivo 8. All English surveys (n=21) were analyzed for reading level using the Flesch-Kincaid Grade Level test.
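The Flesch-Kincaid Grade Level used in this readability analysis combines average sentence length with average syllables per word: grade = 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. As an illustrative sketch only (not the tool used in the study), a minimal Python implementation with a rough vowel-group syllable heuristic might look like:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, dropping one for a trailing silent 'e'.

    Production readability tools use dictionary-based syllable counts;
    this approximation is for illustration only.
    """
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short, monosyllabic sentences score at or below a first-grade level, while dense polysyllabic prose scores several grades higher; because the syllable counter is heuristic, results will differ slightly from commercial readability software.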

Table 2.

Characteristics of Patient Experience of Care Surveys

Characteristic (N = 24)

Source of Survey Instrument
    Homegrown 12
    Adapted instrument from other source 8
    Proprietary vendor survey 1
    Respondent did not know source 3

Survey Data Collection Mode
    In-office
        Paper survey 16
        Computer kiosk survey 2
        Individual interviews 1
    Telephone survey 2
    Mailed survey 1
    Multiple modes 2

Survey Frequency
    Continuously 2
    More than once per year 2
    Annually 10
    Less than annually 8
    Irregularly or unknown 2

Survey Languages
    English only 2
    English and Spanish 18
    English, Spanish, and other languages (a) 4

(a) Other languages include Vietnamese, Chinese, Khmer, Urdu, Russian, and Punjabi.

Figure 2.

Survey Domains Represented by Items in Collected Patient Experience of Care Survey Instruments

Results

Sample

Interviewees represented healthcare organizations from across Northern, Central, and Southern California; about one-third of the respondents were from rural or frontier Medical Service Study Areas (Figure 1).(Office of Statewide Health Planning and Development, U.S. Census Bureau, ; Rural Health Policy Council, 2003) Twenty-two of the respondents represented safety-net provider organizations, and two represented safety-net healthcare organization coalitions. Since we intentionally selected for safety-net providers, slightly over half of the interviewees represented FQHCs, and most of the remaining respondents worked in clinics affiliated with public hospitals (Table 1).

Figure 1.

Geographical Distribution of Participating Organizations and their Satellite Sites by Urban, Rural, and Frontier Medical Service Study Areas in California

SOURCE: Adapted from California Medical Service Study Areas, Census 2000 Configuration, compiled by the California Office of Statewide Health Planning and Development (OSHPD), Rural Health Policy Council, October 2003.

NOTES: Map includes all satellite sites for all study organizations. Frontier MSSA has a population density of less than 11 persons per square mile; Rural has a population of less than 250 persons per square mile and contains no area with a population in excess of 50,000; Urban MSSA has a population of 75,000 to 125,000 and may not be smaller than five square miles in area. (Medical Service Study Areas, California Office of Statewide Health Planning and Development, April 2005).

Table 1.

Characteristics of Studied Healthcare Organizations

Healthcare System Characteristic (N = 24)

Sites
    Single-site 11
    Multi-site 13

Services
    Adult medicine only 1
    Adult and pediatric medicine 22
    Pediatric medicine only 1

Organization Type
    Federally Qualified Health Center [FQHC] 15
    FQHC Look-Alike (b) 2
    Public Hospital Clinic 5
    None of these 2

Academic Medical Center Affiliation
    Yes 9
    No 15

(a) According to 2000 US Census data.

(b) Per HRSA definition: health centers that do not receive funding under Section 330 but operate and provide services similar to grant-funded programs.

PEC Data Collection, Analysis, and Use in the Safety Net

Measurement of the Patient Experience of Care

All organizations measured PEC in some fashion, and all used surveys at least some of the time. Other sources of patient feedback included focus groups, patient and community advisory boards, and comment cards.

Source of Survey Instrument

One-half of the organizations used “homegrown” surveys, that is, surveys that were not adapted from or based on existing, validated instruments. Eight organizations modified an existing survey instrument to meet their PEC data needs. The most commonly adapted instrument was the Bureau of Primary Health Care survey created by the Health Resources and Services Administration, which four organizations used. One organization used a proprietary vendor instrument. Three organizations did not know the source of their instrument (Table 2). Interviewees from public hospital systems were the only subjects familiar with the publicly available CAHPS family of surveys, and none knew that CAHPS surveys could be used in an outpatient setting.

Survey Characteristics

Surveys were generally 1 to 2 pages long and contained a median of 19 items (range, 7 to 78) scored on a Likert scale or with “Yes/No” response options. Approximately one-third of the surveys also collected information on patient demographics and/or solicited open-ended responses. The most common survey domains were communication, access to services, office staff, and satisfaction/willingness to recommend (Figure 2). About three-quarters of the organizations in the interview sample reported offering surveys in English and Spanish; two clinics offered the survey in Asian languages such as Chinese, Korean, and Khmer. Patient literacy was an important consideration in survey design. Most clinics made modifications for patients with low literacy levels by writing their surveys at a low reading level, adding pictograms, or supplying interviewers in the office to conduct the survey orally with patients who could not read (Table 2). Surveys were written at a median 5th-grade reading level (range, 3rd to 9th grade). Interviewees reported that they periodically revise their survey items to target areas of service and patient care where they would like feedback.

Survey Mode

Most organizations used self-administered survey instruments to collect data from patients. These surveys were most often administered in the waiting room just prior to or immediately after the visit (Table 2). Three organizations reported using a computer kiosk system for in-clinic data collection, and one organization used a vendor to conduct its surveys (Table 2).

Data Reporting

Nearly all organizations analyzed their own data and generated reports without the use of an external vendor or consultant. Organizations either allocated staff to perform data entry and generate reports or used volunteers to collate survey results and enter data. One organization used online survey applications to perform these functions, as it posed less of a burden on staff time. Many did not use formal statistical methods to compare results.

Data Dissemination

Organizations regularly shared survey results with their senior management and clinic staff. Some also used survey results to justify additional funding for operational improvements or salary support for additional healthcare providers; several used individual provider results as part of performance evaluations. The majority of organizations that were part of healthcare systems or coalitions shared their data with other member organizations. Of the organizations that did not share their data, most expressed interest in doing so and in learning from top performers.

Survey-Related Quality Improvement Efforts

QI initiatives most commonly focused on improving access to care and decreasing wait times in the office. Other areas targeted for improvement included interpreter services, dental health services, customer service, cultural competency, and clinic amenities such as parking and cleanliness. Organizations often used PEC data to evaluate the effectiveness of QI initiatives by collecting data before and after implementation and then comparing results. However, several interviewees questioned the validity of their PEC data. This skepticism arose from concerns about the quality of their survey instruments, the mode of survey administration, and the methods used to analyze the data. In addition, many organizations did not process their data quickly enough to use it for real-time QI.

Barriers to PEC data collection and use

Barriers to PEC data collection and use fell into two primary domains: lack of material and educational resources in the organizations, and challenges specific to the safety net population itself. We divided each of these domains into sub-themes, which are shown in Table 3 and described below.

Table 3.

Barriers to PEC Data Collection and Use

Domain: Lack of Material and Educational Resources

Barrier: Lack of financial and staff resources
“The administration of a survey is time-consuming and it's not revenue-producing. Anybody who is spending time administering or tracking or evaluating the survey is not checking in patients, answering phones, or seeing patients.”
“the hassle… having to get somebody to get [the survey], go out there and get that, get it done, consolidate all of the different elements of it and present it. I mean… just having somebody who can do that can be a challenge for a community health center.”

Barrier: Lack of knowledge about statistics and survey research
“Sometimes I wonder, because I ask the same question in a different way and see if I got different results…. you know, six months ago, 12% of the people said they waited a long time to make an appointment… Why did that go up? And now it's down to 5%.”

Barrier: Unmet needs for education regarding PEC and quality improvement
“I think that there are a lot of resources out there; we just are unaware of them. I mean there's no guide! Every health center does it differently. There are no role models for our practices. Somebody else might have it, but I don't know where to find it.”

Domain: Challenges Related to the Safety-Net Population

Barrier: Patient culture and language
“Patients from other cultures, especially if they don’t feel comfortable with their assimilation level, may never want to say anything negative, or they may never want to appear to be criticizing their provider or the corporation.”

Barrier: Low literacy levels
“Literacy is a problem. And if people can’t read, do you just read the [survey] questions [to them]? How do you do that, because if you are asking these questions, there is bias right there”

Barrier: Feasibility of survey mode
“We have patients who live under the bridge on [street]. So sometimes the challenge is actually finding an address to send the survey out to…. And so doing an interview survey was, although it was a lot more costly, it was something that we felt was necessary because of the patient population that we serve.”

Barrier: Lack of trust
“I think there’s some populations who are more scared, distrusting of the health industry as a whole. And I think, also, documentation status if they’re immigrant-- if they're illegals or not is an issue. They might be not willing to say anything”

Lack of Material and Educational Resources

Lack of financial and staff resources

Administering PEC surveys incurred costs, including support for staff to administer and analyze the surveys in-house, postage costs for mailed surveys, equipment upkeep for computer surveys, and payments to vendors for outsourced surveys. Most practices did not have access to data analysis resources and either had to divert existing staff time to analyze data, hire extra staff, or pay an outside entity to perform this function. Survey-related expenses were particularly burdensome for small, independent community health centers. Allocating staff or financial resources to PEC initiatives usually led to difficult trade-offs which affected the quality of other clinical services.

Lack of knowledge about PEC measurement

Participants felt they did not have adequate knowledge and resources to appropriately measure PEC. Most were not familiar with existing validated instruments and had little training in survey administration. For example, the frequent revisions that organizations made to their survey items and administration protocols limited their ability to make longitudinal comparisons. Several subjects also felt they lacked the statistical expertise necessary to interpret the data they collected. For instance, when survey scores fluctuated, participants did not know how to determine whether the fluctuations reflected actual changes in patient experiences or random variation.

Unmet needs for PEC survey and quality improvement resources

The lack of knowledge about PEC surveys, data collection, analysis, and reporting led to frustrations. Most of the participants had an interest in validated survey instruments, in standard processes for data collection and analysis, and in surveys that would allow them to compare data with other practices, particularly those that were tailored to the needs of the safety net population.

Participants also mentioned the need for support and resources for developing and implementing patient-centered QI initiatives. A few of the participants were aware of national QI courses or conferences where they could learn skills; however, their organizations generally did not have the financial means to enroll their staff. Given the challenges of sending staff to conferences, they were particularly interested in internet resources such as virtual or downloadable courses.

Challenges Related to the Safety Net Population

Patient Culture and Language

Many practices had explicit needs for surveys in languages other than English and Spanish, most commonly in Asian languages or dialects. Several organizations used a homegrown instrument in part because they were unable to find a validated instrument that met their organization’s language needs. Additionally, respondents felt that, because some cultural groups were reluctant to give candid feedback, their data did not necessarily reflect patients’ true experiences. Providers felt that they could not trust or generalize findings based on data that might have a favorable bias.

Literacy

A primary concern of most participants was the low literacy level of their patients. Their perception of publicly available instruments was that they were written at too high a reading level and were therefore unsuitable. Most organizations attempted to design or use surveys written for low-literacy populations; however, based on our analysis of the instruments we collected, some of the surveys used were actually at a comparable or higher reading level than publicly available surveys.

Mode of survey delivery

Delivery mode posed specific problems in the safety net. Most organizations relied on a self-administered instrument that was distributed in the office setting. However, these organizations acknowledged that by using a written instrument they were probably not reaching their patients with low or no literacy, a potentially significant portion of their patient population. Some clinics used staff or volunteers to help patients complete the surveys in the office, but recognized that this strategy was time-consuming, compromised patient confidentiality, and might bias patient responses.

The use of computer kiosks to administer surveys was not popular because organizations felt that many patients lacked the computer skills to use them. Telephone surveys were seldom used because organizations perceived them to be expensive and felt there was no consistent way to reach all of their patients, given patient transience and unreliable phone access. Since every mode of administration they considered seemed to exclude certain groups of patients, many organizations accepted that, whatever the mode, their PEC assessments simply could not be inclusive of their entire patient population.

Lack of trust

Many of the organizations were open to all patients, and therefore cared for a large number of undocumented migrant workers and homeless patients. Organizations observed that these patients (often justifiably) distrusted the medical system and were suspicious of attempts to elicit personal information because of concerns that the information could be used to inform legal or immigration authorities. Response rates for undocumented and homeless patients were particularly low, and clinics required extra outreach to obtain data from these patients.

Discussion

In this study, we found that safety-net providers are strongly committed to improving their patients’ experiences of care; however, they are greatly challenged by the absence of funding for reliable and valid PEC data collection, by the lack of information about available patient-centered survey and QI resources, and by the unique problems associated with surveying the safety-net patient population. As a result, many of these organizations are devoting their limited resources to collecting data that likely contains biases, is difficult to interpret, and fails to include the experiences of substantial portions of their patient populations.

The consistent use of PEC data has allowed practices that serve commercially-insured and Medicare populations to improve the quality of care they provide their patients. Improving the collection of PEC data from patients served by safety-net organizations could have a similar effect: reliable and valid PEC feedback would allow safety-net practices to improve the quality and efficiency of their services and could help them effectively and cost-efficiently tailor their services to the populations they serve. However, the effectiveness of many clinics’ quality improvement efforts is likely hampered by the lack of quality and support for their PEC assessment tools.

This study had a number of limitations. Its qualitative design allowed us to gather in-depth points of view and examine the reasons for difficulties collecting PEC in the safety net, and we intentionally selected interview subjects that we felt represented different safety-net practice settings in California in order to capture a broad range of experiences. However, the study did not employ representative sampling or population-level data. Therefore, while the study’s results may represent a range of organizations’ experiences with PEC data collection, they may not represent experiences typical of all California safety-net organizations. In particular, we do not know whether the barriers to PEC data use we found were the most frequent or the most important. We hope to conduct additional survey-based research to explore these barriers on a state-wide or national level.

We used an interview-based format to collect data from health center leaders and directors, since we felt these individuals had the most influence over and knowledge about their organization’s PEC process. Though the interviews were confidential, the interviewees may have presented misleadingly positive views of their organization in order to please the interviewer or to make their organization look good in comparison to other organizations. Additionally, though many of the health center leaders were also health care providers, their views may have differed from providers in their organization who did not have administrative roles.

The study’s findings suggest several interventions that could help safety-net organizations improve their PEC data collection and use. First, safety-net organizations would benefit from a targeted educational campaign addressing existing instruments, data collection methodologies, and low-cost analytic resources. Most of the organizations we interviewed were unaware of valid, public-domain surveys, such as the CAHPS Clinician & Group Survey or the Bureau of Primary Health Care Patient Satisfaction Survey, and their associated resources. Additionally, existing public-domain tools would benefit from specific adaptation to better meet the needs of the safety-net population. Reasonable adaptations might include making surveys available in languages other than English and Spanish, establishing clear reading-level guidelines, and expanding content to include areas related to cultural competency of care and services specific to patients with limited English proficiency. Finally, efforts are needed to defray or decrease the costs of assessing PEC, especially since health centers bear the cost largely unassisted. Public-domain, standardized surveys would be more financially feasible if the cost of data collection and reporting were not borne by individual clinics and practices but rather subsidized or distributed across a significant number of safety-net practices.

All patients deserve a voice in evaluating the care they receive, and safety-net clinicians and administrators express strong interest in learning from their patients. Timely, accurate, actionable information on PEC should be available to all health care organizations, not just to those in the private sector. Extending valid PEC measurement tools to safety-net populations will allow safety-net providers to better serve their patient populations, foster better practice environments, and ultimately improve health care quality for the uninsured and underserved.

Acknowledgements

This study was funded by a grant from the California HealthCare Foundation. Dr. Zuckerman’s effort was also supported by a National Research Service Award from the Health Resources and Services Administration, Department of Health and Human Services. The efforts of Ms. Edgman-Levitan, Dr. Teleki, and Ms. Wong were supported in part by the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Consortium, Agency for Healthcare Research and Quality.

Footnotes

The authors have no conflicts of interest to disclose.

References

  1. Avatar International LLC. About us. Retrieved December 31, 2010, from http://www.avatar-intl.com/about.
  2. CAHPS: Consumer Assessment of Healthcare Providers and Systems homepage. 2009. Retrieved December 12, 2010, from https://www.cahps.ahrq.gov/default.asp.
  3. Cleary PD, Edgman-Levitan S, McMullen W, Delbanco TL. The relationship between reported problems and patient summary evaluations of hospital care. QRB Quality Review Bulletin. 1992;18(2):53–59. doi: 10.1016/s0097-5990(16)30507-3.
  4. Department of Health and Human Services. Quality assessment and performance improvement, statute §42CFR491.11. 2004. Retrieved December 31, 2010, from http://ecfr.gpoaccess.gov/cgi/t/text/text-idx?c=ecfr&tpl=/ecfrbrowse/Title42/42cfr482_main_02.tpl.
  5. Fronstin P. Snapshot: California's uninsured, 2008. 2008. Retrieved December 31, 2010, from http://www.chcf.org/~/media/Files/PDF/U/PDF%20UninsuredSnapshot08.pdf.
  6. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Medical Care Research and Review. 2010;67(1):27–37. doi: 10.1177/1077558709341065.
  7. Health Resources and Services Administration, U.S. Department of Health and Human Services. The health center program: Health center patient satisfaction survey. Retrieved March 22, 2011, from http://bphc.hrsa.gov/patientsurvey/
  8. Institute of Medicine. America's health care safety net: Intact but endangered. Washington, DC: National Academies Press; 2000.
  9. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921–1931. doi: 10.1056/NEJMsa0804116.
  10. Maxwell JA. Qualitative research design: An interactive approach. Thousand Oaks, CA: Sage Publications; 2005.
  11. Miles MB, Huberman MA. Qualitative data analysis: An expanded sourcebook. 2nd ed. Thousand Oaks, CA: Sage Publications; 1994.
  12. National Committee for Quality Assurance. NCQA patient-centered medical home 2011. 2011. Retrieved March 22, 2011, from http://www.ncqa.org/Portals/0/Programs/Recognition/2011PCMHbrochure_web.pdf.
  13. Office of Statewide Health Planning and Development, U.S. Census Bureau. Medical service study area (MSSA) census tract detail. Retrieved December 31, 2010, from http://www.oshpd.ca.gov/General_Info/MSSA/mssafrontierrural.pdf.
  14. Patient Centered Primary Care Collaborative. Joint principles of the patient centered medical home. Retrieved March 22, 2011, from http://www.pcpcc.net/content/joint-principles-patient-centered-medical-home.
  15. Pope C, Mays N. Qualitative research: Reaching the parts other methods cannot reach: An introduction to qualitative methods in health and health services research. British Medical Journal. 1995;311(6996):42–45. doi: 10.1136/bmj.311.6996.42.
  16. Pope C, Ziebland S, Mays N. Qualitative research in health care: Analysing qualitative data. BMJ. 2000;320(7227):114–116. doi: 10.1136/bmj.320.7227.114.
  17. Press Ganey. Retrieved March 31, 2011, from http://www.pressganey.com/
  18. Quality Data Management. Retrieved December 31, 2010, from http://qdmnet.com/qdm/home.html.
  19. Rural Health Policy Council. California medical service study areas census 2000 configuration. California Office of Statewide Health Planning and Development; 2003.
  20. Zapka JG, Palmer RH, Hargraves JL, Nerenz D, Frazier HS, Warner CK. Relationships of patient satisfaction with experience of system performance and health status. The Journal of Ambulatory Care Management. 1995;18(1):73–83. doi: 10.1097/00004479-199501000-00008.