Author manuscript; available in PMC: 2018 Dec 1.
Published in final edited form as: J Pain Symptom Manage. 2017 Aug 9;54(6):806–814. doi: 10.1016/j.jpainsymman.2017.06.008

A survey to evaluate facilitators and barriers to quality measurement and improvement: Adapting tools for implementation research in palliative care programs

Sydney M Dy 1,2,3, Nebras Abu Al Hamayel 1, Susan M Hannum 4, Ritu Sharma 1, Sarina R Isenberg 4, Kamini Kuchinad 2, Junya Zhu 1, Katherine Smith 4, Karl A Lorenz 5, Arif H Kamal 6, Anne M Walling 7, Sallie J Weaver 8
PMCID: PMC5705262  NIHMSID: NIHMS898549  PMID: 28801007

Abstract

Context

Though critical for improving patient outcomes, palliative care quality indicators are not yet widely used. Better understanding of facilitators and barriers to palliative care quality measurement and improvement might improve their use and program quality.

Objectives

To develop a survey tool assessing palliative care team perspectives on facilitators and barriers to quality measurement and improvement in palliative care programs.

Methods

We used the adapted Consolidated Framework for Implementation Research to define domains and constructs to select instruments. We assembled a draft survey and assessed content validity through pilot testing and cognitive interviews with experts and front-line practitioners for key items. We analyzed responses using a constant comparative process to assess survey item issues and potential solutions. We developed a final survey using these results.

Results

The survey includes five published instruments and two additional item sets. Domains include organizational characteristics, individual and team characteristics, intervention characteristics and process of implementation. Survey modules include Quality Improvement in Palliative Care, Implementing Quality Improvement in the Palliative Care Program, Teamwork and Communication, Measuring the Quality of Palliative Care, and Palliative Care Quality in Your Program. Key refinements from cognitive interviews included item wording on palliative care team members, programs and quality issues.

Conclusion

This novel, adaptable instrument assesses palliative care team perspectives on barriers and facilitators for quality measurement and improvement in palliative care programs. Next steps include evaluation of the survey’s construct validity and how survey results correlate with findings from program quality initiatives.

Keywords: implementation research, palliative care, quality measurement, quality improvement, survey, cognitive interviews

Introduction

Measuring and improving the quality of care provided by palliative care programs is critical for improving outcomes for patients with serious and advanced illnesses and their families, from diagnosis with an advanced illness through the end of life.1 Measuring processes of care, or what occurs in interactions between palliative care team members, patients and their families, can be applied across the entire spectrum of care.2 Palliative care process quality indicators have now been developed for key measurable domains, such as communication and pain management, and for settings ranging from hospital to outpatient and home,3–8 and are endorsed by national organizations.9, 10

However, palliative care programs have been slow to adopt quality measurement activities. Extracting information on quality indicators from existing systems is time-consuming and often yields incomplete data. Although additional methods for collecting data for quality indicators are being developed, such as structured electronic record templates, point-of-care data collection and patient-reported outcomes,11 these methods can be burdensome to both palliative care team members and patients. Palliative care team members may feel that measuring quality of care could lead to “cookbook medicine” and reduce the patient-centeredness of care, or that the aspects most key to quality of palliative care, particularly excellent communication, cannot be accurately measured. Barriers in data collection in the context of clinical care can also lead to incomplete or lower-quality data.11 Similar issues may exist in quality improvement, where palliative care team members may feel that there is insufficient program support for improvement initiatives or that existing initiatives do not address the issues most important to patients.

Better understanding of key facilitators and barriers to implementing palliative care quality indicators could improve the completeness and quality of data collection through engaging palliative care team members more effectively. Understanding team members’ perceptions about quality could elucidate why performance on some quality indicators is lower. Understanding team members’ perceptions of program teamwork and leadership support could also help programs understand what changes may be needed to improve quality of care and achieve better outcomes for patients.12

This paper outlines development of a survey to explore facilitators and barriers to quality measurement and improvement in palliative care programs. Specifically, we describe the process we used to select, adapt and supplement existing instruments and evaluate their content validity. The intended use of the survey is for palliative care teams interested in understanding how to use quality measurement in their programs, how to improve existing quality measurement initiatives, and how to implement or enhance quality improvement activities.

Methods

Existing instruments validated with health care professionals for concepts such as patient safety culture13 or contextual issues in quality improvement14 address key barriers and facilitators to quality measurement and improvement. However, these instruments do not address characteristics specific to palliative care programs or quality indicators for serious illness and end-of-life care. When instruments are adapted or applied for different purposes and populations, they should be revalidated. This requires consideration of content validity, the extent to which a measurement instrument (e.g., a survey) represents all facets of a given construct; construct validity, the degree to which the instrument actually measures the concept or construct that it is intended to measure; and criterion validity, the relationships of an instrument with other constructs to which it is conceptually related.15

Evaluating the validity of instruments adapted for a new area usually involves four key steps.16, 17 First is defining focal construct(s) and a conceptual model outlining how they relate to other key constructs. Second is operationalizing the focal constructs to select and adapt relevant instruments and then evaluating the items’ validity for the new use (e.g., evaluating how relevant and representative the items are to the construct(s) of interest). Based on this evaluation, an adapted working version of the instrument is developed. Third is evaluating the convergent and discriminant construct validity of the instruments (i.e., evaluating their relationships with conceptually similar and dissimilar constructs). The final step is evaluating criterion validity, the relationships of the instruments with other constructs to which they are conceptually related (i.e., do the instruments relate to measures of other constructs as anticipated). In practice, since steps three and four require significant sample sizes, these steps are generally conducted with the implementation of the survey. During steps three and four, researchers generally also recalculate instruments’ internal consistency reliability (or how individual items relate to each other within instruments’ subscales) to determine if and where the original analysis approach can still be used. If not, researchers rerun factor analysis to determine subscales for analyzing individual items together.
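As an illustration of the internal consistency reliability check described in steps three and four, a commonly used statistic is Cronbach's alpha, which summarizes how strongly the individual items within a subscale covary. The sketch below is a minimal, hypothetical example; the subscale scores and sample size are invented for illustration and are not drawn from this study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of subscale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 4 respondents answering 3 Likert items on one subscale
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
])
alpha = cronbach_alpha(scores)  # alpha ≈ 0.92 for this hypothetical subscale
```

By convention, an alpha of roughly 0.7 or higher suggests acceptable internal consistency; lower values would prompt the factor analysis described above to regroup items into new subscales for analysis.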

This paper outlines the evaluation of content validity for the survey.

Step 1: Conceptual framework and definition of domains and focal constructs

Conceptual framework

Our conceptual framework was the Consolidated Framework for Implementation Research (CFIR)18 with additional concepts from a recent adaptation for complex systems interventions.19 The CFIR synthesized relevant theories into an integrated framework for implementation research and has been widely applied in implementation science.20 We selected four domains from the adapted CFIR as most relevant to the survey: (1) intervention characteristics (e.g., of the quality measurement program), (2) organizational characteristics, (3) characteristics and roles of providers, and (4) process of implementation. Each domain includes multiple constructs. We selected potential constructs from the four domains based on clinical practice guidelines for the conduct of palliative care programs21 and reviews of palliative care quality indicators9 and quality improvement interventions.22–24 (Table 2)

Table 2.

Summary of survey modules and items with corresponding Consolidated Framework for Implementation Research (CFIR) domains and constructs

| Survey module | Scale description (as applied to palliative care program quality of care) | CFIR domain* | CFIR construct(s) | Sub-construct | Source |
|---|---|---|---|---|---|
| Module A: Quality Improvement (QI) in Palliative Care (improving the quality of care provided by the palliative care program) | Team and leaders value QI and understand and effectively use quality improvement methods | Palliative care program characteristics | Implementation climate | Tension for change | Change Process Capability Questionnaire (CPCQ)30,31 |
| | Program has established process for QI | Program characteristics | Implementation climate | Relative priority | CPCQ |
| | Individual preparation to participate in and lead QI | Characteristics and roles of team members | Self-efficacy | | CPCQ |
| | Program focus on educational support, recognition and rewards, selection of staff for experience and openness to QI | Program characteristics | Implementation climate | | Implementation Climate Scale28 |
| Module B: Implementing Quality Improvement in the Palliative Care Program | Implementation of or participation in any program QI initiatives | Intervention characteristics | Description of intervention | | CPCQ |
| | Strategies used for QI (information, skills training, encouragement by leadership) | Process of implementation | Engaging | | CPCQ |
| | Creating systems, reducing barriers, organizing, setting goals and benchmarking, customizing for QI | Process of implementation | Executing | | CPCQ |
| | Methods to reduce or evaluate negative consequences of QI | Process of implementation | Reflecting and evaluating | | CPCQ |
| | Relative priority of program leaders for QI | Program characteristics | Implementation climate | Relative priority | CPCQ |
| | Individual participation in QI | Characteristics and roles of team members | Role | | N/A |
| | Individual accountability for quality of care | Characteristics and roles of team members | Individual accountability | | Unpublished |
| Module C: Teamwork & Communication | Openness to ideas, viewpoints, questions and disagreement among palliative care team | Characteristics and roles of team members | Communication | | MO-SOPS Communication Openness scale27 |
| | Team members help each other out, work well together, treat each other with respect, emphasize teamwork | Characteristics and roles of team members | Teamwork | | MO-SOPS Teamwork scale27 |
| | Collective accountability for quality of care | Characteristics and roles of team members | Collective accountability | | Unpublished |
| Module D: Measuring the Quality of Palliative Care | Team and leaders value quality measurement and understand and effectively use quality measurement methods | Program characteristics | Implementation climate | Tension for change | CPCQ |
| | Whether leadership is proactive, knowledgeable, supportive and perseverant about quality measurement | Program characteristics | Readiness for implementation | Leadership engagement | Implementation Leadership Scale29 |
| | What indicators are collected and how they are used | Intervention characteristics | Description of intervention | | N/A |
| | Supported by scientific evidence and experts | Intervention characteristics | Evidence strength and quality | | Organizational Readiness to Change Assessment (ORCA)32 |
| | Appropriate, comprehensive, respectful of needs and preferences, acceptable to patients and families | Intervention characteristics | Patient/family centeredness | | ORCA |
| | Advantages and disadvantages | Intervention characteristics | Relative advantage | | ORCA |
| | Ease and feasibility of collection, understandability | Intervention characteristics | Complexity; health information technology usability | | ORCA |
| | Responsive to change, trustworthy, useful | Intervention characteristics | Usefulness | | N/A |
| | Focus on educational support, recognition and rewards for quality measurement in palliative care | Program characteristics | Implementation climate | | Implementation Climate Scale |
| Module E: Palliative Care Quality in Your Program | Perceptions of quality related to quality indicators | N/A | N/A | | Unpublished palliative care surveys |
* Note that, given feedback from the cognitive interviews, we reframed the “organizational characteristics” domain from the adapted CFIR as “palliative care program characteristics”, in order to provide better focus on the program, and the “characteristics and roles of providers” domain as “characteristics and roles of palliative care team members”, to be inclusive of all members of the team. “Intervention characteristics” refers to the quality indicator data collection or quality improvement intervention.

N/A = Not applicable; MO-SOPS= Medical Office Survey on Patient Safety Culture; ORCA =Organizational Readiness to Change Assessment

Step 2: Operationalization and content validity

We searched two databases compiling validated instruments linked to CFIR constructs, the National Cancer Institute’s GEM-D&I (Grid-Enabled Measures Database – Dissemination and Implementation)25 and the SIRC (Society for Implementation Research Collaboration) Systematic Review and Synthesis of Implementation Science Instruments,26 and consulted with experts to identify instruments relevant to the constructs.

Based on Step 1, we selected five published instruments and two additional item sets, chosen based on best fit for the key relevant constructs and palliative care (Table 2). These included the Agency for Healthcare Research and Quality’s Medical Office Survey on Patient Safety Culture (MO-SOPS),27 which addresses communication and teamwork and includes a database for national benchmarking. For implementation climate, we adapted the Implementation Climate Scale,28 developed and originally tested to evaluate evidence-based practice implementation. For readiness for implementation, we adapted the Implementation Leadership Scale,29 developed and tested for aspects of unit-level leadership for implementation of evidence-based practice. For these and other constructs, we also adapted subscales from Solberg’s Change Process Capability Questionnaire (CPCQ), developed and tested for assessing medical group capability for quality improvement.30, 31 We also adapted subscales from the Organizational Readiness to Change Assessment (ORCA) instrument,32 which addresses strength of evidence for the intervention, quality of the organizational context to support the intervention, and organizational capacity to facilitate the intervention. For collective accountability, we used newly developed unpublished items. For palliative care team members’ perceptions of how quality indicators are being applied and the quality of palliative care, we adapted items from unpublished surveys.

Next, palliative care team members with expertise in organizational culture, palliative care and survey methods adapted the items for palliative care team quality measurement and improvement. We organized the instrument subscales into modules for these applications.33 We then pilot tested the survey with six palliative care team members and refined the adapted item pool based on their feedback. Finally, we conducted cognitive testing on 18 key adapted items that were unclear to these palliative care team members.34, 35

We conducted six cognitive interviews with three palliative care quality measurement experts (oncology and internal medicine physicians from different institutions) and three frontline care team members (a nurse case manager, nurse practitioner, and social worker from academic and community settings). Cognitive interviews are a type of qualitative data collection that uses cognitive theory to understand how respondents perceive and interpret the items in a survey. The goal of cognitive interviews is to produce “a coherent set of results and recommendations” regarding specific items, in order to refine item wording to meet the survey’s goals.33 Three issues were addressed in the interviews: (1) item-specific recommendations for changes to wording, including specificity and clarity; (2) issues of overall instrument length or burden, such as items that are duplicative, unclear or lack relevance; and (3) limitations in how the intended respondents can provide valid information through a survey.33

At least two investigators participated in each cognitive interview, with initial analysis following each interview. We probed each question by following the initial query with follow-up questions informed by both spoken responses and observations, and we assessed the ease with which respondents could answer each item. For example, for each question we noted whether respondents needed any part of the question repeated, had any difficulty using the provided response options, or asked for clarification or qualified their answer. Respondents were then given the opportunity to state why they answered the way they did, how sure they were of their answer, and how easy or difficult it was to answer the question. For several items, we also asked what a certain term in the item meant to them. (Table 1)

Table 1.

Draft item and questions used for cognitive interviewing*

Draft item Thinking about the organization in which you work, please indicate the extent to which you agree with this statement: One of this organization’s goals is to improve the quality of palliative care we provide.*
Draft response options [ ] Not at all (1)
[ ] Slight extent (2)
[ ] Moderate extent (3)
[ ] Great extent (4)
[ ] Very Great Extent (5)
[ ] Does not apply
[ ] Don’t know
[ ] Prefer not to answer
Cognitive interview questions for respondents Why did you answer that way? Walk me through what you were thinking when you chose that response.
Probes if needed: How did you arrive at your answer? Why do you believe that?
How sure are you of your answer?
How easy or hard was it to answer this question?
What, to you, does the phrase “improving palliative care quality” mean?
Probe if needed: Can you rephrase the question I asked you in your own words? (read question back one more time)
* The final item changed the word “organization” to “program”, defined as “palliative care program”.

We then entered interview responses into Q-Notes, a data entry and analysis application used to ensure systematic and transparent analyses across cognitive interviews.36 The analyses followed a three-stage structured method of data reduction and item revision based on the cognitive interviews. First, to analyze the findings of the cognitive interviews and accompanying interviewer notes, we conducted a question-by-question review of each of the 18 adapted items using Q-Notes. Second, we used a constant comparative process to assess patterns relevant to the survey issues and to identify potential causes of these patterns. We also evaluated responses for patterns of item interpretation, to assess whether respondents answered items in the manner originally intended and how responses deviated from the item intent. Third, we collated results by item, generated a set of recommended changes to the overall protocol that reflected the results of the pilot interviews, and used these recommendations to revise the survey. As a quality check, we also reviewed the revised survey with several of the experts.

Results

Final survey content

The full adapted survey includes five modules: (A) Quality Improvement in Palliative Care – improving the quality of care provided by the palliative care program, (B) Implementing Quality Improvement in the Palliative Care Program, (C) Teamwork & Communication, (D) Measuring the Quality of Palliative Care, and (E) Palliative Care Quality in Your Program (Table 2; survey available upon request). If the survey is used only for evaluating either quality improvement or measurement, the other module(s) can be removed. From the adapted CFIR, for the organizational characteristics domain related to quality, the survey addresses communication; overall implementation climate and specific elements of climate, including tension for change and relative priority; and readiness for implementation, including leadership engagement. For the domain of characteristics and roles of individuals and teams, the survey addresses self-efficacy; perceptions of teamwork; and collective and personal accountability. For the CFIR domain of intervention characteristics, the survey addresses the description of the intervention, evidence strength and quality, patient-centeredness, relative advantage, complexity, and usefulness. Engaging, executing, reflecting and evaluating are included for the CFIR domain of process of implementation. Module E addresses respondents’ perceptions of how well the program is collecting quality data on key quality indicators (and can be easily adapted for other types of indicators), and the final section addresses respondent characteristics. Survey modules and how they correspond to CFIR domains are shown in Table 2.

Summary of cognitive interviews and adaptation for the palliative care context

Input from the palliative care quality experts and frontline team members contributed significantly to the interpretation of survey items and was generally consistent. The cognitive interviews highlighted three main potential problems with the survey, which we addressed through revisions: the need to better define respondents, setting and leadership; lack of knowledge about all quality initiatives; and questions about how to describe quality measurement and improvement. First, defining the respondents and setting, and rewording items in clear palliative care terms, was key. The original instruments divided respondents into “physicians” or “providers” and “staff” and worded items differently for physicians, so non-physicians were not sure where to include themselves. From the palliative care perspective, we replaced the terms “providers”, “physicians” and “staff” with “palliative care team members”, to be inclusive and reflect the diverse personnel in palliative care programs.

Regarding the setting, interviewees variably interpreted the word “organization” as the department/unit, hospital, health care system, or even the national level, and interviewees did not use the term “facility”. Given the survey’s focus on quality within palliative care programs, we revised wording throughout the survey to use the term “palliative care program” to reflect the organizational level. The term “leadership” (including organizational or executive) was also challenging in settings with many different types of leaders and was interpreted in different ways. Based on input from the interviews, we reworded items to focus on the program level and on the leadership most relevant to quality within the program.

Second, interviewees often were unsure what quality initiatives were ongoing in their program, either because different parts of the program (e.g., hospital consult service vs. outpatient clinic vs. hospice) conducted separate projects or because, in large programs, not everyone was aware of all quality activities. This was a particular issue for frontline team members, who felt they were often not in the communication loop. Interviewees discussed different ways to address this, including a “don’t know” option. Lack of awareness of program measurement or improvement activities could be important to assess, as it may reflect inadequate communication or culture of quality within the program. Since involvement of all palliative care team members is valuable in quality initiatives, understanding heterogeneity in knowledge of activities would also be important. We therefore reworded selected survey items to reflect uncertainty about ongoing initiatives; for example, the item “Has your program conducted any quality improvement projects for the quality of care provided by your program in the past year?” includes a “Not sure” option.

Finally, interviewees identified two key areas of unclear wording about quality. When discussing palliative care quality indicators, the terms “use” and “implementation” were unclear, and interviewees often did not distinguish between them. To improve clarity, we removed these terms and rephrased them as “collection” of quality indicator information throughout the survey. The focus for quality indicators and improvement was also often unclear to respondents, so we reworded throughout the survey to describe the focus as “quality of care provided by the palliative care program.”

Discussion

We compiled, adapted and evaluated the content validity of measurement instruments to develop a survey for evaluating facilitators and barriers to palliative care program quality measurement and improvement, based on an adaptation of the Consolidated Framework for Implementation Research. The survey addresses key domains of organizational characteristics, characteristics and roles of individuals and teams, intervention characteristics and the process of implementation. The survey is adaptable to include either or both quality measurement and improvement and to evaluate perceptions of current quality measurement efforts, and can be used with all palliative care team members as well as palliative care quality experts and leadership.

Evaluating content validity through pilot testing and cognitive interviews led to significant revisions of the survey, particularly in how to define key terms and describe the survey respondents, leadership and organization. These findings emphasize the importance of adapting and evaluating the validity of measurement instruments when applying instruments from other areas to palliative care. The next steps of evaluating reliability and criterion and construct validity, including discriminant and convergent validity, will be important in continued evaluation of this survey. This will include evaluating how survey constructs are associated with collection of quality indicator data, as well as the quality of care that patients receive. Ideally, further evaluation will also include assessment of responsiveness to change for programs that implement quality-related training, improvement programs or other initiatives. Results of these evaluations could also help to guide use of the survey in tailoring initiatives to specific sites, palliative care team members, populations and quality indicators, and inform whether and how quality initiatives’ success in specific sites could be translated to other settings.12, 37

Several recent non-US qualitative studies have evaluated perceptions of barriers and facilitators of quality measurement and improvement; their findings correspond well with the content of this survey and support its content validity. A European study of quality measurement found that characteristics of individuals (attitudes and expertise) and of the organization (leadership and resources, including time) were key perceived facilitators.38 Another study of quality improvement across five European countries found that characteristics of individual professionals, such as expertise and motivation, and organizational context, such as leadership, culture of change, team climate, resources (including time) and facilities, were perceived facilitators. Key barriers included changes inconsistent with the local care philosophy and lack of expertise.39, 40

We developed the survey based on validated tools and rigorous methods, but the process had limitations and the survey requires further development. The CFIR may not include all relevant constructs, and our searches for existing surveys using implementation science databases likely missed some relevant instruments. Our cognitive interviews with six palliative care team members might not be representative of palliative care team members broadly.

Additional conceptual work will continue to evaluate how best to address these issues in palliative care quality initiatives, including evaluating different sub-constructs to include and how best to frame items for this context. Instruments not included in this evaluation and newly developed instruments should also be evaluated for applicability to palliative care quality initiatives. Many key CFIR constructs do not have measurement instruments or cannot be effectively evaluated using surveys, and need to be evaluated through other techniques such as interviews, data collection or observation. Finally, palliative care team survey results may help identify potential issues, but understanding how best to address them also requires additional evaluation, including exploring how palliative care team members responded to the items within a specific setting and integrating other sources of information.

Conclusions

Exploring barriers and facilitators to quality measurement is critical for understanding how best to implement and sustain specialty palliative care quality initiatives.41 Using a rigorously developed survey to measure palliative care team members’ perceptions may be useful for framing how best to implement quality measurement initiatives, interpret results to prioritize quality improvement strategies, and enhance ongoing quality improvement. Further research will evaluate the construct and criterion validity of this survey for this purpose and refine how it should best be implemented. Since many barriers and facilitators cannot be measured using surveys, other methods of evaluating quality initiatives will also be needed. Importantly, quality measurement and improvement must always consider the patient and family perspective: the burdens of data collection, how quality measurement affects the experience of care, and whether the quality issues that matter most to patients and families are being addressed.

Acknowledgments

Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R21CA197362. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Dr. Walling is supported by the Cambia Health Foundation Sojourns Scholar Leadership Award. Sarina Isenberg’s work was supported by the Canadian Institutes of Health Research # 146181. This work was conducted while Dr. Weaver was an Associate Professor at The Johns Hopkins School of Medicine.


References

1. Kamal AH, Hanson LC, Casarett DJ, Dy SM, Pantilat SZ, et al. The quality imperative for palliative care. J Pain Symptom Manage. 2015;49(2):243–53. doi: 10.1016/j.jpainsymman.2014.06.008.
2. Dy SM. Measuring the quality of palliative care and supportive oncology: principles and practice. J Support Oncol. 2013;11(4):160–4. doi: 10.12788/j.suponc.0017.
3. Seow H, Snyder CF, Mularski RA, Shugarman LR, Kutner JS, et al. A framework for assessing quality indicators for cancer care at the end of life. J Pain Symptom Manage. 2009;38(6):903–12. doi: 10.1016/j.jpainsymman.2009.04.024.
4. Schenck AP, Rokoske FS, Durham D, Cagle JG, Hanson LC. Quality measures for hospice and palliative care: piloting the PEACE measures. J Palliat Med. 2014;17(7):769–75. doi: 10.1089/jpm.2013.0652.
5. Dy SM, Lorenz KA, O’Neill SM, Asch SM, Walling AM, et al. Cancer Quality-ASSIST supportive oncology quality indicator set: feasibility, reliability, and validity testing. Cancer. 2010;116(13):3267–75. doi: 10.1002/cncr.25109.
6. Mularski RA, Hansen L, Rosenkranz SJ, Leo MC, Nagy P, Asch SM. Medical record quality assessments of palliative care for intensive care unit patients: do they match the perspectives of nurses and families? Ann Am Thorac Soc. 2016;13(5):690–8. doi: 10.1513/AnnalsATS.201508-501OC.
7. Wright AA, Keating NL, Ayanian JZ, Chrischilles EA, Kahn KL, et al. Family perspectives on aggressive cancer care near the end of life. JAMA. 2016;315(3):284–92. doi: 10.1001/jama.2015.18604.
8. Leff B, Carlson CM, Saliba D, Ritchie C. The invisible homebound: setting quality-of-care standards for home-based primary and palliative care. Health Aff (Millwood). 2015;34(1):21–9. doi: 10.1377/hlthaff.2014.1008.
  • 9.Dy SM, Kiley KB, Ast K, Lupu D, Norton SA, et al. Measuring what matters: top-ranked quality indicators for hospice and palliative care from the American Academy of Hospice and Palliative Medicine and Hospice and Palliative Nurses Association. J Pain Symptom Manage. 2015;49(4):773–81. doi: 10.1016/j.jpainsymman.2015.01.012. [DOI] [PubMed] [Google Scholar]
  • 10.National Quality Forum. [Accessed June 26, 2016];Palliative and End-of-Life Care 2015–2016. www.qualityforum.org.
  • 11.Kamal AH, Harrison KL, Bakitas M, Dionne-Odom JN, Zubkoff L, Akyar I, et al. Improving the Quality of Palliative Care Through National and Regional Collaboration Efforts. Cancer Control. 2015;22(4):396–402. doi: 10.1177/107327481502200405. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.van Riet Paap J, Vernooij-Dassen M, Sommerbakk R, Moyle W, Hjermstad MJ, et al. Implementation of improvement strategies in palliative care: an integrative review. Implement Sci. 2015;10:103. doi: 10.1186/s13012-015-0293-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Halligan M, Zecevic A. Safety culture in healthcare: a review of concepts, dimensions, measures and progress. BMJ Qual Saf. 2011;20(4):338–43. doi: 10.1136/bmjqs.2010.040964. [DOI] [PubMed] [Google Scholar]
  • 14.Jager AJ, Choudhry SA, Marsteller JA, Telford RP, Wynia MK. Development and Initial Validation of a New Practice Context Assessment Tool for Ambulatory Practices Engaged in Quality Improvement. Am J Med Qual. 2016 doi: 10.1177/1062860616659132. epub. [DOI] [PubMed] [Google Scholar]
  • 15.Chronbach L, Meehl P. Construct validity in psychological tests. Psychological Bulletin. 1955;52:281–302. doi: 10.1037/h0040957. [DOI] [PubMed] [Google Scholar]
  • 16.Nunnally JC, Bernstein IH, Nunnally JC, Bernstein IH. Psychometric theory. New York: McGraw-Hill; 1994. [Google Scholar]
  • 17.Schwab DP. Construct validity in organizational behavior. Research in Organizational Behavior. 1980;2:3–43. [Google Scholar]
  • 18.Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50. doi: 10.1186/1748-5908-4-50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Rojas Smith L, Ashok M, Dy SM, Wines RC, Teixeira-Poit S. Developing and Assessing Contextual Frameworks for Research on the Implementation of Complex System Interventions. Agency for Healthcare Research and Quality; 2014. [PubMed] [Google Scholar]
  • 20.Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the Consolidated Framework for Implementation Research. Implement Sci. 2016;11:72. doi: 10.1186/s13012-016-0437-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.National Consensus Project for Quality Palliative Care. [Accessed December 31, 2016];Clinical Practice Guidelines for Quality Palliative Care. (3). 2013 http://www.nationalconsensusproject.org. [PubMed]
  • 22.Fawole OA, Dy SM, Wilson RF, Lau BD, Martinez KA, et al. A systematic review of communication quality improvement interventions for patients with advanced and serious illness. J Gen Intern Med. 2013;28(4):570–7. doi: 10.1007/s11606-012-2204-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Lau BD, Aslakson RA, Wilson RF, Fawole OA, Apostol CC, et al. Methods for improving the quality of palliative care delivery: a systematic review. Am J Hosp Palliat Care. 2014;31(2):202–10. doi: 10.1177/1049909113482039. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Dy SM, Apostol C, Martinez KA, Aslakson RA. Continuity, coordination, and transitions of care for patients with serious and advanced illness: a systematic review of interventions. J Palliat Med. 2013;16(4):436–45. doi: 10.1089/jpm.2012.0317. [DOI] [PubMed] [Google Scholar]
  • 25.National Cancer Institute. [Accessed July 12, 2015];Grid-Enabled Measures (GEM) Database - Dissemination and Implementation Initiative. https://www.gem-measures.org/public/wsmeasures.aspx?cat=8&aid=1&wid=11.
  • 26.Society for Implementation Research Collaboration. [Accessed July 12, 2015];The SIRC Instrument Review Project (IRP): A Systematic Review and Synthesis of Implementation Science Instruments. http://www.societyforimplementationresearchcollaboration.org/sirc-projects/sirc-instrument-project/
  • 27.Agency for Healthcare Research and Quality. [Accessed July 7, 2015];Medical Office Survey on Patient Safety Culture. http://www.ahrq.gov/professionals/quality-patient-safety/patientsafetyculture/medical-office/
  • 28.Ehrhart MG, Aarons GA, Farahnak LR. Assessing the organizational context for EBP implementation: the development and validity testing of the Implementation Climate Scale (ICS) Implement Sci. 2014;9:157. doi: 10.1186/s13012-014-0157-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Aarons GA, Ehrhart MG, Farahnak LR. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci. 2014;9(1):45. doi: 10.1186/1748-5908-9-45. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Solberg LI, Asche SE, Margolis KL, Whitebird RR. Measuring an organization’s ability to manage change: the change process capability questionnaire and its use for improving depression care. Am J Med Qual. 2008;23(3):193–200. doi: 10.1177/1062860608314942. [DOI] [PubMed] [Google Scholar]
  • 31.Agency for Healthcare Research and Quality. [Accessed December 23, 2016];Practice Facilitation Handbook: Module 6 Appendix. 2013 http://www.ahrq.gov/professionals/prevention-chronic-care/improve/system/pfhandbook/mod6appendix.html.
  • 32.Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4:38. doi: 10.1186/1748-5908-4-38. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Willis GB. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks: Sage Publications; 2005. [Google Scholar]
  • 34.Willis GB. Cognitive Interviewing and Questionnaire Design: A training manual. Hyattsville, MD: National Center for Health Statistics; 1994. [Google Scholar]
  • 35.Willis GB, DeMaio T, Harris-Kojetin B. Is the bandwagon headed to the methodological promised land? Evaluating the validity of cognitive interviewing techniques. In: Sirken MG, Herrmann DJ, Schechter S, Schwarz N, Tanur JM, Tourangeau R, editors. Cognition and Survey Research. New York: Wiley; pp. 133–15. [Google Scholar]
  • 36.Centers for Disease Control. Questionnaire Design Research Laboratory. [Accessed December 23, 2016];Q-Notes Analysis Software for Cognitive Interviews. 2014 https://www.cdc.gov/qdrl/b4product/prod220.htm.
  • 37.Lau BD, Aslakson RA, Wilson RF, Fawole OA, Apostol CC, Martinez KA, et al. Methods for improving the quality of palliative care delivery: a systematic review. Am J Hosp Palliat Care. 2014;31(2):202–10. doi: 10.1177/1049909113482039. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Leemans K, Van den Block L, Vander Stichele R, Francke AL, Deliens L, Cohen J. How to implement quality indicators successfully in palliative care services: perceptions of team members about facilitators of and barriers to implementation. Support Care Cancer. 2015;23(12):3503–11. doi: 10.1007/s00520-015-2687-8. [DOI] [PubMed] [Google Scholar]
  • 39.Sommerbakk R, Haugen DF, Tjora A, Kaasa S, Hjermstad MJ. Barriers to and facilitators for implementing quality improvements in palliative care - results from a qualitative interview study in Norway. BMC Palliat Care. 2016;15:61. doi: 10.1186/s12904-016-0132-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.van Riet Paap J, Vernooij-Dassen M, Brouwer F, Meiland F, Iliffe S, et al. Improving the organization of palliative care: identification of barriers and facilitators in five European countries. Implement Sci. 2014;9:130. doi: 10.1186/s13012-014-0130-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Demiris G, Parker Oliver D, Capurro D, Wittenberg-Lyles E. Implementation science: implications for intervention research in hospice and palliative care. Gerontologist. 2014;54(2):163–71. doi: 10.1093/geront/gnt022. [DOI] [PMC free article] [PubMed] [Google Scholar]