Abstract
Objective
To determine factors that influence the adoption and use of patient-reported outcomes (PROs) in the electronic health record (EHR) among users.
Materials and Methods
Q methodology, supported by focus groups, semistructured interviews, and a review of the literature, was used to collect data about opinions on PROs in the EHR. An iterative thematic analysis resulted in 49 statements that study participants sorted, from most unimportant to most important, under the following condition of instruction: “What issues are most important or most unimportant to you when you think about the adoption and use of patient-reported outcomes within the electronic health record in routine clinical care?” Using purposive sampling, 50 participants were recruited to rank and sort the 49 statements online, using HTMLQ software. Principal component analysis and varimax rotation were used for data analysis using the PQMethod software.
Results
Participants were mostly physicians (24%) or physician/researchers (20%). Eight factors were identified. Factors included the ability of PROs in the EHR to enable: efficient and reliable use; care process improvement and accountability; effective and better symptom assessment; patient involvement for care quality; actionable and practical clinical decisions; graphical review and interpretation of results; use for holistic care planning to reflect patients’ needs; and seamless use for all users.
Discussion
The success of PROs in the EHR in clinical settings is not dependent on a “one size fits all” strategy, demonstrated by the diversity of viewpoints identified in this study. A sociotechnical approach for implementing PROs in the EHR may help improve its success and sustainability.
Conclusions
PROs in the EHR are most important to users when the technology is used to improve patient outcomes. Future research must focus on the impact of embedding this EHR functionality on care processes.
Keywords: patient-reported outcomes, Q methodology, socio-technical, patient-centered
INTRODUCTION
In the United States, rapidly rising healthcare costs and shortfalls in care delivery have resulted in an urgent need to prioritize care quality and patient outcomes.1,2 Patient-reported outcomes (PROs) in electronic health records (EHRs) have been proposed to support patient-centered care.3–5 PROs refer to health information reported directly by patients about their symptoms, quality of life, functional status, and satisfaction with treatment, without interpretation by anyone.6–8
PROs offer patients the opportunity to be more engaged in their care, to participate in decision-making about their care, and to improve communication with their providers. However, integrating PROs in the EHR presents its own set of challenges, including workflow issues linked to care efficiency, a lack of user-friendly interfaces, and the administrative effort and time burden incurred by patients and healthcare staff when patients complete PROs in the clinical setting.9,10 Moreover, concerns have been raised about the perceived benefits of PRO measures to both patients and clinicians, access to PRO data at the point of care, patients’ inability to access and complete PROs, and a lack of standardized PROs for clinical care.9,11
The growing interest in the use of PROs in direct patient care signifies the need to collect and assess the perspectives of users for embedding PROs in the EHR.10 Prior research has suggested considerations for the successful use of PROs; more specifically, research has suggested input from users in all aspects of the development of PRO measures12–15 such as their design, the language used for the measures, and the reliability and validity of the measures.16 Training for users, including improving their ability to interpret and observe changes in PRO measure scores, has also been suggested.13,17
Clinicians’ perspectives on what is most important to them about this integration, how PROs fit into treatment plans, and how clinicians can maximize the use of PRO data are critical concerns. Such information will help add to the limited research on the factors that affect clinicians’ use of this EHR functionality. Moreover, the information can support a more comprehensive assessment of patients, thus helping to improve patient outcomes and care quality.
Previous research has used qualitative and quantitative methods to study the barriers and facilitators associated with adopting PROs in the EHR and the difficulties associated with the routine use and interpretation of PRO measures by clinicians.18–20 These studies have identified barriers to the adoption of PROs in the EHR that include the need for training and education of users, organizational policies, technical support, a clear benefit of PROs in clinical assessment, and the burden incurred by clinicians and patients when collecting PROs. Limitations of that work include convenience sampling of study participants, conflicting researcher perspectives, and reliance on a single institution or medical specialty. Other studies have identified several gaps in knowledge related to best practices for informing clinicians and patients about the value of PRO data, for identifying what is most important to users of PROs, for collecting PRO data, and for interpreting PRO data for optimal use and population health purposes.10,19,21,22 These gaps must be addressed for PROs to be successfully integrated in the EHR, to inform decision-making in the clinical setting, and to be used consistently by clinicians. Moreover, clinician buy-in, a factor that is important for the use and optimization of the EHR, must also be considered.23–26
The objective of this study was to determine the factors that influence clinicians’ and other health professionals’ decisions to adopt and use PROs within the EHR as a tool for assessing patient care. Insights gained from this study can help in the development of strategies to improve the successful adoption and use of PROs within the EHR, to mitigate barriers that impede adoption and use of the technology, and to establish guidelines for care delivery, quality, and patient-centered services.
MATERIALS AND METHODS
Study setting and participants
Two groups of participants with experience working with PROs within the EHR were recruited for this study. Purposive sampling was used to recruit participants for both groups. The first group of participants was recruited for data input to build the study instrument, called the Q set. These participants were recruited from teaching hospitals where PROs were implemented and used within the EHR. These participants (N = 18) worked as physicians (n = 7) or held one of the following roles (n = 1 each): physician/researcher; data manager; informatician; information technology manager; marketing manager; physical therapist; nurse manager; research scientist lead on PRO governance; clinical psychologist/researcher; quality manager; and PRO researcher.
The second group of participants was recruited from national listservs of organizations and workgroups. This group participated in the online Q sorting of the Q set. Consent for participation in the study was implied once participants submitted their responses for the Q sort. The study was approved by the Human Subjects Research Institutional Review Board of the University of Illinois Chicago.
Building the study instrument
Q methodology was the methodological approach used for this study because it supports a unique process for studying human subjectivity and for revealing perspectives and attitudes.27,28 Q methodology is a systematic approach used to study subjectivity whereby similarities and differences in opinions on a specific topic are determined.27–29 The methodology combines both qualitative and quantitative techniques and was developed in 1935 by William Stephenson.30–33 A full explanation is available elsewhere.29 The methodology’s application has been described in health informatics34 and in research to study subjectivity in healthcare.33,35–39 Table 1 provides the stages and definition of terms used in Q methodology. Figure 1 illustrates the methodology, highlighting its six stages: identification of a topic; building the Q set; building the P set; Q sorting; data analysis; and interpretation of the resulting factors.
Table 1.
Stages of Q methodology and a definition of terms
| Stages of Q methodology | Description |
|---|---|
| Concourse | Comprehensive list of statements that represent the discourse about the topic of interest as related to the research question |
| Q set | The final set of statements that will be sorted and ranked by study participants |
| P set | Participants who are knowledgeable about the topic of interest and will rank and sort the statements in the Q set |
| Q sort | The process where participants rank and sort statements from the Q set and arrange them in the Q grid based on their individual opinions |
| Q grid | A quasi-normal distribution grid that participants use to rank statements from the Q set based on what is most important and most unimportant to them when considering the adoption and use of PROs in the EHR |
EHR: electronic health record; PROs: patient-reported outcomes.
Figure 1.
Stages of Q methodology used in this study.
To build the concourse, data were collected through focus group and interview discussions and a review of literature specific to clinicians’ and other health professionals’ perspectives on PROs in the EHR.6,18,21,24,40–47 Focus group sessions were conducted face-to-face by the principal investigator and an assistant; interviews were conducted face-to-face or over the telephone by the principal investigator, who also conducted the literature review. Once the data were collected, all statements that identified perspectives about PROs in the EHR were gathered into a comprehensive list of statements called the concourse. Once the concourse was finalized, an iterative thematic process was undertaken by the principal investigator and 2 other researchers. The process categorized statements that conveyed the same or similar opinions about PROs in the EHR, using the dimensions of the 8-dimensional model48 as a guide. Once categorization was finalized and agreed upon by all 3 researchers, the statements within each category that encompassed the meaning of the other statements in that category were discussed, agreed upon, and selected for inclusion in the Q set. Demographic questions were added to the Q set to provide context to study results.
The questionnaire for the focus group and semistructured interviews was informed by the 8-dimensional socio-technical model of safe and effective EHR use.48 This 8-dimensional model was developed to study the design, development, use, implementation, and evaluation of new technology in dynamic environments such as healthcare settings.48 The dimensions of the model can address many of the factors within sociotechnical systems that either facilitate or hinder effective functioning. The multidimensional character of the model highlights the interdependence of its dimensions, which include: hardware and software; content; user interface; personnel; workflow and communication; organizational policies, procedures, and culture; external rules and regulations; and system measurement and monitoring (Figure 2).48
Figure 2.
Eight-dimensional socio-technical model of safe and effective electronic health record use (used with permission49).
To maintain rigor for the qualitative data collected, triangulation, member checking, and saturation were used.50 As the interviews progressed, we evaluated the data for concept saturation; if saturation had not been evident, additional interviews would have been pursued. Data saturation was reached when no new concepts were being introduced in the interviews. Triangulation entailed the collection of data from multiple focus group and interview discussions and from the literature. Member checking entailed the review of responses after data collection from focus group and interview sessions: the assistant or principal investigator read back the responses provided for each question after each session, allowing respondents to clarify or elaborate on their responses.
Data collection
Each study participant in the P set sorted and ranked the statements in the Q set (Q sorting) on a continuum ranging from −4 (most unimportant) to +4 (most important), using HTMLQ software for electronic and anonymous sorting.51 The sorted statements were placed in a Q sort grid.38 All completed Q sorts were electronically captured, fed into PQMethod, and subjected to factor analysis using the PQMethod software program for Windows (Version 2.35).52
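As an illustration of the Q sorting step, the sketch below shows one way a completed Q sort could be represented and checked against a forced quasi-normal grid before analysis. The nine-column distribution (2, 4, 6, 8, 9, 8, 6, 4, 2 statements across −4 to +4) is a hypothetical layout for a 49-statement Q set, not necessarily the grid used in this study, and the `validate_q_sort` helper is illustrative rather than part of HTMLQ or PQMethod.

```python
from collections import Counter

# Hypothetical forced distribution for a 49-statement Q grid (ranks -4 .. +4).
# Column counts are illustrative only; the study's exact grid may differ.
FORCED_DISTRIBUTION = {-4: 2, -3: 4, -2: 6, -1: 8, 0: 9, 1: 8, 2: 6, 3: 4, 4: 2}

def validate_q_sort(q_sort: dict[int, int]) -> bool:
    """Check that a completed Q sort (statement id -> rank) covers every
    statement and matches the forced quasi-normal distribution."""
    if len(q_sort) != sum(FORCED_DISTRIBUTION.values()):
        return False
    rank_counts = Counter(q_sort.values())
    return all(rank_counts.get(rank, 0) == n for rank, n in FORCED_DISTRIBUTION.items())

# Example: a made-up sort mapping statement numbers 1..49 to ranks that fill the grid.
example_sort, statement_id = {}, 1
for rank, n in FORCED_DISTRIBUTION.items():
    for _ in range(n):
        example_sort[statement_id] = rank
        statement_id += 1

print(validate_q_sort(example_sort))  # True for a grid-conforming sort
```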
Data analysis and interpretation
The input data were the digital sortings of the 49 statements in the Q set on a continuum from most unimportant (−4) to most important (+4). Using factor analysis, the Q sorts were analyzed to identify shared viewpoints (factors) about PROs in the EHR among participants. The analysis entailed the application of principal component analysis and factor rotation using varimax.29 A correlation matrix was calculated, and factors were retained using eigenvalues of two or more. Participants whose Q sorts expressed similar opinions were aligned mathematically on the same factor.
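The analysis itself was carried out in PQMethod. As a rough sketch of the steps described above (correlating Q sorts by person, extracting principal components, retaining factors with eigenvalues of at least two, and applying varimax rotation), the following Python code approximates the pipeline with NumPy on simulated placeholder data. It is not the PQMethod implementation and would not reproduce the study's factors; the `varimax` helper and the random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 49 statements (rows) x 50 participants (columns), ranks -4..+4.
q_sorts = rng.integers(-4, 5, size=(49, 50)).astype(float)

# 1. By-person correlation matrix: correlate every pair of participants' Q sorts.
corr = np.corrcoef(q_sorts, rowvar=False)          # shape (50, 50)

# 2. Principal components of the correlation matrix.
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# 3. Retain factors with eigenvalues of at least 2, the threshold used in the study.
keep = eigenvalues >= 2
loadings = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])

# 4. Varimax rotation of the retained loadings (standard iterative algorithm).
def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    n, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / n) * Lr @ np.diag(np.sum(Lr ** 2, axis=0)))
        )
        R = u @ vt
        d_old, d = d, np.sum(s)
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return L @ R

rotated_loadings = varimax(loadings)   # rows = participants, columns = retained factors
print(rotated_loadings.shape)
```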
RESULTS
Demographics
Fifty participants who received an email invitation to participate in the study completed a Q sort of 49 statements in the Q set. Figure 3 shows an idealized example of a Q sort. Table 2 shows the demographics of participants who completed the Q sorts.
Figure 3.
Representation of an idealized example of a Q sort as ranked by a participant. The numbers in the top row reflect the opinions of participants along the continuum, from most unimportant (−4), through zero (neutral), to most important (+4). The rows represent the number of statements allowed for each column’s value.
Table 2.
Participant demographics (N = 50)
| Demographic | n (%) |
|---|---|
| Gender (N = 45) | |
| Male | 23 (46) |
| Female | 21 (42) |
| Other | 1 (2) |
| Did not respond | 5 |
| Age (N = 32) | |
| 0–25 | 0 |
| 25–34 | 7 (14) |
| 35–44 | 7 (14) |
| 45–54 | 10 (20) |
| 55–64 | 3 (6) |
| 65 years or older | 5 (10) |
| Did not respond | 18 |
| Occupation (N = 46)a | |
| Clinical manager | 5 (10) |
| Informatician | 9 (18) |
| IT Professional | 3 (6) |
| Nurse | 3 (6) |
| Physician | 12 (24) |
| Physician/researcher | 10 (20) |
| Researcher | 11 (22) |
| Other (clinical analyst, clinical support, and psychologist) | 5 (10) |
| Work location (N = 44)b | |
| Hospital | 28 (56) |
| University | 29 (58) |
| Research Institute | 9 (18) |
| Other (ambulatory care) | 2 (4) |
| Mean number of years working with PROs (N = 39) | 5.03 ± 3.5 |
PROs: patient-reported outcomes.
aSome participants reported dual roles, such as physician/researcher.
bSome participants worked in multiple settings, such as hospitals and universities.
Factor analysis
Factor analysis and varimax rotation resulted in eight factors with eigenvalues of at least two. The factors, which represented similar groupings of statements, were defined by 27 of the 50 Q sorts (54%) and explained 61% of the study variance. The variance explained by each factor reflects the similarity among the individuals who aligned on it; the amount of variance is attributed to that similarity rather than to the number of individuals. For instance, a factor may explain only a small proportion of the variance when the few individuals who aligned on it are very similar. Factors emerge when a sufficient number of people define them; what matters is the shared opinion sets identified around an issue. Although all participants had an opinion about each statement, that opinion may not have been shared by others, and it is not uncommon for a factor to emerge from the shared opinions of only a few individuals.29 The overall 61% variance explained in this study is higher than that reported in other Q methodology studies, where overall study variance ranged from 21% to 53%.53–56 The remaining 23 Q sorts did not load significantly on any of the eight factors, and none of the Q sorts loaded significantly on more than one factor (no confounding Q sorts).
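For readers unfamiliar with how defining Q sorts are identified, Q-methodology texts commonly treat a loading as significant at P < .01 when it exceeds 2.58 times the standard error, approximated as 1 divided by the square root of the number of statements. The short calculation below applies that conventional heuristic to a 49-statement Q set; the paper does not report the exact threshold used in PQMethod, so this figure is illustrative only.

```python
import math

# Conventional Q-methodology heuristic: a Q sort loads significantly on a factor
# at P < .01 when its loading exceeds 2.58 x the standard error, where the
# standard error is approximated as 1 / sqrt(number of statements in the Q set).
n_statements = 49
threshold = 2.58 / math.sqrt(n_statements)
print(round(threshold, 2))  # ~0.37 for a 49-statement Q set
```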
Supplementary Appendix SA presents the ranking, by level of importance, of each statement within the eight factors. The most important statements were ranked +4 and the most unimportant were ranked −4. Results suggest that statement #25, The ability to review PRO item results and cumulative scores in a way meaningful to clinicians, was deemed most important (score +4) on six of the eight factors. Statement #49, The ability of clinicians to receive payment for use of PROs, was deemed most unimportant (score −4) on all factors but factor 2.
Description of factors
The eight factors that emerged from the analysis were described using results of the defining Q sorts, distinguishing statements, and postsort descriptions provided by participants. The overall sentiments suggested by participants supported patient care as shown in the following quotations:
patients need and deserve to have the greatest say in their health (researcher);
[PROs could] save time and be efficient in health tracking for both clinicians and patients (researcher), and
the point of healthcare…is to increase the quality of care [and] this should always be top priority (nurse).
More specifically, the viewpoints of the eight factors are depicted in Figure 4. Each factor is subsequently described, followed by quotations from participants and their roles.
Figure 4.
Factors that emerged from the Q sort analysis.
Factor 1: enable efficient and reliable use
Factor 1 highlighted the priority health professionals place on their ability to use PROs in the EHR efficiently to enrich and inform their care decisions. This factor was defined by five participants (Q sorts that were highly significant on the factor [P < .01]) and accounted for 11% of the total study variance. The viewpoints expressed on factor 1 suggest how important the value of PRO data is to patient care. As stated by participants:
It is critical to ensure PRO measures provide data that can advance care (Researcher/clinical manager)
change detection is a critical issue (Researcher).
Factor 2: enable care process improvement and accountability
Factor 2 highlighted the importance of quality improvement and the use of PROs in a manner that is useful for clinical care. Two participants defined factor 2, accounting for 5% of the study variance. According to participants, PROs in the EHR:
can be used in quality improvement… will improve patient outcomes over time (Physician/researcher)
are critical and beneficial if used effectively… must unburden providers and staff and make it easy to do the right thing (Physician)
[can] be used as one benchmark of care (Researcher).
Factor 3: enable effective and better symptom assessment
Factor 3 highlighted the value placed on PROs by clinicians as a tool for improving patient assessment. The factor was defined by two participants, accounting for 6% of the total variance. Comments provided by participants highlighted PROs in the EHR as a tool to identify critical PRO results through flagging and:
the patient as an important source of data …adding PRO data to the clinical assessment (Physician/researcher).
Factor 4: enable patient involvement for care quality
Factor 4 highlighted the importance of patients engaging in their care for improving care quality. The factor accounted for 9% of the study variance and was defined by six participants. Participants’ comments highlighted the ability of PROs to improve quality-of-care measurement and surveillance, which should be:
top priority [and] patients need and deserve to have the greatest say in their health (Researcher).
Factor 5: enable actionable and practical clinical decisions
The need to access and use PROs to inform clinical decisions at the point of care was highlighted by factor 5. Factor 5 accounted for 7% of the study variance and was defined by three participants. The factor suggests the importance of PROs for improving patient care and outcomes and, as stated by a participant, PROs are most important:
given the need for patient-centered care (Researcher).
Factor 6: enable graphical review and interpretation of results
Factor 6 highlighted the importance of presenting PRO data in a manner that is easily understood for use in clinical decision-making. Two participants defined factor 6, which accounted for 8% of the study variance. The viewpoints expressed in factor 6 suggest the need for easy access to and comprehension of PRO results. One participant remarked:
I want to know when someone may be deteriorating; maybe even before they realize it themselves (Physician).
Factor 7: enable use for holistic care planning to reflect patients’ needs
The importance of patients’ needs was suggested by factor 7, whereby care planning and patient outcomes were prioritized. Factor 7 accounted for 8% of the study variance and was defined by three participants. As indicated by study participants, PROs are most important:
when it is accessible and understood by patients (Clinical manager) [and]
if it is in the EHR (Informatician).
Factor 8: enable seamless use for all users
Factor 8 suggested the need for the efficient use of PROs so that their benefits could be maximized for all users. This factor accounted for 7% of the study variance and was defined by three participants. Participants suggested that the integration of PROs in the EHR would improve care efficiency through data collection for all users and access to reliable data at the point of care for clinicians. As explained by a participant, PROs in the EHR must be:
mandatory and data collection must be seamless (Physician).
Additionally, results of this study suggest the need for PRO data to guide the delivery of care, provide better patient care, and promote better patient outcomes. Results from the focus group and interview discussions identified the need for a standardized “language” among users for communicating when using PROs; indeed, the discussions suggested that study participants currently lack a common language for describing their opinions about PROs in the EHR.
DISCUSSION
Opinions regarding the value and use of PROs in the EHR are diverse, even among users within the same institution. The diversity of the eight factors identified demonstrates the complexity associated with adopting and using PROs in the EHR. The factors focused on issues that benefit patients and clinicians at the point of care and on the ability of the embedded functionality to support care processes. This diversity of opinions further suggests the need for comprehensive consideration for successful implementation of PROs within the EHR.
Study results can be interpreted within the context of a socio-technical model. Five of the eight dimensions of the socio-technical model of safe and effective EHR use48 aligned with the factors including people; workflow and communication; human user interface; hardware and software; and system measurement and monitoring. The three dimensions of the sociotechnical model that were not well represented within the factors included organizational policies and procedures; external rules, regulations, and pressures; and clinical content. Though these dimensions are equally important to the success of technology in healthcare systems, our findings suggest that study participants may be more concerned with the actual use of PROs in the EHR to affect patient care and less concerned with EHR issues that are outside of their personal control. The importance of these underreported dimensions in this study should not be discounted as other studies have identified them as an important influence; for example, clinical content is especially important for adopting PROs and has been identified as a barrier to the success of the technology.18,44 Further research that could provide more insight on this finding is needed.
Prioritizing PROs for use at the point of care and, more specifically, the need to access and interpret PRO-based data were considered important among study participants, as seen in factors 1 and 2. These factors suggest the importance of communication in healthcare between management and users and between software designers and users to ensure that the resource needs and requirements of users are met. Factor 3 was characterized by statements that emphasized the importance of using PROs to their full potential and integrating PRO-based metrics into the EHR. In factor 4, the potential use of PROs within the EHR to improve care quality was highlighted. This finding is consistent with the literature that emphasizes the use of PROs in the EHR to support quality improvement.21,57
Three of the eight factors suggest prioritizing a functioning, technical infrastructure that is user-friendly and meets the needs of users. Factor 5 described the importance of PROs in the EHR to enable actionable and practical clinical decisions; factor 6 focused on using PROs in the EHR to enable graphical review and interpretation of results; and factor 8 called for infrastructure to enable seamless use by all users. To address these factors, technology that is well designed, intuitive, accessible, and easy to use is required. Factor 7 described respondents’ viewpoints that PROs are important for optimizing patient care. Well-designed PROs that are easy to use best support patient-centered care by providing a holistic view of patients’ health for improving and planning their care needs. Health organizations should, therefore, consider investing in efforts that use PROs for improving care coordination, workflows, and for using PRO scores across disease specialties and population norms.
An underlying gap in knowledge of PROs was also apparent, highlighting the need for better education and training of users. It is, therefore, important to plan effective implementation strategies that include comprehensive education and training for all users. Moreover, clinicians will not be motivated to use the technology if they do not see a benefit to their work, such as improved workflow, communication, interoperability, and use of their time, or the value of PROs as an effective tool for patient assessment. According to the literature, individuals’ attitudes toward technological systems are influenced by their perceived ease of use and their opinion of how likely the technology is to improve their performance.58
Results of this study support some elements in previously published guidelines for implementing PROs in the EHR; however, while participants in our study were interested in how PROs in the EHR can best be used to improve clinical care, they did not rate as important other elements that previous research has highlighted when adopting PROs in the EHR, such as:
Compensation for clinicians who use PROs in the EHR24;
The use of “champions” to promote and support the adoption of the technology24,26;
The value of PROs in terms of clinician performance.59
The findings of this study have significant implications for nationwide deployment strategies of PROs in the EHR, particularly in clinical settings. Despite the importance of champions when adopting new technology,24 the level of importance of champions in facilitating PRO/EHR technology adoption remains unknown. Similarly, the level of importance of financial compensation for use of PROs in the EHR, among other considerations expressed by study participants, indicates the need for future research that focuses on resolving these differences, perhaps by utilizing different approaches to query user perspectives or by probing different populations of users. We would expect variation in the factors deemed most important or most unimportant among users who have different roles or different work settings from those in this study. Other key areas for advancing research on PROs in the EHR are: (1) how to improve the quality of PRO measures and surveillance; (2) the impact of patient involvement in health and treatment decision-making through PROs; and (3) methods for presenting PRO item results and cumulative scores in a way that is meaningful to clinicians. These results may inform organizational leaders of strategies that are necessary for the success of the technology in clinical settings for all users.
Study strengths
Strengths of this study include subjective insights from persons directly connected to the topic of interest, the assurance of participant anonymity, and personal accounts that allow us to gain insight into participants’ decision-making processes. Despite the limitations of online sorting in Q methodology, the method offers efficiency compared with manual sorting. The geographical diversity of participants also strengthens the results.
Study limitations
The data collection process presented some limitations. Although study participants varied in professional roles and locations, purposive sampling limits generalization of study results to the general population.29 A majority of the participants who completed the Q sort were physicians or physician/researchers, making the results physician-centric. Other limitations, as explained by Jurczyk,61 relate to online Q sorting, which may result in challenges such as a lack of direct communication between study participants and the principal investigator, limited technical skills among study participants, and the limited visual space available for sorting a large number of statements. Our findings do not represent the entire population of the United States but rather reflect a studied example of the population. Future work will extend the findings to other populations to determine how the opinion sets differ and whether new opinions emerge.
CONCLUSION
The diversity of viewpoints identified in this study suggests that there is no “one size fits all” strategy for ensuring the success of PROs in the EHR in clinical settings. Successful strategies will need to be tailored to specific organizational and practice characteristics. Such strategies may entail effective communication and collaboration among stakeholders such as leadership, users, patients, and technical staff. The preferences of clinical users are especially critical to the success of PROs in the EHR. Results from this study may serve as a catalyst for developing strategies that can bridge the gap between the limited use of PROs in the EHR, knowledge gaps in the literature, and the benefits of PROs to patients and clinicians alike. Additional research must focus on strategies that improve the standardization of PRO use among users, the use of PRO results in clinical settings, and how PROs impact health outcomes.
FUNDING
This work was supported by the National Institutes of Health, National Center for Advancing Translational Sciences (NCATS), grant U01TR001806. This work was part of the Electronic Health Record Access to Seamless Integration of PROMIS (EASIPRO) project.
AUTHOR CONTRIBUTIONS
SVB, PhD, MPH: Study conceptualization; data collection for building the concourse; iterative thematic analysis for building the Q set; assistance with building the P set; data analysis and interpretation of factor analysis; article writing, editing, and review. ALV, DrPH: Study conceptualization; assistance with Q methodology; iterative thematic analysis for building the Q set; assistance with building the P set; interpretation of factor data analysis; article writing, editing, and review. JS, MD, PhD: Content validity expert; assistance with building the P set; advice on data analysis; article review. JA, PhD: Article editing and review and guidance on qualitative data analysis. TN, AM, LSW: Technical advice on implementation of PROs in the EHR. KK, MD: Content validity expert; assistance with building the P set; advice on data analysis and article review. AH, PhD: Article review and advice on qualitative data analysis. BH, MD: Content advice on PROs in the EHR. AB, MD: Data collection for building the study concourse; iterative thematic analysis for building the Q set; assistance with building the P set; factor analysis using PQMethod; data interpretation of output from factor analysis; article writing, editing, and review.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
ACKNOWLEDGMENTS
We would like to extend special thanks to all those who made this work possible including the focus group and interview discussion participants; EASIPRO principal investigators and members; PRO-EPIC PROMIS members; the following AMIA Working Groups: Evaluation, Student, Clinical Information System working group, Primary Care Informatics, Consumer & Pervasive Health Informatics; individuals from Northwestern Memorial Hospital/Northwestern University; the University of Utah; and the University of Illinois Chicago.
CONFLICT OF INTEREST STATEMENT
None declared.
DATA AVAILABILITY STATEMENT
Data will be made available upon request for the factor analysis. The statements for the Q set will be shared for future Q sorts as well.
Contributor Information
Shirley V Burton, Department of Biomedical and Health Information Sciences, University of Illinois Chicago, Chicago, Illinois, USA.
Annette L Valenta, Department of Biomedical and Health Information Sciences, University of Illinois Chicago, Chicago, Illinois, USA.
Justin Starren, Department of Preventive Medicine, Northwestern University, Chicago, Illinois, USA.
Joanna Abraham, Department of Anesthesiology and Institute for Informatics, Washington University in St. Louis, St. Louis, Missouri, USA.
Therese Nelson, Department of Preventive Medicine, Northwestern University, Chicago, Illinois, USA.
Karl Kochendorfer, Department of Clinical Family Medicine, University of Illinois Chicago, Chicago, Illinois, USA.
Ashley Hughes, Department of Biomedical and Health Information Sciences, University of Illinois Chicago, Chicago, Illinois, USA.
Bhrandon Harris, Department of Family Medicine, University of Illinois Chicago, Chicago, Illinois, USA.
Andrew Boyd, Department of Biomedical and Health Information Sciences, University of Illinois Chicago, Chicago, Illinois, USA.
REFERENCES
- 1. Burstin H, Cobb K, McQueston K, et al. Measuring what matters to patients: Innovations in integrating the patient experience into development of meaningful performance measures. National Quality Forum [Internet]; 2017: 21.http://www.qualityforum.org/Publications/2017/08/Measuring_What_Matters_to_Patients__Innovations_in_Integrating_the_Patient_Experience_into_Development_of_Meaningful_Performance_Measures.aspx.
- 2. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med [Internet] 2003. Jun 26; 348 (26): 2635–45. http://www.nejm.org/doi/abs/10.1056/NEJMsa022615 Accessed April 13, 2019. [DOI] [PubMed] [Google Scholar]
- 3. Davis K, Schoenbaum SC, Audet A-M. A 2020 vision of patient-centered primary care. J Gen Intern Med [Internet] 2005; 20 (10): 953–7. http://link.springer.com/10.1111/j.1525-1497.2005.0178.x Accessed April 13, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Delbanco T, Gerteis M, Aronson MD, Park L. A patient-centered view of the clinician-patient relationship. Uptodate. 2012. https://www.uptodate.com Accessed March 2020.
- 5. Leslie HH, Hirschhorn LR, Marchant T, Doubova SV, Gureje O, Kruk ME. Health systems thinking: a new generation of research to improve healthcare quality. PLoS Med 2018; 15 (10): e1002682.https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002682 Accessed July 4, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6. Anatchkova M, Donelson SM, Skalicky AM, McHorney CA, Jagun D, Whiteley J. Exploring the implementation of patient-reported outcome measures in cancer care: need for more real-world evidence results in the peer reviewed literature. J Patient Rep Outcomes [Internet] 2018; 2 (1). https://jpro.springeropen.com/articles/10.1186/s41687-018-0091-0 Accessed April 16, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Frost MH, Reeve BB, Liepa AM, Stauffer JW, Hays RD. What is sufficient evidence for the reliability and validity of patient-reported outcome measures? Value Health [Internet] 2007; 10: S94–105. https://linkinghub.elsevier.com/retrieve/pii/S1098301510606341 [DOI] [PubMed] [Google Scholar]
- 8. Stehlik J, Rodriguez-Correa C, Spertus JA, et al. Implementation of real-time assessment of patient-reported outcomes in a heart failure clinic: a feasibility study. J Card Fail [Internet] 2017; 23 (11): 813–6. https://linkinghub.elsevier.com/retrieve/pii/S1071916417312186 Accessed September 30, 2020. [DOI] [PubMed] [Google Scholar]
- 9. Basch E, Barbera L, Kerrigan CL, Velikova G. Implementation of patient-reported outcomes in routine medical care. Am Soc Clin Oncol Educ Book [Internet] 2018; 38: 122–34. https://ascopubs.org/doi/full/10.1200/EDBK_200383 Accessed June 18, 2020. [DOI] [PubMed] [Google Scholar]
- 10. Lavallee DC, Chenok KE, Love RM, et al. Incorporating patient-reported outcomes into health care to engage patients and enhance care. Health Aff (Millwood) 2016; 35 (4): 575–82. http://www.healthaffairs.org/doi/10.1377/hlthaff.2015.1362 Accessed May 6, 2019. [DOI] [PubMed] [Google Scholar]
- 11. Snyder C, Wu AW. Users’ guide to integrating patient-reported outcomes in electronic health records. Baltimore, MD: Johns Hopkins University; 2017. Funded by Patient-Centered Outcomes Research Institute (PCORI); JHU Contract No. 10.01.14 TO2 08.01.15). http://www.pcori.org/document/users-guide-integrating-patient-reported-outcomes-electronic-health-records.
- 12. Basch E, Dueck AC, Rogak LJ, et al. Feasibility assessment of patient reporting of symptomatic adverse events in Multicenter Cancer Clinical Trials. JAMA Oncol 2017; 3 (8): 1043–50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13. Edbrooke-Childs J, Barry D, Rodriguez IM, Papageorgiou D, Wolpert M, Schulz J. Patient reported outcome measures in child and adolescent mental health services: associations between clinician demographic characteristics, attitudes and efficacy. Child Adolesc Ment Health [Internet] 2017; 22 (1): 36–41. http://doi.wiley.com/10.1111/camh.12189 Accessed April 23, 2019. [DOI] [PubMed] [Google Scholar]
- 14. Heinemann AW, Nitsch KP, Ehrlich-Jones L, et al. Effects of an implementation intervention to promote use of patient-reported outcome measures on clinicians’ perceptions of evidence-based practice, implementation leadership, and team functioning. J Contin Educ Health Prof [Internet] 2019; 39 (2): 103–11. https://journals.lww.com/jcehp/Fulltext/2019/03920/Effects_of_an_Implementation_Intervention_to.5.aspx Accessed June 24, 2020. [DOI] [PubMed] [Google Scholar]
- 15. Montgomery N, Bartlett SJ, Brundage MD, et al. Defining a patient-reported outcome measure (PROM) selection process: what criteria should be considered when choosing a PROM for routine symptom assessment in clinical practice? J Clin Oncol 2018; 36 (30_suppl): 187. https://ascopubs.org/doi/abs/10.1200/JCO.2018.36.30_suppl.187 Accessed June 24, 2020. [Google Scholar]
- 16. McKenna SP. Measuring patient-reported outcomes: moving beyond misplaced common sense to hard science. BMC Med 2011; 9 (1): 86. 10.1186/1741-7015-9-86 Accessed June 23, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17. Payne C, Michener LA. Physiotherapists use of and perspectives on the importance of patient-reported outcome measures for shoulder dysfunction. Shoulder Elbow 2014; 6 (3): 204–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. Biber J, Ose D, Reese J, et al. Patient reported outcomes – experiences with implementation in a University Health Care setting. J Patient Rep Outcomes [Internet] 2018; 2 (1). https://jpro.springeropen.com/articles/10.1186/s41687-018-0059-0 Accessed April 24, 2019 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19. Burr SK, Fowler JC, Allen JG, Wiltgen A, Madan A. Patient-reported outcomes in practice: clinicians’ perspectives from an inpatient psychiatric setting. J Psychiatr Pract 2017; 23 (5): 312–9. [DOI] [PubMed] [Google Scholar]
- 20. Harle CA, Lipori G, Hurley RW. Collecting, integrating, and disseminating patient-reported outcomes for research in a learning healthcare system. eGEMs 2016; 4 (1): 13.https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975567/ Accessed July 5, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21. Ahmed S, Ware P, Gardner W, et al. Montreal Accord on Patient-Reported Outcomes (PROs) use series – Paper 8: patient-reported outcomes in electronic health records can inform clinical and policy decisions. J Clin Epidemiol [Internet] 2017; 89: 160–7. https://linkinghub.elsevier.com/retrieve/pii/S0895435617304079 Accessed May 27, 2020 [DOI] [PubMed] [Google Scholar]
- 22. Bitton A, Onega T, Tosteson ANA, Haas JS. Toward a better understanding of patient-reported outcomes in clinical practice. Am J Manag Care [Internet] 2014; 20 (4): 281–3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4083494/ Accessed June 26, 2020 [PMC free article] [PubMed] [Google Scholar]
- 23.Three ways to achieve physicians’ EHR buy in [Internet]. https://www.ehrinpractice.com/achieve-physicians-ehr-buy-in.html Accessed June 5, 2021.
- 24. Zhang R, Burgess ER, Reddy MC, et al. Provider perspectives on the integration of patient-reported outcomes in an electronic health record. JAMIA Open [Internet] 2019; 2 (1): 73–80. https://academic.oup.com/jamiaopen/article/2/1/73/5362011 Accessed April 25, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25. Shah T, Kitts AB, Gold JA, et al. Electronic health record optimization and clinician well-being: a potential roadmap toward action. NAM Perspectives [Internet] 2020; https://nam.edu/electronic-health-record-optimization-and-clinician-well-being-a-potential-roadmap-toward-action/ Accessed June 6, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Rotenstein LS, Agarwal A, O’Neil K, et al. Implementing patient-reported outcome surveys as part of routine care: lessons from an academic radiation oncology department. J Am Med Inform Assoc [Internet] 2017; 24 (5): 964–8. http://academic.oup.com/jamia/article/24/5/964/3051752/Implementing-patientreported-outcome-surveys-as Accessed June 18, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Brown SR. A primer on Q methodology. Oper Sub 1993; 16 (3/4): 91–138. [Google Scholar]
- 28. Cross RM. Exploring attitudes: the case for Q methodology. Health Educ Res [Internet] 2004; 20 (2): 206–13. https://academic.oup.com/her/article-lookup/doi/10.1093/her/cyg121 April 23, 2020. [DOI] [PubMed] [Google Scholar]
- 29. Watts S, Stenner P. Doing Q Methodological Research: Theory, Method and Interpretation [Internet]. London: SAGE Publications Ltd; 2012. https://methods.sagepub.com/book/doing-q-methodological-research. [Google Scholar]
- 30. Newman I, Ramlo S. Using Q methodology and Q factor analysis in mixed methods research. In: SAGE Handbook of Mixed Methods in Social & Behavioral Research [Internet]. Thousand Oaks, CA: SAGE Publications, Inc.; 2010: 505–30. http://methods.sagepub.com/book/sage-handbook-of-mixed-methods-social-behavioral-research-2e/n20.xml Accessed May 5, 2019. [Google Scholar]
- 31. Ramlo S. Mixed method lessons learned from 80 years of Q methodology. J Mixed Methods Research [Internet] 2016; 10 (1): 28–45. http://journals.sagepub.com/doi/10.1177/1558689815610998 Accessed April 13, 2019. [Google Scholar]
- 32. Amin Z. Q methodology – a journey into the subjectivity of human mind. Singapore Med J 2000; 41 (8): 410–4. [PubMed] [Google Scholar]
- 33. Alderson S, Foy R, Bryant L, Ahmed S, House A. Using Q-methodology to guide the implementation of new healthcare policies. BMJ Qual Saf 2018; 27 (9): 737–42. https://qualitysafety.bmj.com/lookup/doi/10.1136/bmjqs-2017-007380 Accessed September 28, 2021. [DOI] [PubMed] [Google Scholar]
- 34. Valenta AL, Wigger U. Q-methodology: definition and application in health care informatics. J Am Med Inform Assoc [Internet] 1997; 4 (6): 501–10. https://academic.oup.com/jamia/article/4/6/501/787307 Accessed September 11, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35. Churruca K, Ludlow K, Wu W, et al. A scoping review of Q-methodology in healthcare research. BMC Med Res Methodol [Internet] 2021; 21 (1): 125.https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-021-01309-7 Accessed September 20, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Berghout M, van Exel J, Leensvaart L, Cramm JM. Healthcare professionals’ views on patient-centered care in hospitals. BMC Health Serv Res [Internet] 2015; 15 (1): 2020.http://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-015-1049-z Accessed May 27. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Prabakaran R, Seymour S, Moles DR, Cunningham SJ. Motivation for orthodontic treatment investigated with Q-methodology: patients’ and parents’ perspectives. Am J Orthod Dentofacial Orthop [Internet] 2012; 142 (2): 213–20. https://linkinghub.elsevier.com/retrieve/pii/S0889540612004179 Accessed May 28, 2020. [DOI] [PubMed] [Google Scholar]
- 38. Shabila NP, Al-Tawil NG, Al-Hadithi TS, Sondorp E. Using Q-methodology to explore people’s health seeking behavior and perception of the quality of primary care services. BMC Public Health [Internet] 2014; 14 (1): 2.http://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-14-2 Accessed May 27, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39. Witton RV. Identifying dentists’ attitudes towards prevention guidance using Q-sort methodology. Community Dent Health [Internet] 2015; (2): 72. 10.1922/CDH_3417Witton05 Accessed May 28, 2020. [DOI] [PubMed] [Google Scholar]
- 40. Berry DL, Nayak MM, Abrahm JL, Braun I, Rabin MS, Cooley ME. Clinician perspectives on symptom and quality of life experiences of patients during cancer therapies: implications for eHealth. Psychooncology 2017; 26 (8): 1113–9. [DOI] [PubMed] [Google Scholar]
- 41. Boyce MB, Browne JP, Greenhalgh J. The experiences of professionals with using information from patient-reported outcome measures to improve the quality of healthcare: a systematic review of qualitative research. BMJ Qual Saf [Internet] 2014; 23 (6): 508–18. http://qualitysafety.bmj.com/lookup/doi/10.1136/bmjqs-2013-002524 Accessed May 5, 2019. [DOI] [PubMed] [Google Scholar]
- 42. Calvert M, Kyte D, Price G, Valderas JM, Hjollund NH. Maximising the impact of patient reported outcome assessment for patients and society. BMJ [Internet] 2019; 364: k5267. http://www.bmj.com/lookup/doi/10.1136/bmj.k5267 Accessed October 11, 2019. [DOI] [PubMed] [Google Scholar]
- 43. Estabrooks PA, Boyle M, Emmons KM, et al. Harmonized patient-reported data elements in the electronic health record: supporting meaningful use by primary care action on health behaviors and key psychosocial factors. J Am Med Inform Assoc [Internet] 2012; 19 (4): 575–82. https://academic.oup.com/jamia/article-lookup/doi/10.1136/amiajnl-2011-000576 Accessed August 27, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44. Harle CA, Listhaus A, Covarrubias CM, et al. Overcoming barriers to implementing patient-reported outcomes in an electronic health record: a case report. J Am Med Inform Assoc [Internet] 2016; 23 (1): 74–9. https://academic.oup.com/jamia/article-lookup/doi/10.1093/jamia/ocv085 Accessed April 25, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45. Harle CA, Marlow NM, Schmidt SOF, et al. The effect of EHR-Integrated patient reported outcomes on satisfaction with chronic pain care. Am J Manag Care 2017; 22 (12): e403. [PMC free article] [PubMed] [Google Scholar]
- 46. Jagsi R, Chiang A, Polite BN, Medeiros BC, et al. Qualitative analysis of practicing oncologists’ attitudes and experiences regarding collection of patient-reported outcomes. J Oncol Pract 2013; 9 (6): e290–7. [DOI] [PubMed] [Google Scholar]
- 47. Desantis D, Baverstock RJ, Civitarese A, Crump RT, Carlson KV. A clinical perspective on electronically collecting patient-reported outcomes at the point-of-care for overactive bladder. Can Urol Assoc J [Internet] 2016; 10 (11–12): 359.http://www.cuaj.ca/index.php/journal/article/view/3757 Accessed April 23, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care [Internet] 2010; 19 (Suppl 3): i68–74. http://qualitysafety.bmj.com/lookup/doi/10.1136/qshc.2010.042085 Accessed June 28, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49. Menon S, Smith MW, Sittig DF, et al. How context affects electronic health record-based test result follow-up: a mixed-methods evaluation. BMJ Open [Internet] 2014; 4 (11): e005985. http://bmjopen.bmj.com/lookup/doi/10.1136/bmjopen-2014-005985 Accessed June 28, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50. Pope C, Mays N, eds. Qualitative Research in Health Care. 3rd ed. Malden, MA: Blackwell Pub./BMJ Books; 2006: 156. [Google Scholar]
- 51.HTMLQ [Internet]. https://github.com/aproxima/htmlq Accessed September 29, 2020.
- 52.PQMethod Download for Windows [Internet]. 2014. http://schmolck.org/qmethod/downpqwin.htm Accessed September 30, 2021.
- 53. Venkatesh V, Sykes TA, Zhang X. “Just What the Doctor Ordered”: A Revised UTAUT for EMR System Adoption and Use by Doctors. In: 2011 44th Hawaii International Conference on System Sciences; Koloa, Kauai, HI: IEEE; 2011:1–10.
- 54. Qurtas DS, Shabila NP. Using Q-methodology to understand the perspectives and practical experiences of dermatologists about treatment difficulties of cutaneous leishmaniasis. BMC Infect Dis 2020; 20 (1): 645. 10.1186/s12879-020-05365-0 Accessed September 30, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55. Ladan MA, Wharrad H, Windle R. eHealth adoption and use among healthcare professionals in a tertiary hospital in Sub-Saharan Africa: a Qmethodology study. PeerJ [Internet] 2019; 7: e6326. https://peerj.com/articles/6326 Accessed September 30, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56. Huang S-F, Huang C-M, Chen S-F, Lu L-T, Guo J-L. New partnerships among single older adults: a Q methodology study. BMC Geriatr 2019; 19 (1): 74. 10.1186/s12877-019-1091-5 Accessed September 30, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57. Javid SH, Lawrence SO, Lavallee DC. Prioritizing patient-reported outcomes in breast cancer surgery quality improvement. Breast J 2017; 23 (2): 127–37. https://onlinelibrary.wiley.com/doi/abs/10.1111/tbj.12707 Accessed November 8, 2020. [DOI] [PubMed] [Google Scholar]
- 58. Davis FD, Bagozzi RP, Warshaw PR. User acceptance of computer technology: a comparison of two theoretical models. Manage Sci [Internet] 1989; 35 (8): 982–1003. https://pubsonline.informs.org/doi/abs/10.1287/mnsc.35.8.982 Accessed September 23, 2020. [Google Scholar]
- 59. Van Der Wees PJ, Nijhuis-Van Der Sanden MWG, Ayanian JZ, Black N, Westert GP, Schneider EC. Integrating the use of patient-reported outcomes for both clinical practice and performance measurement: views of experts from 3 countries. Milbank Q 2014; 92 (4): 754–75. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60. Hsiao C-J, Dymek C, Kim B, Russell B. Advancing the use of patient-reported outcomes in practice: understanding challenges, opportunities, and the potential of health information technology. Qual Life Res [Internet] 2019; 28 (6): 1575–83. http://link.springer.com/10.1007/s11136-019-02112-0 Accessed April 23, 2020. [DOI] [PubMed] [Google Scholar]
- 61. Jurczyk J. Using a web-based interface to collect data for Q Methodology studies. [Paper presented] 19th Annual Conference of the International Society for the Scientific Study of Subjectivity, Canton, Ohio; 2003.