PLOS One. 2020 Dec 28;15(12):e0244546. doi: 10.1371/journal.pone.0244546

Understanding the use of patient-reported data by health care insurers: A scoping review

Anne Neubert 1,2,#, Óscar Brito Fernandes 3,4,*,#, Armin Lucevic 3,4, Milena Pavlova 5, László Gulácsi 3, Petra Baji 3, Niek Klazinga 4, Dionne Kringos 4
Editor: Mathieu F Janssen
PMCID: PMC7769438  PMID: 33370405

Abstract

Background

Patient-reported data are widely used for many purposes by different actors within a health system. However, little is known about the use of such data by health insurers. Our study aims to map the evidence on the use of patient-reported data by health insurers, to explore how the collected patient-reported data are utilized, and to elucidate why health insurers collect patient-reported data.

Methods

We conducted a scoping review. In total, 11 databases were searched. Relevant grey literature was identified through online searches, reference mining, and recommendations from experts. Forty-two documents were included. We synthesized the evidence on the uses of patient-reported data by insurers following a structure-process-outcome approach; we also mapped the use and function of those data by health insurers.

Results

Health insurers use patient-reported data for the assurance and improvement of quality of care and for value-based health care. The patient-reported data most often collected are outcome, experience, and satisfaction measures; structure indicators are used to a lesser extent and are often combined with process indicators. These data are mainly used for the procurement and purchasing of services; quality assurance, improvement, and reporting; and strengthening the involvement of insured people.

Conclusions

The breadth to which insurers use patient-reported data in their business models varies greatly. Factors hindering the uptake of such data include the varying and overlapping terminology in use in the field and the limited involvement of insured people in a health insurer’s business. Health insurers are advised to be more explicit about the role they want to play within the health system and society at large, and to accommodate the implications for the use of patient-reported data accordingly.

Introduction

In recent years, there has been an increased focus among policy makers, health insurers, and care providers on maximizing value and reducing waste in healthcare. In this regard, two central concepts have emerged: quality of care (QoC) and value-based healthcare (VBHC). QoC emphasizes the importance of care delivery that is compliant with the best possible standards, takes into consideration the cultures in a society, and is aligned with healthcare service users’ needs, expectations, and preferences [1–3]. Nowadays, it is commonplace to associate VBHC with care quality; although value is a key component of quality, it is not yet the mainstream approach to measuring it. The VBHC agenda, similarly to that of QoC, puts forward patients’ values regarding health and care outcomes, stressing their involvement in decision-making processes [4]. The construct of patient-centeredness emerges as a sub-dimension of those two concepts (QoC and VBHC) [5]. However, the inclusion of a people-centered perspective in VBHC is not without tensions, as VBHC is a concept derived from management theories, with a clear conceptual focus on costs [6]. Hence, there can be a tension between a health care insurer’s business model oriented to optimizing value for individual patients/insured people and one oriented to optimizing the health of a population, such as the group of individuals that pay their premiums for the insurance. To strengthen people-centeredness and strive towards QoC and VBHC, health system stakeholders (e.g. health care insurers and care providers) should commit to the value agenda, supported by intelligence on healthcare system users’ needs, expectations, and preferences [7–9]. Hence, patient-reported data have become crucial to gaining insight into people’s voices and to informing the decisions of those key stakeholders.

The most commonly collected patient-reported data are those related to outcomes and experiences of care. Patient-reported outcome measures (PROMs) can be used either to measure the outcome of a specific disease or to assess the general health status of a person, and they are commonly used by clinicians and hospitals [10]. Other uses include drug reimbursement schemes [4,11,12] and health technology assessment [13]. On the other hand, patient-reported experience measures (PREMs) refer to a person’s experiences while interacting with the healthcare system (e.g. to receive care) [9]. Research and policy discussions on PROMs and PREMs have predominantly focused on the use of patient-reported data by healthcare providers to improve clinical practice [14–16]. For example, the work of the International Consortium for Health Outcomes Measurement (ICHOM) has contributed to setting international standards for the outcome measures that matter most to patients across various diseases [17]. In parallel, the Organisation for Economic Co-operation and Development (OECD) is promoting the PaRIS project [18,19], which focuses on indicator surveys that capture PROMs and PREMs of people with breast cancer, hip and knee surgery, or mental health problems, as well as on the development of new tools for people with multiple chronic conditions treated in primary care settings. However, less is known about the use of patient-reported data by health insurers in supporting people-centeredness for QoC and VBHC [20,21]. An investigation of this issue is opportune given the evolving role of insurers across health systems. Health insurers are no longer solely focused on cost containment and cost-effectiveness, but also on adequate health service design and planning to improve the health of the (insured) population [22]. Hence, research on the use of patient-reported data by health insurers can help to determine to what extent health insurers respond to insurees’ needs and preferences [8,20].

Our study aims to: 1) map the evidence on the use of patient-reported data by health insurers; 2) explore how patient-reported data are utilized; and 3) elucidate why patient-reported data are collected by health insurers.

Methods

We conducted a scoping review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) [23] (S1 File). The procedures for conducting the scoping review were disseminated across the research team for feedback and improvement. To enhance the quality of the methodology used, we based the review on the stepwise methodological framework suggested by Arksey and O'Malley (2005) [24], while also taking into account the recommendations of other authors [25,26].

Search for relevant studies

We performed a two-tier search: systematic and non-systematic. The search criteria were discussed among the researchers before the start of and during the search period, as suggested by Levac et al. (2010) [26]. The systematic search was conducted between May 21st and May 26th, 2019. In total, 11 databases were searched: PubMed, Embase, Health Systems Evidence, NICE, JSTOR, Emerald, Wiley, Cochrane Library, PDQ-Evidence, NIHR Journals Library, and EBSCO/Health Business Elite. The following search terms were used as subject headings or free-text words, including synonyms and closely related words: (“health insurer,” or “health insurance,” or “private health plans,” or “medical care insurance”) and (“patient-reported data,” or “consumer reported data,” or “PREMS,” or “PROMS,” or “consumer satisfaction,” or “consumer preferences,” or “consumer feedback”). The choice of search terms was based on an initial quick literature scan and discussion among the researchers. During the initial exploratory searches, we observed that the terminology on patient-reported data varied widely. Thus, we informed our search strategy with key terms used in systematic reviews on patient-reported measures (e.g. [27–29]). We limited the search to keywords found in the title and abstract to minimize the number of off-topic hits, which would otherwise have been unmanageable. The search strategy was adapted to each database and can be found as supplemental information (S2 File). We included all peer-reviewed study types written in English, German, or Dutch. The search was not time bounded except for the JSTOR database, where an unrestricted timeframe produced an unmanageable number of off-topic hits; we therefore limited that search to the year 2000 onward, when documents of potential relevance to the screening process started to emerge.
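For illustration, the sketch below shows how the title/abstract-restricted Boolean query described above could be assembled programmatically. The PubMed-style field tag ([tiab]) and the helper function are assumptions for illustration only; the database-specific strategies actually used are provided in S2 File.

```python
# Illustrative sketch (not the authors' tooling): assembles the Boolean query
# described in the text, restricted to title/abstract fields in PubMed syntax.

insurer_terms = [
    "health insurer", "health insurance",
    "private health plans", "medical care insurance",
]
data_terms = [
    "patient-reported data", "consumer reported data", "PREMS", "PROMS",
    "consumer satisfaction", "consumer preferences", "consumer feedback",
]

def or_block(terms, field="tiab"):
    """Join terms with OR, restricting each to the title/abstract field."""
    return "(" + " OR ".join(f'"{t}"[{field}]' for t in terms) + ")"

query = f"{or_block(insurer_terms)} AND {or_block(data_terms)}"
print(query)
```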

For the non-systematic search, we included relevant grey literature such as webpages of insurers and third-party reports (S3 File). This search was performed between May 8th and June 18th, 2019. The documents retrieved were identified through online searches, reference mining, and recommendations from experts. The latter refers to contacts we established via email (e.g. health insurers, insurance associations/federations, consultancy firms, or patient advocates) to direct us towards potential documents that could complement the internet search. In total, 23 emails were sent to institutions that we believed could bring clarity or provide further information beyond what we had read on their webpages; nine answers were received by August 18th, 2019. A reminder was sent for all unanswered emails; 14 emails remained unanswered and we closed further contacts by the end of August 2019.

Selection of studies

After removing duplicates, the selection of studies was performed independently and blindly by Anne Neubert (AN), Óscar Brito Fernandes (OBF), and Armin Lucevic (AL) using the open-source application Rayyan [30]. Prior to the screening, the reviewers grounded the eligibility criteria in two principles: 1) to exclude studies that did not highlight patient-reported data (e.g. PREMs or PROMs) and their use by health insurers, or information on how insurers respond to patients’ expectations, needs, and preferences; and 2) to exclude studies where the setting was not a high-income country, based on the assumption that health insurance systems in developing countries might differ greatly from those of developed countries (e.g. in the extent to which patient-reported data are employed), thus limiting the generalizability of the study’s findings. The reviewers also agreed that if two researchers agreed on inclusion (exclusion) of a document, the document would be included (excluded) for full-text reading. In cases where all three researchers had divergent opinions, the researcher who classified the document as maybe (an option in Rayyan) made a blinded final decision on inclusion or exclusion, without prior information on the arguments supporting the decisions of the other two researchers. If needed, discussions with non-scoring researchers were allowed. All three researchers first screened the publications by title and abstract/executive summary. The full-text review that followed was performed by AN and OBF; AL acted as tiebreaker when needed.
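As a minimal sketch of the screening rule described above (agreement of two of the three reviewers decides; otherwise the reviewer who voted ‘maybe’ makes the final call), the function below is purely illustrative; the actual screening was performed in Rayyan, and the names and data structure here are assumptions.

```python
# Hypothetical sketch of the two-of-three screening rule; not the Rayyan workflow itself.
from collections import Counter

def screening_decision(votes):
    """votes maps reviewer initials to 'include', 'exclude', or 'maybe'."""
    counts = Counter(votes.values())
    for decision in ("include", "exclude"):
        if counts[decision] >= 2:          # two reviewers agree -> decision stands
            return decision
    # All three diverge: the reviewer who voted 'maybe' decides, blinded to the others' arguments.
    return "final decision by the 'maybe' reviewer"

print(screening_decision({"AN": "include", "OBF": "include", "AL": "maybe"}))   # -> include
print(screening_decision({"AN": "include", "OBF": "exclude", "AL": "maybe"}))   # -> tiebreak
```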

Data charting and analysis

All documents retained for analysis were subject to content extraction into a data charting form and synthesis of the following information: author(s) and year of publication, study setting (country), a brief description of the content, the indicator(s) highlighted in the study, and the use and function of those indicators by health insurers. We organized the list of indicators following Donabedian's healthcare quality model (structure–process–outcome) [1,31], given how widespread and familiar this model is across health systems and their stakeholders. This choice also facilitated a first attempt at organizing scattered information about the purposes and uses of patient-reported data by health insurers. Data mapped under structure highlight measures regarding the context and setting wherein care is delivered, and data under process highlight the interactions between a person and providers throughout the care trajectory; data under outcomes were organized as clinical measures (referring to the diagnosis, treatment, and monitoring of a person) and patient-reported measures (PROMs, PREMs, and satisfaction measures). The focus of our work is on patient-reported data, but by using structure, process, and clinical measures as ancillary indicators, we expected to gain a better understanding of how patient-reported measures are used (standalone or combined with other measures). The charting form was agreed upon by the research team; AN and AL independently identified the relevant information in the studies to populate the table (S4 File).
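To make the structure of the charting form concrete, the sketch below models one entry organized along the structure–process–outcome categories described above. Field names are illustrative assumptions; the populated charting form itself is provided as S4 File.

```python
# Illustrative data structure for one charting-form entry (field names are assumptions).
from dataclasses import dataclass, field

@dataclass
class ChartingEntry:
    authors: str
    year: int
    setting: str                                   # country
    description: str                               # brief description of the content
    structure_indicators: list = field(default_factory=list)
    process_indicators: list = field(default_factory=list)
    clinical_outcomes: list = field(default_factory=list)
    patient_reported_outcomes: list = field(default_factory=list)  # PROMs, PREMs, satisfaction
    use_and_function: list = field(default_factory=list)           # e.g. purchasing, quality reporting

# Example entry based on a document included in the review (content abbreviated).
entry = ChartingEntry(
    authors="Dohmen and van Raaij", year=2019, setting="Netherlands",
    description="Best-value procurement pilot by a Dutch health insurer",
    patient_reported_outcomes=["satisfaction measures"],
    use_and_function=["procurement and purchasing of healthcare services"],
)
```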

Study validity and reliability

To improve the validity of the review we considered two types of triangulation: tier triangulation (related to the researchers) and data triangulation. To support the former, the research team maintained an open discussion and iterative approach across all phases of the study. In addition, our review triangulated data accessed from different sources [32,33] and all searches and data analysis were thoroughly documented [34]. Given that the data collected differed greatly in breadth and depth, we followed the suggestion of Silverman (2009) [25] of synthesizing evidence with the support of a table to enhance the reliability of the review.

Results

The systematic search initially generated 2986 articles and the non-systematic search 158 documents, including grey literature and email correspondence (Fig 1). After the screening process, 42 documents were considered eligible for inclusion: 15 retrieved from the systematic search and 27 from the non-systematic search. From the latter group, 17 documents were classified as grey literature.

Fig 1. PRISMA chart of document selection.


Characteristics of the documents

The documents included in the study covered the period from 1996 to mid-2019 (Table 1). The majority were written in English (n = 30), followed by German (n = 11) and Dutch (n = 1). More than a quarter of the documents portrayed the situation in Germany [35–46], followed by the USA [47–57] and the Netherlands [58–68]. Six documents discussed a multiple-country setting [69–74], one highlighted the UK context [75], and one document had no specific country attached [76]. Among the journal articles, 13 documents were quantitative studies and six were qualitative; seven articles were classified as non-empirical, such as reports, commentaries, and summaries.

Table 1. Characteristics of the documents included in the study (N = 42).

Characteristic N %
Year of publication
Prior to 2005 1 2%
2005–2009 7 17%
2010–2014 17 40%
2015–2019 17 40%
Language
English 30 71%
German 11 26%
Dutch 1 2%
Setting
Germany 12 29%
The Netherlands 11 26%
USA 11 26%
United Kingdom 1 2%
Multiple 6 14%
Non-specific 1 2%
Type of indicator
Structure 6 14%
Process 10 24%
Outcome 37 88%
Clinical 14 33%
Patient-reported outcome measure 27 64%
Patient-reported experience measure 12 29%
Patient satisfaction measures 9 21%
Non-specific 3 7%
Use and function of indicators
Procurement and purchasing of healthcare services 17 40%
Quality reporting 11 26%
Involvement of insured people 8 19%
Performance assessment of providers 6 14%
Profiling 6 14%
Quality assurance and improvement 4 10%
Product/Program development 3 7%

The percentages in year of publication, language, and setting have been rounded and may not total to 100% (rounding error).

What kind of data do insurers use?

The use of PROMs was the most widespread across the documents [27,35,37,38,42–44,47–52,54–57,59,62,63,65,67,68,72–76], relative to PREMs [35,36,38,39,41,44,46,50,52,57,59,60,71] or satisfaction measures [35,37,38,41,44,50,58,61,66]. Generic measures of patient satisfaction were often complemented with specific PREMs or PROMs [35,37,38,41,44,50]. Often, PROMs were employed in combination with clinical indicators or with patient-reported outcome-based performance measures [38,49,52,54,62,67,73,76]. The use of structure indicators by health insurers, such as the availability of specific disease programs or the existence of quality assurance certification, was less frequent [38,42,52,53,67,70], and these indicators were often used in combination with process indicators [38,42,52,53,67]. On the other hand, process indicators [35,37,38,42,52–54,66,67,71] and clinical outcome indicators [38,41,42,49,52,54,58,62,64,66,67,71,73,76] were frequently mentioned.

How do insurers use patient-reported data?

Based on the uses and functions of patient-reported data in the selected documents, we identified 17 documents (40%) discussing the procurement and purchasing of healthcare services [42,43,51–53,56,57,60,64–69,71,74,76]. Quality reporting was highlighted in 11 documents (26%) [35–37,39,41,44,46,50,60,70,76], and four more (10%) focused on quality assurance and improvement [42,63,72,75]. Other key uses and functions were strengthening the involvement of insured people [38,55,60,63,72–75], measuring the performance of providers [38,41,59,60,71,76], profiling [40,47,48,54,58,62], and the development of products/programs [45,49,61].

Procurement and purchasing of healthcare

The use of PREMs in the context of procurement and purchasing of healthcare services was discussed in Delnoij et al. (2010) [60], Cashin et al. (2014) [71], and Damberg et al. (2014) [52], whereas the use of satisfaction measures was discussed in Dohmen and van Raaij (2019) [66]. The use of PROMs was most frequently discussed [42,43,51,52,56,57,65,67,68,74,76], and only Damberg et al. (2014) [52], Klakow-Franck (2014) [42], and Moes et al. (2019) [67] discussed the broader use of structure, process, and outcome indicators in procurement and purchasing processes.

Selective contracting was discussed in five documents (12%) [60,67–69,74]. In general, selective contracting refers to a contractual agreement between a health insurer and a provider, whereby the former selects those providers that meet certain QoC expectations. The inclusion of QoC indicators is highly dependent on the availability of data; hence, the most common data used in these contracts are based on volume and costs [66,69], and only recently have some contracts incorporated PROMs (and, to a lesser extent, PREMs) [68,74]. The use of structure, process, and outcome indicators for the purpose of selective contracting was discussed in Moes et al. (2019) [67]. A pitfall, however, relates to the varying terminology used for selective contracting. The term ‘selective contracting’ was mainly deployed in the Dutch literature [60,67–69,74]. Other terms were ‘outcome-based purchasing’ [64], ‘quality contracting’ (predominant in the German literature) [42,43], and ‘value-based purchasing’ or ‘payments’ (predominant within the US literature) [51–53,56,57].

One of the main objectives of selective contracting was value-based purchasing or value-based payment programs (VBP). Nevertheless, improvement of QoC at large was also an objective, with a special focus on dimensions such as effectiveness, efficiency, and safety. Conversely, patient-centeredness was not one of the major areas to strive for and was commonly discussed as appropriateness of care (e.g. reduction of overuse and underuse of care) [68,69]. In general, different patient-reported data were required for selective contracting [60,67–69,74] and pay-for-performance programs (P4P) [56,71]. The former required data that enabled comparisons across providers, so as to contract those performing best; the latter required data that enabled health insurers to compare the performance of a provider with a predetermined target, norm, or past performance [60].

Quality assurance, improvement, and reporting

The focus on quality assurance, improvement, and reporting was frequent among the documents, with the highest frequency for quality reporting of provider performance (mainly of inpatient services, but lately also of outpatient services). Two perspectives on reporting the QoC of providers emerged: 1) as an ancillary instrument to inform the decisions of insurees; and 2) as a means of supporting and enhancing quality improvement via the benchmarking of providers [70,76]. Terminology about quality reporting also varied across the documents: ‘public reporting’ [70], ‘hospital ranking’ [36], ‘quality reports’ [35], ‘doctor assessment portals’ [41,46], or ‘performance comparison’ [76]. Different tools were discussed for quality reporting, such as the Dutch Consumer Quality Index [60] and the German Patients’ Experience Questionnaire [41]. PREMs were often used for quality reporting [35,36,39,41,44,46,50,60], as were PROMs [35,37,44,50,76]. The use of structure and process measures for quality reporting was featured far less, relative to PROMs and PREMs [35,37,70].

Prediction models

Our findings suggested that health insurers use prediction modelling to forecast and profile enrollees who are likely to incur high medical costs [40,47,48,54,58,62]. The documents often applied the term ‘self-reported data’ when referring to health behavior, healthcare utilization, morbidity, and health status data, which were often combined with claims data. For example, Fleishman et al. (2006) [48] and Hornbrook and Goodman (1996) [47] used PROMs in their profiling studies, namely the RAND-36 and SF-12.
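To illustrate the kind of prediction modelling described above, the sketch below combines self-reported health summary scores (in the spirit of the SF-12) with prior claims costs to flag enrollees likely to incur high costs. The data are synthetic, and the feature names, threshold, and model choice are assumptions made for illustration; they are not the models used in the cited studies.

```python
# Illustrative sketch only: profiling likely high-cost enrollees from self-reported
# health status and prior claims, on synthetic data (not the cited studies' models).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
physical_score = rng.normal(50, 10, n)        # SF-12-style physical health summary (self-reported)
mental_score = rng.normal(50, 10, n)          # SF-12-style mental health summary (self-reported)
prior_claims = rng.gamma(2.0, 1500.0, n)      # prior-year claims costs (synthetic)

X = np.column_stack([physical_score, mental_score, prior_claims])
# Synthetic label: high cost is more likely with poorer self-reported health and higher prior claims.
risk = 0.04 * (50 - physical_score) + 0.02 * (50 - mental_score) + prior_claims / 5000
y = (risk + rng.normal(0, 1, n) > 2).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_enrollee = [[38.0, 45.0, 9000.0]]         # poor physical health, high prior claims
print("Predicted probability of high cost:", round(model.predict_proba(new_enrollee)[0, 1], 2))
```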

Other purposes

Alongside the uses of patient-reported data by health insurers reported so far, other uses were identified, such as the involvement of insured people in decision-making (n = 8; 19%) and the development of products/programs (n = 3; 7%). The first stresses the role of a health insurer in research, by granting access to data (e.g. claims data) [72] and in the development of novel PREMs and PROMs that are both fit for purpose and fit for use [55,63,75]; the second relates to the role of an insurer in the development or co-creation of healthcare projects that incorporate the use of patient-reported data, such as those portrayed by Nickel et al. (2010) [38] and Franklin et al. (2017) [55].

Why do insurers use patient-reported data?

Quality of care

The focus of most uses of patient-reported data was QoC at large or a particular dimension of QoC, such as effectiveness, efficiency, access, patient-centeredness, equity, or safety [2,3]. Effectiveness was discussed mostly in relation to cost-effectiveness [38,45,47,50,53,55] and effectiveness of care [49,51,54,58,66,70,76]. Efficiency was a dominant topic, with a focus on economic efficiency, cost-efficiency, allocative efficiency, and technical efficiency [38,43,52,56,69,76]. Some sources used efficiency in relation to the efficient targeting of patients with high healthcare needs [54].

Patient-centeredness was often mentioned in relation to the appropriateness of an intervention or service [52,68,69], interventions that are centered around the patient [49], and as a goal of using PROMs [56]. In addition, patient safety was discussed, alongside effectiveness, as key to selective contracting and to measuring the quality of a provider. Other authors employed the term ‘patient safety’ to judge the performance and quality of health services for diverse purposes [42,57,67], or to refer to requirements for treatments to safeguard patients’ safety [64].

Access was often discussed in relation to equity [40] and the accessibility of healthcare for people with disabilities [43]. Equity was the dimension least mentioned in the documents retrieved.

Value-based healthcare

VBHC was mentioned as an important reason for employing patient-reported data. One approach viewed VBHC as the value of a service for a patient, whereas a second approach focused on purchasing or payment methods. Value-based payments [54–57,66] and value-based purchasing [52,53] were described in some of the documents. For example, Dohmen and van Raaij (2019) [66] showed how Zilveren Kruis, a Dutch health insurer, was piloting a method (best-value procurement) for purchasing services from providers that does not focus solely on volume and cost. Similarly, Squitieri et al. (2017) [56] explored how to integrate PROMs into value-based payment reforms to measure the performance of service providers from a patient’s perspective.

Discussion

In this study, we looked at the what, how, and why of health insurers’ use of patient-reported data. Our findings indicate that the patient-reported data most often collected by health insurers are PROMs, followed by PREMs and satisfaction measures. These data are mainly used for the procurement and purchasing of services; quality assurance, improvement, and reporting; and strengthening the involvement of insured people. Health insurers use patient-reported data for the assurance and improvement of QoC and for VBHC.

The findings of our study suggest that the use of patient-reported data by health insurers is common and centered on PROMs, often combined with clinical outcomes or process measures. PREMs, albeit used to a lesser extent, were also depicted in the documents analysed. These data are central to supporting health insurers in the procurement and purchasing of services (including the practice of selective contracting) and in quality assurance, improvement, and reporting, with the purpose of supporting QoC improvement and VBHC. However, the breadth to which insurers use such data in their business models varies greatly. Several factors may hinder the use of patient-reported data on a larger scale. First, timely data collection on the insurer’s side, including of patient-reported data, requires the ability of an insurer to invest in a robust health information system, which could reduce fragmentation of data flows between the insurer and care providers [77]. Second, the culture of an insurer, as well as the organization’s corporate values, may influence the role of the insurer in a healthcare system (and in society at large) and the perceived usefulness of patient-reported data as key to informing business practices and decision-making [78,79]. Third, contextual factors, such as country-specific legislation, data protection regulation, the organization of the healthcare system, and market competition, may influence the diffusion of the use of patient-reported data across a health insurer’s business. As suggested by Klose et al. (2016) [80] and Brito Fernandes et al. (2020) [8], knowledge about patients’ needs, preferences, and experiences could help organizations such as health insurers to develop and optimize a patient-centered approach.

Selective contracting and P4P programs that use patient-reported data such as PROMs and PREMs are still underdeveloped, despite some initiatives. For example, insurers in the Netherlands are being encouraged by governmental regulation to assume the role of active purchasers [81]. If health insurers enhance their procurement and purchasing practices in relation to QoC, health systems could evolve from demand-driven to quality-driven purchasing, as well as from performance-based towards quality-rewarding payments. This would entail that purchasers change from passive funders of care into active promoters of QoC, who base the financing of healthcare services on good quality and on what is of value to (insured) people and communities [60,73,74].

In relation to quality assurance, improvement, and reporting, we found that health insurers have a growing role in driving the performance of care providers. This entails giving insurees the possibility of choosing providers based on quality-related information. This may influence the decision-making of an insured person when selecting a care provider and, to a limited extent, influence the QoC provided [81]. However, patients often do not rely on quality reporting to support their decision-making, partly because they perceive these initiatives as driven not by quality concerns but by political interests [82]. Hence, health insurers should further commit to involving insurees in initiatives that develop and report on measures that resonate with insurees. Further, health insurers should not only concentrate on reporting the quality of providers, but also align incentives that support the investigation of the root causes of poor quality at the provider level [82].

Our findings highlight large heterogeneity in the terminology used in the literature. This was also identified by Desomer et al. (2018) [83]. The extent to which it may have hindered a clearer picture of how health insurers use patient-reported data remains unanswered. This wide variation in the conceptualization of PROMs and PREMs could suggest that these measures are not yet optimized to fully address the wide scope of information needs across actors [84]. In addition, methodological challenges (e.g. fitness for risk adjustment or a people-centered approach to developing such measures) add another layer of complexity to the conceptualization of such data. This context also creates the opportunity for new measures to arise, such as patient-reported outcome-based performance measures [51,56] and preference-based PREMs [8].

Strengths and limitations

The main strength of our study is its design, which enabled us to find literature that is highly scattered and unstructured. The findings of our review should be interpreted in light of some limitations. The heterogeneity of terminology, the use of a non-systematic search component, and the language restriction may have introduced bias. To mitigate possible effects, the search strategy was informed by (but not limited to) the terminology used in other systematic reviews related to patient-reported data; we also assessed the extent to which documents retrieved via the non-systematic search were aligned with those retrieved via the systematic search. Limiting our search to high-income countries may also have introduced bias. On the one hand, we did not consider studies from low- and middle-income countries because the use of patient-reported data in such contexts is still limited [85]; on the other hand, we acknowledge that even in high-income countries, the extent to which patient-reported data are used may vary greatly, depending on the role and involvement of a health insurer in the health system. Finally, given the study design, the generalizability of results is limited; for example, contextual factors (e.g. the organization and digitalization of the healthcare system) vary greatly, and the extent to which these affect the uses and applications of patient-reported data is unknown.

Conclusions

The breadth to which insurers use patient-reported data in their business models varies greatly across countries. Health insurers are actively using patient-reported data to enhance QoC and VBHC, predominantly through the procurement and purchasing of healthcare; quality assurance, improvement, and reporting; and the involvement of insured people. However, our study highlights three key aspects that hinder a more robust use of such data in a health insurer’s business. First, the insurers’ use of patient-reported data is affected by large technological and methodological heterogeneity, which inhibits the transferability of innovative and effective initiatives across contexts. Second, the terminology of constructs used by the many stakeholders with whom an insurer interacts varies widely. Third, the involvement of insured people by insurers in the development of patient-reported measures and in decision-making regarding a health insurer’s strategy and practices is still limited. To overcome these hindering factors, health insurers are advised to be more explicit about the role they want to play within the health system and society at large. In addition, health insurers should have a clear scope for the use and actionability of patient-reported measures, and should further involve insurees to the extent that is feasible and deemed necessary. For many years now, there has been a general consensus among healthcare providers and professionals on the need for greater involvement and engagement of people in decision-making towards a more people-centered health system. Despite significant advances, we still fall short of that cornerstone. The extent to which lessons learned by health systems could be used, and known obstacles overcome, by health insurers remains overlooked and deserves further research.

Supporting information

S1 File. Preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews (PRISMA-ScR) checklist.

(DOCX)

S2 File. Full search strategy for each database.

(PDF)

S3 File. Overview of the organizations whose websites were consulted during the non-systematic search.

(PDF)

S4 File. Charting form of the documents retrieved in the systematic and non-systematic search following a structure, process, outcome organization.

(XLSX)

Acknowledgments

An initial version of this research was presented in the master’s dissertation of AN at Maastricht University.

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

The participation of AL, DK, LG, NK, OBF and PB occurred within a Marie Skłodowska-Curie Innovative Training Network (HealthPros – Healthcare Performance Intelligence Professionals) that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement Nr. 765141 (https://healthpros-h2020.eu). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Donabedian A. The seven pillars of quality. Archives of pathology & laboratory medicine. 1990;114(11):1115–8. [PubMed] [Google Scholar]
  • 2.World Health Organization. Quality of care: a process for making strategic choices in health systems. 2006. [Google Scholar]
  • 3.Institute of Medicine Committee on Quality of Health Care in America. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press; 2001. [Google Scholar]
  • 4.Gentry S, Badrinath P. Defining health in the era of value-based care: lessons from England of relevance to other health systems. Cureus. 2017;9(3):e1079 10.7759/cureus.1079 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.International Alliance of Patients' Organizations. Declaration on patient-centred healthcare. 2006. [Google Scholar]
  • 6.Tseng E, Hicksm L. Value based care and patient-centered care: Divergent or complementary? Curr Hematol Malig Rep. 2016;11(4):303–10. 10.1007/s11899-016-0333-2 [DOI] [PubMed] [Google Scholar]
  • 7.Fujisawa R, Klazinga NS. Measuring patient experiences (PREMS): progress made by the OECD and its member countries between 2006 and 2016. 2017. [Google Scholar]
  • 8.Brito Fernandes Ó, Péntek M, Kringos D, Klazinga N, Gulácsi L, Baji P. Eliciting preferences for outpatient care experiences in Hungary: A discrete choice experiment with a national representative sample. PLOS ONE. 2020;15(7):e0235165 10.1371/journal.pone.0235165 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Brito Fernandes Ó, Baji P, Kringos D, Klazinga N, Gulácsi L, Lucevic A, et al. Patient experiences with outpatient care in Hungary: results of an online population survey. The European Journal of Health Economics. 2019;20(1):79–90. 10.1007/s10198-019-01064-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Black N. Patient reported outcome measures could help transform healthcare. BMJ. 2013;346(346):f167 10.1136/bmj.f167 [DOI] [PubMed] [Google Scholar]
  • 11.Morgan S. Summaries of national drug coverage and pharmaceutical pricing policies in 10 countries: Australia, Canada, France, Germany, the Netherlands, New Zealand, Norway, Sweden, Switzerland and the UK: University of British Columbia, 2016. [Google Scholar]
  • 12.Barnieh L, Manns B, Harris A, Blom M, Donaldson C, Klarenbach S, et al. A synthesis of drug reimbursement decision-making processes in Organisation for Economic Co-operation and Development countries. Value Health. 2014;17(1):98–108. 10.1016/j.jval.2013.10.008 [DOI] [PubMed] [Google Scholar]
  • 13.Facey KM, Hansen HP, Single AN, editors. Patient involvement in health technology assessment. Singapore: Springer; 2017. [Google Scholar]
  • 14.Slawomirski L, van den Berg M, Karmakar-Hore S. Patient-reported indicator survey (PaRIS): aligning practice and policy for better health outcomes. World Medical Journal. 2018;64(3):8–14. [Google Scholar]
  • 15.Slawomirski L, van den Berg M. Harnessing the voice of the patient from the ward to the boardroom. World Hospitals and Health Services Journal. 2018;54(3). [Google Scholar]
  • 16.Gurría A, Porter M. Putting people at the centre of health care: HuffPost; 2017. [updated Jan 19, 2018]. Available from: https://www.huffpost.com/entry/putting-people-at-the-cen_b_14247824. [Google Scholar]
  • 17.International Consortium for Health Outcomes Measurement. Standard sets [cited 2020 Feb 20]. Available from: https://www.ichom.org/standard-sets.
  • 18.Organisation for Economic Co-operation and Development. Patient-reported indicators surveys (PaRIS) [cited 2020 Feb 19]. Available from: https://www.oecd.org/health/paris.htm.
  • 19.Organisation for Economic Co-operation and Development. Measuring what matters: the patient-reported indicator surveys [cited 2020 Feb 19]. Available from: http://www.oecd.org/health/health-systems/Measuring-what-matters-the-Patient-Reported-Indicator-Surveys.pdf.
  • 20.Colombo F, Tapay N. Private health insurances in OECD countries: the benefits and costs for individuals and health systems. OECD, 2004. [Google Scholar]
  • 21.Hostetter M, Klein S. Using patient-reported outcomes to improve health care quality: The Commonwealth Fund; 2012. [cited 2020 Feb 21]. Available from: https://www.commonwealthfund.org/publications/newsletter-article/using-patient-reported-outcomes-improve-health-care-quality. [Google Scholar]
  • 22.Silvello A. How connected insurance is reshaping the health insurance industry: IOS Press; 2018. [PubMed] [Google Scholar]
  • 23.Tricco A, Lillie E, Zarin W, O’Brien K, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73. 10.7326/M18-0850 [DOI] [PubMed] [Google Scholar]
  • 24.Arksey H, O'Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005;8(1):19–32. [Google Scholar]
  • 25.Silverman D. Doing qualitative research. London: SAGE Publications; 2009. [Google Scholar]
  • 26.Levac D, Colquhoun H, O'Brien K. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):69 10.1186/1748-5908-5-69 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Hendrikx J, de Jonge MJ, Fransen J, Kievit W, van Riel PL. Systematic review of patient-reported outcome measures (PROMs) for assessing disease activity in rheumatoid arthritis. RMD Open. 2016;2(2):e000202 10.1136/rmdopen-2015-000202 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Gleeson H, Calderon A, Swami V, Deighton J, Wolpert M, Edbrooke-Childs J. Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ open. 2016;6(8):e011907 10.1136/bmjopen-2016-011907 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Bastemeijer C, Boosman H, van Ewijk H, Verweij L, Voogt L, Hazelzet J. Patient experiences: a systematic review of quality improvement interventions in a hospital setting. Patient Relat Outcome Meas. 2019;10:157–69. 10.2147/PROM.S201737 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5:210 10.1186/s13643-016-0384-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Donabedian A. Evaluating the quality of medical care. The Milbank Memorial Fund Quarterly. 1965;44(3):166–206. [PubMed] [Google Scholar]
  • 32.Finfgeld-Connett D. Generalizability and transferability of meta-synthesis research findings. J Adv Nurs. 2010;66(2):246–54. 10.1111/j.1365-2648.2009.05250.x [DOI] [PubMed] [Google Scholar]
  • 33.Leung L. Validity, reliability, and generalizability in qualitative research. J Family Med Prim Care. 2015;4(3):324–7. 10.4103/2249-4863.161306 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Peters MDJ, Godfrey C, McInerney P, Soares CB, Khalil H, Parker D. Chapter 11: scoping reviews In: Aromataris E, Munn Z, editors. Joanna Briggs Institute Reviewer's Manual: The Joanna Briggs Institute; 2017. [Google Scholar]
  • 35.Ose D, Grande G, Badura B, Greiner W. Patienteninformation zur bewertung von gesundheitseinrichtungen. Prävention und Gesundheitsförderung. 2008;3:152–62. [Google Scholar]
  • 36.Adrian A-M. AOK Arzt Navigator: empfehlung für den besten freund. Arzt & Wirtschaft; 2010. [Google Scholar]
  • 37.Bitzer E, Grobe T, Neusser S, Schneider A, Dörning H, Schwarzt F. BARMER GEK Report Krankenhaus 2010. Schwäbisch Gmünd: IGES, 2010. [Google Scholar]
  • 38.Nickel S, Thiedemann B, von den Knesebeck O. The effects of integrated inpatient health care on patient satisfaction and health-related quality of life: results of a survey among heart disease patients in Germany. Health Policy. 2010;98(2–3):156–63. 10.1016/j.healthpol.2010.06.012 [DOI] [PubMed] [Google Scholar]
  • 39.Arzt-Bewertung im internet [Internet]. 2010.
  • 40.Hofmann A, Browne M. One-sided commitment in dynamic insurance contracts: evidence from private health insurance in Germany. Journal of Risk and Uncertainty. 2013;46(1):81–112. 10.1007/s11166-012-9160-6 [DOI] [Google Scholar]
  • 41.Rupprecht CJ, Schulte C. Patienten und versicherten eine stimme geben. Forschungsjournal Soziale Bewegungen. 2013;26(2):149–51. [Google Scholar]
  • 42.Klakow-Franck R. Perspektive: rolle der qualitatsmessung aus sicht des gemeinsamen bundesausschusses. Z Evid Fortbild Qual Gesundhwes. 2014;108(8–9):456–64. 10.1016/j.zefq.2014.10.008 [DOI] [PubMed] [Google Scholar]
  • 43.IQTIG. Qualitätsverträge nach § 110a SGB V Evaluationskonzept zur Untersuchung der Entwicklung der Versorgungsqualität gemäß § 136b Abs. 8 SGB V. Berlin: Institut für Qualitätssicherung und Tranparenz im Gesundheitswesen, 2017. [Google Scholar]
  • 44.AOK, BARMER, KKH. Methodendokument versichertenbefragung mit dem Patients‘ Experience Questionnaire (PEQ). Berlin: Weisse Liste, 2018. [Google Scholar]
  • 45.Scholz S, Beißel A, Heidl C, Zerth J. Vom Payer zum Player. KU Gesundheitsmanagement. 2018;7:40–2. [Google Scholar]
  • 46.AOK. Online-Arztsuche neuer Qualität. In: Liste W, editor. Berlin: Weisse Liste; 2020. [Google Scholar]
  • 47.Hornbrook MC, Goodman MJ. Chronic disease, functional health status, and demographics: a multidimensional approach to risk adjustment. Health Services Research. 1996;31(3):283–307. [PMC free article] [PubMed] [Google Scholar]
  • 48.Fleishman J, Cohen J, Manning W, Kosinski M. Using the SF-12 health status measure to improve predictions of medical expenditures. Medical Care. 2006;44(5 Suppl.):I54–I63. 10.1097/01.mlr.0000208141.02083.86 [DOI] [PubMed] [Google Scholar]
  • 49.Green AW, Foels TJ. Improving asthma management one health plans experience. The American Journal of Managed Care. 2007;13(8):482–5. [PubMed] [Google Scholar]
  • 50.Elbel B, Schlesinger M. Responsive consumerism: empowerment in markets for health plans. Milbank Q. 2009;87(3):633–82. 10.1111/j.1468-0009.2009.00574.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.National Quality Forum. Patient reported outcomes (PROs) in performance measurement. Washington, DC: National Quality Forum, 2013. [Google Scholar]
  • 52.Damberg C, Sorbero M, Lovejoy S, Martsolf G, Raaen L, Mandel D. Measuring success in health care value-based purchasing programs. Summary and recommendations. RAND, 2014. [PMC free article] [PubMed] [Google Scholar]
  • 53.Ryan AM, Tompkins CP. Efficiency and value in healthcare: linking cost and quality measures Washington, D.C.: National Quality Forum, 2014. [Google Scholar]
  • 54.Cunningham PJ. Predicting high-cost privately insured patients based on self-reported health and utilization data. Am J Manag Care. 2017;23(7):e215–e22. [PubMed] [Google Scholar]
  • 55.Franklin P, Chenok K, Lavalee D, Love R, Paxton L, Segal C, et al. Framework to guide the collection and use of patient-reported outcome measures in the learning healthcare system. Generating Evidence & Methods to improve patient outcomes. 2017;5(1):1–17. 10.5334/egems.227 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Squitieri L, Bozic K, Pusic A. The role of patient-reported outcome measures in value-based payment reform. Value Health. 2017;20(6):834–6. 10.1016/j.jval.2017.02.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Safran DG, Higgins A. Getting to the next generation of performance measures for value-based payments. Bethesda, Maryland: Health Affairs; 2019 [cited 2019 Jul 14]. [Google Scholar]
  • 58.van den Berg B, van Dommelen P, Stam P, Laske-Aldershof T, Buchmueller T, Schut F. Preferences and choices for care and health insurance. Soc Sci Med. 2008;66(12):2448–59. 10.1016/j.socscimed.2008.02.021 [DOI] [PubMed] [Google Scholar]
  • 59.Damman O, Stubbe J, Hendriks M, Arah O, Spreeuwenberg P, Delnoij D, et al. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care. Medical Care. 2009;47(4):496–503. 10.1097/MLR.0b013e31818afa05 [DOI] [PubMed] [Google Scholar]
  • 60.Delnoij D, Rademakers J, Groenewegen P. The Dutch consumer quality index: an example of stakeholder involvement in indicator development. BMC Health Serv Res. 2010;10(88). 10.1186/1472-6963-10-88 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Wendel S, de Jong J, Curfs E. Consumer evaluation of complaint handling in the Dutch health insurance market. BMC Health Serv Res. 2011;11 10.1186/1472-6963-11-11 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Roos A-F, Schut FT. Spillover effects of supplementary on basic health insurance: evidence from the Netherland. The European Journal of Health Economics. 2012;13(1):51–62. 10.1007/s10198-010-0279-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Ministrie van Volksgezondheid Welzijn en Sport. Outcome based healthcare 2018–2022. 2018. [Google Scholar]
  • 64.van Veghel D, Schulz D, van Straten A, Simmers T, Lenssen A, Kuijten-Slegers L, et al. Health insurance outcome-based purchasing: the case of hospital contracting for cardiac interventions in the Netherlands. International Journal of Healthcare Management. 2018;11(4):371–8. 10.1080/20479700.2018.1458177 [DOI] [Google Scholar]
  • 65.Verkerk E, Verbiest M, Dulmen S, Wees P, Terwee C, Beurskens S, et al. PROM-cycle eight steps to select and implement PROMs for healthcare settings. Zorginstituut; Nederlands, 2018. [Google Scholar]
  • 66.Dohmen PJG, van Raaij EM. A new approach to preferred provider selection in health care. Health Policy. 2019;123(3):300–5. 10.1016/j.healthpol.2018.09.007 [DOI] [PubMed] [Google Scholar]
  • 67.Moes FB, Houwaart ES, Delnoij DMJ, Horstman K. "Strangers in the ER": quality indicators and third party interference in Dutch emergency care. J Eval Clin Pract. 2019;25(3):390–7. 10.1111/jep.12900 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.IQhealthcare. Wat zijn PROs en PROMs: IQ Scientific Center for Quality of Healthcare; 2019 [cited 2019 July 8, 2019]. Available from: http://iqprom.nl/wat-zijn-pros-en-proms.
  • 69.Figueras J, Robinson R, Jakubowski E, editors. Purchasing to improve health systems performance. Berkshire: Open University Press; 2005. [Google Scholar]
  • 70.Cacace M. Public reporting on the quality of healthcare providers: international experience and prospects. 2012. [Google Scholar]
  • 71.Cashin C, Chi Y, Smith P, Borowitz M, Thomson S, editors. Paying for performance in health care: implications for health system performance and accountability. Berkshire: Open University Press; 2014. [Google Scholar]
  • 72.Dreyer NA, Rodriguez AM. The fast route to evidence development for value in healthcare. Curr Med Res Opin. 2016;32(10):1697–700. 10.1080/03007995.2016.1203768 [DOI] [PubMed] [Google Scholar]
  • 73.Wiering B, de Boer D, Delnoij D. Asking what matters: the relevance and use of patient-reported outcome measures that were developed without patient involvement. Health Expect. 2017;20(6):1330–41. 10.1111/hex.12573 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Wiering B, de Boer D, Delnoij D. Patient involvement in the development of patient-reported outcome measures: a scoping review. Health Expect. 2017;20(1):11–23. 10.1111/hex.12442 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Vallance-Owen A. My working day: Andrew Vallance-Owen. J R Soc Med. 2011;104(10):429–31. 10.1258/jrsm.2011.11k034 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Smith PC, Street AD. On the uses of routine patient-reported health outcome data. Health Econ. 2013;22(2):119–31. 10.1002/hec.2793 [DOI] [PubMed] [Google Scholar]
  • 77.Health Metrics Network. Framework and standards for country health information systems. Geneva: 2012.
  • 78.PricewaterhouseCoopers. Insurance 2020 and beyond: Creating a winning culture. 2016. [Google Scholar]
  • 79.Gallì G, Hagh C, Hammar P. Digital and cultural transformation in the insurance industry: Five leadership challenges and their solutions. 2017. [Google Scholar]
  • 80.Klose K, Kreimeier S, Tangermann U, Aumann I, Damm K, Group RHO. Patient- and person-reports on healthcare: preferences, outcomes, experiences, and satisfaction. An essay. Health Econ Rev. 2016;6(1):18 10.1186/s13561-016-0094-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.McNamara P. Purchaser strategies to influence quality of care: from rhetoric to global applications. Qual Saf Health Care. 2006;15(3):171–3. 10.1136/qshc.2005.014373 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Greenhalgh J, Dalkin S, Gibbons E, Wright J, Valderas J, Meads D, et al. How do aggregated patient-reported outcome measures data stimulate health care improvement? A realist synthesis. J Health Serv Res Policy. 2018;23:57–65. 10.1177/1355819617740925 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Desomer A, van den Heede K, Triemstra M, Paget J, de Boer D, Kohn L, et al. Use of patient-reported outcome and experience measures in patient care and policy. Belgian Health Care Knowledge Centre, 2018. [Google Scholar]
  • 84.Cole A, Cubi-Molla P, Pollard J, Sim D, Sullivan R, Sussex J, et al. Making outcome-based payment a reality in the NHS. Office of Health Economics, RAND Europe and King's College London, 2019. [Google Scholar]
  • 85.Akachi Y, Kruk M. Quality of care: Measuring a neglected driver of improved health. Bulletin of the World Health Organization. 2017;95(6). 10.2471/BLT.16.180190 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Mathieu F Janssen

7 Sep 2020

PONE-D-20-09776

Why, what and how do health insurers use patient-reported data? Results of a scoping review

PLOS ONE

Dear Dr. Brito Fernandes,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The main issue is that the frameworks used should be described better and, especially, applied more thoroughly throughout the manuscript. Also address inconsistencies in the scoping review methodologies used. Furthermore, focus on a critical discussion of the VBHC concept, address differences among health insurance systems in high-income countries, provide definitions or at least descriptions according to the reviewers’ comments (e.g. on clinical quality, selective contracting), consider revising the title (I am also not a native English speaker but it feels a bit odd, should “what” perhaps be “which”?) Please do not describe or discuss new literature/evidence from other studies in the results section but either in the methods or discussion (or introduction if suitable). Also consider to include only a selection of the main table in the main manuscript and include the remainder/full table in an appendix. Finally I am not entirely sure on the aspects included in the results section for “how”, as e.g. selective contracting is more of a purpose instead of exactly describing how patient-reported data are used, and ensure that the order of the different topics of what, how and why are always the same throughout the manuscript (e.g. in the title, how they are mentioned in the introduction, subsections in results and discussion).

Please submit your revised manuscript by Oct 22 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Mathieu F. Janssen, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf


[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

Reviewer #3: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors conducted a scoping review in order to map the evidence on how insurers use patient-reported data. In doing so, the investigators highlighted patient-centeredness with respect to quality of care and value-based healthcare. The investigators specifically sought to understand why and how these data are collected and utilized. The review was conducted using the PRISMA extension for scoping reviews, and the analysis was framed using a value chain concept.

The scoping review was conducted in a rigorous and transparent manner. The manuscript had a logical flow and the investigators explained, and provided citations for, the basis/framework for each step of the process. The manuscript was well-written, too. As a result, I do not perceive any areas of weakness with regard to the design or implementation of the manuscript.

Although this may be outside of the scope of the manuscript, I was most interested in the potential “next steps” of this work based on the manuscript’s limitations. One area that might merit further thought is how stakeholders could implement a more consistent terminology of constructs. Furthermore, did the investigators think about broad differences in the way that citizens residing in different countries might have conceptualized the role of patient-reported measures and decision-making based on their nation’s health system and prevailing ideology regarding health? Finally, what role within the health system and in society should a health insurer play, and is that role different in different nations? While these questions have no easy answers, the investigative team is well-equipped to provide their opinions.

Reviewer #2: The manuscript entitled “Why, what and how do health insurers use patient-reported data? Results of a scoping review” is well written and highlights an important topic of current interest, so far not studied to any extent. Given the breadth and variety of the topic, a scoping review seems like the best choice of method. The review process is well described and seems to be conducted in a sound way.

The title asks “why, what and how”. I understand why and how, and I guess “what do health insurers use patient-reported data” means which patient-reported data are being used, though I am not proficient enough in English to know whether that is a grammatically correct expression. If “what” refers to something else, maybe it could be further explained?

Though QoC is by now a concept widely used globally, and the use of indicators of structure, process and outcome commonplace in evaluations, VBHC is more of a management trend (in some countries more than others), following earlier trends like TQM and lean production, and as such is also subject to criticism that is not acknowledged in the manuscript.

The decision to exclude studies where the setting was not a high-income country, based on the assumption that health insurance systems in developing countries might differ greatly from those of developed countries, seems fair. However, health insurance systems and their involvement in healthcare systems also differ greatly among high-income countries, which is not acknowledged by the authors. Though I guess it is out of the scope of the present study, it would be interesting to study differences in the use of patient-reported outcomes across health insurance systems with different levels of involvement in healthcare systems.

In the section “Data charting and analysis” it is described that indicators are organized in a traditional structure-process-outcome approach. For those not familiar with QoC and Donabedian, this might need an explanation, or at least a reference, maybe [1]?

In Table 1, I do not seem to find an explanation for the star* in PREM*? Furthermore, the definition of clinical quality is not stated. I am well aware of the problems with categorizing in this field, and reading Table 1 makes me wonder whether all patient-reported data in the column PROM are actually reported as outcomes. I suspect some to be reported more as “patient characteristics”. “Health status” is a very generic term. I guess that the terms included in the table are the actual terms used in the papers (this could be described), but if they are the authors’ interpretations, they should be defined/explained in more detail. Moreover, I wonder if only PROMs, PREMs and patient satisfaction are patient-reported? I guess that though the title implies that only patient-reported data are included in the study, the data regarding structure and process are mainly not patient-reported, but rather supportive data for your study. However, this is not very clearly stated.

Some specifics: In row 1, it is stated “RAND-36 survey, which included information on…”. I suspect that is not true; RAND-36 is the original, free version of the SF-36, and does not include self-reported morbidity etc.

The concept of selective contracting needs a short description, I think many readers are not familiar with the concept.

I find your Discussion and Conclusion very apprehensive. The recommendation “In addition, health insurers should have a clear scope of the use of patient-reported measures” made me reflect on the fact that for so many years healthcare professionals have failed to accomplish just that. Could health insurers achieve what health professionals could not, or will it be even more difficult for health insurers…

Reviewer #3: General comment:

In general, the topic of the article is well picked and scientifically relevant. The use of a scoping review is suitable for the aim of the article. The research question was addressed in the methods and results sections.

However, there are some inconsistencies in the methods and results, so questions arise regarding external validity and the pictured landscape of health insurers’ use of patient-reported data. In my opinion, the concerns could be solved by clarification/specification of the explanations given. The main criticism concerns the 13-page table within the main text. This should be resolved and adjusted to support the understanding and focus of the article. Also, the frameworks used (structure-process-outcome as well as value chain) should not only be described better, but also be applied more within the text.

In detail feedback:

Methods:

Scoping review methodology: The article stated that it followed the Arksey and O’Malley (2005) stepwise framework. In the next section, the authors referred to the framework of Levac et al. (2010) and later on to Silverman (2009), which seems a little inconsistent. Levac et al. (2010), for instance, recommended and suggested an update of the Arksey and O’Malley framework. The question arises why different methodologies were used, or why Levac alone was not the entire foundation of the methodology. It would probably be enough to frame this differently, saying that Arksey and O’Malley was the basis but that recommendations of other authors were taken into account as well.

Structure-process-outcome approach: no explanation of the approach was included within the methods section (even though it was described as ‘traditional’ and used as the indicators for the research question). There was also no explanation of the rationale behind it, its application in data extraction (e.g. why outcomes are differentiated into four subtypes and others are not, and whether these are suitable), or its suitability for the research question itself. However, it was referred to throughout the results section (good!); therefore, an introduction/explanation beforehand is needed.

The search strategy for NIHR Journals Library, PDQ-Evidence, EBSCO/Health Business Elite and Cochrane Library is not attached within the supplementary file. This should be added per the PLOS ONE requirements. Moreover, NICE is stated within the supplementary file, but not listed in the text for the systematic search.

Row 132-135: The rationale for expert interviews with a Hungarian health insurer is unclear. Is this adaptable to other settings? Why, then, was the Hungarian language not included? Is asking only one insurance company not quite biased?

Time limit for JSTOR: why was a time limit for JSTOR applied, but not for the other databases? Argumentation is flawed here.

Focus on PROMs and PREMs: in the introduction it is stated that a specific focus is on PROMs and PREMs. This was reflected by using these terms specifically in the search strategy for PubMed and Embase. However, the focus was mainly seen in the introduction; later on, the results showed a wider perspective. I would recommend reframing it. Otherwise, questions arise about possible bias, the rationale, and the differentiation from other methods.

Search terms: Search terms were mainly limited to Title/Abstract. The authors argued in the limitation section that MeSH terms/indexed words were not available for their specific topic. The question arises why the limitation to Title/Abstract was then applied rather than mitigating this obstacle (especially because it is a scoping review). Furthermore, the limitation of MeSH terms is generally obvious due to the time gap between new topics and the indexing process, so research in general should not rely entirely on them.

Results:

The presented table covers 13 pages in total in the main text. In my opinion, this is far too long for a scientific journal article and does not benefit the understanding of the article. I would highly recommend moving the table to the supplementary files and adding a table with aggregated results within the section.

Row 200: why are mail correspondences included?

Row 208: The period of published years is stated correctly. However, it is a bit misleading, as the search strategy stopped in mid 2019, so this year could not be entirely included.

Row 226: “Often” does not match with only one source mentioned. Moreover, the result does not match up with the results table, as only the patient satisfaction cell is filled out, and not the cells for PROMs or PREMs.

Selective contracting: It would have been beneficial to start off with the terminological definitions/differences so that the reader understands the qualitative aspects of it and can follow the section more easily.

Row 273-274: What is the source for it?

Row 279: What are the referred articles for PREMs here?

Prediction model: What kind of patient reported data is used here from the structure-process-outcome approach?

Quality of care: An analysis of QoC at large is lacking.

Value chain framework:

The idea of using the value chain framework for the interpretation of the results sounds good. However, it is only mentioned three times in the article, mainly by saying that the indicators affect all of the areas, and without an in-depth interpretation of effects. It is entirely missing from the discussion. The stated aim in the methods of using the framework to map patient-reported data in the light of health insurer activity is therefore not fulfilled. It should also be reflected in the data charting, as this was described as one of the main points. If the framework is to be incorporated into the article, a more in-depth analysis should be performed. Otherwise, it should be removed.

Limitations:

Some basic limitations of the work are missing, e.g. language restrictions leading to a selection bias of the included material; general bias within unsystematic approaches, here the choice of relevant organizations; and influencing factors of different health systems, as these occur not only between high- and middle/low-income countries.

Row 425-427: The argumentation here is not convincing. A scoping review only makes sense if an area is not exhaustively researched. Limiting the search strategy to previous works then seems a bit odd, as a scoping review aims to get more and broader insights into the topic. A more grounded argumentation would have been beneficial.

Wording:

Row 275-277: Don’t some of the terminologies stated within the text reflect applications, like hospital ranking, rather than terminologies?

Cost-effective: The article uses the term cost-effective inappropriately. In row 64 the authors state that patient-provider collaborations are “cost-effective from a clinical perspective” and refer to a source from 2006. First of all, it is questionable whether a statement about cost-effectiveness from 2006 is still applicable. Secondly, to what does a clinical perspective refer in this case with regard to cost-effectiveness? Thirdly, is it reliable to generalize the statement that patient-provider collaborations are cost-effective from an economics point of view (as the term is based on health economics)? Often, QoC or VBHC are not “good” in terms of the cost-effectiveness ratio, but there are other ethical considerations that influence the decision about the integration of interventions (or, here, patient-provider collaborations). Row 287-289 is a little misleading here, as it implies that using patient-reported data instead of “normally” gathered data for risk assessment could be cost-effective. In the article, the rationale behind this does not become clear. I would reframe it here.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Evalill Nilsson

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Dec 28;15(12):e0244546. doi: 10.1371/journal.pone.0244546.r002

Author response to Decision Letter 0


22 Oct 2020

Editor (E)

The main issue is that the frameworks used should be described better and, especially, applied more thoroughly throughout the manuscript. Also address inconsistencies in the scoping review methodologies used. Furthermore, focus on a critical discussion of the VBHC concept, address differences among health insurance systems in high-income countries, provide definitions or at least descriptions according to the reviewers’ comments (e.g. on clinical quality, selective contracting), and consider revising the title (I am also not a native English speaker, but it feels a bit odd; should “what” perhaps be “which”?). Please do not describe or discuss new literature/evidence from other studies in the results section, but rather in the methods or discussion (or introduction if suitable). Also consider including only a selection of the main table in the main manuscript and placing the remainder/full table in an appendix. Finally, I am not entirely sure about the aspects included in the results section for “how”, as e.g. selective contracting is more of a purpose than an exact description of how patient-reported data are used. Please ensure that the order of the different topics of what, how and why is always the same throughout the manuscript (e.g. in the title, how they are mentioned in the introduction, and the subsections in results and discussion).

We thank the Editor for highlighting the key aspects that needed revision to strengthen the quality and reading experience of our manuscript. One of the main revisions relates to how we report the summary of the documents retrieved in our searches, where we synthesized the information in a frequency table. While working on the results, we identified one document (a pamphlet retrieved via unstructured search) that did not meet the inclusion criteria, and thus should not have been included. We updated the results accordingly. In the following pages, we address the concerns highlighted by the Editor, which were also those of the Reviewers.

Reviewer #1 (R1)

The authors conducted a scoping review in order to map the evidence on how insurers use patient-reported data. In doing so, the investigators highlighted patient-centeredness with respect to quality of care and value-based healthcare. The investigators specifically sought to understand why and how these data are collected and utilized. The review was conducted using the PRISMA extension for scoping reviews, and the analysis was framed using a value chain concept. The scoping review was conducted in a rigorous and transparent manner. The manuscript had a logical flow and the investigators explained, and provided citations for, the basis/framework for each step of the process. The manuscript was well-written, too. As a result, I do not perceive any areas of weakness with regard to the design or implementation of the manuscript. Although this may be outside of the scope of the manuscript, I was most interested in the potential “next steps” of this work based on the manuscript’s limitations. One area that might merit further thought is how stakeholders could implement a more consistent terminology of constructs. Furthermore, did the investigators think about broad differences in the way that citizens residing in different countries might have conceptualized the role of patient-reported measures and decision-making based on their nation’s health system and prevailing ideology regarding health? Finally, what role within the health system and in society should a health insurer play, and is that role different in different nations? While these questions have no easy answers, the investigative team is well-equipped to provide their opinions.

We thank the Reviewer for these very thoughtful insights regarding future research. We do agree that there are many unanswered questions in this field, some of which we mention in the Discussion section. One of our goals in conducting this scoping review was to gain a better understanding of the interactions between health insurers and insured people, and of the role of patient-reported data in those interactions. Taking into consideration the findings of this study, we are conducting a new study that focuses on the societal role of a health insurer in a health system and how that shapes the extent to which insured people are involved in decision-making.

Reviewer #2 (R2)

(R2C1) The manuscript entitled “Why, what and how do health insurers use patient-reported data? Results of a scoping review” is well written and highlights an important topic of current interest, so far not studied to any extent. Given the breadth and variety of the topic, a scoping review seems like the best choice of method. The review process is well described and seems to be conducted in a sound way.

The title asks “why, what and how”. I understand why and how, and I guess “what do health insurers use patient-reported data” means which patient-reported data are being used, though I am not proficient enough in English to know whether that is a grammatically correct expression. If “what” refers to something else, maybe it could be further explained?

We changed the title to “Understanding the use of patient-reported data by health care insurers: A scoping review” to accommodate the Reviewer’s comment.

(R2C2) Though QoC is by now a concept widely used globally, and the use of indicators of structure, process and outcome commonplace in evaluations, VBHC is more of a management trend (in some countries more than others), following earlier trends like TQM and lean production, and as such is also subject to criticism that is not acknowledged in the manuscript.

We now acknowledge in the text in a clearer manner that value-based healthcare may have different perspectives based on the taxonomy used, and thus clarify our use of such terminology. We believe that further discussion about the criticisms assigned to each taxonomy falls outside the scope of this work; rather, we provide references that may be of interest to those who wish to seek further information on the topic. The text now reads as follows:

“Nowadays, it is commonplace to associate VBHC with care quality. Although a key component of quality, it is not necessarily the mainstream culture for measuring thereof. The VBHC agenda, similarly to the QoC, puts forward patients’ values regarding health and care outcomes, stressing their involvement in decision-making processes [4]. The construct of patient-centeredness emerges as a sub-dimension of those two concepts (QoC and VBHC) [5]. However, the inclusion of a people-centered perspective in VBHC is not without tensions as VBHC is a concept derived from management theories, with a clear conceptual focus on costs [6]. Hence, there can be a tension between the business model of a health care insurer oriented to optimizing the value for individual patients/insured versus optimizing the health of a population such as the group of individuals that pay their premium for the insurance. To strengthen people-centeredness and strive towards QoC and VBHC, health system stakeholders (e.g. health care insurers and care providers) should commit to the value agenda supported by intelligence on the healthcare system users’ needs, expectations, and preferences [7-9]. Hence, patient-reported data have become crucial to gain insight on one’s voice and inform the decisions of those key stakeholders.”

(R2C3) The decision to exclude studies where the setting was not a high-income country, based on the assumption that health insurance systems in developing countries might differ greatly from those of developed countries, seems fair. However, health insurance systems and their involvement in healthcare systems also differ greatly among high-income countries, which is not acknowledged by the authors. Though I guess it is out of the scope of the present study, it would be interesting to study differences in the use of patient-reported outcomes across health insurance systems with different levels of involvement in healthcare systems.

The Reviewer addresses a valid viewpoint by highlighting the influence of context to understand differences on the use of patient-reported data among health insurers in high-income countries; we made this limitation clearer. The text now reads as follows:

“Also, limiting our search to high-income countries may have also introduced general bias. On the one hand, we did not consider studies from low-middle income countries because the use of patient-reported data in such contexts is yet limited [85]; on the other hand, we acknowledge that even in high-income countries, the extent to which patient-reported data are used may vary greatly, considering the role and involvement of a health insurer in the health system. Finally, given the study design, generalizability of results is limited; for example, contextual factors (e.g. the organization and digitalization of the healthcare system) vary greatly, and the extent to which these affect the uses and applications of patient-reported data are unknown.”

(R2C4) In the section “Data charting and analysis” it is described that indicators are organized in a traditional structure-process-outcome approach. For those not familiar with QoC and Donabedian, this might need an explanation, or at least a reference, maybe [1]?

We followed the Reviewer’s suggestion and added further explanation and two references. The text now reads as follows:

“We organized the list of indicators following the Donabedian’s healthcare quality model (structure-process-outcomes) [1, 30] given how widespread and familiar this model is across health systems and its stakeholders. This option could also facilitate a first approach to organize scattered information about the purposes and uses of patient-reported data by health insurers. Data mapped under structure highlight measures regarding the context and setting wherein care is delivered, and data under process highlight the interactions between a person and providers throughout the care trajectory; regarding data under outcomes, we organized information as clinical measures (referring to the diagnosis, treatment, and monitoring of a person), and patient-reported measures (PROMs, PREMs, and satisfaction measures). The focus of our work is on patient-reported data, but by using structure, process, and clinical measures as ancillary indicators in our work, we expected to have a better understanding on how patient-reported measures are used (as standalone or combined with other measures).”

(R2C5) In Table 1, I do not seem to find an explanation for the star* in PREM*? Furthermore, the definition of clinical quality is not stated. I am well aware of the problems with categorizing in this field, and reading Table 1 makes me wonder whether all patient-reported data in the column PROM are actually reported as outcomes. I suspect some to be reported more as “patient characteristics”. “Health status” is a very generic term. I guess that the terms included in the table are the actual terms used in the papers (this could be described), but if they are the authors’ interpretations, they should be defined/explained in more detail. Moreover, I wonder if only PROMs, PREMs and patient satisfaction are patient-reported? I guess that though the title implies that only patient-reported data are included in the study, the data regarding structure and process are mainly not patient-reported, but rather supportive data for your study. However, this is not very clearly stated.

To improve the reading experience, we removed the previous Table 1 (available now as Supplementary file). In this new Supplementary file, we clarify in a footnote that the terms used in the Table are the same as in the source document. We also clarify in-text the use of structure, process, and clinical indicators in support of achieving a better understanding about the use of patient-reported data by health insurers. That passage now reads as follows:

“Data mapped under structure highlight measures regarding the context and setting wherein care is delivered, and data under process highlight the interactions between a person and providers throughout the care trajectory; regarding data under outcomes, we organized information as clinical measures (referring to the diagnosis, treatment, and monitoring of a person), and patient-reported measures (PROMs, PREMs, and satisfaction measures). The focus of our work is on patient-reported data, but by using structure, process, and clinical measures as ancillary indicators in our work, we expected to have a better understanding on how patient-reported measures are used (as standalone or combined with other measures).”

(R2C6) In row 1, it is stated “RAND-36 survey, which included information on…”. I suspect that is not true, RAND 36 is the original, free version of SF-36, and does not include self-reported morbidity etc.

Thank you for flagging this issue; we revised the text.

(R2C7) The concept of selective contracting needs a short description, I think many readers are not familiar with the concept.

In hopes of clarifying the concept of selective contracting, we revised the text as follows:

“Selective contracting was discussed in five documents (12%) [60, 67-69, 74]. In general, selective contracting refers to the contractual agreement between a health insurer and a provider, where the former selects those providers that meet certain QoC expectations. The inclusion of QoC indicators is highly dependent on the availability of data; hence, the most common data used in these contracts are based on volume and costs [66, 69], and only recently some incorporate PROMs (and in a lesser extent, PREMs) [68, 74]. The use of structure, process, and outcome indicators for the purpose of selective contracting was discussed in Moes et al. (2019).”

(R2C8) I find your Discussion and Conclusion very apprehensive. The recommendation “In addition, health insurers should have a clear scope of the use of patient-reported measures” made me reflect on the fact that for so many years healthcare professionals have failed to accomplish just that. Could health insurers achieve what health professionals could not, or will it be even more difficult for health insurers…

This is a fair reflection that we agree with. We included it in the text as follows:

“In addition, health insurers should have a clear scope about the use and actionability of patient-reported measures, and further involve insurees to the extent where it is feasible and deemed necessary. For many years now, there is a generalized consensus among healthcare providers and professionals for a greater involvement and engagement of people in decision-making towards a more people-centered health system. Albeit significant advances, we still fall short on that cornerstone. The extent to which lessons learned by health systems could be used and known obstacles could be overcome by health insurers remain overlooked and deserve further research.”

Reviewer #3 (R3)

(R3C1) In general, the topic of the article is well picked and scientifically relevant. The use of a scoping review is suitable for the aim of the article. The research question was addressed in the methods and results sections. However, there are some inconsistencies in the methods and results, so questions arise regarding external validity and the pictured landscape of health insurers’ use of patient-reported data. In my opinion, the concerns could be solved by clarification/specification of the explanations given. The main criticism concerns the 13-page table within the main text. This should be resolved and adjusted to support the understanding and focus of the article. Also, the frameworks used (structure-process-outcome as well as value chain) should not only be described better, but also be applied more within the text.

Following the comments of the Reviewer and those of the other Reviewers, we made changes throughout the text to address these issues, such as i) adding a revised version of Table 1 to synthesize our findings in a much simpler manner in hopes of improving the reading experience, rather than having an exhaustive list of retrieved references (now available as supplementary material); and ii) excluding the use of the value chain framework to present and discuss the findings. We have also added an explanation of the use of the structure-process-outcome framework.

(R3C2) Scoping review methodology: The article stated that it followed the Arksey and O’Malley (2005) stepwise framework. In the next section, the authors referred to the framework of Levac et al. (2010) and later on to Silverman (2009), which seems a little inconsistent. Levac et al. (2010), for instance, recommended and suggested an update of the Arksey and O’Malley framework. The question arises why different methodologies were used, or why Levac alone was not the entire foundation of the methodology. It would probably be enough to frame this differently, saying that Arksey and O’Malley was the basis but that recommendations of other authors were taken into account as well.

We thank the Reviewer for the suggestion to clarify this aspect of the methodology. We made the following in-text change:

“To enhance the quality of the methodology used, we based the review on the stepwise methodological framework suggested by Arksey and O'Malley (2005), while also taking into account the recommendations of other authors [25, 26].”

(R3C3) Structure-process-outcome approach: no explanation of the approach was included within the methods section (even though it was described as ‘traditional’ and used as the indicators for the research question). There was also no explanation of the rationale behind it, its application in data extraction (e.g. why outcomes are differentiated into four subtypes and others are not, and whether these are suitable), or its suitability for the research question itself. However, it was referred to throughout the results section (good!); therefore, an introduction/explanation beforehand is needed.

We included an earlier mention in-text to clarify this choice. It reads as follows:

“We organized the list of indicators following the Donabedian’s healthcare quality model (structure-process-outcomes) [1, 31] given how widespread and familiar this model is across health systems and its stakeholders. This option could also facilitate a first approach to organize scattered information about the purposes and uses of patient-reported data by health insurers. Data mapped under structure highlight measures regarding the context and setting wherein care is delivered, and data under process highlight the interactions between a person and providers throughout the care trajectory; regarding data under outcomes, we organized information as clinical measures (referring to the diagnosis, treatment, and monitoring of a person), and patient-reported measures (PROMs, PREMs, and satisfaction measures). The focus of our work is on patient-reported data, but by using structure, process, and clinical measures as ancillary indicators in our work, we expected to have a better understanding on how patient-reported measures are used (as standalone or combined with other measures).”

(R3C4) The search strategy for NIHR Journals Library, PDQ-Evidence, EBSCO/Health Business Elite and Cochrane Library is not attached within the supplementary file. This should be added per the PLOS ONE requirements. Moreover, NICE is stated within the supplementary file, but not listed in the text for the systematic search.

Thank you for flagging these inconsistencies. We updated the search strategy in the Supplementary file.

(R3C5) Row 132-135: The rationale for expert interviews with a Hungarian health insurer is unclear. Is this adaptable to other settings? Why, then, was the Hungarian language not included? Is asking only one insurance company not quite biased?

We decided to delete this information as it had no impact on our methodology nor on the results and their interpretability. After the initial literature scan on terminology used by health insurers and early discussions among the research team to define the search strategy, we reached out to senior employees of an international insurance company in Budapest (Hungary) with whom we had discussed an early plan of this study. We asked these experts to weigh in with their working knowledge of the business on the most frequent or common terms used by health insurers to refer to patient-reported data, taking into consideration not only the company they worked for, but also its competitors.

(R3C6) Time limit for JSTOR: why was a time limit for JSTOR applied, but not for the other databases? Argumentation is flawed here.

JSTOR provides access to millions of academic journal articles spanning 75 disciplines. Initial searches with no time limit retrieved many irrelevant, off-topic hits that could not be organized. When we considered more recent years by including a time limit (year 2000 onwards), the information retrieved was more on-topic and somewhat easier to work with. We clarified this in-text as follows:

“The search was not time bounded except for the JSTOR database. Not limiting the timeframe was producing a large number of hits off-topic to this review, which revealed to be unmanageable; hence, we limited the search from the year 2000 onward, where documents of potential relevance to the screening process started to emerge.”

(R3C7) Focus on PROMs and PREMs: in the introduction it is stated that a specific focus is on PROMs and PREMs. This was reflected by using these terms specifically in the search strategy for PubMed and Embase. However, the focus was mainly seen in the introduction; later on, the results showed a wider perspective. I would recommend reframing it. Otherwise, questions arise about possible bias, the rationale, and the differentiation from other methods.

Thank you for flagging this inconsistency. We revised the aims of the study to clarify that we assumed a wider perspective when referring to patient-reported data.

(R3C8) Search terms: Search terms were mainly limited to Title/Abstract. The authors argued in the limitation section that MeSH terms/indexed words were not available for their specific topic. The question arises why the limitation to Title/Abstract was then applied rather than mitigating this obstacle (especially because it is a scoping review). Furthermore, the limitation of MeSH terms is generally obvious due to the time gap between new topics and the indexing process, so research in general should not rely entirely on them.

The key reason to limit the search to terms in the title/abstract is that broadening the search strategy resulted in a plethora of off-topic references. We sought to clarify this in-text, which now reads as follows: “We limited the search to keywords found in title and abstract to minimize the number of off-topic hits, which otherwise would have been unmanageable. The search strategy was adapted to each database and can be found as supplemental information (S2 File).”

(R3C9) The presented table covers 13 pages in total in the main text. In my opinion, this is far too long for a scientific journal article and does not benefit the understanding of the article. I would highly recommend moving the table to the supplementary files and adding a table with aggregated results within the section.

We proceeded as suggested by the Reviewer.

(R3C10) Row 200: why are mail correspondences included?

These correspondences refer to email communication supporting references retrieved via the non-systematic search (e.g. searches on the webpages of health insurers). Often, the information available on a health insurer’s website was insufficient, and we felt the need to contact the insurer in hopes of getting access to better information or other documents that could complement the initial webpage search. We tried to clarify this in-text; it now reads as follows:

“For the non-systematic search, we included relevant grey literature such as webpages of insurers and third-party reports (S3 File). The search was performed between May 8th and June 18th, 2019. The documents retrieved were identified through online searches, reference mining, and recommendations from experts. The latter refers to contacts we have established via email (e.g. health insurers, insurance associations/federations, consultancy firms, or patient advocates) to direct us towards potential documents to complement the internet search. In total, 23 emails were sent to institutions that we believed could bring clarity or provide further information on top of what we had read on their webpages; 9 answers were received until the 18th of August 2019. A reminder was sent to all unanswered emails; we have received no reply to 14 emails and closed further contacts by the end of August 2019.”

(R3C11) Row 208: The period of published years is stated correctly. However, it is a bit misleading, as the search strategy stopped in mid 2019, so this year could not be entirely included.

We clarified the timeframe of the search strategy. The text now reads as follows: “The documents included in the study covered the period from 1996 to mid-2019 (Table 1)”.

(R3C12) Row 226: “Often” does not match with only one source mentioned. Moreover, the result does not match up with the results table, as only the patient satisfaction cell is filled out, and not the cells for PROMs or PREMs. Selective contracting: It would have been beneficial to start off with the terminological definitions/differences so that the reader understands the qualitative aspects of it and can follow the section more easily.

We cross-checked the references related to the issue flagged by the Reviewer. Also, we added the following in the text to clarify the term ‘selective contracting’:

“Selective contracting was discussed in five documents (12%) [60, 67-69, 74]. In general, selective contracting refers to the contractual agreement between a health insurer and a provider, where the former selects those providers that meet certain QoC expectations. The inclusion of QoC indicators is highly dependent on the availability of data; hence, the most common data used in these contracts are based on volume and costs [66, 69], and only recently some incorporate PROMs (and in a lesser extent, PREMs) [68, 74]. The use of structure, process, and outcome indicators for the purpose of selective contracting was discussed in Moes et al. (2019).”

(R3C13) Row 273-274: What is the source for it?

This was an attempt of linking our results to the value chain framework. Given that we decided not to use the value chain, we removed that sentence.

(R3C14) Row 279: What are the referred articles for PREMs here? Prediction model: What kind of patient reported data is used here from the structure-process-outcome approach?

Thank you for flagging these missing references. We have now included them.

(R3C15) Value chain framework: The idea of using the value chain framework for the interpretation of the results sounds good. However, it is only mentioned three times in the article, mainly by saying that the indicators affect all of the areas, and without an in-depth interpretation of effects. It is entirely missing from the discussion. The stated aim in the methods of using the framework to map patient-reported data in the light of health insurer activity is therefore not fulfilled. It should also be reflected in the data charting, as this was described as one of the main points. If the framework is to be incorporated into the article, a more in-depth analysis should be performed. Otherwise, it should be removed.

Following the suggestion of the Reviewer, we decided not to anchor our results and discussion in the value chain framework; hence, we removed it.

(R3C16) Limitations: Some basic limitations of the work are missing, e.g. language restrictions leading to a selection bias of the included material; general bias within unsystematic approaches, here the choice of relevant organizations; and influencing factors of different health systems, as these occur not only between high- and middle/low-income countries.

We revised the text as follows:

“The findings of our review should be interpreted in light of some limitations. The heterogeneity of terminology, the use of an unsystematic search component, and language restriction may have introduced bias. To mitigate possible effects, the search strategy was informed by (but not limited to) the terminology used in other systematic reviews related to patient-reported data; we also assessed the extent to which documents retrieved via unsystematic search were aligned with those retrieved via systematic search. Also, limiting our search to high-income countries may have also introduced general bias. On the one hand, we did not consider studies from low-middle income countries because the use of patient-reported data in such contexts is yet limited [85]; on the other hand, we acknowledge that even in high-income countries, the extent to which patient-reported data are used may vary greatly, considering the role and involvement of a health insurer in the health system. Finally, given the study design, generalizability of results is limited; for example, contextual factors (e.g. the organization and digitalization of the healthcare system) vary greatly, and the extent to which these affect the uses and applications of patient-reported data are unknown.”

(R3C17) Row 425-427: The argumentation here is not convincing. A scoping review only makes sense if an area is not exhaustively researched. Limiting the search strategy to previous works then seems a bit odd, as a scoping review aims to get more and broader insights into the topic. A more grounded argumentation would have been beneficial.

We did not limit the search to previous works; rather, we were informed by other works on keywords that could be of use in our searches, taking into consideration our aims. We clarified this in the text, which now reads as follows:

“The findings of our review should be interpreted in light of some limitations. The heterogeneity of terminology, the use of an unsystematic search component, and language restriction may have introduced bias. To mitigate possible effects, the search strategy was informed by (but not limited to) the terminology used in other systematic reviews related to patient-reported data; we also assessed the extent to which documents retrieved via unsystematic search were aligned with those retrieved via systematic search.”

(R3C18) Wording: Row 275-277: Don’t some of the terminologies stated within the text reflect applications, like hospital ranking, rather than terminologies?

We do agree that some expressions reflect an application of quality reporting more than a terminology. However, as with other terminologies in this field, the cited expressions are often used interchangeably or as a proxy for quality reporting.

(R3C19) Cost-effective: The article uses the term cost-effective inappropriately. In row 64 the authors state that patient-provider collaborations are “cost-effective from a clinical perspective” and refer to a source from 2006. First of all, it is questionable whether a statement about cost-effectiveness from 2006 is still applicable. Secondly, to what does a clinical perspective refer in this case with regard to cost-effectiveness? Thirdly, is it reliable to generalize the statement that patient-provider collaborations are cost-effective from an economics point of view (as the term is based on health economics)? Often, QoC or VBHC are not “good” in terms of the cost-effectiveness ratio, but there are other ethical considerations that influence the decision about the integration of interventions (or, here, patient-provider collaborations). Row 287-289 is a little misleading here, as it implies that using patient-reported data instead of “normally” gathered data for risk assessment could be cost-effective. In the article, the rationale behind this does not become clear. I would reframe it here.

We have removed this reference to the term cost-effective.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Mathieu F Janssen

14 Dec 2020

Understanding the use of patient-reported data by health care insurers: A scoping review

PONE-D-20-09776R1

Dear Dr. Brito Fernandes,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Mathieu F. Janssen, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Feel free to take on board some final minor suggestions provided by reviewer 3.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #3: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have responded to the comments of the reviewers in a comprehensive and thoughtful manner. I do not have any additional comments or feedback.

Reviewer #3: The authors have sufficiently revised the manuscript and incorporated the suggestions of the reviewers. The article now holds up in terms of internal validity. The adjusted research questions and methods section are consistent and precisely tailored to the results presented. The discussion analyses the results thoughtfully and transfers the findings onto a broader scale. The limitations of the study are transparently presented.

There are three fine points I would like to further comment on:

Value-based healthcare: The clarification of the concept within the introduction section is reasonable. However, as introduced by Porter (the original source is not mentioned within the paper), costs are a driver of value within his formula of outcomes/costs = value. This indicates that, under certain circumstances, a more expensive intervention with a greater outcome could produce greater value than a less expensive intervention with a large outcome loss. Therefore, outcomes and costs are interdependent, in theory resulting in an efficient use of resources, so the mentioned conceptual focus on costs alone as a description of the concept is, from my point of view, too simplified. Furthermore, as described by the authors, VBHC is an emerging trend within healthcare systems, so a more exhaustive discussion of (missing) VBHC in the context of patient-reported data in the literature would have been interesting.

Search strategy: The argumentation for the time limit for JSTOR is pragmatic and reasonable, but a source about the emerging number of papers from 2000 onwards would have made the sentence more convincing.

Lastly, as a formal point, the text formatting is not consistent throughout the manuscript, but I guess this is solved by the publisher itself.

Overall, as the points raised are more a matter of scientific discussion and editorial remarks than serious weaknesses of the article, I would recommend the acceptance of the article (optionally with some small, final adjustments).

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #3: No

Acceptance letter

Mathieu F Janssen

16 Dec 2020

PONE-D-20-09776R1

Understanding the use of patient-reported data by health care insurers: A scoping review

Dear Dr. Brito Fernandes:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Mathieu F. Janssen

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews (PRISMA-ScR) checklist.

    (DOCX)

    S2 File. Full search strategy for each database.

    (PDF)

    S3 File. Overview of the organizations whose websites were consulted during the non-systematic search.

    (PDF)

    S4 File. Charting form of the documents retrieved in the systematic and non-systematic search following a structure, process, outcome organization.

    (XLSX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.

