PLOS One. 2025 Jul 10;20(7):e0325833. doi: 10.1371/journal.pone.0325833

What instruments are available to aid or evaluate personalised care delivery, from the perspectives of healthcare practitioners and service users? A narrative scoping review

Louise Johnson 1,2,*, Beth Clark 2, Lyndsay Court 1,2, Hayden Kirk 3, Matthew Wood 1,4, Sharon Jackson 5, Mari Carmen Portillo 2,6
Editor: Sefki Kolozali
PMCID: PMC12244752  PMID: 40638613

Abstract

Background

Although the widespread implementation of personalised care is commonly cited as one of the solutions to managing the burden of increasing multimorbidity, there is no established method for assessing and evaluating personalised care delivery. This review sought to describe the range of existing tools, instruments or methods for assessing, evaluating or measuring personalised care delivery, from the perspectives of healthcare practitioners and/or service users.

Methods

A scoping review of literature published between 1990 and December 2024.

Results

From 3851 potential citations, 172 were included. Of these, 103 reported the development of a new instrument, and 69 reported the adaptation of an existing instrument. Most instruments (81%) were designed for use in a specific clinical population. A focus on supported self-management was particularly common (80%). Few instruments were identified that explored the views of healthcare staff (n = 8) or carers (n = 1). Content analysis generated six domains: understanding the person; understanding capability; understanding behaviour; personalised care interventions; experience of care; and wider determinants. These domains have been used to propose a conceptual framework.

Conclusion

This review identified a large number of instruments designed to support personalised care delivery or evaluation. Many were designed to understand a single construct of care (e.g., supported self-management), at an individual level (e.g., patients) and in a specific population (e.g., diabetes). For clinicians wishing to utilise a standardised instrument for a specific purpose, there are many to choose from. Yet no tools encompass the full spectrum of constructs encapsulated within personalised care. Future work should focus on how instruments are used to improve personalised care delivery, particularly through a less siloed, multimorbidity lens.

Introduction

Developing healthcare systems with the readiness and ability to deliver holistic and personalised care is a key priority for the UK National Health Service (NHS) [1]. While personalised care interventions hold promise for addressing the challenges of preventing and managing multimorbidity, their success depends on achieving widespread implementation, which remains limited [2]. One factor inhibiting implementation is the lack of an established method for assessing and evaluating personalised care delivery. Robust evaluations not only determine whether an intervention works, but also why and how, enabling us to learn from effective interventions and develop new ones [3]. In personalised care, an effective evaluation should consider the views of multiple stakeholders and would ideally be applicable across diverse settings and populations. This is pertinent given the increasing prevalence of multimorbidity and the subsequent burden on healthcare utilisation, healthcare expense, overall functioning, and quality of life [4].

Before methods for full scale evaluation of personalised care are developed, it is important to understand the range and characteristics of the existing instruments and methods available to evaluate and/or measure personalised care, or aspects of it. While there are various instruments available in the field of personalised care, there is also a lack of guidance on how to effectively select and utilise these tools. Enhancing this understanding is essential to improving clinical delivery and systematic implementation. This scoping review will compile and categorise the existing instruments, laying the groundwork for a deeper understanding of their role in personalised care delivery, and identifying areas that require further research.

As there is no universally accepted definition of personalised care, our review aligns with the description of whole person care: an approach that considers multiple dimensions of the patient and their context, including biological, psychological, social and possibly spiritual and ecological factors, and addresses these in an integrated fashion that keeps sight of the whole [5]. Implementation of a more personalised approach can be enabled through six evidence-based and inter-linked interventions, as outlined in the NHS Personalised Care Operating Model [6]. These are shared decision making, personalised care and support planning, enabling choice, social prescribing and community-based support, supported self-management, and personal/integrated health budgets [6].

Scoping reviews are a form of evidence synthesis used to describe the available literature on a topic, with the specific objective of describing the volume and nature of the existing evidence [7]. Through this review, we aim to identify and describe the available instruments through which personalised care can be evaluated, or through which elements of it may be measured. Given the complexity of personalised care, and the potential breadth and variety of research in this field, a scoping review provides the appropriate methodology for doing so.

Review question

What tools, instruments or methods are available to aid or evaluate personalised care delivery, from the perspectives of healthcare practitioners and/or service users?

Objectives were to:

  • a) identify the different types of measurement and/or evaluation instruments used to understand personalised care delivery within clinical services;

  • b) identify and describe the key characteristics of these methods or instruments;

  • c) report how the methods and instruments are used, and who they are used with;

  • d) summarise existing knowledge relating to the evaluation of personalised care delivery, from the perspective of a range of stakeholders.

Methods

We conducted a scoping review with a systematic methodology, broadly following the stages outlined by Arksey and O’Malley: identifying the research question; identifying relevant studies; selecting studies; charting data; and collating, summarising and reporting data [8]. Our protocol was pre-registered on the Open Science Framework (osf.io/2wfg7), and we used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist to guide reporting.

To focus our research question, we used the Participant, Concept, Context (PCC) criteria [9,10].

P (Participants): adults with long-term condition(s) who are accessing healthcare services; healthcare staff who are involved in the delivery of services to people with long-term conditions; healthcare teams, services, or systems of care that are involved in supporting people with long-term conditions (“service units”)

C (Concept): studies that report the development of a method or instrument aimed at understanding, evaluating or measuring the delivery of one or more elements of personalised care.

C (Context): any healthcare setting, including both physical and mental health

To keep the review broad, we included studies that focus on the evaluation of one or more elements of personalised care [11]. Studies were included from all healthcare settings and all long-term condition patient groups.

Identifying relevant studies

Studies were identified by searching electronic databases (CINAHL, MEDLINE, AMED and EMBASE), using the terms outlined in S1 File. Due to the vast number of records identified when we used all three PCC categories, we applied a search using the concept search terms only; these related to “personalised care” and “measurement”. We applied the participant and context criteria when completing our title and abstract review. The search covered the period from 1990 to 11 December 2024 and was restricted to English-language publications.
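To illustrate the structure of the concept-only search, the minimal sketch below assembles a Boolean string from two term groups; the terms shown are hypothetical placeholders, not the actual strategy, which is provided in S1 File.

# Illustrative sketch only: the real search terms are listed in S1 File.
# The two term groups below are hypothetical placeholders, shown to make the
# concept-only structure explicit (personalised care AND measurement).
personalised_care_terms = [
    "personalised care", "person-centred care", "shared decision making",
    "supported self-management", "social prescribing",
]
measurement_terms = [
    "instrument", "questionnaire", "scale", "measure", "evaluation tool",
]

def or_block(terms):
    """Join a term group into a single OR'd, quoted block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Concept search: both blocks combined with AND; participant and context
# criteria were applied later, at title and abstract screening.
query = f"{or_block(personalised_care_terms)} AND {or_block(measurement_terms)}"
print(query)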

Selecting studies

All identified records were imported into EndNote 20 for removal of duplicates. The remaining records were transferred to Rayyan [12] to facilitate collaborative title and abstract screening. Titles were screened by the primary researcher (LJ), removing records that clearly did not meet the inclusion criteria. The remaining records were independently reviewed by two members of the research team, through abstract screening and full paper retrieval if necessary. Five reviewers were involved in this process (LC, BC, MW, SJ, LJ), with each paper being considered by at least two. Data labels were agreed a priori, with new labels generated as required. Discrepancies were resolved by consensus of the whole group. Reasons for exclusion are reported in the PRISMA flow chart (Fig 1).

Fig 1. PRISMA Flow Chart.

Studies were eligible if they met the following criteria: a) inclusion of people over the age of 18 with one or more long-term conditions (LTCs), and/or healthcare staff or services that support people over 18 with one or more LTCs; b) reports on the development of a method or instrument for understanding, evaluating or measuring personalised care delivery; c) the evaluation/measurement instrument is (or could be) used within clinical practice (e.g., a survey); d) written in English.

Papers were excluded if they: developed and tested an intervention for personalised care (including Decision Support Tools); focussed solely on paediatric to adult transition services; used an instrument within a study, but did not develop or evaluate the instrument itself; used qualitative methods to collect data relating to personalised care, but not with the intention of those methods being used outside of the research. Instruments that sought to specifically and solely understand personal characteristics, such as self-efficacy, coping behaviours or activation, were excluded from the review. Whilst we recognise these are important characteristics that influence personalised care, they also play an independent role in overall health outcomes, beyond personalised approaches. To maintain a direct focus on personalised care delivery, we only included instruments that explored two or more characteristics, with the explicit intention of informing personalised care delivery. Given the significant number of related constructs and the high volume of instruments in these related fields, this was also important to maintain a realistic scope for the review. Further studies were identified by reviewing the reference lists of selected papers.

Charting data

Data extraction followed the principles outlined by Pollock et al [13]. A data extraction template was developed, trialled with the first five papers, and refined through an iterative process involving multiple members of the research team (LJ, MCP, BC). Once finalised, data relating to study characteristics and findings was extracted and charted by a single researcher (LJ). Throughout the process, members of the research team met regularly to discuss and reflect on the findings, and to collectively address areas of ambiguity or concern. New data items were incorporated throughout the review process. Where necessary, authors of studies were contacted to clarify aspects of their research or provide missing data.

In addition to extracting data from each paper, we reviewed the instruments themselves. Using conventional qualitative content analysis [14,15], we describe the nature of the instruments, and other factors relating to concept [13]. Each item within each instrument was coded, using categories that were derived directly from the instruments themselves. This initial process of coding was led by the primary researcher (LJ). Codes, illustrated by examples, were then discussed with the wider research team to agree categorisation and mapping, to aid simplification [10].
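As an illustration of this coding step, the minimal sketch below tallies item-level codes across instruments, of the kind that informed the team discussions; the items and codes shown are invented examples, not data from the review.

# A minimal sketch of the item-level coding step, assuming a simple list of
# (instrument, item, agreed code) triples. All entries are hypothetical;
# categories in the review were derived inductively from the instruments.
from collections import Counter

coded_items = [
    ("Instrument A", "I check my blood sugar levels with care", "behavioural action"),
    ("Instrument A", "I have a firm belief which guides me", "values and beliefs"),
    ("Instrument B", "My doctor and I have a good relationship", "relationship with healthcare team"),
]

# Tally how often each code appears across instruments, to support the team
# discussion that grouped codes into broader domains.
code_counts = Counter(code for _, _, code in coded_items)
for code, n in code_counts.most_common():
    print(f"{code}: {n} item(s)")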

It was not our intention to scope the effectiveness of these tools nor their psychometric properties – and therefore this information was not extracted. Furthermore, in line with accepted recommendations for scoping reviews, we did not apply any formal critical appraisal tool to the included studies [10,16].

Collating, summarising and reporting results

Key findings were tabulated and are presented descriptively, as per the initial study objectives. Data relating to the study characteristics and findings are reported as a narrative summary, providing a broad overview of the evidence in this field. Similarities and differences across papers are described, alongside an overview of the strengths and limitations of the evidence base. Mapping of codes from the content analysis is presented narratively, and visually as a conceptual framework.

Results

Our original search returned 3851 titles for review. After title screening, 668 abstracts were screened for inclusion, 265 full-text papers were reviewed, and 172 were included in the final review. Of these, 103 reported the development of a new instrument and underwent full review. The remaining 69 papers reported the adaptation of an existing personalised care instrument; these underwent partial review to describe the nature of the adaptations. In total, 101 unique personalised care instruments were identified (S2 File).

Overview of study characteristics

The number of publications in this field has risen steadily since 1990, demonstrating an ongoing interest and drive for instruments that can support personalised care delivery (Fig 2).

Fig 2. Publications over time.

Whilst most research took place in North America, Europe and parts of Asia, we identified studies from a wide range of geographical settings. The development of novel instruments took place almost exclusively in high-income countries, likely reflecting a greater focus on personalised care within these settings, particularly within health system policy. The adaptation of existing instruments involved a broader range of settings, with a large proportion adapted for use in Asia (42%), followed by Europe (35%) and North America (14%). Most of these papers report language (22%) or cultural (45%) adaptations, with associated psychometric testing. Other adaptations included testing within a new clinical population (16%), a new clinical setting (4%), or modification of the instrument itself (e.g., addition or removal of items) (13%).

Participants.

The majority (94%) of instruments were designed to understand the perspective of patients or service users, primarily using self-administered questionnaires. We identified only a small number of instruments developed to evaluate the perspectives of healthcare practitioners (n = 8) or caregivers (n = 1). Two instruments included mirrored questions, to directly compare patient and provider perspectives of shared decision-making consultations [17,18]. Practitioner-focussed instruments used observational methods to evaluate practitioner-patient interactions [19], or self-report to understand clinicians’ perceived behaviours [20] and their knowledge, confidence and attitudes [21].

Instruments have been developed and tested with a wide demographic of patients. Given that clinical condition and geographical region both impact the included population, we have not collated this information. However, we note that reporting of demographic characteristics varied. Whilst almost all studies reported core demographics (e.g., age, gender), reporting of wider factors of importance to personalised care was more limited: 68% reported educational level, 37% participant ethnicity, 37% household income/employment status and 18% social deprivation (or related data, such as housing and insurance status).

Context.

Instruments were developed and evaluated in a wide range of healthcare settings and specialities (Table 1); most commonly in primary care. Many tools (81%) were designed for use in a specific clinical population; and whilst there were examples from a wide range of clinical specialities, instruments designed for use in diabetes care were particularly common (23%).

Table 1. Number of instruments aimed at service users, per clinical speciality.
Condition Group Number of Instruments %
Endocrine (diabetes) 22 22.9
Mixed Long-Term Conditions 17 17.7
Cancer 7 7.3
Cardiac 8 8.3
Musculoskeletal Conditions 6 6.3
Respiratory 6 6.3
Neurology 5 5.2
Mental Health 5 5.2
Renal 5 5.2
Geriatrics 3 3.1
Rehabilitation 1 1.0
Audiology 1 1.0
Pain 1 1.0
Hospital Discharge 1 1.0
Surgery 1 1.0
Haematology 1 1.0
Infectious Diseases 1 1.0
Vascular 1 1.0
Major Trauma 1 1.0
Gastroenterology 1 1.0
Rheumatology 1 1.0
General Health 1 1.0
Total 96 100

Instruments vary in both length and breadth, ranging from those purposely designed with clinical utility in mind to those designed to give more comprehensive insights. The mean number of factors (domains) within an instrument was 4.3 (range 1–16; SD 2.7), and the mean number of items was 22.4 (range 2–91; SD 15.6).
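For transparency, a descriptive summary of this kind can be reproduced with a short script of the following form; the per-instrument counts shown are hypothetical placeholders, whereas the instruments actually charted are listed in S2 File.

# A minimal sketch of the descriptive summary above, using illustrative values.
from statistics import mean, stdev

items_per_instrument = [12, 8, 35, 22, 40, 17]    # hypothetical item counts
domains_per_instrument = [3, 2, 6, 4, 7, 3]       # hypothetical domain (factor) counts

def summarise(name, values):
    # Report mean, range and sample standard deviation for one characteristic.
    print(f"{name}: mean {mean(values):.1f} "
          f"(range {min(values)}-{max(values)}; SD {stdev(values):.1f})")

summarise("Items", items_per_instrument)
summarise("Domains (factors)", domains_per_instrument)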

Concept.

Of all identified instruments, most were unidimensional, focussing on one aspect of personalised care. The most common aspects were supported self-management (80%) and shared decision making (17%). We did not identify any instruments aimed specifically at evaluating the personalised care and support planning or social prescribing aspects of the personalised care model. Only a small number of instruments (n = 3) focussed on personalised care experiences more broadly. For example, the Kim Alliance Scale [22] is designed to evaluate the therapeutic alliance between patient and provider, and the Perceived Involvement in Care Scale [23] is designed to evaluate patient attitudes about illness and medical care. Both are grounded in an evaluation of patient-healthcare provider relationships and were developed to facilitate improvements in personalised interventions. We did not identify any multidimensional instruments that capture a range of personalised care domains.

Content analysis of instruments.

Content analysis of individual instruments revealed a range of constructs. We categorised instruments broadly in terms of overall purpose: to be used pre-intervention, to determine needs and direct treatment (42%); to be used post-intervention, to evaluate outcome or experience (22%); or to be used flexibly across the course of care, both to tailor input and to evaluate change (36%).

Whilst some instruments focus specifically on a single construct (e.g., knowledge), many evaluate a range. Most tools included multiple-choice or rating-scale response items, whilst some had open-text answers that require a degree of analysis or coding (e.g., [24]).

Content analysis generated six domains (Table 2). Some instruments focus on a single domain, and others cross several, but no instruments covered all six domains. These domains relate specifically to instruments that evaluate personalised care from a service user perspective. There were insufficient instruments to generate similar domains in relation to healthcare professionals.

Table 2. Domains and Examples.
DOMAIN | EXAMPLE(S)
Domain 1: Understanding the Person
1.1 Values and Beliefs: I have a firm belief which guides me for better diabetes control [Character Strengths in Diabetes Self-Management Scale [25]]
I am confident with controlling my symptoms [Chronic Hepatitis B Self-Management Scale [26]]
I have been asked about my values and traditions [Patient Assessment of Care in Chronic Conditions [27]]
1.2 Priorities and Preferences
1.3 Self-Efficacy and Confidence
1.4 Acceptance and Resilience
Domain 2: Understanding Capability
2.1 Knowledge – Condition Specific: Knowledge: During an asthma attack, the muscles around the air tubes tighten and the tubes become narrow (True/False) [Asthma General Knowledge Questionnaire [28]]
Skill: (I am able to ensure) The solution bags are correctly attached and the tubes are correctly organized [Self-Management Scale for Peritoneal Dialysis [29]]
2.2 Knowledge – General
2.3 Skills – Physical
2.4 Skills – Psychological
Domain 3: Understanding Behaviour
3.1 Intent or Preparedness: Intent: I realize now that it is time for me to come up with a better plan to cope with or manage my injury related problems [Readiness to Engage in Self-Management after Acute Traumatic Injury [31]]
Action – Condition Specific: I check my blood sugar levels with care and attention [Diabetes Self-Management Questionnaire [32]]
Action/Impact – General: I do things to maintain a healthy weight [Adult Epilepsy Self-Management Measurement Instrument [33]]
3.2 Behavioural Action
3.3 Behavioural Impact
Domain 4: Personalised Care Interventions
Interventions offered or accessed: In the last 3 months, how often have you used community resources to help manage your illness such as senior centres, community centres, or mall walking programmes [The Chronic Illness Resources Survey [34]]
Attending support groups is an important part of my HIV management strategy [HIV Self-Management Scale [30]]
Domain 5: Experience of Care
5.1 Experience and relationship with healthcare team: My HIV doctor and I have a good relationship [HIV Self-Management Scale [30]]
I am comfortable suggesting treatment plan changes to my health care provider [Diabetes Self-Management Instrument [35]]
I have problems with different healthcare providers not communicating with each other about my medical care [Patient Experience with Treatment and Self-Management [36]]
5.2 Experience and satisfaction of care overall
Domain 6: Wider Determinants of Health
6.1 Social: To what extent did cost or insurance make it hard to take best care of your diabetes? [Barriers and Supports Evaluation [37]]
6.2 Economic
6.3 Environmental

Domain 1: Understanding the Person includes items that seek to understand the person (patient) and their unique sense of self, typically for the purpose of tailoring care interventions. Included within this domain are items that explore a person’s values, beliefs, priorities and preferences, characterising individual aspects that are important to personalisation of care. Whilst these items may be influenced by external factors, they are inherently internal to a person. This domain also includes items relating to character strengths, exploring self-efficacy, confidence, acceptance and resilience. These items are potentially changeable, and therefore understanding them is important not just for tailoring interventions but also as the direct focus of those interventions, e.g., a self-management support programme aimed at building self-efficacy.

Domain 2: Understanding Capability includes items that explore a person’s knowledge and skills, in relation to one or more component of personalised care. Knowledge was further categorised as general or condition specific; and skills were further categorised as physical or psychological. Included within psychological skills were executive functions, such as problem solving and proactivity.

Domain 3: Understanding Behaviour primarily includes items that ask about actions, i.e., what someone does and/or how often they do it. Behaviours were both general and condition specific. Whilst we acknowledge that many instruments exploring behaviour are grounded in behavioural change theory, for simplicity we categorised behaviour-related items into one of three groups: behavioural intent, including preparedness or readiness for change (what someone plans to do); behavioural action, exploring what someone actually does (or perceives they do); and behavioural impact, including items that explore someone’s ability to identify change and/or insight into the outcome of their behaviours. As behaviour is influenced by both the person and their capability, this forms the central thread of our proposed framework.

Domain 4: Personalised Care Interventions includes items that explore what happens for someone: what interventions were given, or what was offered by the service.

Domain 5: Experience of Care explores a person’s experience (or satisfaction) with healthcare; whether relating to a specific intervention, or their overall healthcare experience. This domain includes items about a person’s relationship and trust with their healthcare practitioner, healthcare team, or the health system more broadly.

Domain 6: Wider Determinants covers any social, economic or environmental factors that may influence a person’s mental or physical health.

The proposed schematic (Fig 3) illustrates the likely relationship between these domains. Factors relating to “the person” and “capability” (Domain 1 and 2) will influence an individual’s “behaviour” (Domain 3); and these insights can be used to inform “personalised care interventions” (Domain 4), through needs based and tailored treatment plans. Equally, those interventions, if targeted correctly, should mediate behaviours, and may alter aspects of Domains 1 and 2 (e.g., increasing knowledge or improving self-efficacy). All factors impact “experience” (Domain 5). Wider determinants of health (Domain 6) sit around the outside, as wider contextual factors that can influence all aspects of the model.

Fig 3. Personalised Care Evaluation Domains (PCED-6).
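As a supplementary, machine-readable sketch of the relationships shown in Fig 3, the PCED-6 domains and the proposed “influences” links can be expressed as a simple mapping; this is illustrative only and is not a validated model.

# A minimal sketch of the PCED-6 relationships described above, expressed as a
# directed mapping ("influences"). It mirrors the narrative account of Fig 3.
PCED6_DOMAINS = {
    1: "Understanding the Person",
    2: "Understanding Capability",
    3: "Understanding Behaviour",
    4: "Personalised Care Interventions",
    5: "Experience of Care",
    6: "Wider Determinants of Health",
}

INFLUENCES = {
    1: [3, 4, 5],        # the person influences behaviour, informs interventions, shapes experience
    2: [3, 4, 5],        # capability influences behaviour, informs interventions, shapes experience
    3: [5],              # behaviour feeds into experience
    4: [1, 2, 3, 5],     # interventions may alter person/capability, mediate behaviour, shape experience
    6: [1, 2, 3, 4, 5],  # wider determinants sit around the outside and influence all domains
}

for src, targets in INFLUENCES.items():
    for tgt in targets:
        print(f"{PCED6_DOMAINS[src]} -> {PCED6_DOMAINS[tgt]}")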

Discussion

To our knowledge, this is the first review to summarise the breadth and range of instruments available within the personalised care field.

Despite featuring heavily in healthcare policy across many parts of the world, the widespread implementation of personalised care is lagging. The ambition to embed personalised care within modern health systems is hindered by many challenges, including that of evaluation. This raises questions about how we can effectively understand and evaluate a complex, nebulous construct that is shaped by the ethos and culture of those delivering it and experienced uniquely by those receiving it.

It can be difficult to rate personalised care if patients (or indeed clinicians) lack a frame of reference; instruments therefore need to be specific enough to elicit useful responses, yet broad enough to capture insights into a concept that is, by nature, underpinned by an individual’s values and preferences [38]. This review identified a large volume of instruments designed to support personalised care evaluation in some way. Many were designed to understand a single construct of care (e.g., supported self-management), at an individual level (e.g., patients) and in a specific clinical population (e.g., diabetes). For clinicians looking to utilise a standardised instrument for a specific purpose, there are many to choose from. Such tools can aid with tailoring treatment for an individual and, if used well, can act as a clinical prompt and conversation starter to facilitate a more personalised approach to care. Such assessment scales can help clinicians to understand the status and ability of patients, to formulate a common understanding, and consequently to provide more individualised and effective support to meet individual need [39]. Later in the patient journey, outcomes from specific personalised care initiatives can be evaluated using patient-reported outcome and experience measures (PROMs and PREMs), allowing patients to report how they function or feel with respect to their health and wellbeing, or their healthcare experience [Person-Centered Outcome Measures - NCQA].

The PCED-6 framework developed through this review highlights the range of constructs that could form part of a personalised care evaluation. Which domains are relevant depends on the context and purpose of any given evaluation. Future development of the framework could consider the weighting of each domain, ideally in a dynamic manner that is tailored to the importance of each domain to the individual person and situation. When selecting an instrument to use in clinical practice or research, mapping to this framework will aid contextualisation and provide clarity on what is and is not being considered within the chosen instrument.

Most of the instruments identified through this scoping review focus on either supported self-management or shared decision making (SDM). As self-management encompasses all the actions a person takes daily to manage symptoms, avoid relapse and optimise well-being [40], it is not surprising that the instruments measuring self-management had a strong focus on behaviours, as well as the knowledge, skills and confidence to sustain those behaviours. Instruments that include a range of factors, rather than evaluating just one construct (e.g., knowledge), are likely to give insights that are more impactful for clinical practice. For SDM, previous research has classified instruments into three categories: tools that capture decision antecedents (e.g., role preference), scales that describe the decision-making process, and instruments that assess decision outcomes (i.e., decision quality, decisional conflict, regret, knowledge) [10]. We found that tools focusing on SDM broadly fit these categories. Overall, the content of the identified SDM tools was homogeneous, likely because SDM is a well-defined construct with an underpinning theoretical basis. One limitation of the SDM instruments is that most were designed to evaluate a single interaction or decision, with an overemphasis on medical decision making. In clinical practice, decision making is not (and should not) always be neatly bound into one interaction. A heavy emphasis on measuring single events in healthcare has been noted elsewhere [41], with most PREMs relating to short-term care episodes, largely in the hospital setting, and limited to a singular disease focus. Future PREM development should aim to capture experiences of continuity and coordination within and between health care services and providers [41], a notion that is supported throughout this current review. Whilst there is value in having instruments that focus on specific interventions for personalised care delivery (i.e., SDM and supported self-management), our review highlights a lack of tools that account for the complex and interacting nature of personalised care, and how it is delivered across systems and over time.

The context in which personalised care is implemented is changing, which in time, may impact clinical and research utility of instruments. For example, some instruments include specific questions relating to digital confidence or competence [42]. Instruments with very specific questions about technical knowledge or skills, in relation to a specific condition, are likely to become outdated as clinical care evolves.

Many authors cite the absence of any tool to assess self-management capability as the reason for developing a new one, yet this review found that a large number of such instruments already exist. There is a risk of oversaturation of condition-specific instruments in this field, and efforts should shift towards how instruments can be integrated within clinical pathways, to embed and improve personalised care delivery. Furthermore, it is known that the compound effects of multiple long-term conditions are an independent barrier to self-management [43,44]; instruments designed to understand the common barriers, and the cumulative impacts, for those living with multiple long-term conditions are essential.

This review did not identify any instruments designed to understand personalised care more broadly. As such, there is no universally accepted way to understand the translation of policy to practice at a system or national level, in a way that captures the increasing complexity of health systems or the challenge of multimorbidity. An excess of definitions contributes to this challenge and has led to the development and adaptation of many tools. Capturing all aspects of personalised care within a single tool presents conceptual challenges, suggesting the potential value of multi-instrument approaches. The PCED-6 framework could be used to guide such approaches, aiding instrument selection in line with the intended purpose of the evaluation. Our review also highlights a lack of instruments that consider multiple perspectives. We identified very few instruments targeted at understanding the perspectives of healthcare professionals or informal caregivers/family members. No instruments were specifically designed for system leaders, or those in senior leadership or policy roles. Evidence suggests that what people/patients want, and what clinicians think they want, can be very different [27]. At a delivery level, achieving personalised care requires a horizontal balance of power and is directly influenced by the attitudes and approaches of healthcare practitioners [45]. Wider contextual factors, at an organisational and system level, also act and interact as barriers or facilitators to personalised care delivery [46]. Yet current evaluation instruments focus almost exclusively on the patient perspective. This misses a vital opportunity to identify, describe and compare the wider factors that govern implementation, and is at odds with the fundamental notion that successful personalised care delivery requires a whole-system approach. This is an important area for future international research.

Strengths, limitations and challenges

Defining personalised care as a concept, and setting the boundaries for inclusion within this scoping review, was a challenge. We used the NHS England Comprehensive Model of Personalised Care to develop our search terms, keeping our lens purposely broad. To ensure there was a degree of focus to the review, and that its size remained manageable, we excluded instruments that evaluate constructs related to personalised care but that are not personalised care per se, for example scales for measuring patient activation and self-efficacy. Arguably, these concepts could be among the most useful to measure in clinical practice, and their exclusion does not reflect their importance.

It was not within the scope of this review to report the reliability or validity of the included measures. Given the overwhelming number of instruments identified, future work to clarify and report the psychometric properties and clinical application of the various instruments could help clinicians in their selection and use. Importantly, the instrument is only the start; how it is used constructively to inform actions that improve personalisation of care is crucial.

Whilst we have mapped the development of instruments around the globe, we have only included studies published in the English language and accept this may skew the true geographical picture.

Conclusions

This review provides a comprehensive overview of the current body of evidence relating to the evaluation of personalised care delivery. Personalised care is a complex intervention, made up of multiple interacting behaviours. Its principles are applicable in a broad range of settings, for a broad range of purposes – benefitting not just individuals, but also the wider health system. The implementation of personalised care is widely accepted as being reliant on a shift in attitudes and culture within healthcare and is therefore influenced by a range of stakeholders. This complexity makes the evaluation and measurement of personalised care globally challenging. Nonetheless, evaluation is an important pre-requisite to improving delivery and meeting the ambitious challenge of operationalising personalised care across health systems and countries. Through an improved understanding of current methods to objectively and consistently evaluate the delivery of care that is personalised, areas for improvement can be identified and targeted, and change can be monitored.

We found a wide range of instruments that have been developed and tested within a range of populations and settings. However, this review confirms that we currently lack instruments that a) can be broadly applied, including to understand the experiences of people living with multiple long-term conditions, and b) provide insights into the multiple perspectives (e.g., healthcare professionals, family carers) that have a role in personalised care delivery. Future studies should focus efforts on the development and use of single instruments, or multi-instrument approaches, that address these gaps, allowing for comparison and shared learning across and between services and health systems. Future work should also focus on how instruments are used to improve personalised care delivery, particularly through a less siloed, multimorbidity lens.

Contributions to the literature:

  • There are a wide range of published instruments designed to inform, understand or evaluate the delivery of personalised care interventions.

  • Most instruments focus on one element of personalised care (e.g., shared decision making) and are designed for a specific clinical population (e.g., diabetes).

  • Across all available instruments, a broad range of constructs are evaluated. These have been summarised as six domains and used to propose the Personalised Care Evaluation Domains model – PCED-6.

  • Future work is required to understand if and how standardised instruments should be operationalised, to improve personalised care delivery in practice.

Supporting information

S1 File. Search Strategy.

(DOCX)

S2 File. Summary of Included Papers.

(DOCX)


Abbreviations

SDM

Shared Decision Making

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.NHSE. The NHS Long Term Plan. London, UK: NHS England. 2019. [Google Scholar]
  • 2.Joseph-Williams N, Lloyd A, Edwards A, Stobbart L, Tomson D, Macphail S, et al. Implementing shared decision making in the NHS: lessons from the MAGIC programme. BMJ. 2017;357:j1744. doi: 10.1136/bmj.j1744 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.SCIE. Person Centred Care - Evaluating Personalised Care Online. Social Care Institute for Excellence. 2020. https://www.scie.org.uk/person-centred-care/evaluating-personalised-care [Google Scholar]
  • 4.Chowdhury SR, Chandra Das D, Sunna TC, Beyene J, Hossain A. Global and regional prevalence of multimorbidity in the adult population in community settings: a systematic review and meta-analysis. EClinicalMedicine. 2023;57:101860. doi: 10.1016/j.eclinm.2023.101860 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Thomas H, Mitchell G, Rich J, Best M. Definition of whole person care in general practice in the English language literature: a systematic review. BMJ Open. 2018;8(12):e023758. doi: 10.1136/bmjopen-2018-023758 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.England N. Personalised Care Operating Model. In: England N, editor. London, UK. 2018. [Google Scholar]
  • 7.Sargeant JM, O’Connor AM. Scoping reviews, systematic reviews, and meta-analysis: applications in veterinary medicine. Front Veterinary Sci. 2020;7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Research Methodol. 2005;8(1):19–32. [Google Scholar]
  • 9.Pollock D, Peters MDJ, Khalil H, McInerney P, Alexander L, Tricco AC, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth. 2023;21(3):520–32. doi: 10.11124/JBIES-22-00123 [DOI] [PubMed] [Google Scholar]
  • 10.Peters MDJ, Marnie C, Tricco AC, Pollock D, Munn Z, Alexander L, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Implement. 2021;19(1):3–10. doi: 10.1097/XEB.0000000000000277 [DOI] [PubMed] [Google Scholar]
  • 11.Universal Personalised Care. Implementing the Comprehensive Model. In: England N, editor. London: UH; 2019. [Google Scholar]
  • 12.Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Pollock D, Peters MDJ, Khalil H, McInerney P, Alexander L, Tricco AC, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth. 2023;21(3):520–32. doi: 10.11124/JBIES-22-00123 [DOI] [PubMed] [Google Scholar]
  • 14.Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:69. doi: 10.1186/1748-5908-5-69 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Hsieh HF, Shannon SE. Three Approaches to Qualitative Content Analysis. Qualitative Health Res. 2005;15(9):1277–88. [DOI] [PubMed] [Google Scholar]
  • 16.Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Res Methodol. 2018;18(1):143. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Kriston L, Schumacher L, Hahlweg P, Härter M, Scholl I. Application of the skills network approach to measure physician competence in shared decision making based on self-assessment. PLoS One. 2023;18(2):e0282283. doi: 10.1371/journal.pone.0282283 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Bomhof-Roordink H, Gärtner FR, van Duijn-Bakker N, van der Weijden T, Stiggelbout AM, Pieterse AH. Measuring shared decision making in oncology: Development and first testing of the iSHAREpatient and iSHAREphysician questionnaires. Health Expect. 2020;23(2):496–508. doi: 10.1111/hex.13015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Elwyn G, Edwards A, Wensing M, Hood K, Atwell C, Grol R. Shared decision making: developing the OPTION scale for measuring patient involvement. Qual Saf Health Care. 2003;12(2):93–9. doi: 10.1136/qhc.12.2.93 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Kosmala-Anderson J, Wallace LM, Turner A, Barwell F. Development and psychometric properties of a self report measure to assess clinicians’ practices in self management support for patients with long term conditions. Patient Educ Couns. 2011;85(3):475–80. doi: 10.1016/j.pec.2010.10.007 [DOI] [PubMed] [Google Scholar]
  • 21.Bartlett JA, Peterson JA. Psychometric evaluation of the Shared Decision-Making Instrument--Revised. West J Nurs Res. 2013;35(2):193–213. doi: 10.1177/0193945912463267 [DOI] [PubMed] [Google Scholar]
  • 22.Kim SC, Boren D, Solem SL. The Kim Alliance Scale: development and preliminary testing. Clin Nurs Res. 2001;10(3):314–31. doi: 10.1177/c10n3r7 [DOI] [PubMed] [Google Scholar]
  • 23.Lerman CE, Brody DS, Caputo GC, Smith DG, Lazaro CG, Wolfson HG. Patients’ Perceived Involvement in Care Scale: relationship to attitudes about illness and medical care. J Gen Intern Med. 1990;5(1):29–33. doi: 10.1007/BF02602306 [DOI] [PubMed] [Google Scholar]
  • 24.Glasgow RE, Toobert DJ, Barrera M Jr, Strycker LA. Assessment of problem-solving: a key to successful diabetes self-management. J Behav Med. 2004;27(5):477–90. doi: 10.1023/b:jobm.0000047611.81027.71 [DOI] [PubMed] [Google Scholar]
  • 25.Wang RH, Kao CC, Su YC, Chen SY, Hsu HC, Lu CH, et al. Character strengths use in diabetes self‐management scale: development and psychometric testing. J Adv Nurs. 2023;79(10):4034–43. [DOI] [PubMed] [Google Scholar]
  • 26.Kong L-N, Zhu W-F, He S, Wang T, Guo Y. Development and preliminary validation of the chronic hepatitis B self-management scale. Appl Nurs Res. 2018;41:46–51. doi: 10.1016/j.apnr.2018.03.009 [DOI] [PubMed] [Google Scholar]
  • 27.Mulley AG, Trimble C, Elwyn G. Patients’ preferences matter: stop the silent misdiagnosis. King’s Fund. 2012. [DOI] [PubMed] [Google Scholar]
  • 28.Allen RM, Jones MP. The validity and reliability of an asthma knowledge questionnaire used in the evaluation of a group asthma education self-management program for adults with asthma. J Asthma. 1998;35(7):537–45. doi: 10.3109/02770909809048956 [DOI] [PubMed] [Google Scholar]
  • 29.Devia M, Vesga J, Sanchez R, Sanabria RM, Figueiredo AE. Development of an instrument to assess self-management capacity of patients receiving peritoneal dialysis: CAPABLE. Peritoneal dialysis international. J Int Soc Perit Dial Int. 2022;42(4):370–6. doi: 10.1177/08968608211059897 [DOI] [PubMed] [Google Scholar]
  • 30.Webel AR, Asher A, Cuca Y, Okonsky JG, Kaihura A, Dawson Rose C. Measuring HIV self-management in women living with HIV/AIDS: a psychometric evaluation study of the HIV Self-management Scale. J Acquired Immune Deficiency Syndromes. 2012;60(3):e72–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Wegener ST, Castillo RC, Heins SE, Bradford AN, Newell MZ, Pollak AN, et al. The development and validation of the readiness to engage in self-management after acute traumatic injury questionnaire. Rehabil Psychol. 2014;59(2):203–10. doi: 10.1037/a0035693 [DOI] [PubMed] [Google Scholar]
  • 32.Schmitt A, Gahr A, Hermanns N, Kulzer B, Huber J, Haak T. The Diabetes Self-Management Questionnaire (DSMQ): development and evaluation of an instrument to assess diabetes self-care activities associated with glycaemic control. Health Qual Life Outcomes. 2013;11:138. doi: 10.1186/1477-7525-11-138 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Escoffery C, Bamps Y, LaFrance WC Jr, Stoll S, Shegog R, Buelow J, et al. Factor analyses of an Adult Epilepsy Self-Management Measurement Instrument (AESMMI). Epilepsy Behav. 2015;50:184–9. doi: 10.1016/j.yebeh.2015.07.026 [DOI] [PubMed] [Google Scholar]
  • 34.Glasgow RE, Strycker LA, Toobert DJ, Eakin E. A social-ecologic approach to assessing support for disease self-management: the Chronic Illness Resources Survey. J Behav Med. 2000;23(6):559–83. doi: 10.1023/a:1005507603901 [DOI] [PubMed] [Google Scholar]
  • 35.Lee E-H, Lee YW, Chae D, Lee K-W, Chung JO, Hong S, et al. A New Self-management Scale with a Hierarchical Structure for Patients with Type 2 Diabetes. Asian Nurs Res (Korean Soc Nurs Sci). 2020;14(4):249–56. doi: 10.1016/j.anr.2020.08.003 [DOI] [PubMed] [Google Scholar]
  • 36.Eton DT, Yost KJ, Lai J-S, Ridgeway JL, Egginton JS, Rosedahl JK, et al. Development and validation of the Patient Experience with Treatment and Self-management (PETS): a patient-reported measure of treatment burden. Qual Life Res. 2017;26(2):489–503. doi: 10.1007/s11136-016-1397-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Planalp EM, Kliems H, Chewning BA, Palta M, LeCaire TJ, Young LA, et al. Development and validation of the self-management Barriers and Supports Evaluation for working-aged adults with type 1 diabetes mellitus. BMJ Open Diabetes Res Care. 2022;10(1):e002583. doi: 10.1136/bmjdrc-2021-002583 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Phillips NM, Street M, Haesler E. A systematic review of reliable and valid tools for the measurement of patient participation in healthcare. BMJ Quality & Safety. 2016;25(2):110–7. [DOI] [PubMed] [Google Scholar]
  • 39.Amano K. Development of a self‐management scale for lower urinary tract symptoms in patients with cancer after radical prostatectomy. International J Urological Nursing. 2023;17(2):103–15. [Google Scholar]
  • 40.Lorig KR, Holman H. Self-management education: history, definition, outcomes and mechanisms. Ann Behav Med. 2003;26(1):1–7. [DOI] [PubMed] [Google Scholar]
  • 41.Bull C, Byrnes J, Hettiarachchi R, Downes M. A systematic review of the validity and reliability of patient-reported experience measures. Health Serv Res. 2019;54(5):1023–35. doi: 10.1111/1475-6773.13187 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Eikelenboom N, Smeele I, Faber M, Jacobs A, Verhulst F, Lacroix J, et al. Validation of Self-Management Screening (SeMaS), a tool to facilitate personalised counselling and support of patients with chronic diseases. BMC Fam Pract. 2015;16:165. doi: 10.1186/s12875-015-0381-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Bayliss EA, Ellis JL, Steiner JF. Barriers to self-management and quality-of-life outcomes in seniors with multimorbidities. Ann Family Med. 2007;5(5):395–402. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Bayliss EA, Steiner JF, Fernald DH, Crane LA, Main DS. Descriptions of barriers to self-care by persons with comorbid chronic diseases. Ann Fam Med. 2003;1(1):15–21. doi: 10.1370/afm.4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Moore L, Britten N, Lydahl D, Naldemirci Ö, Elam M, Wolf A. Barriers and facilitators to the implementation of person-centred care in different healthcare contexts. Scand J Caring Sci. 2017;31(4):662–73. doi: 10.1111/scs.12376 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Neale J, Parkman T, Strang J. Challenges in delivering personalised support to people with multiple and complex needs: qualitative study. J Interprof Care. 2019;33(6):734–43. doi: 10.1080/13561820.2018.1553869 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Sefki Kolozali

PONE-D-24-60456: What instruments are available to aid or evaluate personalised care delivery, from the perspectives of healthcare practitioners and service users? A narrative scoping review. PLOS ONE

Dear Dr. Johnson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR:

Thank you for submitting your manuscript to PLOS ONE. After careful consideration and review, I believe your manuscript addresses an important and timely topic. While it has merit, it does not yet fully meet PLOS ONE's publication criteria in its current form. Therefore, I am inviting you to submit a major revision that addresses the points raised during the peer review process.

When submitting your revision through Editorial Manager, please include the following:

  1. A rebuttal letter that responds point-by-point to each comment from the academic editor and reviewers. Upload this as Response to Reviewers.

  2. A marked-up version of your revised manuscript showing all changes made (e.g., with Track Changes). Upload this as Revised Manuscript with Track Changes.

  3. A clean version of your revised manuscript without tracked changes. Upload this as Manuscript.

  4. If applicable, an updated financial disclosure statement in your cover letter.

You may also consider depositing any protocols referenced in your review process to protocols.io, where they can be assigned a DOI and cited independently. For more details about submitting Lab Protocol articles, visit: https://plos.org/protocols.

==============================

Please submit your revised manuscript by May 12 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Sefki Kolozali

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf .

2.  We are unable to open your figure file [Fig 2 Publications per year.eps]. Please kindly revise as necessary and re-upload.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors provided a comprehensive overview of current personalised care delivery and described the existing tools, instruments or methods for assessing, evaluating or measuring personalised care from the perspectives of healthcare practitioners and/or service users. This is an important review for the field of personalised care, as a well-established personalised care will benefit not only individuals, but also the wider health system.

The manuscript is well structured and well-written. It also proposed a framework of six domains for content analysis of health care instruments. The framework is a sensible attempt to evaluate the personalised care instruments. My suggestions are:

1 the Figure 2 is at low resolution and hard to read. It will be easier to understand the figure if move text "domain 3: understanding behaviour" into the magenta bar.

2. A weighting system for the six domains could be introduced when evaluating the instruments. The importance of the six domains may vary depending on the needs of individuals.

Reviewer #2: The work provides a valuable contribution to the field, offering a comprehensive overview of existing instruments and proposing a conceptual framework. However, there are several areas that require attention to strengthen the manuscript.

While the review is thorough, the narrative occasionally loses focus. The introduction and discussion sections could benefit from more concise and targeted language to better guide the reader through your objectives and findings. For instance, the introduction could more sharply delineate the gap in the literature that your review addresses.

The methodology section is detailed, but the exclusion criteria, particularly the decision to exclude instruments measuring constructs like self-efficacy and patient activation, need more robust justification given their relevance to personalised care.

While the iterative refinement of the data extraction process is commendable, it would be useful to clarify how inter-rater reliability was ensured during study selection and coding. Did reviewers use any specific metrics (e.g., Cohen’s kappa) to measure agreement?

The discussion section effectively summarises the findings but could delve deeper into the implications of the identified gaps. For example, the lack of instruments that consider multiple perspectives (e.g., healthcare professionals, caregivers) is a significant limitation that warrants more extensive discussion regarding its impact on the field.

The manuscript notes that no single instrument captures all aspects of personalised care. Could this indicate a need for a multi-instrument approach rather than a single new tool?

The conclusion hints at future research needs but could be more specific. Highlighting particular areas where new instrument development is most urgently needed, especially in the context of multimorbidity, would provide a clearer roadmap for future studies.

There are instances where the language could be more polished, such as: "prevenance" (line 67) should likely be "prevalence.", "Personlised" (figure 3 legend) should be "Personalised.". Phrases like "a plethora of instruments" could be replaced with more precise language. Additionally, varying sentence length and structure would improve readability and engagement, particularly in the discussion section.

Overall, your manuscript is a strong foundation that, with some refinement, can make a significant impact on the field of personalised care evaluation. I look forward to seeing the revised version.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Jul 10;20(7):e0325833. doi: 10.1371/journal.pone.0325833.r002

Author response to Decision Letter 1


14 Apr 2025

Thank you for inviting us to revise and re-submit this paper, describing the range of instruments available to inform, evaluate or measure personalised care delivery. We appreciate the valuable feedback offered by the reviewers, which has allowed us to further strengthen the paper.

We have responded to each comment raised by the editor and the reviewers in the table below. Each point has been addressed, and the amendments are highlighted as tracked changes in the uploaded manuscript.

Reviewer 1: Figure 2 is at low resolution and hard to read. The figure would be easier to understand if the text "domain 3: understanding behaviour" were moved into the magenta bar.

Response: Many thanks for highlighting this. We have made the suggested text change to the figure and have uploaded a new version in higher resolution.

Reviewer 1: A weighting system for the six domains could be introduced when evaluating the instruments. The importance of the six domains may vary depending on the needs of individuals.

Response: Thank you for this suggestion. We have considered the weighting of domains but do not feel it is within the scope of this review to make recommendations on this. We have added a line into the second paragraph of the discussion to highlight this as a concept for future consideration.

“Future development of the framework could consider the weighting of each domain, ideally in a dynamic manner that is tailored to the importance of each domain to the individual person and situation.”

Reviewer 2: While the review is thorough, the narrative occasionally loses focus. The introduction and discussion sections could benefit from more concise and targeted language to better guide the reader through your objectives and findings. For instance, the introduction could more sharply delineate the gap in the literature that your review addresses

Response: We have made some changes to both the introduction and the discussion to aid conciseness and focus. We have also added the following text into the second paragraph to add clarity about the gap we are addressing:

“While there are various instruments available in the field of personalised care, there is a lack of guidance on how to effectively select and utilise these tools. Enhancing this understanding is essential to improving clinical delivery and systematic implementation. This scoping review will compile and categorise the existing instruments, laying the groundwork for a deeper understanding of their role in personalised care delivery, and identifying areas that require further research.”

Reviewer 2: The methodology section is detailed, but the exclusion criteria, particularly the decision to exclude instruments measuring constructs like self-efficacy and patient activation, need more robust justification given their relevance to personalised care

Response: Thank you for this comment. Defining the scope of this review was challenging, given the wide range of constructs that are linked to personalised care. We decided to exclude these types of instruments because, although they are related to personalised care, they also exist independently of it. There is a significant volume of instruments in these wider fields (e.g. self-efficacy scales); including them would risk the scale of the review becoming unmanageable and would also risk losing focus. These concepts were identified within our content analysis of the included tools, where they sat within instruments that evaluated a wider range of constructs. Their role in personalised care is therefore acknowledged through the results of this paper, but instruments that were solely designed to evaluate one characteristic were excluded. We have added clarity to the text regarding this:

“Instruments that sought to specifically and solely understand personal characteristics, such as self-efficacy, coping behaviours or activation, were excluded from the review. Whilst we recognise these are important characteristics that influence personalised care, they also play an independent role in overall health outcomes, beyond personalised approaches. To maintain a direct focus on personalised care delivery, we only included instruments that explored two or more personal characteristics, with the explicit intention of informing personalised care delivery. Given the significant number of related constructs and the high volume of instruments in these related fields, this was also important to maintain a realistic scope for the review.”

Reviewer 2: While the iterative refinement of the data extraction process is commendable, it would be useful to clarify how inter-rater reliability was ensured during study selection and coding. Did reviewers use any specific metrics (e.g., Cohen’s kappa) to measure agreement?

Response: We did not use any statistical methods to establish inter-rater reliability, which we believe is acceptable for a scoping review of this nature.
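For context, had such a metric been applied, Cohen's kappa could be calculated from two reviewers' independent screening decisions. A minimal sketch in Python (hypothetical data, using scikit-learn's cohen_kappa_score; this was not part of our review process) is shown below:

    # Minimal sketch (hypothetical data): Cohen's kappa for two reviewers'
    # independent include/exclude decisions during study screening.
    from sklearn.metrics import cohen_kappa_score

    reviewer_a = ["include", "exclude", "include", "include", "exclude"]  # hypothetical
    reviewer_b = ["include", "exclude", "exclude", "include", "exclude"]  # hypothetical

    kappa = cohen_kappa_score(reviewer_a, reviewer_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance-level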

Reviewer 2: The discussion section effectively summarises the findings but could delve deeper into the implications of the identified gaps. For example, the lack of instruments that consider multiple perspectives (e.g., healthcare professionals, caregivers) is a significant limitation that warrants more extensive discussion regarding its impact on the field.

Response: Thank you for highlighting this. We have reviewed the discussion and added depth to aspects, including the point raised about multiple perspectives.

Reviewer 2: The manuscript notes that no single instrument captures all aspects of personalised care. Could this indicate a need for a multi-instrument approach rather than a single new tool?

Response: We agree, and have added a sentence within the discussion to reflect this:

“Capturing all aspects of personalised care within a single tool presents conceptual challenges, reflecting the potential for multi-instrument approaches. The PCED-6 framework could be used to guide such approaches, aiding instrument selection in line with the intended purpose of the evaluation.”

Reviewer 2: The conclusion hints at future research needs but could be more specific. Highlighting particular areas where new instrument development is most urgently needed, especially in the context of multimorbidity, would provide a clearer roadmap for future studies

Response: The following changes have been made to the conclusions to address this feedback:

“We found a wide range of instruments that have been developed and tested within a range of populations and settings. However, this review confirms that we currently lack instruments that a) can be broadly applied, including to understand the experiences of people living with multiple, long-term conditions and b) provide insights into the multiple perspectives (e.g. healthcare professionals, family carers) that have a role in personalised care delivery. Future studies should focus efforts on the development and use of single or multi-instruments that address these gaps, allowing for comparison and shared learning across and between services and health systems. This is increasingly important given the growing number of people living with multimorbidity, and recognition of the burden of this for healthcare delivery worldwide. Future work should also focus on how instruments are used to improve personalised care delivery, particularly through a less siloed, multimorbidity lens.”

Reviewer 2: There are instances where the language could be more polished: for example, "prevenance" (line 67) should likely be "prevalence", and "Personlised" (figure 3 legend) should be "Personalised". Phrases like "a plethora of instruments" could be replaced with more precise language. Additionally, varying sentence length and structure would improve readability and engagement, particularly in the discussion section.

Response: Thank you – we hope we have addressed this throughout.

Attachment

Submitted filename: Rebuttal Letter for PlosOne Scoping Review.docx

pone.0325833.s004.docx (40KB, docx)

Decision Letter 1

Sefki Kolozali

What instruments are available to aid or evaluate personalised care delivery, from the perspectives of healthcare practitioners and service users? A narrative scoping review.

PONE-D-24-60456R1

Dear Dr. Johnson,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sefki Kolozali

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed most of the comments raised in their revision. They also explained why one of the suggestions was not included in the revision. Based on the quality of the revised manuscript, I think it is acceptable for publication.

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes:  Basma A. Al-Ghali

**********

Acceptance letter

Sefki Kolozali

PONE-D-24-60456R1

PLOS ONE

Dear Dr. Johnson,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Sefki Kolozali

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Search Strategy.

    (DOCX)

    pone.0325833.s001.docx (26.5KB, docx)
    S2 File. Summary of Included Papers.

    (DOCX)

    pone.0325833.s002.docx (76.3KB, docx)
    Attachment

    Submitted filename: Rebuttal Letter for PlosOne Scoping Review.docx

    pone.0325833.s004.docx (40KB, docx)

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.

