BMJ Open Quality. 2020 Mar 19;9(1):e000789. doi: 10.1136/bmjoq-2019-000789

Measure what we want: a taxonomy of short generic person-reported outcome and experience measures (PROMs and PREMs)

Tim Benson 1,2,
PMCID: PMC7103852  PMID: 32198234

Abstract

Introduction

Health and care systems are complex and multifaceted, but most person-reported outcome and experience measures (PROMs and PREMs) address just one aspect. Multiple aspects need measuring to understand how what we do impacts patients, staff and services, and how these are affected by external factors. This needs survey tools that measure what people want, are valid, sensitive, quick and easy to use, and suitable for people with multiple conditions.

Methods

We have developed a coherent family of short generic PROMs and PREMs that can be used in combination in a pick-and-mix way. Each measure has evolved iteratively over several years, based on literature review, user input and field testing. Each has a common format of four items, each with four response options, and is designed for digital data collection with standardised analytics and data visualisation tools. We focused on brevity and low reading age.

Results

The results are presented in tabular format and as a taxonomy. The taxonomy is categorised by respondent type (patient or staff) and measure type. PROMs have subdomains: quality of life, individual care and community; PREMs have subdomains: service provided, provider culture and innovation. We show 22 patient-reported measures and 17 staff-reported measures. Previously published measures have been validated. Others are described for the first time.

Discussion and conclusions

This family of measures is broad in scope but is not claimed to be comprehensive. Measures share a common look and feel, which enables common methods of data collection, reporting and data visualisation. They are used in service evaluation, quality improvement and as key performance indicators. The taxonomy helps to organise the whole, explain what each measure does and identify gaps and overlaps.

Keywords: patient-reported outcome measures, patient satisfaction, surveys, attitude of health personnel, diffusion of innovation

Introduction

Surveys, completed by patients or staff, are widely used in tailoring care, quality improvement, evaluation and population health management. They need to cover the things that matter most to those completing them and other stakeholders. The challenge is to do this in a simple easy-to-use way, while recognising the complexity inherent in the health domain.1

Person-reported outcome measures (PROMs)2–4 and person-reported experience measures (PREMs) measure different things,5 with only weak correlation. PROMs measure people’s perception of their own situation; PREMs measure their perception of services provided. PROMs are a form of personal history and are of clinical value, but PREMs are usually anonymous, because people can be reluctant to criticise those they depend on. Individuals may choose to identify themselves in PREMs, but the default is not to.

PROMs and PREMs may be condition-specific or generic. Two-thirds of health and care expenditure is for people living with three or more chronic conditions,6 but most PROMs apply to only one condition, which limits their use. Different measures have been developed independently and do not work well together.7 For example, in some measures a high score is good, in others high is bad. Scale ranges vary, such as 0–1, 0–10, 0–48 or 0–100.

Generic measures work for all types of patients, treatments and conditions. They are based on the idea that people want similar things, such as good health and well-being, excellent service, supportive communities and organisations, care and innovations that meet their needs.

Care quality is assessed in terms of structure, process and outcome.8 Our focus is on outcome as perceived by patients and staff. Perceived outcome is only one aspect of a complex whole, although broader than the traditional definitions of PROMs and PREMs.9 However, it does not cover all aspects of health outcomes, experience and patient-centred care.10

Response rates are affected by perceived relevance and ease of use.11 Most measures require a higher reading age12 than the average reading age of the UK population, which is about 9 years.13

Background

This work has had a long gestation. During the 1970s, the author worked with Rachel Rosser to evaluate computer systems in a London hospital using a short staff-reported classification of disability and distress.14 Inter-rater reliability studies identified the importance of using clear, unambiguous wording.15

During the mid-2000s, interest in PROMs and PREMs increased, as exemplified by Darzi’s NHS Next Stage Review High Quality Care for All, which recommended their wide use.16 Unfortunately, existing tools were not well suited to routine use, having been used mainly in pharmaceutical clinical trials, where respondents have few time limitations and only one condition.

The author identified a need for a simple PROM that could be used on smartphones and tablets. This led to the development of the howRu health status measure, which evolved from Rosser’s classification. It was tested in a telephone survey of 2751 people living with long-term conditions, in comparison with the 12-item Short Form Survey.17 It was also tested against the 3-level version of EQ-5D in a hospital cardiovascular clinic,18 and in hip and knee replacement surgery.19

After the Stafford Hospital scandal, the financial crash and the change of government, political interest turned to patient experience (PREMs). The howRwe patient experience measure was developed along the same lines as howRu, to be quick and easy to use routinely. It was tested in an orthopaedic presurgical assessment unit.20 The howRu and howRwe measures were both used in a census of 24 000 care home residents in the UK, Australia and New Zealand.21 22

Person-centred care and new care models became a key focus during the mid-2010s. Wessex Academic Health Science Network (AHSN) selected howRu and howRwe for use in the evaluation of the North East Hampshire and Farnham NHS Vanguard project, also known as Happy, Healthy at Home. This eventually used 17 different surveys with more than 2800 respondents. Explicit objectives included improved personal well-being and health confidence, which led to the development of the Personal Well-being Score (PWS) based on ONS4,23 and the Health Confidence Score (HCS).24 Social prescribing and care navigation also attracted attention and evaluation funding, leading to related measures of loneliness, community cohesion and social determinants of health.

During the same period, Wessex AHSN was tasked with evaluating and promoting the spread of digital health innovation, which stimulated the development of innovation adoption measures.25 These built on the author’s prior work about how spread26 27 and interoperability28 are impacted by both technical and non-technical factors (eg, culture).

The aim of this paper is to describe the resulting family of generic measures, organised as a taxonomy. A taxonomy allows measures to be viewed and compared, gaps to be identified, and the body of work to be improved and developed further.

Methods

The author, with colleagues, has developed a family of short generic PROMs and PREMs to capture a broad range of patient and staff perceptions of quality of life, healthcare services, wider determinants of health, and digital and service innovations. These measures share a common format and scoring scheme. They are picked and mixed as required to create longer surveys for different purposes in quality improvement, impact evaluation and as key performance indicators (KPIs).

All measures are generic, suitable for most situations and clinical conditions, irrespective of case-mix, across health and social care. They can be completed on paper, smartphone, tablet, PC or via text message or voice.

Using criteria set out in the literature,29–32 each measure was developed in a similar way.17 None of the work was commissioned formally or grant-funded. The author had full editorial control.

In outline, the approach used was as follows:

  1. Recognise the need for a new measure, based on user feedback and other insights. All measures were developed to meet actual or perceived needs.

  2. Review the relevant literature and identify key themes.

  3. Develop prototypes, based on a common format of four items per measure and four response options per item.

  4. Discuss, revise and field test with users, colleagues and other stakeholders.

  5. Iterate, adapt, evolve and further test. This involved dozens or in some cases hundreds of iterations before all issues were resolved.

  6. Evaluate the measure for distribution (eg, skewness and kurtosis), internal reliability and construct validity.

  7. Publish in a peer-reviewed journal.

The common format, with four items (questions) and four response options, is not a rigid rule and exceptions may be allowed to the number of items or options, although none is shown in this paper.

The Health Confidence Score (figure 1) provides an example of the look and feel, showing the title, preamble and instructions, items (lines), options (columns), colour and emojis.24

Figure 1. The Health Confidence Score: an example of a measure.

Items

Each item measures perception of one characteristic or theme in a measurement domain. Most domains have a well-understood ideal. Item wording needs to capture different aspects of the domain in ways that people readily understand.

Particular attention was given to word count and readability. These were calculated using the word count and readability statistics included in Microsoft Word, applied to the text shown in the tables below, including footnotes, with each item label treated as a separate sentence. The survey preamble and options are excluded, because the preamble is usually tailored to the local context and option repetition depends on administration mode (eg, the options should always be visible to the user). The readability measure is the Flesch Kincaid Grade (FKG), which estimates US school grade.33 As a guide, the reading age of a text is the FKG plus five.
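As a rough illustration of the readability calculations described above, the sketch below applies the published Flesch Kincaid Grade formula33 and the FKG-plus-five rule of thumb. It is a minimal sketch only: the paper uses Microsoft Word’s readability statistics, and the syllable counter here is a crude stand-in for illustration.

```python
import re

def crude_syllables(word: str) -> int:
    """Very rough syllable estimate (vowel groups); not the counter Word uses."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """FKG = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(crude_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

def reading_age(text: str) -> float:
    """Approximate reading age: FKG plus five, as used in the text above."""
    return flesch_kincaid_grade(text) + 5.0

# Each item label is treated as a separate sentence, as in the method described above.
sample = "I know enough about my health. I can look after my health."
print(round(flesch_kincaid_grade(sample), 1), round(reading_age(sample), 1))
```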

Options

The following option sets are used:

  • None, a little, quite a lot, extreme (none–extreme)

  • Strongly agree, agree, neutral, disagree (strongly agree–disagree)

  • Hardly ever, occasionally, sometimes, always (hardly ever–always)

  • Excellent, good, fair, poor (excellent–poor).

This list is extensible. For example, we could also use:

  • Agree, neutral, disagree, strongly disagree (agree–strongly disagree)

  • Strongly agree, agree, disagree, strongly disagree (strongly agree–strongly disagree).

Options are usually ordered left to right, from best to worst. We use colour coding and emoji (both of which are optional), from best (eg, green smiley face) to worst (eg, red sad face). Emoji are tailored to the meaning of each option set, using a choice from: grin, smile, neutral (straight mouth), unhappy and miserable.

All items are optional. In most cases the recall period is now. Many PROMs use recall periods with questions such as: “how often have you experienced X” during the last week or month. However, many people find recall difficult (eg, most people find it hard to remember what they had for dinner 2 or 3 days ago).34 These measures avoid specifying a recall period other than today or yesterday.

Scoring

A high score is always good, which aids consistent understanding of results. This rule is followed even when the name of an item or measure implies that it measures something undesirable.

For items about individuals, the scoring system runs from 0 (worst) to 3 (best). For populations, the mean item score is transformed to a 0–100 scale using the formula: (mean item score)×100/3. For example, responding strongly agree to I know enough about my health scores 3 on the 0–3 individual scale and 100 on the 0–100 population scale; disagree scores 0 on both scales.

Most measures comprise a group of four items. A summary score is calculated for each measure as the sum of the item scores. Assuming four items, at the individual level this gives a 13-point scale from 0 (4×worst) to 12 (4×best). For populations, the mean summary score is shown on a scale from 0 to 100, using the formula: (mean summary score)×100/12. A summary score is not calculated if any item score is missing.

Using a common 0–100 scale for item and summary mean scores enables direct comparison of the results. A mean score of 100 occurs if all respondents chose the best option (the ceiling) and 0 if all chose the least desirable option (the floor). It is unlikely that an individual score will be confused with a population mean score, because they use different ranges.
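To make the scoring rules above concrete, here is a minimal sketch (not the authors’ production analytics; names are illustrative) that maps the four response options to item scores, sums them into a 0–12 summary score for an individual, and transforms population means onto the 0–100 scale.

```python
from statistics import mean
from typing import Optional, Sequence

# Options are ordered best to worst, so the best option scores 3 and the worst 0.
OPTION_SCORES = {"strongly agree": 3, "agree": 2, "neutral": 1, "disagree": 0}

def item_score(response: str) -> int:
    """Individual item score on the 0 (worst) to 3 (best) scale."""
    return OPTION_SCORES[response.lower()]

def summary_score(responses: Sequence[str]) -> Optional[int]:
    """Sum of the four item scores (0-12); None if any item is missing."""
    if len(responses) != 4 or any(r is None for r in responses):
        return None
    return sum(item_score(r) for r in responses)

def population_item_score(item_scores: Sequence[int]) -> float:
    """(mean item score) x 100 / 3, on the 0-100 population scale."""
    return mean(item_scores) * 100 / 3

def population_summary_score(summary_scores: Sequence[int]) -> float:
    """(mean summary score) x 100 / 12, on the 0-100 population scale."""
    return mean(summary_scores) * 100 / 12

# Strongly agree scores 3 for an individual and 100 for a population of one.
print(item_score("strongly agree"), population_item_score([3]))
# Summary score for one respondent: 3 + 2 + 2 + 0 = 7 on the 0-12 scale.
print(summary_score(["strongly agree", "agree", "agree", "disagree"]))
```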

Taxonomy

A taxonomy was developed as a way of organising and classifying the measures, to explain the range and scope of measures to others and to identify gaps and overlaps.

A taxonomy is a system for classifying multifaceted, complex phenomena according to common conceptual domains and dimensions.35 It is a hierarchy of things or concepts in which each node (other than the root) has a single parent and any number of sibling and child nodes. Each node is a specialisation or sub-class of its parent (inheritance).

The development of the taxonomy followed an iterative process similar to that used to develop its components. Key criteria were simplicity, coherence and inheritance.
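As an illustration of the single-parent hierarchy described above, the fragment below encodes the top levels of the taxonomy (see figure 2 and the Results section) as a nested mapping. It is a sketch for illustration, not a published artefact, and the staff-reported PROM subdomains are limited to those shown in table 3.

```python
# Top levels of the taxonomy as a simple nested mapping (sketch only).
# Each node sits under a single parent: rater -> measure type -> subdomains.
# The subdomains group the individual measures listed in tables 1-4.
TAXONOMY = {
    "Patient-reported": {
        "PROMs": ["Quality of life", "Individual care", "Community"],
        "PREMs": ["Care provided", "Provider culture", "Innovation"],
    },
    "Staff-reported": {
        "PROMs": ["Quality of life", "Individual care"],
        "PREMs": ["Care provided", "Provider culture", "Innovation"],
    },
}

# Walk the tree to list every path from root to subdomain.
for rater, measure_types in TAXONOMY.items():
    for measure_type, subdomains in measure_types.items():
        for subdomain in subdomains:
            print(f"{rater} > {measure_type} > {subdomain}")
```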

Patient and public involvement

Many patients, health staff and members of the public took part in focus groups during the development of these measures. They helped test and refine early versions of the measures. Most focus groups were informal. Papers which describe the development and validation of specific measures provide more details of patient and public involvement for those measures.

This paper does not report identifiable data about any individuals or groups.

Results

The taxonomy

The results use the taxonomy as an organising principle or framework. Figure 2 shows the top levels.

Figure 2. Top levels of the taxonomy. PREMs, person-reported experience measures; PROMs, person-reported outcome measures.

Patient-reported and staff-reported measures cover the same domains, but there are important differences between them. It helps to consider these roles separately. Patients are subjects of care, but staff provide care (eg, clinicians, admin staff and volunteers) within an organisational structure. Staff see many patients and the data collection process is usually simpler. Many staff-reported measures were adapted from patient-reported measures.

At the next level, the two broad categories of measure are person-reported outcome measures and person-reported experience measures.

Person-reported outcome measures

PROMs refer to the impact on individuals as perceived by the rater. They include measures of:

  • Quality of life

  • Individual care

  • Community

Quality of life measures include people’s health status, personal wellbeing, fatigue and sleep patterns. These are usually about patients, recorded by patients themselves or proxies on their behalf.

Individual care measures include health confidence, shared decision-making, self-care, behaviour change, adherence to treatment (eg, medication) and acceptance of loss. Individual care is typically based on interactions between patient and clinician (staff); both groups have their own perception of the outcome, which may differ.

Community measures include external and environmental factors such as social determinants of health, loneliness, neighbour relationships and personal safety. This is mainly related to how and where people live.

Person-reported experience measures

PREMs measure people’s perception of the service provided. There are three domains:

  • Care provided

  • Provider culture

  • Innovation

Care provided covers both individual services and the way that services work together. Patients and staff have views about the quality of care provided.

Provider culture measures aspects of each health and care organisation’s policies and practice. Staff have more direct knowledge and experience of the culture than patients.

Innovation focuses on the impact of specific innovations, such as digital health applications and new ways of working. Staff are invariably involved and patients less frequently.

Tables

Details of each measure are shown in tables 1–4.

Table 1.

Patient-reported outcome measures

Name Options Text used in survey Alias Words (FKG) Notes
Quality of life
Health status None–extreme How are you today? (past 24 hours) howRu 24 (2.6) Health status (howRu) is sometimes referred to as health-related quality of life. This was the first in the family.17
 Pain/discomfort Pain or discomfort Pain
 Distress Feeling low or worried Distress
 Disability Limited in what you can do Disability
 Dependence Require help from others Dependence
Personal well-being Strongly agree–disagree How are you feeling in general? PWS 29 (3.7) Personal Well-being Score (PWS) is based on the Office for National Statistics ONS4. Unlike ONS4, all items are worded positively, and it has a summary score.23
 Life satisfaction I am satisfied with my life Satisfaction
 Worthwhile What I do in my life is worthwhile Worthwhile
 Happy I was happy yesterday Happy
 Not anxious I was NOT anxious yesterday NotAnxious
Sleep Strongly agree–disagree Thinking about your recent sleep pattern Sleep 29 (0.9) Sleep hygiene is an important determinant of health and well-being.50
 Sleep at same time I go to sleep at the same time SleepTime
 Wake at same time I wake up at the same time WakeTime
 Wake refreshed I wake up feeling refreshed Refreshed
 Sleep well I sleep well SleepWell
Fatigue Strongly agree–disagree Thinking about getting tired Fatigue 27 (3.9) Fatigue is a common presenting complaint in primary care and can have a large impact on quality of life.51
 Energy level I usually have enough energy Energy
 Tire quickly I do not tire too quickly TireFast
 Able to concentrate I can usually concentrate well Concentrate
 Stamina I can keep going if I need to Stamina
Individual care
Health confidence Strongly agree–disagree How do you feel about caring for your health? HCS 38 (1.9) Health Confidence Score (HCS) covers people’s confidence about looking after their own health.24
 Knowledge I know enough about my health Knowledge
 Self-management I can look after my health SelfManage
 Access to help I can get the right help if I need it GetHelp
 Shared decisions I am involved in decisions about me ShareDecision
Self-care Strongly agree–disagree How well do you look after yourself? SelfCare 28 (4.2) Self-care includes self-management of diet, physical activity, weight and medication.52
 Diet management I manage my diet well Diet
 Exercise management I manage my physical activity well Exercise
 Weight management I manage my weight well Weight
 Meds management I manage my medication well MedsMan
Shared decisions Strongly agree–disagree Thinking about your plan SDM 28 (3.8) Shared decisions (SDM) covers patients’ involvement in clinical decisions, including their understanding of the choices and the risks and benefits of each.53
 Know benefits I know the possible benefits Benefits
 Know downside I know the possible downside Downside
 Know choices I know that I have choices Choices
 Fully involved I feel fully involved Involved
Behaviour change Strongly agree–disagree Thinking about this behaviour Behaviour 29 (1.0) Behaviour change covers capability, opportunity and motivation (conscious and unconscious) to change behaviour based on Michie’s COM-B model.54
 Capability I am able to do it (skills and tools) Capability
 Opportunity Nothing prevents me from doing it Opportunity
 Conscious motive I choose to do it Motivation
 Automatic motive I do it without thinking AutoMotive
Adherence Strongly agree–disagree Do you follow treatment instructions? Adherence 32 (3.1) Adherence includes remembering to take medications, have treatment and to follow instructions, given side effects or recovery, and satisfaction.55
 Remember I remember to do it Remember
 Go on if I feel bad I do not stop if I feel bad TakeIfBad
 Go on if I feel better I do not stop if I feel better TakeIfGood
 Treatment satisfaction I am happy with my treatment TreatSatis
Acceptance of loss Strongly agree–disagree Have you learnt to live with what’s happened? Loss 32 (0.5) Acceptance of loss covers how people cope with loss, learn to live with events, including recognition of capabilities and change, how to do things differently and to move on with life, along the lines of the grief cycle.56
 New capability I know what I can and cannot do CanDo
 Recognise loss I see how my life has changed Recognition
 Change activity I do things differently now Activity
 Move on I have moved on MoveOn
Community
Social determinants Strongly agree–disagree Thinking about how you live SDOH 31 (2.4) Social determinants of health impact health and care outcomes but are outside the clinical system. Education, self-esteem, housing and poverty play a major role in determining people’s health outcomes.57
 Education I have had a good education Education
 Social status I am valued for what I do Status
 Housing I am happy about where I live Housing
 Enough money I have enough money to cope Poverty
Loneliness Strongly agree–disagree Thinking about your friends and family Loneliness 31 (2.4) Loneliness is an important determinant of health and well-being. This measure focuses on people’s perception of loneliness and their social relationships in a positive way.58
 People to talk to I have people to talk to Companion
 People to confide in I have someone I can confide in Confidant
 People to help I have people who will help me PeopleHelp
 Do things with others I do things with others JoinIn
Neighbour relationships Strongly agree–disagree Thinking about your neighbours Neighbours 19 (3.2) Neighbour relationships, community cohesion and social capital are impacted by how well people know, trust and help each other.59
 Know each other We know each other KnowNeighs
 Trust each other We trust each other TrustNeighs
 Share information We share information NeighsShareInfo
 Help each other We help each other NeighsAssist
Personal safety Strongly agree–disagree Thinking about your personal safety PersSafety 30 (4.8) Personal safety covers physical safety (eg, from injury) and emotional safety (from verbal abuse or discrimination), which may occur either inside your own home or when you go out.60
 Safe at home I feel safe at home SafeHome
 Respected at home I feel respected at home HomeRespect
 Safe outside I feel safe outside home SafeOut
 Respected outside I feel respected outside home RespectOut
Loneliness (ONS) Hardly ever–always How often do you LonelinessONS 17 (0.0) This measure is included as an alternative to loneliness (above), based on guidance from the Office for National Statistics (ONS).61
 No one to talk to Have no one to talk to? NoFriends
 Feel left out Feel left out? Isolated
 Feel alone Feel alone? Alone
 Feel lonely Feel lonely? Lonely

FKG, Flesch Kincaid Grade.

Table 2.

Patient-reported experience measures

Name Options Text used in survey Alias Words (FKG) Notes
Care provided
Patient experience Excellent–poor How are we doing? howRwe 18 (2.2) Patient experience (howRwe) covers people’s perception of the care and service provided by a specific service in terms of compassion, communication, access and organisation.20
 Kindness Treat you kindly Kind
 Listen/explain Listen and explain Talk
 Prompt See you promptly Prompt
 Organised Well organised Organised
Service integration Strongly agree–disagree How well do services work together? Integration 35 (2.9) Service integration captures how well services collaborate.62
 Services talk together Services talk to each other Talk
 Service knowledge Staff know what other services do Aware
 Repeat story I do not have to repeat my story Repeat story
 Services work together Different services work well together PartOfTeam
Provider culture
Privacy Strongly agree–disagree Thinking about how we use your data Privacy 37 (4.5) Privacy covers patients’ perceptions of data protection, sharing and information governance.63
 Data are safe My data is kept safe and secure SecureData
 Data shared as needed My data is only shared as needed ShareData
 Can see/check data I can see and check my data CheckData
 Happy about data use I am happy about how my data is used DataSatis
Innovation
Digital confidence Strongly agree–disagree Digital devices include computers, smartphones and tablets DCS 36 (6.8) Digital confidence assesses people’s confidence in using digital apps and similar devices.64
 Digital usage I use a digital device frequently DigitalUse
 Peer usage Most of my friends use digital devices PeerUse
 Access to help I can usually get help if I am stuck Supported
 Confident digitally I feel confident using most digital devices DigitalConf
Product confidence Strongly agree–disagree How do you feel about (this product)? PCS 25 (4.7) Product confidence covers understanding of and confidence in using a specific innovation, application or product.65
 Frequent user I use it frequently ProductUse
 Confident user I feel confident using it SelfAssured
 Know benefits I know the potential benefits Positives
 Know problems I know potential problems Negatives
User satisfaction Strongly agree–disagree What do you think of (this product)? UX 33 (0.5) User satisfaction focuses on people’s perception of how much an innovation is useful and easy to use, availability of help and overall satisfaction.66
 Helps me It helps me do what I want HelpsMe
 Easy to use It is easy to use EasyToUse
 Can get help I can get help if I need it Support
 Product satisfaction I am satisfied with this product ProdSatis
Digital readiness Strongly agree–disagree New ideas in this field of work DigitalReady 30 (4.4) Digital readiness covers how ready people are to use digital innovations and their innovativeness.67
 Digital use I use a digital device frequently DigitalUser
 Confidence I feel confident using most digital devices DigitalConf
 New ideas needed New ideas are needed OpenToIdeas
 Keep up to date I keep up with new ideas WellInformed

FKG, Flesch Kincaid Grade.

Table 3.

Staff-reported outcome measures

Name Options Text used in survey Alias Words (FKG) Notes
Quality of life
Health status None–extreme How are you today? (past 24 hours) howRu 24 (2.6) Health status (howRu), when reported by staff is the same as when reported by patients.17
 Pain/discomfort Pain or discomfort Pain
 Distress Feeling low or worried Distress
 Disability Limited in what you can do Disability
 Dependence Require help from others Dependence
Work well-being Strongly agree–disagree How content are you in your job? WWS 36 (2.7) Work Well-being (WWS) was adapted from the personal well-being score,20 focusing on the job people do. It measures job satisfaction.
 Job satisfaction I am satisfied with my job JobSatis
 Worthwhile job I am valued for what I do WorthwhileWork
 Happy at work I was happy yesterday* at work HappyAtWork
 Not anxious at work I was NOT anxious yesterday* at work NotAnxiousAtWork
Assessed need None–extreme† How are they doing? howRthey 34 (3.5) Staff or carer assessment of patients with dementia and frailty being cared for at home or in residential care homes.68
 Physical needs Physical care needs PhysicalNeed
 Distress Pain and/or distress Distressed
 Unpredictable Unpredictable needs Unpredictable
 Challenging Behaviour problems Challenging
Individual care
Job confidence Strongly agree–disagree How confident do you feel in your job? JCS 35 (1.9) Job confidence (JCS) was adapted from the health confidence score,24 focusing on how confident people feel in their work role.
 Knowledge I know enough about my job JobKnow
 Self-management I can manage my work JobManage
 Access to help I can get help if I need it JobHelp
 Shared decisions I am involved in decisions that affect me JobDecisions

*Work wellbeing: previous working day.

†Assessed need: quite a lot needs one person most of the time; extreme needs two people.

Table 4.

Staff-reported experience measures

Name Options Text used in survey Alias Words (FKG) Notes
Care provided
Service provided Excellent–poor What do you think about the service we provide? StaffHowRwe 20 (4.2) Service experience (staff) asks how staff perceive the service their team provides. Adapted from the howRwe experience measure.20
 We are kind Treat people kindly StaffKind
 We listen/explain Listen and explain StaffTalk
 We are prompt See people promptly StaffPrompt
 Well organised Well organised StaffOrganised
Service integration Strongly agree–disagree How do you work with other services? IntegratStaff 35 (2.9) Service integration (staff) asks how staff perceive collaboration with other services. Staff perceptions often differ from those of patients.
 Services talk together Services talk to each other TalkStaff
 Service knowledge We know what other services do AwareStaff
 Care planning We consider other services when planning care CarePlanning
 Part of team We feel part of the overall care team PartOfTeam
Patient confidence Strongly agree–disagree How confident are patients in caring for their health? PatHCS 38 (2.9) Patient confidence asks how staff perceive patients’ health confidence as a population. If staff report on individuals, they should use HCS as a proxy.24
 Patient knowledge They know enough about their health PatKnowledge
 Self-management They can look after their health PatSelfMan
 Patient access They can get the help they need PatGetHelp
 Shared decisions They are involved in decisions about themselves PatSDM
Provider culture
Staff relationships Strongly agree–disagree Thinking about colleagues in other services StaffRelns 21 (2.9) Staff relationships impact on how well different groups of people work together for a common good, as explored by Gittell’s work on relational coordination.69
 We know each other We know each other KnowOthers
 Rely on each other We rely on each other Rely
 Share information We share information ShareData
 Help each other We help each other HelpOthers
Shared decisions Strongly agree–disagree Thinking about your patients’ choices StaffSDM 26 (3.7) Shared decisions (staff) address staff perceptions of shared decision-making in general, as opposed to that for individual patients.53
 Patients know benefits They know the possible benefits BenefitStaff
 Patients know risks They know the possible downside DownsideStaff
 Patients know choices They know that they have choices ChoicesStaff
 Fully involved They are fully involved InvolvementStaff
Patient safety Strongly agree–disagree Thinking about patient safety PatSafety 25 (3.3) Patient safety focuses on clinical aspects of safety including adverse events and cultural attitudes towards safety and learning from incidents.70
 Adverse events Adverse events are rare AdverseEvents
 Systems are safe Our systems are safe SafeSystems
 Open about errors We are open if things go wrong Honest
 Learn from mistakes We learn from our mistakes LearnMistakes
Staff safety Strongly agree–disagree Thinking about your own safety StaffSafety 25 (1.7) Staff safety. Staff need to feel safe from being attacked by patients or bullied by managers within the organisation and outside.71
 Safe at work I feel safe at work SafeAtWork
 Respected at work I feel respected at work WorkRespect
 Safe outside I feel safe outside work StaffSafeOut
 Respected outside I feel respected outside work StaffRespectOut
Privacy Strongly agree–disagree Thinking about how we use patient data Privacy 37 (4.5) Privacy covers patients and staff perceptions of information governance including data protection, data sharing, subject access and satisfaction.63
 Data are safe Patient data kept safe and secure SecureData
 Shared as needed Patient data only shared as needed ShareData
 Patients check data Patients can see and check their data CheckData
 Happy about data use I am happy about how patient data is used DataSatis
Innovation
IT capability Strongly agree–disagree Using information technology (IT) at work. ITC 31 (4.7) Staff IT capability assesses how staff feel about using IT at work, in terms of confidence, learning, getting help and solving problems.
 IT confidence I feel confident using IT ITconfidence
 Learning apps I enjoy learning new applications LearnApps
 Can get help I can get help if I am stuck CanGetHelp
 Solve IT problems I can solve most problems if stuck SolveITproblems
Product confidence Strongly agree–disagree How do you feel about this product? PCS 25 (4.7) Product confidence covers staff understanding of and confidence to use a specific innovation, application or product.
 Frequent user I use it frequently ProductUse
 Confident user I feel confident using it SelfAssured
 Know benefits I know the potential benefits Positives
 Know problems I know potential problems Negatives
User satisfaction Strongly agree–disagree What do you think of this product? UX 33 (0.5) User satisfaction focuses on people’s perception of how much an innovation is useful and easy to use, availability of help and overall satisfaction.
 Helps me It helps me do what I want HelpsMe
 Easy to use It is easy to use EasyToUse
 Can get help I can get help if I need it Support
 Product satisfaction I am satisfied with this product ProdSatis
Innovation readiness Strongly agree–disagree New ideas at work Innovativeness 28 (4.3) Innovation readiness (staff) covers where people and organisations fall on the innovativeness spectrum.
 New ideas needed New ideas are needed in my field Open
 Keep up to date I keep up with new ideas Informed
 We back new ideas My organisation supports new ideas Receptive
 We make ideas work My organisation makes new ideas work Capable
Innovation process Strongly agree–disagree Thinking about this project NPT 35 (2.3) Innovation process is based on Normalisation Process Theory (NPT) in terms of how well innovations are implemented.72
 Vision is followed The original vision is being followed Vision
 Plan to make it work We all thought about how to make it work Planning
 We work together We all act to make it work Collaboration
 Reflection We all think about how to keep it going Reflection

HCS, Health Confidence Score.

Each table is set out with six columns:

  1. Name: a short easy to understand name or label. The name is usually positively worded, but not always. For example, the health status (howRu) measure has an item for pain or discomfort. Here, the best (highest) score comes from having no pain. The English language is better at describing some aspects negatively.

  2. Options: the response options measure how much the respondent currently perceives something to be a problem. Many measures ask about agreement with positively worded statements using a scale from strongly agree to disagree.

  3. Text used in survey: text as presented to the respondent. In practice, each survey also contains a preamble. This is not shown here because it is usually tailored to the local context and contains locally specific instructions.

  4. Alias: a short unique alias name used in computer processing. This does not contain spaces; it uses UpperCamelCase to separate natural words and component parts.

  5. Words (FKG): the number of words and Flesch Kincaid readability grade.

  6. Notes: a brief description and a reference to a publication about each measure, or to the source that most influenced its development.

Patient-reported measures

Figure 3 shows patient-reported outcome and experience measures.

Figure 3. Summary of patient-reported outcome (PROMs) and experience (PREMs) measures.

Table 1 describes PROMs; table 2 describes PREMs.

Staff-reported measures

Staff-reported outcome and experience measures are summarised in figure 4.

Figure 4. Summary of staff-reported outcome and experience measures. PROMs, patient-reported outcome measures; PREMs, patient-reported experience measures.

Staff-reported outcome measures are described in table 3. Staff-reported experience measures are described in table 4.

Table 5 summarises the number of measures by rater (patient-reported and staff-reported) and type (PROM or PREM). The expanded taxonomy is provided as online supplementary file 1.

Table 5.

Summary count of measures

PROMs PREMs Total
Patient-reported 15 7 22
Staff-reported 4 13 17
Total 19 20 39

PREMs, patient-reported experience measures; PROMs, patient-reported outcome measures.

Supplementary data: Expanded taxonomy of patient-reported and staff-reported R-Outcomes measures (bmjoq-2019-000789supp001.pdf, 347.4 KB).

Discussion

The need for generic measures with a broad scope is increasingly recognised, in particular for older people with long-term conditions.36 As far as we know, this is the most comprehensive and coherent taxonomy of short generic measures yet published. It is unusual in covering both patient-reported and staff-reported measures as well as PROMs and PREMs. It also covers external factors that affect health and well-being, and those that affect the spread of health innovations.

A possible limitation of our approach is that it is based primarily on the work of a single author. The measures were not developed as part of a grant-funded research programme in an academic setting, nor for use in clinical trials. Some people may consider this to be a strength on the basis that theories should emerge from bottom-up, empirical experimentation. However, each measure has been strongly influenced by existing theories and paradigms.

Four response options may also be regarded as a limitation, but this is not our experience. The best option (the ceiling) can be thought of as being as good as it gets. If used appropriately this does not produce a ceiling effect, whereby the measure is unable to detect valuable improvements. A floor effect (the worst option) is more problematic, because things can always get worse. In general, if a respondent is at the floor, this calls for remedial action. Intermediate options can be regarded as being less good than the ceiling and less bad than the floor, respectively.

Answering any survey question involves four cognitive steps: (1) understand the question; (2) retrieve relevant information from memory; (3) judge which response option fits best and (4) respond in a way that fits the judgement. There is always a risk that raters may satisfice, doing one or more of these suboptimally to save effort. This can give rise to effects such as acquiescence bias, primacy effect and non-differentiation. The risk is greater in surveys answered in private, where no other person is present to sense-check the responses, and when a survey is long or difficult, seen as a chore or not regarded as relevant.37

The response options form an ordinal scale, which suggests that non-parametric statistics should be used.38 However, interval or ratio scales are needed for health economic calculations, such as quality-adjusted life year or Load calculations.39 We have explored the generation of multi-attribute interval weightings using pairwise comparisons with the PAPRIKA (Potentially All Pairwise RanKings of all possible Alternatives) method.40 In the absence of such weightings, we ascribe unweighted integer values to the options to calculate mean item and summary scores for populations. In ideal situations (eg, people in good health), the distributions of these measures are skewed towards the top, but summary scores for people with long-term conditions show a distribution that is close to normal.17 23 24 In practice, we find that parametric and non-parametric statistical tests produce very similar results.

It is useful to identify the minimally important difference (MID) between two sets of measurements. Half a standard deviation (SD) is a widely used criterion at the individual level.41 So, for a summary score with SD=20 on the 0–100 scale (which is typical), the MID=0.5×SD=10. For populations, sample size (n) is a key variable: if n=64 and SD=20, the 95% CI is ±1.96×(SD/√n)=±4.9.
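A minimal sketch of the two rules of thumb above (half a standard deviation as the individual-level MID, and a normal-approximation 95% confidence interval for a population mean); the numbers reproduce the worked example in the text.

```python
from math import sqrt

def minimally_important_difference(sd: float) -> float:
    """Individual-level MID taken as half a standard deviation (ref 41)."""
    return 0.5 * sd

def ci95_half_width(sd: float, n: int) -> float:
    """Half-width of the 95% CI for a population mean: 1.96 x SD / sqrt(n)."""
    return 1.96 * sd / sqrt(n)

# Worked example from the text: SD = 20 on the 0-100 scale.
print(minimally_important_difference(20))  # 10.0
print(round(ci95_half_width(20, 64), 1))   # 4.9
```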

Carers or informal care givers form a special case, sharing aspects of both patients and staff; they are not discussed here, but will be considered in a future paper. There is no prohibition on people using measures that are not explicitly designed for them. For example, any measure may be completed by a proxy, but if so this should be recorded.

Four of these measures (health status, health confidence, personal well-being and experience) have been validated psychometrically at the time of writing.17 20 23 24 Five have been described in the literature (digital confidence, user satisfaction, innovation readiness, innovation process and behaviour change),25 three have been described in the specific context of residential care homes (work well-being, job confidence and service provided)42 and the process is underway for others. We encourage other validation studies.

Practical implementation always needs to consider the whole end-to-end process, not only what measures to use and why, but who, where, when and how.43 This includes ensuring that people are asked to complete surveys, that all the stakeholders involved understand what is being asked and why and that all aspects of survey management, including supporting technology and analytics, are properly resourced. Results may be reported at the individual level to tailor individual care, or aggregated to measure the performance of specific services or user needs.

These measures have been used with success in commissioning services and in the evaluation of new care models,44 social prescribing,23 care home services21 22 42 and in digital health evaluation, including self-care for people with diabetes and detection of atrial fibrillation (AF).

Innovation measures have been mapped to the Nonadoption, Abandonment and failure to Scale-up, Spread and Sustain framework (NASSS), which uses the lens of complexity theory to explain and avoid failures of digital health innovations.45

PROMs may be thought of as patient history, form part of the clinical record and inform patient care. However, identifiable data are subject to strict information governance, requiring compliance with the General Data Protection Regulation (GDPR), Health Insurance Portability and Accountability Act (HIPAA) and similar laws and regulations.46 In practice, to avoid issues of information governance, many PROMs are collected anonymously.

Widespread use of PROMs and PREMs requires integration with electronic health records and other health IT systems. This needs semantic interoperability using standards such as Fast Healthcare Interoperability Resources (FHIR) and coding schemes such as Logical Observation Identifiers Names and Codes (LOINC) and the Systematised Nomenclature of Medicine Clinical Terms (SNOMED CT).28 47 FHIR Questionnaire and QuestionnaireResponse resources support the use of surveys in day-to-day care and clinical research.48 LOINC supports the structure and content of assessment surveys.49 LOINC and SNOMED CT (UK Edition) codes have been allocated for some measures (eg, howRu and HCS)17 24 and applications for the others are underway.
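To illustrate what such integration might look like, the fragment below sketches a FHIR R4 Questionnaire carrying one HCS item, and a matching QuestionnaireResponse, as plain Python dictionaries. This is a sketch only: the linkId is invented for illustration, no real LOINC or SNOMED CT codes are shown, and a production profile would carry the allocated codes mentioned above.

```python
import json

# Sketch of a FHIR R4 Questionnaire for one HCS item (illustrative linkId, no real codes).
questionnaire = {
    "resourceType": "Questionnaire",
    "status": "active",
    "title": "Health Confidence Score (HCS)",
    "item": [{
        "linkId": "hcs-knowledge",  # hypothetical identifier
        "text": "I know enough about my health",
        "type": "choice",
        "answerOption": [
            {"valueCoding": {"display": "Strongly agree"}},
            {"valueCoding": {"display": "Agree"}},
            {"valueCoding": {"display": "Neutral"}},
            {"valueCoding": {"display": "Disagree"}},
        ],
    }],
}

# A matching QuestionnaireResponse recording one answer.
response = {
    "resourceType": "QuestionnaireResponse",
    "status": "completed",
    "item": [{
        "linkId": "hcs-knowledge",
        "answer": [{"valueCoding": {"display": "Strongly agree"}}],
    }],
}

print(json.dumps(response, indent=2))
```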

Conclusions

This paper describes a family of generic PROMs and PREMs for routine use and in evaluation. This family of measures has a broad scope but is not claimed to be comprehensive.

The measures are described in tables and organised as a taxonomy. The taxonomy is categorised by respondent (patient or staff) and type (PROMs or PREMs). We describe 22 patient measures and 17 staff measures. Some are described here for the first time. These measures may be used to help tailor individual care, and at aggregate level for evaluation and accountability.

PROMs are grouped under categories for quality of life, individual care and community. PREMs have categories for service provided, provider culture and innovation. All of the measures share the same form, with four items, each with four response options. The measures are short, with a low reading age. They can be used to build short questionnaires for different purposes, using common survey management, data analytics, data visualisation and reporting tools. This flexibility allows practitioners to select measures on a pick-and-mix basis to meet their local needs.

Lay summary

This paper describes a family of short generic PROMs and PREMs, designed to be used in combination in a pick-and-mix way. PROMs cover quality of life, individual care and community; PREMs cover service provided, provider culture and innovation. Common properties of these measures include specialty independence, brevity, ease of use, low reading age, a common format, and shared data collection, reporting and data visualisation methods. They are used in tailoring care, quality improvement, service evaluation and as KPIs.

Acknowledgments

The author would like to thank all the people who have contributed to the development of these measures and this taxonomy, in particular members of the Insight team at Wessex AHSN, Dr Helen Seers and the anonymous reviewers, who have helped improve this paper greatly.

Footnotes

Twitter: @timbenson

Contributors: The author takes full responsibility for this manuscript.

Funding: There was no specific grant for this research from any funding agency in the public, commercial or non-profit sectors.

Competing interests: TB is a director and shareholder in R-Outcomes Ltd, which provides survey and evaluation services using these measures. Please contact R-Outcomes Ltd if you wish to use these measures (https://r-outcomes.com).

Patient and public involvement: Patients and/or the public were involved in the design, conduct, reporting or dissemination plans of this research. Refer to the 'Methods' section for further details.

Patient consent for publication: Not required.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data availability statement: Data sharing not applicable as no datasets generated and/or analysed for this study.

References

  1. Plsek PE, Greenhalgh T. The challenge of complexity in health care. Br Med J 2001;323:625–8.
  2. Kingsley C, Patel S. Patient-reported outcome measures and patient-reported experience measures. BJA Education 2017;17:137–44. doi:10.1093/bjaed/mkw060
  3. Black N. Patient reported outcome measures could help transform healthcare. BMJ 2013;346:f167. doi:10.1136/bmj.f167
  4. Calvert M, Kyte D, Price G, et al. Maximising the impact of patient reported outcome assessment for patients and society. BMJ 2019;364:k5267. doi:10.1136/bmj.k5267
  5. Black N, Varaganum M, Hutchings A. Relationship between patient reported experience (PREMs) and patient reported outcomes (PROMs) in elective surgery. BMJ Qual Saf 2014;23:534–42. doi:10.1136/bmjqs-2013-002707
  6. Buttorff C, Ruder T, Bauman M. Multiple chronic conditions in the United States. Santa Monica: RAND Health, 2017.
  7. Macefield RC, Jacobs M, Korfage IJ, et al. Developing core outcomes sets: methods for identifying and including patient-reported outcomes (PROs). Trials 2014;15:49. doi:10.1186/1745-6215-15-49
  8. Donabedian A. The quality of care. How can it be assessed? JAMA 1988;260:1743–8. doi:10.1001/jama.260.12.1743
  9. Devlin NJ, Appleby J. Getting the most out of PROMs. Putting health outcomes at the heart of NHS decision-making. London: The King's Fund, 2010.
  10. de Silva D. Helping measure person-centred care. London: Health Foundation, 2014.
  11. Jahagirdar D, Kroll T, Ritchie K, et al. Using patient reported outcome measures in health services: a qualitative study on including people with low literacy skills and learning disabilities. BMC Health Serv Res 2012;12:431. doi:10.1186/1472-6963-12-431
  12. Paz SH, Liu H, Fongwa MN, et al. Readability estimates for commonly used health-related quality of life surveys. Qual Life Res 2009;18:889–900. doi:10.1007/s11136-009-9506-y
  13. Richards S. Specifying a reading age for web content. Content Design London, 2016. Available: https://contentdesign.london/reading/specifying-a-reading-age-for-web-content [Accessed 3 Sep 2019].
  14. Rosser R, Benson T. New tools for evaluation: their application to computers. In: Anderson J, ed. Medical Informatics Europe 78: First Congress of the European Federation for Medical Informatics: proceedings. Cambridge, England: Springer Verlag, 1978: 701–10.
  15. Benson TJ. Classification of disability and distress by ward nurses: a reliability study. Int J Epidemiol 1978;7:359–61. doi:10.1093/ije/7.4.359
  16. Darzi A. High quality care for all: NHS next stage review final report. London: Stationery Office, 2008.
  17. Benson T, Sizmur S, Whatling J, Arikan S, et al. Evaluation of a new short generic measure of HRQoL: howRu. Inform Prim Care 2010;18:89–101. doi:10.14236/jhi.v18i2.758
  18. Benson T, Potts HWW, Whatling JM, et al. Comparison of howRU and EQ-5D measures of health-related quality of life in an outpatient clinic. Inform Prim Care 2013;21:12–17. doi:10.14236/jhi.v21i1.9
  19. Benson T, Williams DH, Potts HWW. Performance of EQ-5D, howRu and Oxford hip & knee scores in assessing the outcome of hip and knee replacements. BMC Health Serv Res 2016;16:512. doi:10.1186/s12913-016-1759-x
  20. Benson T, Potts HWW. A short generic patient experience questionnaire: howRwe development and validation. BMC Health Serv Res 2014;14:499. doi:10.1186/s12913-014-0499-z
  21. Benson T, Bowman C. Health status of care home residents: practicality and construct validity of data collection by staff at scale. BMJ Open Qual 2019;8:e000704. doi:10.1136/bmjoq-2019-000704
  22. Benson T, Bowman C. Comparison of staff and resident health status ratings in care homes. BMJ Open Qual 2020;9:e000801.
  23. Benson T, Sladen J, Liles A, et al. Personal Wellbeing Score (PWS)-a short version of ONS4: development and validation in social prescribing. BMJ Open Qual 2019;8:e000394. doi:10.1136/bmjoq-2018-000394
  24. Benson T, Potts HWW, Bark P, et al. Development and initial testing of a Health Confidence Score (HCS). BMJ Open Qual 2019;8:e000411. doi:10.1136/bmjoq-2018-000411
  25. Benson T. Digital innovation evaluation: user perceptions of innovation readiness, digital confidence, innovation adoption, user experience and behaviour change. BMJ Health Care Inform 2019;26:e000018. doi:10.1136/bmjhci-2019-000018
  26. Benson T. Why general practitioners use computers and hospital doctors do not – Part 1: incentives. BMJ 2002;325:1086–9. doi:10.1136/bmj.325.7372.1086
  27. Benson T. Why general practitioners use computers and hospital doctors do not – Part 2: scalability. BMJ 2002;325:1090–3. doi:10.1136/bmj.325.7372.1090
  28. Benson T, Grieve G. Principles of health interoperability: SNOMED CT, HL7 and FHIR. 3rd edn. London: Springer, 2016.
  29. Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. 5th edn. Oxford University Press, 2015.
  30. Dillman DA, Smyth JD, Christian LM. Internet, phone, mail, and mixed-mode surveys: the tailored design method. 4th edn. Hoboken, NJ: John Wiley & Sons, 2014.
  31. Boateng GO, Neilands TB, Frongillo EA, et al. Best practices for developing and validating scales for health, social, and behavioral research: a primer. Front Public Health 2018;6:149. doi:10.3389/fpubh.2018.00149
  32. Mokkink LB, Terwee CB, Patrick DL, et al. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual Life Res 2010;19:539–49. doi:10.1007/s11136-010-9606-8
  33. Kincaid JP, Fishburne RP Jr, et al. Derivation of new readability formulas (automated readability index, fog count and Flesch reading ease formula) for Navy enlisted personnel. Millington, TN: Naval Technical Training Command, Research Branch, 1975.
  34. Stull DE, Leidy NK, Parasuraman B, et al. Optimal recall periods for patient-reported outcomes: challenges and potential solutions. Curr Med Res Opin 2009;25:929–42. doi:10.1185/03007990902774765
  35. Bradley EH, Curry LA, Devers KJ. Qualitative data analysis for health services research: developing taxonomy, themes, and theory. Health Serv Res 2007;42:1758–72. doi:10.1111/j.1475-6773.2006.00684.x
  36. Murphy M, Hollinghurst S, Cowlishaw S, et al. Primary care outcomes questionnaire: psychometric testing of a new instrument. Br J Gen Pract 2018;68:e433–40. doi:10.3399/bjgp18X695765
  37. Krosnick JA. Response strategies for coping with the cognitive demands of attitude measures in surveys. Appl Cogn Psychol 1991;5:213–36. doi:10.1002/acp.2350050305
  38. Siegel S. Nonparametric statistics for the behavioural sciences. New York: McGraw-Hill, 1956.
  39. Benson T. The load model: an alternative to QALY. J Med Econ 2017;20:107–13. doi:10.1080/13696998.2016.1229198
  40. Hansen P, Ombler F. A new method for scoring additive multi-attribute value models using pairwise rankings of alternatives. Journal of Multi-Criteria Decision Analysis 2008;15:87–107. doi:10.1002/mcda.428
  41. Norman GR, Sloan JA, Wyrwich KW. Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation. Med Care 2003;41:582–92. doi:10.1097/01.MLR.0000062554.74615.4C
  42. Benson T, Sladen J, Done J, et al. Monitoring work well-being, job confidence and care provided by care home staff using a self-report survey. BMJ Open Qual 2019;8:e000621. doi:10.1136/bmjoq-2018-000621
  43. Basch E. Patient-Reported Outcomes - Harnessing Patients' Voices to Improve Clinical Care. N Engl J Med 2017;376:105–8. doi:10.1056/NEJMp1611252
  44. Liles A, Darnton P, Sibley A, et al. How we are evaluating the impact of new care models on how people feel in Wessex, 2017. Available: http://wessexahsn.org.uk/img/news/EvaluatingPatientOutcomesinWessex.pdf
  45. Greenhalgh T, Wherton J, Papoutsi C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res 2017;19:e367. doi:10.2196/jmir.8775
  46. Phillips M. International data-sharing norms: from the OECD to the General Data Protection Regulation (GDPR). Hum Genet 2018;137:575–82. doi:10.1007/s00439-018-1919-7
  47. Mandl KD, Gottlieb D, Ellis A. Beyond one-off integrations: a commercial, substitutable, reusable, standards-based, electronic health record-connected app. J Med Internet Res 2019;21:e12902. doi:10.2196/12902
  48. Leroux H, Metke-Jimenez A, Lawley MJ. Towards achieving semantic interoperability of clinical study data with FHIR. J Biomed Semantics 2017;8:41. doi:10.1186/s13326-017-0148-7
  49. Bakken S, Cimino JJ, Haskell R, et al. Evaluation of the clinical LOINC semantic structure as a terminology model for standardized assessment measures. J Am Med Inform Assoc 2000;7:529–38.
  50. Walker M. Why we sleep. London: Allen Lane, 2017.
  51. Sharpe M, Wilks D. Fatigue. Br Med J 2002;325:480–3.
  52. Shrivastava SR, Shrivastava PS, Ramasamy J. Role of self-care in management of diabetes mellitus. J Diabetes Metab Disord 2013;12:14. doi:10.1186/2251-6581-12-14
  53. Barry MJ, Edgman-Levitan S. Shared decision making – pinnacle of patient-centered care. N Engl J Med 2012;366:780–1. doi:10.1056/NEJMp1109283
  54. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci 2011;6:42. doi:10.1186/1748-5908-6-42
  55. Lam WY, Fresco P. Medication adherence measures: an overview. Biomed Res Int 2015;2015:1–12. doi:10.1155/2015/217047
  56. Kübler-Ross E. On death and dying. New York: Simon & Schuster, 1969.
  57. Marmot M. The health gap: the challenge of an unequal world. Lancet 2015;386:2442–4. doi:10.1016/S0140-6736(15)00150-6
  58. Goodman A. Measuring your impact on loneliness in later life. London: Campaign to End Loneliness, 2015.
  59. Putnam RD. Bowling alone: the collapse and revival of American community. New York: Simon and Schuster, 2001.
  60. Bilsky W. Fear of crime, personal safety and well-being: a common frame of reference. Universitäts- und Landesbibliothek Münster, 2017.
  61. Snape D, Martin G. Measuring loneliness – guidance for use of the national indicators on surveys. Office for National Statistics, 2018.
  62. NHS. The NHS Long Term Plan. London: NHS, 2019. www.longtermplan.nhs.uk
  63. van Staa T-P, Goldacre B, Buchan I, et al. Big health data: the need to earn public trust. Br Med J 2016;354:i3636. doi:10.1136/bmj.i3636
  64. Prensky M. Digital natives, digital immigrants Part 1. On the Horizon 2001;9:1–6. doi:10.1108/10748120110424816
  65. Pappas N. Marketing strategies, perceived risks, and consumer trust in online buying behaviour. Journal of Retailing and Consumer Services 2016;29:92–103. doi:10.1016/j.jretconser.2015.11.007
  66. Stoyanov SR, Hides L, Kavanagh DJ, et al. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth 2015;3:e27. doi:10.2196/mhealth.3422
  67. Rogers EM. Diffusion of innovations. 5th edn. The Free Press, 2003.
  68. Algase DL, Beck C, Kolanowski A, et al. Need-driven dementia-compromised behavior: an alternative view of disruptive behavior. Am J Alzheimers Dis 1996;11:10–19. doi:10.1177/153331759601100603
  69. Gittell JH. Transforming relationships for high performance: the power of relational coordination. Stanford Business Books, 2016.
  70. Gandhi TK, Kaplan GS, Leape L, et al. Transforming concepts in patient safety: a progress report. BMJ Qual Saf 2018;27:1019–26. doi:10.1136/bmjqs-2017-007756
  71. Privitera M, Weisman R, Cerulli C, et al. Violence toward mental health staff and safety in the work environment. Occup Med 2005;55:480–6. doi:10.1093/occmed/kqi110
  72. May CR, Mair F, Finch T, et al. Development of a theory of implementation and integration: normalization process theory. Implement Sci 2009;4:29. doi:10.1186/1748-5908-4-29
