Abstract
Background Public involvement is central to health and social research policies, yet few systematic evaluations of its impact have been carried out, raising questions about the feasibility of evaluating the impact of public involvement.
Objective To investigate whether it is feasible to evaluate the impact of public involvement on health and social research.
Methods Mixed methods comprising a two‐round Delphi study with a pre‐specified 80% consensus criterion and follow‐up interviews. UK and international panellists came from different settings, including universities, health and social care institutions and charitable organizations. They comprised researchers, members of the public, research managers, commissioners and policy makers, self‐selected as having knowledge and/or experience of public involvement in health and/or social research; 124 completed both rounds of the Delphi process. A purposive sample of 14 panellists was interviewed.
Results Consensus was reached that it is feasible to evaluate the impact of public involvement on five of 16 impact issues: identifying research topics, prioritizing research topics, disseminating research findings, and the impact on the members of the public involved and on members of the research team. Qualitative analysis revealed the complexities of evaluating a process that is subjective and socially constructed. While many panellists believed that it is morally right to involve the public in research, they also considered it appropriate to evaluate the impact of public involvement.
Conclusions This study found consensus among panellists that it is feasible to evaluate the impact of public involvement on some research processes, outcomes and on key stakeholders. The value of public involvement and the importance of evaluating its impact were endorsed.
Introduction
Public involvement is firmly established in health and social research policies in the UK and internationally [National Institutes of Health Director’s Council of Public Representatives (http://copr.nih.gov/); Consumers’ Health Forum of Australia (https://www.chf.org.au/)]. 1 It is said to be of intrinsic value, reflecting democratic aspirations of accountability and transparency. 2 Public perspectives can complement those of researchers, 3 raising awareness of health, social and ethical issues that reflect wider community values. 4 , 5 , 6 Has public involvement made a difference to research processes, outcomes and key stakeholders? Few impact studies have been carried out, but there is an increasing number of reports showing the potential for public involvement to enhance the quality of research, to make it more relevant to those who use services 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 and to narrow the evidence‐practice gap. 16
Given the growing importance of public involvement policies 17 and associated requirements for researchers to comply, 18 the dearth of supporting evidence is striking. Possible reasons for this include: public involvement is perceived to be relatively recent as a concept and practice in research 19 ; evaluating its impact is seen as too difficult; and public involvement is considered to be of intrinsic value and therefore not to require evaluation. 20 This study explored the last two of these potential explanations, acknowledging that public involvement can have different types of impact and that some impacts are likely to be more amenable to evaluation than others. We sought to establish whether consensus could be reached that it is feasible to evaluate the impact of public involvement on research processes, outcomes and on key stakeholders in the research process, anticipating that this would help to clarify theoretical and practical issues that could guide future impact studies.
Methods
We used the INVOLVE definition of public involvement [INVOLVE (http://www.invo.org.uk)]: ‘Many people define public involvement in research as doing research “with” or “by” the public, rather than “to”, “about” or “for” the public’. We used the term ‘public’ to include patients, users of health and social services, informal carers and organizations representing people who use services. Public involvement in this study was provided by one author offering a public perspective and another providing a perspective from working in the field of public involvement in research. We received a favourable ethical opinion from the North Trent Research Ethics Committee.
A sequential mixed methods design was chosen 21 with three stages: (i) an Expert Workshop of researchers and the public 22 that generated issues concerning the feasibility of evaluating the impact of public involvement; (ii) a two‐round Delphi process 23 to investigate whether or not there was consensus on these issues and (iii) telephone follow‐up interviews of a purposive sample of Delphi panellists to explore their responses to the Delphi process in more depth and to seek their views on the implications of the findings. This paper focuses on the Delphi process and interviews. The Delphi rounds took place between November 2007 and April 2008, and the interviews were undertaken between June and October 2008.
Delphi process and follow‐up interviews
The Delphi process is a structured interactive method for exploring consensus among a group of experts through a series of questionnaires, interspersed with controlled feedback. This method has been used in health and social care when there is a limited evidence base. 23 , 24 , 25 , 26 , 27 Typically, a panel of experts from a geographically dispersed population completes two or more rounds of email or postal questionnaires, with the aim of clarifying issues of uncertainty. No particular panel size is recommended, and sample sizes ranging from four to 3000 have been reported. 26 The composition of the panel and how ‘experts’ are defined is important and will depend on the aims of the Delphi process being undertaken. 23 , 28 In this study, the intention was to recruit a diverse Delphi panel of: (i) members of the public, (ii) researchers and (iii) ‘others’ (research managers, commissioners, policy makers and analysts). We aimed to attain a range of perspectives from international as well as UK panellists. Our criterion for being an expert was to have knowledge and/or experience of public involvement in health and/or social research (self‐defined).
Recruitment to the Delphi panel and follow‐up interviews
A purposive sampling strategy was used to recruit the Delphi panel, by sending invitations to:
1. People who had published in the area of public involvement in research.
2. Directors, Chief Executives and Heads of major health and social organizations with policies on public involvement in research.
3. Directors, Chief Executives and Heads of major health and social charities advocating public involvement in research.
4. Public involvement advocates.
5. Public involvement health and social care leads.
6. UK research managers and commissioners.
We also used ‘snowballing’ techniques, inviting individuals and people from different organizations to contact others who might meet our inclusion criteria. We do not know how many people forwarded our invitation, but estimate that approximately 395 invitations were sent. As this was a Delphi process, our aim was not to recruit a representative sample, but a diverse panel of experts. We stopped recruiting when we had achieved this. People decided themselves whether they had knowledge and/or experience of public involvement in health and/or social research and were offered the INVOLVE definition for guidance.
Panellists were asked to select the perspectives that they would be providing in the Delphi process from six categories: (i) member of the public (with the INVOLVE definition provided); (ii) researcher; (iii) research manager; (iv) research commissioner or funder; (v) policy maker or analyst; (vi) another or multiple perspectives (e.g. a researcher who is also a member of the public through being a carer).
Those who provided the perspective of a member of the public were asked to indicate the group(s) that best described them from five categories: (i) patient or long‐term user of services; (ii) informal (i.e. unpaid) carer; (iii) advocate/activist/representative of members of the public; (iv) employee of an organization for members of the public (e.g. a charity); (v) member of an organization of members of the public (where the organization is managed by more than 50% of people with that experience or health condition).
We invited a purposive sample of 17 panellists to take part in follow‐up interviews, to explore their responses to the Delphi questionnaires in more detail and to seek their views on the implications of the findings. Panellists were selected on the basis of their contributions to the Delphi questionnaires, where their responses appeared to add substantially to the debate. We also took into account the need to reflect the diversity of perspectives in the panel and the different research topics and methods in which panellists reported being engaged. Consent was sought to tape‐record all interviews, which were transcribed verbatim. The transcripts were returned to the interviewees to check for accuracy.
Impact issues
At Round 1, panellists were invited to rate the feasibility of evaluating impacts of public involvement on research processes, outcomes and on stakeholders, using nine‐point scales anchored by ‘not feasible’ and ‘very feasible’ (see Table 1). We defined ‘feasible’ as ‘can it be done?’ There is no agreed level of consensus to employ, and published Delphi studies have used 51%, 70%, 80% and 85%. 28 , 29 The level of consensus in this study was set in advance at 80% or over, consistent with that of the earlier Expert Workshop, 22 and with the aim of achieving robust findings. Sixteen impact issues were developed by the research team from outcomes generated at the Expert Workshop and from their detailed knowledge of the literature. Impact issues were sub‐divided into three groups: (i) research processes, n = 8; (ii) research outcomes, n = 6 and (iii) key stakeholders, n = 2 (see Table 1). At Round 2, panellists were asked to re‐rate those impact issues where consensus was not achieved at Round 1. One reminder was used for both Rounds. Text boxes were provided on the Round 1 and Round 2 questionnaires for panellists to add comments.
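To make the consensus rule concrete, the following minimal sketch (an illustration only, not the authors’ analysis code; Python, the function name, the example ratings and the handling of missing ratings are all assumptions) applies the pre‐specified criterion that 80% or more of ratings fall in the 7–9 band of the nine‐point scale:

```python
# Minimal sketch of the pre-specified consensus rule (assumption: a missing
# rating is recorded as None). An impact issue is treated as feasible to
# evaluate when >= 80% of the ratings received fall in the 7-9 band.
def consensus_reached(ratings, threshold=0.80):
    rated = [r for r in ratings if r is not None]   # drop missing ratings
    high = sum(1 for r in rated if 7 <= r <= 9)     # ratings at the 'very feasible' end
    return bool(rated) and high / len(rated) >= threshold

example_ratings = [8, 9, 7, 6, 8, 7, 9, 8, 7, 5]    # hypothetical: 8 of 10 in the 7-9 band
print(consensus_reached(example_ratings))            # True (80% meets the 80% criterion)
```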
Table 1.
Panel ratings on the feasibility of evaluating the impact of public involvement on research processes, outcomes and stakeholders
| Impact issue: How feasible do you think it would be to evaluate the impact of public involvement on…¹ | 1–3 (%)² | 4–6 (%)² | 7–9 (%)² | Mean | Feasible to evaluate (defined as 80% or more of panel providing a 7–9 rating) |
|---|---|---|---|---|---|
| *Research processes* | | | | | |
| **Identifying topics to be researched** | 1.6 | 14.5 | **83.0** | 7.39 | Yes |
| **Prioritizing topics to be researched** | 1.6 | 12.0 | **86.3** | 7.54 | Yes |
| Commissioning research | 0.8 | 29.8 | 67.6 | 6.98 | No |
| Research design | 1.6 | 31.4 | 66.1 | 6.87 | No |
| Managing research | 4.0 | 52.4 | 42.7 | 6.19 | No |
| Collecting data | 2.4 | 26.6 | 69.3 | 6.95 | No |
| Analysing research findings | 5.6 | 50.0 | 42.7 | 6.16 | No |
| Interpreting research findings | 5.6 | 52.4 | 39.5 | 6.13 | No |
| *Research outcomes* | | | | | |
| **Disseminating research** | 0.8 | 10.4 | **87.9** | 7.40 | Yes |
| Determining the usefulness of research findings | 4.0 | 33.1 | 60.5 | 6.55 | No |
| Implementing research findings | 8.9 | 47.7 | 42.7 | 6.02 | No |
| The overall quality of public involvement in a research study or research‐related activity | 4.0 | 29.0 | 64.6 | 6.76 | No |
| The overall quality of the research | 8.9 | 49.9 | 37.9 | 5.85 | No |
| The overall impact of the research | 7.2 | 69.3 | 21.8 | 5.35 | No |
| *Stakeholders* | | | | | |
| **The member(s) of the public involved in the research** | 0.8 | 4.8 | **91.9** | 7.93 | Yes |
| **The member(s) of the research team** | 5.5 | 10.3 | **81.4** | 7.45 | Yes |

¹Impact issues where consensus was reached on feasibility are in bold.
²Percentage of the panel rating the impact issue within each band of the nine‐point scale. Percentages for each impact issue may not add up to 100% because some panel members did not provide a rating. Band percentages where consensus was reached on feasibility (i.e. 80% or over) are in bold.
Value statement
Public involvement is strongly associated with moral and ethical issues, public accountability and transparency, encapsulated in the World Health Organisation’s Declaration of Alma‐Ata: ‘the people have the right and duty to participate individually and collectively in the planning and implementation of their health care’. 30 Therefore, Delphi panellists were asked at Round 1 whether or not they agreed with the following statement: ‘I believe that public involvement in health and social research is of ethical and moral value in itself, regardless of its impact on research’. Consensus was not sought on this statement, and the question was not repeated in Round 2. It was included because we wished to explore whether or not the pattern of responses to this statement would be associated with patterns of responses to the impact issues included in the Delphi questionnaires.
Analysis
Quantitative analysis
Data from the Round 1 questionnaires were summarized and the following conveyed to panellists at Round 2: (i) the median rating of each impact issue; (ii) distribution data relating to each scale point on each scale and (iii) whether or not consensus was achieved. A subgroup analysis (Mann–Whitney U and Kruskal–Wallis tests) was undertaken to explore differences between the ratings of three groups of panellists: members of the public, researchers and ‘others’.
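A minimal sketch of this kind of analysis is shown below (an illustration only, not the study’s analysis code; Python with SciPy is assumed, and the ratings are hypothetical). It computes the Round 2 feedback items for a single impact issue (median, distribution across scale points, consensus flag) and then runs the omnibus Kruskal–Wallis test with pairwise Mann–Whitney U follow‐ups between the three panellist subgroups:

```python
# Illustrative sketch only: feedback summary and subgroup comparisons for a
# single impact issue, using hypothetical ratings on the nine-point scale.
from collections import Counter
from statistics import median

from scipy.stats import kruskal, mannwhitneyu

ratings_by_group = {                      # hypothetical data
    "public":      [8, 7, 9, 6, 8, 7, 9, 8, 5, 7],
    "researchers": [7, 6, 8, 7, 5, 7, 8, 6, 4, 7],
    "others":      [6, 5, 7, 6, 4, 6, 7, 5, 6, 8],
}
all_ratings = [r for group in ratings_by_group.values() for r in group]
counts = Counter(all_ratings)

# Feedback conveyed at Round 2: (i) median; (ii) distribution per scale point;
# (iii) whether the 80% consensus criterion was met.
feedback = {
    "median": median(all_ratings),
    "distribution": {point: counts.get(point, 0) for point in range(1, 10)},
    "consensus": sum(7 <= r <= 9 for r in all_ratings) / len(all_ratings) >= 0.80,
}
print(feedback)

# Subgroup analysis: omnibus Kruskal-Wallis test, then pairwise Mann-Whitney U tests.
h_stat, p_overall = kruskal(*ratings_by_group.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_overall:.3f}")
for a, b in [("public", "researchers"), ("public", "others"), ("researchers", "others")]:
    u_stat, p_pair = mannwhitneyu(ratings_by_group[a], ratings_by_group[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u_stat:.1f}, P = {p_pair:.3f}")
```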
Qualitative analysis
Qualitative analysis of responses in the text boxes of both Delphi questionnaires and the follow‐up interviews allowed exploration of the quantitative findings. The data were analysed separately by two researchers (RB and JB). Codes and categories were refined collaboratively using an interpretative analysis approach, 31 based on open coding and categorization 32 , 33 during the examination of the data. Categories within and between the data were compared, looking for similarities and differences, using the constant comparative method. Any contradictions between the main themes identified by the two analysts were considered informative and enlightening and were used in the interpretation of the findings. Other team members and the advisory panel participated in discussions about the qualitative analysis and interpretation of the findings at key stages.
Results
Participants
Delphi panellists
Using our sampling strategy, approximately 395 invitations were sent, and 175 people agreed to take part. Reasons for non‐response/non‐participation included: incorrect email or postal address; potential panellists on study/maternity/sick leave; changed job or role; pressure of work or family circumstances; not being funded to take part and not meeting the inclusion criterion. The 175 people who agreed to take part included people who were unsure whether they met the criterion and chose to see the questionnaire before deciding to participate. Of these, 145 returned their Round 1 questionnaire, giving an attrition rate of 17%. We received 124 completed Round 2 questionnaires, yielding an attrition rate of 14%. Of the 124 panellists completing both Rounds, 50 were members of the public (including patients/service users, patient/service user researchers, advocates, carers, members of charities and those with ‘multiple perspectives’), 37 were researchers and 36 were ‘others’ (research commissioners, managers, policy makers and analysts). One person was not classified. There were 108 participants from the UK and 16 from other countries. The types of research most frequently engaged in were service delivery (n = 83), public health/preventive health (n = 45), clinical trials (n = 43) and health technology assessment (n = 31). The research topics that panel members had most experience of were mental health (n = 30), cancer (n = 27), public involvement in research (n = 12) and older people (n = 10). Panellists were able to provide more than one category for ‘types of research’ and ‘research topics’.
Telephone interviewees
Seventeen Delphi panellists were invited to be interviewed. Three declined: one for health reasons and two because they were too busy. Of the 14 interviewees, 12 were from the UK, one was from Australia and one from the US. Nine of the interviewees were researchers (of whom one was a user of multiple services and another brought multiple perspectives); two were policy makers or policy analysts (of whom one brought the perspective of a carer); two brought multiple perspectives and one described themselves as a member of the public.
The types of research interviewees were most frequently engaged in were service delivery (n = 4), clinical trials (n = 3), social care research (n = 3) and basic science (n = 2). The research topics that interviewees had most experience of were public involvement in research (n = 6) and cancer (n = 3). In the quotes below, ‘q’ refers to quotes from the Delphi questionnaires, while ‘i’ refers to quotes from the interviews.
Qualitative findings
The results are presented in an integrated manner that reflects the mixed methods approach. The qualitative findings helped to clarify and elaborate the quantitative results (see Tables 1 and 2) and also revealed additional information.
Table 2.
The feasibility of evaluating the impact of public involvement on research processes, outcomes and stakeholders: Kruskal–Wallis tests on panel subgroup mean ratings
| Impact issue: How feasible do you think it would be to evaluate the impact of public involvement on…¹ | Public (mean rating) | Researchers (mean rating) | Others (mean rating) | P |
|---|---|---|---|---|
| **Identifying topics to be researched** | 7.69 | 7.33 | 7.03 | 0.015² |
| **Prioritizing topics to be researched** | 7.77 | 7.51 | 7.25 | 0.007³ |
| Commissioning research | 7.21 | 7.05 | 6.57 | 0.065 |
| Research design | 7.13 | 6.73 | 6.63 | 0.181 |
| Managing research | 6.16 | 6.49 | 5.92 | 0.299 |
| Collecting data | 7.17 | 7.06 | 6.53 | 0.084 |
| Analysing research findings | 6.35 | 6.50 | 5.54 | 0.004⁴ |
| Interpreting research findings | 6.33 | 6.22 | 5.77 | 0.203 |
| **Disseminating research** | 7.49 | 7.35 | 7.31 | 0.242 |
| Determining the usefulness of research findings | 6.70 | 6.73 | 6.12 | 0.091 |
| Implementing research findings | 6.38 | 6.11 | 5.42 | 0.041⁵ |
| The overall quality of public involvement in a research study or research‐related activity | 6.96 | 6.72 | 6.51 | 0.266 |
| The overall quality of the research | 6.47 | 5.76 | 5.17 | 0.000⁶ |
| The overall impact of the research | 5.61 | 5.41 | 4.97 | 0.370 |
| **The member(s) of the public involved in the research** | 8.02 | 7.92 | 7.83 | 0.613 |
| **The member(s) of the research team** | 7.35 | 7.51 | 7.36 | 0.679 |

¹Impact issues where consensus was reached on feasibility are in bold.
²Significant difference between the ratings of members of the public and others (P = 0.004; Mann–Whitney U‐test).
³Significant difference between the ratings of members of the public and others (P = 0.002; Mann–Whitney U‐test).
⁴Significant difference between the ratings of: (i) researchers and others (P = 0.004; Mann–Whitney U‐test); (ii) members of the public and others (P = 0.003; Mann–Whitney U‐test).
⁵Significant difference between the ratings of members of the public and others (P = 0.013; Mann–Whitney U‐test).
⁶Significant difference between the ratings of: (i) researchers and members of the public (P = 0.015; Mann–Whitney U‐test); (ii) members of the public and others (P = 0.000; Mann–Whitney U‐test).
Perceived importance of evaluating the impact of public involvement
Many panellists highlighted the importance of evaluating the impact of public involvement, while acknowledging the complexity of the process:
‘Well, I think at the moment it is actually very important because, you know, clearly there is this confusion as to whether the public do actually make an important contribution and we need, we need whatever evidence is available’. (35i Person with multiple perspectives)
‘We do need to develop knowledge on user involvement but we don’t need to necessarily say whether it’s a good or a bad thing. We need to explore what’s good about it and what’s bad about it in different contexts. It can’t possibly be a wholly positive or negative thing, we need to be more critical than that and really look at different research contexts and different people in different research contexts as well.’(81i Researcher)
The impetus for evaluation appeared to be linked to accountability: ‘In short, I think you can’t do, sort of, science that’s funded by national government without some accountability to the public purse.’(26i Researcher).
Impact issues that were considered feasible to evaluate
As Table 1 shows, consensus was reached among panellists that it is feasible to evaluate the impact of public involvement on five of the 16 impact issues. They are presented below with illustrative quotes:
1. Identifying topics to be researched
‘This question seems to be about asking new questions, which public engagement is very good at. My guess would be that researchers would be reasonably good at tracking where these new questions have come from.’(28q Policy maker)
2. Prioritising topics to be researched
‘This is highly feasible and should be a regular part of the process for identifying research strategy.’(31q Member of the public)
3. Disseminating research
‘One could evaluate levels of understanding and awareness based upon the involvement, or non‐involvement, of the public in the dissemination of research.’(37q Research commissioner)
4. Members of the public involved in the research
‘Satisfaction, understanding, capacity, confidence etc. could all reasonably be evaluated.’(10q Multiple perspectives)
5. Members of the research team.
‘I think it would be best done longitudinally in order to capture the changing nature of impact, rather than as interviews/questionnaires conducted at set times.’(63q Researcher)
A subgroup analysis of ratings of the impact issues was carried out, with the panel divided into three groups: members of the public, researchers and others (see Table 2). Of the five impact issues where significant differences were found, two related to impact issues that were considered feasible to evaluate: identifying and prioritizing topics to be researched. In each case, significant differences were found between the ratings of members of the public and ‘others’, with members of the public rating these impacts as more feasible to evaluate than ‘others’ did.
Impact issues not considered feasible to evaluate and wider issues
Eleven out of 16 impact issues were not considered feasible to evaluate (see Table 1), and it is interesting to consider the comments made on some of these, particularly when they also refer to wider aspects of public involvement. The quote below, about commissioning research, draws attention to the high costs of evaluation, which emerged as a recurring theme:
‘I don’t think an evaluation is impossible, it is just that it unlikely to be feasible within time and budgetary constraints. Such an evaluation will need comparisons, before and after, individual feedback from the commissioning body, close scrutiny of the commissioning process – I’m not convinced how feasible this may be, no matter how ideal it is.’(56i Researcher)
Several panellists had reservations about public involvement in basic science, expressed here in relation to the feasibility of evaluating the impact of public involvement on research design:
‘More difficult to be as confident this could be done overall as the scope for public involvement to have an impact on research design depends on the design itself and the area of investigation, e.g. harder for there to be scope to influence basic laboratory science than a patient survey for instance.’(49q Research Commissioner)
However, the potential for the public to contribute to wider aspects of basic research, such as ethical issues, was acknowledged:
‘Most people really do accept a division of labour. You know there are places where one’s expertise just doesn’t go… If you were looking at something like GM foods, you know, the actual kind of, the kind of biology of it, you know, it’s really, you don’t want to ask the public about that because, you know, how would they know? But the politics of it, you would, right? You know, the values or the impact that, you know, GM foods have on food supply to the third world or, you know, those kind, those are the kind of things when I think the public involvement is crucial…’(26i Researcher)
Ethical and moral issues
At Round 1, 109/145 (75.2%) panellists agreed (33/145, 22.8%) or strongly agreed (76/145, 52.4%) that public involvement is of intrinsic value. No associations were found between responses to the value statement and patterns of ratings on the impact issues. This data analysis (consisting of a series of nonparametric statistical tests) is available on request from the first author. Qualitative analysis revealed enthusiasm for public involvement in terms of it being of ethical and moral value, yet many participants asserted the need to evaluate the impact:
‘There may be a moral imperative for public involvement in research in terms of citizenship, accountability, rights etc. but if it is not having an impact it is a pretty pointless waste of time. Involvement must be meaningful. There is no point in going through the motions because it is the right thing to do’. (89q Person with multiple perspectives)
‘Then why evaluate it? You know, why would one evaluate something that is just intrinsically, morally right and, I mean I think one should try and evaluate it because there are lots of people who don’t think it’s intrinsically right. And also, it’s not quite just public involvement, it’s what kind of public involvement when and how, I think one would want to evaluate the impact.’(4i Researcher)
Quality issues
The question about the intrinsic value of public involvement prompted some to reflect on the quality that public involvement adds to research:
‘I can equally well see arguments for and against that statement [value statement], depending on the nature of the research. However, I think its impact on research is the most important consideration and the fact that it is likely to improve the quality of the research is the strongest argument for advocating it.’(91q Researcher)
Few panellists believed it was feasible to evaluate the impact of public involvement on the quality of research, and most drew attention to the problems in defining ‘quality’:
‘Very very difficult – I expect a number of different definitions of quality would compete, for example value for money vs research relevant to service user’s interest.’(65q Researcher)
Some panellists proposed a discussion about what constitutes ‘quality research’, suggesting that it needed to be defined collectively. A small number offered suggestions:
‘Unless public involvement is seen as an a priori indicator of research quality, the assessment of research quality usually depends on more generic factors (e.g. research methods and design; sufficient examples of data; evidence of validation/triangulation etc.)’(103q Researcher)
‘Standard measures of the quality of research, e.g. impact rating of the journal in which published, citation indices, etc. may play a role, but difficult to isolate the precise impact of PPI [patient and public involvement]’. (135q Member of the Public)
Social constructions and subjective experiences
Some Delphi panellists cautioned against considering public involvement as a mechanistic or procedural activity, rather than a dynamic partnership and collaboration. This was clearly articulated by one panellist:
‘We’ve begun really to look at user involvement more about relationships and relationships in social context. To not necessarily think of user involvement as putting people into research situations but more to think about how professionals and members of the lay public interact with each other in different contexts. And I think we really need to recognise that user involvement is both socially constructed but it’s also subjectively experienced and I think that’s the key to it really to think in those terms, that it is a social process that’s linked to professional practice but it’s also experienced subjectively. I don’t think you can separate the two and that’s probably why evaluation is quite difficult because to have a form of evaluation that encompasses those issues of social construction and subjective experience is really very difficult.’(81i Researcher)
Discussion
There are compelling reasons for investigating the impact of public involvement: to identify the best ways of involving the public meaningfully in different research activities; to explore the possibility of deleterious effects and to achieve value for money. While potential benefits have been acknowledged, costs have also been identified, such as additional time and funding, as well as potentially negative effects on the public. 15 This study endorsed the value of public involvement and the importance of evaluating the impact, yet few impact issues were considered feasible to evaluate. We consider some of the possible reasons for this in the following sections. A broad definition of ‘feasible’ was given to panellists (‘can it be done?’), and different dimensions of feasibility were addressed in the panellists’ responses, whether or not they believed that evaluation was feasible. These included: different methodological approaches; practical considerations of how it could or could not be done; wider issues that might have some bearing on the complexity of the evaluation process (such as the research context, organizational issues and the attitudes of key stakeholders) and possible constraints such as costs.
The impact of public involvement on research processes, outcomes and on stakeholders
Consensus was reached by panellists on the feasibility of evaluating the impact of public involvement on identifying and prioritizing research topics. This is consistent with reports that public involvement can lead to a wider range of identified and prioritized research topics that are more relevant to service users [Alzheimer’s Society Quality Research in Dementia (http://alzheimers.org.uk/site/scripts/documents_info.php?documentID=1109)]. 3 , 9 , 12 , 34 , 35 , 36 , 37 Some panellists referred to these studies in their responses to the questionnaires and in interviews. Consensus was also established on the feasibility of evaluating the impact of public involvement in disseminating research findings. There are accounts of a range of ways in which the public has been involved in dissemination activities, through newsletters, conferences and joint authorship, 11 , 12 , 37 , 38 and several panellists described their own experiences of this activity.
The highest level of consensus related to the feasibility of evaluating the impact of public involvement on members of the public involved in research. This reflects accounts of positive benefits, such as increased self‐confidence, knowledge of the topic area and learning new skills, including research skills, 3 , 12 , 13 , 39 , 40 , 41 , 42 , 43 and also the possibility of negative impacts. 4 There is now more awareness of the need to anticipate and prepare for potentially negative effects, such as the emotional strain of hearing distressing accounts of illnesses and conditions similar to one’s own, overwork and frustration at the limited opportunities to influence the direction of the research. 13 , 39
We know less about the effects of public involvement on researchers, an impact issue considered feasible to evaluate by panellists. Some evidence suggests that it can deepen understanding of patient issues, 3 , 44 , 45 and prompt researchers to challenge their own beliefs and assumptions. 3 While this can be a positive experience, 46 some researchers have expressed concerns about perceived threats to their professional skills and knowledge, 47 and it is suggested that different research skills are needed by researchers who work collaboratively with members of the public. 12
Panellists did not consider that it was feasible to evaluate the impact of public involvement on many research processes and outcomes (see Table 1). Employing a mixed methods approach that takes account of the qualitative findings allows us to speculate on possible explanations for this. Many panellists referred to the sheer complexity of public involvement, with different conceptual frameworks, terminology and practice, making it difficult to generalize across research projects. Others highlighted the challenges of trying to track decisions made specifically as a result of public involvement within a deliberative process, while identifying what might have happened if public involvement had not been present. Difficulties in taking into account the wider research context, which may include political, organizational, structural and strategic constraints, were also mentioned. Some questioned the appropriateness of applying scientific enquiry to a social, collaborative partnership, where mutual learning takes place during personal interactions.
These reservations reflect the difficulties of assessing quality issues in research 48 and echo some of the findings from a recent comprehensive literature review of the impact of public involvement in research that also highlighted the gains from public involvement: ‘Some researchers have reflected on how to assess the impact of involvement and when and how best to involve the public in research. Their main conclusions have been that it is difficult to assess the impact of involvement or to predict where involvement would have the greatest impact’. 15 Guidance on evaluating complex interventions 49 is a timely addition to methodological approaches to evaluating the impact of public involvement, but there are also recommendations that: ‘strengthening the evidence base may therefore not only be about finding the most robust and rigorous ways of assessing impact, but also about helping researchers and the public to find the most useful and consistent way of telling their stories’. 15 The finding that members of the public rated the feasibility of evaluating some impact issues higher than researchers and others could reflect their experience of changes resulting from their influence, and/or being more confident that methods of capturing this could be identified. Another possibility is that researchers and others sought more rigorous evidence of impact: ‘The vast majority of the evidence of impact is based on the views of researchers and members of the public who have worked together on a research project. Most often these views have been obtained informally’. 15
Ethical and moral value of public involvement in research
The case for public involvement is often presented in terms of normative or substantive arguments, 50 particularly in relation to basic science. ‘Normative’ arguments view public involvement as an end in itself, considering moral or political values such as fairness and justice, while substantive arguments consider the effects of the contribution of the public, for example in terms of quality and relevance. Many panellists considered public involvement to be of intrinsic value, and this appears to reflect prevailing views about its value internationally. 30 Several panellists believed that this intrinsic value should not be considered independently of its impact, suggesting that support for public involvement is not unreserved and underlining the importance of evaluating its impact.
Limitations and strengths of this study
Apart from the relatively small number of international panellists, we believe we achieved diversity of perspective in our panel. Eight out of the fourteen telephone interviewees were researchers, but half of these brought additional perspectives. The requirement for panellists to have expertise in public involvement could have predisposed the panel towards a favourable view of the feasibility of evaluating its impact. If this is the case, the finding that many impact issues were not considered feasible to evaluate can be viewed as robust. Few panellists had experience of public involvement in basic research, but as this area is less well developed, it is unlikely that many types of pre‐clinical research would be represented. Most research areas associated with public involvement were included. In a few instances, panellists articulated their beliefs about the impact of public involvement rather than their views about the feasibility of evaluating its impact.
The 16 impact issues were developed to help to clarify when and how it might be feasible to evaluate the impact of public involvement. We recognize the limitations of this simplistic approach, in view of the complex and dynamic nature of public involvement, which has been described as ‘relationships in social contexts’. 51 In an assessment of the benefits of public involvement in diabetes research, it was suggested that ‘its impact on research stems from the continuing interaction between researchers and users, and the general ethos of learning from each other in an on‐going process’. 44
Implications of the study
Policies on public involvement in health and social research have been implemented widely, but we know little about the difference they have made. Most panellists agreed that there are ethical and moral reasons for public involvement, and there was consensus among the panellists that it is feasible to evaluate its impact on identifying and prioritizing topics to be researched, disseminating research and on members of the public and members of the research team. Although these have been suggested as feasible to evaluate, different stakeholders may have different priorities, and it is for others to decide whether or not these impact issues should be privileged as priorities for future evaluations.
Conflict of interest
All authors declare that they have no competing interests.
Ethical approval
This study was approved by the North Sheffield Local Research Ethics Committee.
Source of funding
We are grateful to Sheffield Health and Social Research Consortium for funding this study.
Acknowledgements
We are indebted to all who took part in the Delphi study, and list below Delphi panellists who completed both rounds of the Delphi process and agreed to be named: Ade Adebajo, Geoff Aitchison, Richard Baker, Angela Barnard, Di Barnes, Marian Barnes, Catherine Beverley, Arlene Blanchard, Ric Bowl, Louca‐Mai Brady, Anke Bramesfeld, David Britt, Fiona Brooks, Louise Bryant, Brian Buckley, David Buglar, Amanda Burls, Ceri Butler, Sarah Carr, Edward Carter, Iain Chalmers, Helen Christensen, Graham Cockshutt, Nicola Coe, Karen Collins, Jo Cooke, Phil Cotterell, Angela Coulter, Donna Cox, Mike Crawford, David Crepaz‐Keay, Sally Crossing, Sally Crowe, Jill Davies, Pam de Clive‐Lowe, Simon Denegri, Mark Doel, Jim Elliott, Pam Enderby, Richard Errington, Christine Farrell, Pete Fleischmann, Lester Firkins, Julie Flynn, Leslie Forsyth, Olivia Freeman, Nina Fudge, Tony Gilbert, Afaf Girgis, Jon Glasby, Paul Godin, Joanna Goven, Tanya Graham, Gordon Grant, Laura Greene, Bec Hanley, Amanda Harris, Hilary Hearnshaw, Sandy Herron‐Marx, Tony Hostick, David Howe, Gill Hubbard, Ron Jamieson, Zena Jones, Gavin Kendall, Alastair Kent, Sam Laubsch, Martin Lodemore, Elspeth Macdonald, Paul Mainwaring, Heather Maggs, Ron Marsh, Melanie Maxwell, Timothy Milewa, Helen Millar, Virginia Minogue, Annie Mitchell, Brigid Morris, Sara Morris, Gail Mountain, Janet Messer, Paola Mosconi, Shirley Nurock, Margaret O’Connor, Bie Nio Ong, Sue Palmer Hill, William Phillips, Susan Pickard, Vanessa Pinfold, Michael Preston‐Shoot, Lynne Ramsay, Lesley Roberts, Diana Robinson, Iliana Rokkou, Panayiota Romios, Cath Roper, Diana Rose, Fiona Ross, Carla Saunders, John Sitzia, Elizabeth Smith, Sophie Staniszewska, Kristina Staley, Jane Stewart, Jack Stilgoe, Jackie Sturt, Graham Tanner, Diane Thompson, Graham Thornicroft, Hazel Thornton, Michael Turner, Christine Vial, Paul Ward, Tracey Williamson, Elaine Willis, James Wilsdon, Roger Wilson, Barbara Woodward‐Carlton, David Wright, Til Wykes and Sally Young.
References
1. Department of Health. Best Research for Best Health. London: DoH, 2006.
2. Nilsen ES, Myrhaug HT, Johansen M, Oliver S, Oxman AD. Methods of consumer involvement in developing healthcare policy and research, clinical practice guidelines and patient information material. Cochrane Database of Systematic Reviews 2006; Issue 3. Art. No: CD004563. DOI: 10.1002/14651858.CD004563.pub2.
3. Hewlett S, de Wit MD, Richards PQ et al. Patients and professionals as research partners: challenges, practicalities, and benefits. Arthritis and Rheumatism, 2006; 55: 676–680.
4. Beresford P. User involvement in research and evaluation: liberation or regulation? Social Policy and Society, 2002; 12: 95–105.
5. Florin D, Dixon J. Public involvement in health care. British Medical Journal, 2004; 328: 159–161.
6. Editorial. Going public. Nature, 2004; 431: 883.
7. Hanley B, Truesdale A, King A, Elbourne D, Chalmers I. Involving consumers in designing, conducting, and interpreting randomised controlled trials: questionnaire survey. British Medical Journal, 2001; 322: 519–523.
8. Holmes W, Stewart P, Garrow A, Anderson I, Thorpe L. Researching Aboriginal health: experience from a study of urban young people’s health and well‐being. Social Science & Medicine, 2002; 54: 1267–1279.
9. Oliver S, Clarke‐Jones L, Rees R et al. Involving consumers in research and development agenda setting for the NHS: developing an evidence‐based approach. Health Technology Assessment, 2004; 8: 1–148.
10. Barnard A, Carter M, Britten N, Purtell R, Wyatt K, Ellis A. The PC 11 Report. An Evaluation of Consumer Involvement in the London Primary Care Studies Programme. Exeter, UK: Peninsula Medical School, 2005.
11. Langston AL, McCallum M, Campbell MK, Robertson C, Ralston SH. An integrated approach to consumer representation and involvement in a multicentre randomised controlled trial. Clinical Trials, 2005; 2: 80–87.
12. McLaughlin H. Involving young service users as co‐researchers: possibilities, benefits and costs. British Journal of Social Work, 2006; 36: 1395–1410.
13. Cotterell P, Harlow G, Morris C et al. Identifying the Impact of Service User Involvement on the Lives of People Affected by Cancer: Final Report. London: Macmillan Cancer Support, 2008.
14. Staniszewska S. Patient and public involvement in health services and health research: a brief overview of evidence, policy and activity. Journal of Research in Nursing, 2009; 14: 295–298.
15. Staley K. Exploring Impact: Public Involvement in NHS, Public Health and Social Care Research. Eastleigh: INVOLVE, 2009.
16. Whitstock MT. Seeking evidence from medical research consumers as part of the medical research process could improve the uptake of research evidence. Journal of Evaluation in Clinical Practice, 2003; 9: 213–224.
17. Craig GM. Editorial. Involving users in developing health services. British Medical Journal, 2008; 336: 286–287.
18. Department of Health. Research Governance Framework for Health and Social Care, 2nd edn. London: DoH, 2005.
19. Staniszewska S, Herron‐Marx S, Mockford C. Editorial. Measuring the impact of patient and public involvement: the need for an evidence base. International Journal of Quality in Health Care, 2008; 20: 373–374.
20. Entwistle VA, Renfrew MJ, Yearley S, Forrester J, Lamont T. Lay perspectives: advantages for health researchers. British Medical Journal, 1998; 316: 463–466.
21. Creswell JW. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. London: Sage Publications, 2009.
22. Barber R, Parry G, Cooper C, Boote J. Report of the Expert Workshop on the Impact of Public Involvement on Health and Social Care Research. Sheffield: ScHARR Report Series, University of Sheffield, 2007. ISBN 1 900752 66 2.
23. Jones J, Hunter D. Consensus methods for medical and health services research. British Medical Journal, 1995; 311: 376–380.
24. Daykin N, Sanidas M, Barley V et al. Developing consensus and interprofessional working in cancer services: the case of user involvement. Journal of Interprofessional Care, 2002; 16: 405–406.
25. Shield T, Campbell S, Rogers A et al. Quality indicators for mental health care in primary care. Quality and Safety in Health Care, 2003; 12: 100–106.
26. Campbell SM, Braspenning J, Hutchinson A, Marshall M. Research methods used in developing and applying quality indicators in primary care. Quality and Safety in Health Care, 2002; 11: 358–364.
27. Efstathiou N, Ameen J, Coll A‐M. A Delphi study to identify healthcare users’ priorities for cancer care in Greece. European Journal of Oncology Nursing, 2008; 12: 262–371.
28. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 2000; 32: 1008–1015.
29. Telford R, Boote JD, Cooper CL. What does it mean to involve consumers successfully in NHS research? A consensus study. Health Expectations, 2004; 7: 209–220.
30. World Health Organisation. Declaration of Alma‐Ata. International Conference on Primary Health Care. USSR, 1978.
31. Seale C. Analysing qualitative data. In: Seale C (ed.) Social Research Methods: A Reader. London: Routledge, 2004: 299–301.
32. Strauss A, Corbin J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Thousand Oaks, CA: Sage, 1990.
33. Strauss A, Corbin J. Open coding. In: Seale C (ed.) Social Research Methods: A Reader. London: Routledge, 2004: 303–306.
34. Caron‐Flinterman JF, Broerse JEW, Teerling J, Bunders JFG. Patients’ priorities concerning health research: the case of asthma and COPD research in the Netherlands. Health Expectations, 2005; 8: 253–263.
35. Wright D, Corner J, Hopkinson J, Foster C. Listening to the views of people affected by cancer about cancer research: an example of participatory research in setting the cancer research agenda. Health Expectations, 2006; 9: 3–12.
36. Broerse JEW, Zweekhorst MBM, van Rensen AJML, de Haan MJM. Involving burn survivors in agenda setting on burn research: an added value? Burns, 2010; 36: 217–231.
37. Boote J, Baird W, Beecroft C. Public involvement at the design stage of primary health research: a narrative review of case examples. Health Policy, 2009; 95: 10–23.
38. Bryant L, Beckett J. The Practicality and Acceptability of an Advocacy Service in the Emergency Department for People Attending Following Self‐Harm. Leeds: University of Leeds, 2006.
39. Faulkner A. Capturing the Experiences of Those Involved in the TRUE Project: A Story of Colliding Worlds. Eastleigh: INVOLVE, 2004.
40. Faulkner A. Beyond Our Expectations: A Report of the Experiences of Involving Service Users in Forensic Mental Health Research. London: National Programme on Forensic Mental Health R&D, Department of Health, 2006.
41. Minogue V, Boness J, Brown A, Girdlestone J. The impact of service user involvement in research. International Journal of Health Care Quality Assurance incorporating Leadership in Health Services, 2005; 18: 103–112.
42. Wyatt K, Carter M, Mahtani V et al. The impact of consumer involvement in research: an evaluation of consumer involvement in the London Primary Care Studies Programme. Family Practice, 2008; 25: 154–161.
43. Fudge N, Wolfe CDA, McKevitt C. Assessing the promise of user involvement in health service development: ethnographic study. British Medical Journal, 2008; 336: 313–317.
44. Lindenmeyer A, Hearnshaw H, Sturt J, Ormerod R, Aitchison G. Assessment of the benefits of user involvement in health research from the Warwick Diabetes Care Research User Group: a qualitative case study. Health Expectations, 2007; 10: 268–277.
45. Andejeski Y, Bisceglio IT, Dickersin K et al. Qualitative impact of including consumers in the scientific review of breast cancer research proposals. Journal of Women’s Health and Gender-Based Medicine, 2002; 11: 379–388.
46. Koops L, Lindley RI. Thrombolysis for acute ischaemic stroke: consumer involvement in design of new randomised controlled trial. British Medical Journal, 2002; 325: 415–417.
47. Thompson J, Barber R, Ward PR. Health researchers’ attitudes towards public involvement in health research. Health Expectations, 2009; 12: 209–220.
48. Watts G. Beyond the impact issue. British Medical Journal, 2009; 338: 553.
49. Craig P, Dieppe P, Macintyre S et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. British Medical Journal, 2008; 337: 979–983.
50. Caron‐Flinterman JF, Broerse JEW, Teerling J et al. Stakeholder participation in health research agenda setting: the case of asthma and COPD research in the Netherlands. Science and Public Policy, 2006; 33: 291–304.
51. Smith E, Ross F, Donovan S, Manthorpe J. Service user involvement in nursing, midwifery and health visiting research: a review of evidence and practice. International Journal of Nursing Studies, 2008; 45: 298–315.
