Abstract
Introduction
It is now widely recognized that public involvement in research increases the quality and relevance of the research. However, questions are increasingly being asked about exactly how and when involvement brings added value.
The nature of the current evidence of impact
The findings of recent literature reviews show that most reports of public involvement that discuss impact are based on observational evaluations. These usefully describe the context, the type of involvement and the impact. However, the links between these factors are rarely considered, so the findings are limited to identifying the range of impacts and general lessons for good practice. Reflecting on the links between context, mechanism and outcome in these observational evaluations identifies which aspects of the context and mechanism could be significant to the outcome. Studies that are more in line with the principles of realistic evaluation can then test these links more rigorously. Building on the evidence from observational evaluations to design research that explores the ‘missing links’ will help to address the question ‘what works best, for whom and when?’
Conclusions
We conclude that a more intentional and explicit exploration of the links between context, mechanism and outcome, applying the principles of realistic evaluation to public involvement in research, should lead to a more sophisticated understanding of the factors that increase or decrease the likelihood of positive outcomes. This will support the development of more strategic approaches to involvement, maximizing the benefits for all involved.
Keywords: evidence, impact, patient and public involvement, public involvement, realistic evaluation
Introduction
Public involvement has been defined as research being carried out ‘with’ or ‘by’ members of the public rather than ‘to’, ‘about’ or ‘for’ them. It includes, for example, working with research funders to prioritize research topics, offering advice to researchers as members of a project steering group, commenting on and developing research materials and undertaking interviews with research participants.1
Public involvement in research is founded on the core principle that people who are affected by research have a right to have a say in what and how research is undertaken.2, 3 It has also been reported to improve the quality and relevance of research.4, 5 Over the past decade, the Department of Health and subsequently the National Institute for Health Research (NIHR) have repeatedly stated their support for involving patients and the public in research.6, 7, 8 The Department of Health has also provided practical support through funding INVOLVE9 since 1996 (an organization that supports active public involvement in NHS, public health and social care research), as well as funding public involvement within other parts of the NIHR such as the Research Programmes, Research Design Services, Research Networks and Collaborations for Leadership in Applied Health Research and Care (CLAHRCs).
During this time, research regulators and a growing number of commissioners of publicly funded health research have started to ask applicants to describe how they plan to involve the public in their research. For example, since September 2009, health and social care researchers applying for ethical and governance approvals via the Integrated Research Application System (IRAS) have been asked to respond to a two‐part question about their plans for involvement.10, 11 Similarly, in 2010, the NIHR introduced a standard application form for all the research programmes they fund, which includes a section on patient and public involvement.
These developments have understandably led to increasing demands for evidence of precisely what difference involvement makes. Robust evidence of the impact of involvement is needed to encourage a wide range of stakeholders to commit to involving the public in research. However, there is also a growing need for a clearer understanding of why, how and when involvement brings the greatest benefits. This is essential to developing a more strategic approach to involvement and to understanding what type of involvement is most likely to bring added value within any particular research project. Such an understanding would help a wide range of stakeholders to make better informed decisions about public involvement activities. For example:
Research funders would be better able to judge the appropriateness as well as the quality of the plans for public involvement in researchers' proposals.
Researchers would be able to identify the type of involvement most likely to benefit their specific research project and adopt a more strategic approach, for example in selecting the right people to get involved and in providing the most appropriate training and support.
Patients and members of the public would have a better understanding of what is expected of them, where their contributions could bring the most added value and how they could maximize their influence.
With the aim of building our knowledge and understanding of the impact of involvement so as to improve the practice of public involvement in research, we undertook a literature review in 2008.4 Our findings were supported by another review published the following year.5 Since that time, we have continued to review the literature to support the development of an online bibliography of evidence of the impact of involvement.12
In this review article, we reflect on the nature of the evidence that has been published to date and explore the strengths and weaknesses of the different approaches that have been taken to evaluating impact. Based on this analysis and our direct experience of evaluating public involvement activity,13, 14, 15 we have come to recognize that the impact of involvement is highly dependent on the specific context and the precise nature of the mechanism of involvement. We therefore argue that the links between context, mechanism and outcome require more rigorous investigation and in particular that the principles of realistic evaluation (Box 1) could be usefully applied to studies of public involvement in research.
Box 1. The principles of realistic evaluation.
Realistic evaluation is a model of theory‐driven evaluation that not only explores what outcomes are produced from interventions but also ‘how they are produced, and what is significant about the varying conditions in which the interventions take place’.16 Realistic evaluation aims to find out the contextual factors that make interventions effective, to understand why interventions work in some conditions but not others.
Such an approach is highly relevant to the evaluation of a complex social intervention such as public involvement in research, where the outcome is strongly influenced by the context in which it takes place. The many contextual factors that shape the impact of involvement in research include the nature of the research project, the topic area, and the skills, experiences, knowledge and attitudes of all the different people involved (researchers, patients or members of the public, clinicians, funders, regulators). The mechanism also varies, in the methods of involvement used and the level of influence given to those involved. Realistic evaluation offers an approach to the evaluation of this complex activity that will help identify when a particular method of involvement is likely to lead to a particular outcome within a given context.
The design of a realist evaluation is distinct in that it starts with a theory about how a particular mechanism operates in a specific context to produce a defined outcome. All other aspects of the evaluation follow on from this. The theory is used to generate a hypothesis about what aspects of the mechanism might produce change, which subgroups might benefit most readily and what resources are necessary to sustain the changes. The evaluation or study then adopts a design and methods of data collection and analysis to test these hypotheses. The findings identify the links between context, mechanism and outcome. They are also fed back into further development of the theory that might lead to new hypotheses that can then be tested in future studies. A series of studies, each building on previous findings, helps to develop an increasingly refined understanding of how and why an intervention results in a particular outcome within a given context.
The nature of the current evidence of impact
An overview of the current literature on public involvement in research
In our original review of the literature,4 of the 396 articles we identified as potentially reporting on the impact of public involvement in research, only 89 (22%) included evidence of impact. Through the subsequent development of the INVOLVE evidence library,12 we identified a further 53 articles reporting on impact.
Among the 142 articles that reported on impact, all but one took the form of observational research. The authors had often gathered the reflections of researchers and members of the public after they had worked together on a research project. Most often people were asked for their views on what difference the involvement made. Their reflections were either obtained through informal discussions or more formally through qualitative research including structured group discussions, one‐to‐one interviews and/or project diaries. The evaluations were either carried out by research team members or by independent evaluators. We refer to these studies as observational evaluations.
An alternative approach to exploring the impact of involvement is through the use of a randomized controlled trial (RCT). In our review of the literature, we identified only one study that applied this approach.17 We refer to this type of approach as an experimental evaluation.
None of the reports identified in our 2008 search incorporated the principles of realistic evaluation. Among the 53 articles identified since then, only two had adopted an approach along these lines.18, 19
In the remainder of this article, we reflect on the nature of the evidence that has been obtained through these different approaches to assessing impact and their contributions to our understanding of how and why involvement makes a difference.
The evidence from observational evaluation
The reports from observational evaluations are most often purely descriptive. They describe the context of the involvement in terms of the nature of the research project, its aims and its design. They describe the nature of the mechanism in terms of how the involvement was carried out as well as how members of the public were recruited, trained and supported. They also describe the outcomes in terms of the difference that the involvement made (Box 2).
Box 2. Two examples of observational evaluations generating descriptions of impact and lessons about good practice.2, 20
Both evaluations were based on the reflections of researchers and service users who had worked together on research projects. One was retrospective and drew on experience from a number of projects.2 The other was prospective: the research team met to consider the impact of the involvement both during and at the end of the study.20
Both describe the benefits of the involvement in terms of the benefits to the research, the benefits to the researchers and the benefits to the service users involved. Both also draw out lessons that relate to general good practice including:
The importance of good working relationships based on mutual trust and respect
The value of involving service users at the early stages of project design
Meeting the practical and support needs of the people involved
Providing training or briefing in research methods and processes prior to involvement
These types of evaluation have made an important contribution to our understanding of impact through identifying the many and varied ways in which public involvement can have an impact on both the research and the various stakeholders involved.4, 5 However, because these reports have not explored the links between context, mechanism and outcome, it is not always clear why involvement has worked in some circumstances but not others or what specific factors have contributed to a positive or negative outcome.
Some of these observational evaluations have asked the question ‘What helped the involvement to work well?’ However, again because the specific context and mechanism have not been considered, the findings tend to relate to general lessons about good practice (Box 2). They have not helped to identify what makes a particular type of involvement work well within a given context. While these findings have made a vital contribution to improving the quality of all involvement processes, they have not gone as far as addressing the question ‘What works best?’
In a small number of the observational evaluations (n = 4), notably in cases where the involvement did not work well, the authors have reflected on aspects of the context or the mechanism that could have contributed to the negative outcome. This may be because it is easier to identify factors that have contributed to negative outcomes or because there is a greater motivation to understand the reasons why. All four reports were observational evaluations of the impact of using peer interviewers.21, 22, 23, 24 With this type of involvement, the expected outcome is better quality data as a result of service users conducting interviews with their peers. This is reported to arise because the interviewees feel more at ease when interviewed by another service user and are therefore more comfortable in being open and honest about their experiences.21, 25, 26 This in turn generates more valid and reliable data. In the four studies where this outcome was not achieved, the researchers have reflected on the factors that could explain this lack of impact (Box 3).
Box 3. Observational evaluations that have reflected on links between context, mechanism and outcome in the case of involving peer interviewers.
Four observational evaluations have reflected on the factors that have influenced whether involving service users as interviewers has made a difference to the quality of the interviews and/or interview data. These were all qualitative research studies where service users were involved in carrying out in‐depth, face‐to‐face, semi‐structured interviews. Three of these reported on the researchers' reflections on the outcomes.22, 23, 24 The fourth study asked service user interviewees about their experiences of being interviewed by their peers.21
One of the factors that these studies identified as influencing the outcome of this type of involvement was whether the service user interviewers had the requisite skills for the role.21, 27 All interviewers need to have good interpersonal skills, to be good listeners, to show empathy and discretion and to be skilled in the use of interviewing techniques.24 Not everyone has these skills, irrespective of whether they are a service user. Nor will everyone be able to acquire these skills even with support and training.24 These studies highlighted that if service user interviewers lack these skills, their involvement has the opposite effect to that intended: it makes the interviewee feel less at ease and less able to be open and honest about their experiences.21 The authors conclude that there is a need for high‐quality training of service user interviewers and stress the importance of selecting the right people for the job.
Another important factor that emerged from these studies is whether the service user interviewer is trained and prepared to manage the peer‐to‐peer dynamic in such a way as to maximize the potential benefits. While a greater sense of empathy and shared experience with a peer interviewer often makes it more likely that interviewees will discuss their views in depth, this dynamic can also have the opposite outcome. It can result in some issues not being fully explored, which then reduces the quality of the data.22 This occurs if the interviewer (and/or the interviewee) does not expand upon a point of discussion because they make assumptions about the level of their shared understanding. Similarly, a service user interviewer may not follow up on a point because they consider it ‘old hat’, without realizing this may be a novel and valuable finding for the research.23
A third factor identified by these reports is the expectations/attitudes of the service users being interviewed. Some service users reported a general distrust of other service users and expressed concerns about whether their confidential information might ‘leak out’.21 This can make them feel less comfortable and less likely to be open and honest about their views.
Finally, the nature of the topic being discussed and other characteristics of the interviewer (age, gender and background) were also identified as being likely to influence the extent of the ‘shared empathy’.23 For example, parents who are drug users may feel more at ease talking to people with similar experiences because they do not feel judged and may feel more understood. However, women talking about their experiences of domestic violence may feel more at ease talking to another woman, irrespective of whether that woman is an academic or a service user.23 This may mean that in studies exploring different topics, a different type of ‘peer’ may be more or less appropriate.
When the links between context, mechanism and outcome are considered, different kinds of findings emerge. Firstly, there are lessons about how to make specific types of involvement work well. So, for example, in the case of peer interviewing (Box 3), we move beyond a general principle that the right person needs to be selected for a role to identifying the specific skills and personal attributes required by a peer interviewer. We also move beyond a general statement that training is important for involvement to identifying how training for a peer interviewer needs to prepare them for the unique challenges of their role, that is, it not only needs to equip them with interviewing skills but also needs to help them successfully manage the peer‐to‐peer dynamic.
Secondly, this reflection identifies how different contextual factors could influence the outcome, making it more or less likely that the desired impact will be achieved. For example, it appears that not all individuals completely trust their peers to act professionally in the role of interviewer.21 This concern has been reported in other studies, although its potential to influence the impact of peer interviewers has not previously been discussed or explored.28, 29 These observations therefore raise new and important questions including:
How prevalent is this concern? Is it limited to specific groups of people or more commonly felt, that is, in what contexts is this concern likely to influence the impact of peer interviewers?
Are there ways of adapting the processes of peer interviewing so as to allay these concerns and to maximize the likelihood of a positive impact? For example, is there value in offering interviewees a choice about whether they are interviewed by their peers? Should we consider pairing academic and peer interviewers? Do peer interviewers need to emphasize that they are bound by the same rules of professional conduct as any other interviewer as part of the preamble to a peer interview?
We suggest that these specific questions, about how different approaches to peer interviewing might play out in different contexts to either increase or decrease the likelihood of a positive impact, need to be explored through studies that are intentionally designed to build on the lessons learnt to date and to explicitly test the links between context, mechanism and outcome.
The evidence from experimental evaluation
The one study that took the form of an RCT to investigate the impact of public involvement focused on the involvement of members of the public in the development of patient information sheets (PIS) for clinical trials. The authors concluded that the involvement had little or no impact.17 This is in contrast to the findings from a large number of observational evaluations where involvement has been reported to have a major impact at this stage of the research.4, 30, 31, 32, 33, 34, 35 Patient‐generated information has been reported to be more accessible and acceptable to other patients, which is then understood to lead to better recruitment and retention. The discrepancy between the findings from the observational evaluations and those from this experimental evaluation can be explained by considering the contextual factors that appear to influence whether this type of involvement is likely to make a difference (Box 4).
Box 4. An experimental evaluation assessing the impact of public involvement on patient information sheets.17
This RCT compared two different PIS, one written by patients and one written by researchers, within a wider clinical trial of a treatment for Gulf War Syndrome. The results showed no difference between the two PIS in terms of the impact on participants' understanding of the trial or on recruitment and retention. The authors suggested that a wide range of contextual factors could have been responsible for the negative outcome.17 These included the facts that the researchers were skilled in producing clear, accessible patient information; that the patients did not make many changes to their version of the PIS; that the participants were highly informed about their condition and used to reading technical information; and that the clinical trial itself was quite simple in its design.17
The authors also highlighted that the process of consent is more complex than simply the provision of written information. The dialogue between researcher and potential participant will also be important in informing decisions about whether to take part in a research project. Therefore, patient involvement might need to be extended to designing the whole recruitment process, not just the PIS, if involvement is to impact on recruitment and retention. This conclusion has been supported by other studies.31
The key lesson from the Guarino study is that if an experimental evaluation is not designed in a way that considers the contextual factors and aspects of the mechanism that have the potential to influence impact, then it may produce inaccurate or over‐simplified conclusions about when and how involvement makes a difference.
The evidence from realistic evaluation
Two recent reports of the impact of public involvement on research have adopted approaches that are more in line with the principles of realistic evaluation.18, 19 Both assessed the impact of involving peer interviewers (Box 5).
Box 5. The evidence from two evaluations of the impact of peer interviewers that adopted the principles of realistic evaluation.
Hamilton's study19 aimed to address whether the involvement of peer interviewers would have the same impact on data quality in the context of a quantitative study as has been observed in the case of qualitative research.4 The researchers had a number of hypotheses about how the involvement would have an impact on their project, which was a quantitative telephone survey of people with mental health problems. One hypothesis was that involving peer interviewers would enable participants to be more open and honest about their experiences of stigma and discrimination. The researchers therefore set up a substudy within the wider survey that was designed to explore whether the disclosure of peer status by the interviewers made a difference to the participants' responses. They made sure that the interviewers had the right kinds of skills for the role through a formal recruitment process and provided 2 days' training for all the interviewers. Participants were randomly assigned to one of three groups of interviewer: peer disclosing, peer non‐disclosing and non‐peer. The results revealed that there was no difference in the frequency of reports of discrimination to the three different types of interviewer; that is, the disclosure of peer status did not appear to have an impact on the participants' responses.
Gillard's study18 aimed to answer the question ‘How do peer interviewers make a difference to qualitative research findings?’ The researchers hypothesized that service user researchers and conventional university researchers would approach the tasks of conducting qualitative interviews and data analysis in different ways and that this in turn would influence the range of data collected and its interpretation. This was based on the research team's observations of an earlier pilot study. They therefore designed a substudy within the context of a bigger qualitative research study examining patients' experiences of being compulsorily detained. The substudy allowed for a direct comparison of the different types of interviewer and included secondary analysis of their interview transcripts and their coding of the interview data. The results showed that the service user interviewers tended to ask different kinds of interview questions but showed even more differences in the way they analysed the qualitative data. They drew out different themes that were complementary to those of the academics. The authors concluded that both perspectives are vital to developing a comprehensive and ‘real world’ insight into the research question under investigation.
The significant features of these two studies are that they have:
Identified a specific question about the impact of peer interviewers
Developed a hypothesis about how this type of involvement could make a difference in the context of their study, which then informed the design of their evaluation and the methods used to capture the evidence of impact
Factored in the contextual and mechanistic factors that are already known to influence outcome (thus avoiding known barriers to the impact of this specific type of involvement)
In summary, they have drawn on the findings from previous observational evaluations to intentionally design a study to explore relationships between context, mechanism and outcome and to address specific questions about why, when and how peer interviewers make a difference.
These types of study therefore help to refine our understanding of the impact of specific types of involvement and contribute to an increasingly in‐depth understanding of why and how involvement makes a difference within a particular context. For example, the Hamilton study19 suggests that involving peer interviewers in quantitative telephone surveys does not have a significant impact on the quality of data collected. It suggests that the expected impact of the shared empathy between peer interviewer and interviewee may depend on a number of other factors that were not present in this study. For example, it might only be experienced if there is more personal contact between interviewer and interviewee; the simple fact of disclosing service user status over the telephone may not be sufficient. Similarly, the impact might only be detectable when there is more room for respondents to be expansive in their responses than is possible in answering a series of closed questions.
Thus, the findings lead to new hypotheses about when and how this type of involvement makes a difference. These could be explored in further studies where different combinations of context and mechanism are tested in terms of their impact on the interviewee's experience of the interview and the quality of the data. For example, there may be different outcomes when quantitative surveys are conducted face‐to‐face rather than by telephone, or when qualitative interviews are conducted by telephone rather than in person. The nature of the topic may also influence the extent of the impact of the involvement. We again suggest that these questions could most usefully be addressed by adopting an approach that is more in line with realistic evaluation, with studies intentionally designed to explore these links between context, mechanism and outcome.
Discussion and conclusions
The reporting of the impact of public involvement in research is relatively limited.4, 5 Most of the studies to date have relied on observational evaluations. They describe the wide range of impacts that involvement has at different stages of research, on different stakeholders and in different contexts. The lessons from this work have contributed to identifying principles of good practice and addressing the question ‘How can we generally improve the quality of involvement processes?’
Importantly, these observational evaluations have also provided information about the kinds of contextual factors that appear to influence impact and the ways that a specific type of involvement might be adapted to maximize benefits. However, their explanatory power is limited because they simply describe what did or did not work, without exploring why and how these factors have influenced the outcome. Further research is needed that builds on this evidence to design studies that rigorously and intentionally explore the links between context, mechanism and outcome. Such approaches, which would be more in line with the principles of realistic evaluation, would further our understanding of ‘what works best, for whom and when?’
When an approach based on realistic evaluation is employed, the nature of the research question and the exact hypotheses being tested determine what method is most appropriate to capture the evidence of impact. As illustrated by the two research studies discussed previously,18, 19 an exploration of the impact of peer interviewers in a quantitative survey required quantitative methods and analysis, whereas an exploration of the impact of the same type of involvement in a qualitative study required secondary qualitative analysis of interview data. Once the research question, hypothesis and design of the study are defined, the most appropriate method becomes clear. The distinctive feature of an approach based on realistic evaluation is not what method is used (e.g. observation or randomized controlled trial) but how the study is designed to test the links between context, mechanism and outcome.
Like all social interventions, public involvement is a complex activity and many different factors influence its impact. Adopting an approach along the lines of realistic evaluation will help us to develop a more sophisticated understanding of the factors that increase or decrease the likelihood of positive outcomes.36 For example, Jagosh et al.37 have conducted a retrospective realist review of published accounts of public involvement in the implementation and evaluation of community‐based health programmes. This analysis revealed important links between context, mechanism and outcome, but the findings were limited by the fact that a realist approach had not been used to design the evaluations in the first instance or to report the evidence of impact. We hope that incorporating a realistic approach into future evaluations will provide a more in‐depth understanding of involvement and thus support the development of ever more strategic approaches, minimizing the risk of negative outcomes and maximizing the benefits for all involved.
Sources of funding
Three of the authors (SB, HH, MT) are employed by the University of Leeds and funded by the National Institute for Health Research (NIHR). The views expressed in this article are those of the authors and not necessarily those of INVOLVE or the NIHR.
Conflicts of interest
The authors reported no conflicts of interest.
References
- 1. INVOLVE. Briefing Notes for Researchers: Involving the Public in NHS, Public Health and Social Care Research, Briefing Notes 2 & 3. Eastleigh: INVOLVE, 2012.
- 2. Hewlett S, Wit M, Richards P et al. Patients and professionals as research partners: challenges, practicalities, and benefits. Arthritis & Rheumatism, 2006; 55: 676–680.
- 3. Smith E, Ross F, Donovan S et al. Service user involvement in nursing, midwifery and health visiting research: a review of evidence and practice. International Journal of Nursing Studies, 2008; 45: 298–315.
- 4. Staley K. Exploring Impact: Public Involvement in NHS, Public Health and Social Care Research. Eastleigh: INVOLVE, 2009.
- 5. Brett J, Staniszewska S, Mockford C. The PIRICOM Study: A Systematic Review of the Conceptualisation, Measurement, Impact and Outcomes of Patient and Public Involvement in Health and Social Care Research. London: United Kingdom Clinical Research Collaboration, 2010.
- 6. Department of Health. Research Governance Framework for Health and Social Care, 2nd edn. London: Department of Health, 2005.
- 7. Department of Health (Research and Development Directorate). Best Research for Best Health: A New National Health Research Strategy. London: Department of Health, 2006.
- 8. Department of Health. Equity and Excellence: Liberating the NHS. London: The Stationery Office Ltd, 2010.
- 9. INVOLVE. About INVOLVE. Available at: http://www.invo.org.uk/about-involve/, accessed 17 February 2012.
- 10. IRAS. IRAS Partners. Available at: http://www.myresearchproject.org.uk/, accessed 17 February 2012.
- 11. Tarpey M. Public Involvement in Research Applications to the National Research Ethics Service. Eastleigh: INVOLVE, 2011.
- 12. INVOLVE. Evidence library. Available at: http://www.invo.org.uk/resource-centre/evidence-library/, accessed 17 February 2012.
- 13. TwoCan Associates. An Evaluation of the Process and Impact of Patient and Public Involvement in the Advisory Groups of the UK Clinical Research Collaboration: Final Report. London: United Kingdom Clinical Research Collaboration, 2009.
- 14. TwoCan Associates. A Critical Assessment of the Development of Patient and Public Involvement in the UK Clinical Research Collaboration. London: United Kingdom Clinical Research Collaboration, 2009.
- 15. TwoCan Associates. Evaluation of the ‘User Involvement in Local Diabetes Care’ Project. London: Diabetes UK, 2011.
- 16. Pawson R, Tilley N. Realistic Evaluation. London: Sage Publications Ltd, 1997.
- 17. Guarino P, Elbourne D, Carpenter J, Peduzzi P. Consumer involvement in consent document development: a multicenter cluster randomized trial to assess study participants' understanding. Clinical Trials, 2006; 3: 19–30.
- 18. Gillard S, Borschmann R, Turner K, Goodrich‐Purnell N, Lovell K, Chambers M. ‘What difference does it make?’ Finding evidence of the impact of mental health service user researchers on research into the experiences of detained psychiatric patients. Health Expectations, 2010; 13: 185–194.
- 19. Hamilton S, Pinfold V, Rose D et al. The effect of disclosure of mental illness by interviewers on reports of discrimination experienced by service users: a randomised study. International Review of Psychiatry, 2011; 23: 47–54.
- 20. Barber R, Beresford P, Boote J, Cooper C, Faulkner A. Evaluating the impact of service user involvement on research: a prospective case study. International Journal of Consumer Studies, 2011; 35: 609–615.
- 21. Bengtsson‐Tops A, Svensson B. Mental health users' experiences of being interviewed by another service user in a research project: a qualitative study. Journal of Mental Health, 2010; 19: 234–242.
- 22. Bryant L, Beckett J. The Practicality and Acceptability of an Advocacy Service in the Emergency Department for People Attending Following Self‐harm. Leeds: University of Leeds, 2006.
- 23. Elliott E, Watson AJ, Harries U. Harnessing expertise: involving peer interviewers in qualitative research with hard‐to‐reach populations. Health Expectations, 2002; 5: 172–178.
- 24. Miller E, Cook A, Alexander H et al. Challenges and strategies in collaborative working with service user researchers: reflections from the academic researcher. Research Policy and Planning, 2006; 24: 197–208.
- 25. Coupland H, Maher L, Enriquez J et al. Clients or colleagues? Reflections on the process of participatory action research with young injecting drug users. International Journal of Drug Policy, 2005; 16: 191–198.
- 26. Faulkner A. Beyond Our Expectations: A Report of the Experiences of Involving Service Users in Forensic Mental Health Research. London: National Programme on Forensic Mental Health R&D, Department of Health, 2006.
- 27. Miller E, Morrison J, Cook A. Brief encounter: collaborative research between academic researchers and older researchers. Generations Review, 2006; 16: 39–41.
- 28. Blackburn H, Hanley B, Staley K. Turning the Pyramid Upside Down. Eastleigh: INVOLVE, 2010.
- 29. Tetley J, Haynes L, Hawthorne M et al. Older people and research partnerships. Quality in Ageing: Policy, Practice and Research, 2003; 4: 18–23.
- 30. Dewar BJ. Beyond tokenistic involvement of older people in research – a framework for future development and understanding. Journal of Clinical Nursing, 2005; 14 (Suppl 1): 48–53.
- 31. Donovan J, Mills N, Smith M et al. Quality improvement report: improving design and conduct of randomised trials by embedding them in qualitative research: ProtecT (prostate testing for cancer and treatment) study. Commentary: presenting unbiased information to patients can be difficult. British Medical Journal, 2002; 325: 766–770.
- 32. Hanley B, Truesdale A, King A, Elbourne D, Chalmers I. Involving consumers in designing, conducting, and interpreting randomised controlled trials: questionnaire survey. British Medical Journal, 2001; 322: 519–523.
- 33. Howe A, Delaney S, Romero J, Tinsley A, Vicary P. Public involvement in health research: a case study of one NHS project over 5 years. Primary Health Care Research & Development, 2010; 11: 17–28.
- 34. Langston AL, McCallum M, Campbell MK, Robertson C, Ralston SH. An integrated approach to consumer representation and involvement in a multicentre randomized controlled trial. Clinical Trials, 2005; 2: 80–87.
- 35. Lindenmeyer A, Hearnshaw H, Stuart J, Ormerod R, Aitchison G. Assessment of the benefits of user involvement in health research from the Warwick Diabetes Care Research User Group: a qualitative case study. Health Expectations, 2007; 10: 268–277.
- 36. Martin GP, Ward V, Hendy J et al. The challenges of evaluating large‐scale, multi‐partner programmes: the case of NIHR CLAHRCs. Evidence & Policy, 2011; 7: 489–509.
- 37. Jagosh J, Macaulay A, Pluye P et al. Uncovering the benefits of participatory research: implications of a realist review for health research and practice. The Milbank Quarterly, 2012; 90: 311–346.