British Journal of Pain. 2012 May; 6(2): 85–91. doi: 10.1177/2049463712449961

Evaluating the impact of pain education: how do we know we have made a difference?

Emma Briggs
PMCID: PMC4590105  PMID: 26516475

Summary points

1. Education is a core activity for most healthcare professionals working in pain management and an effective evaluation strategy should assess its impact.

2. Evaluation may have one or more purposes: accountability, development or knowledge generation. Other key principles include making evaluation integral to the education process, reflecting with learners on progress, self-evaluation by the pain educator and involving all the key stakeholders.

3. A wide variety of methods are available, but the choice will be influenced by the nature and amount of the pain education, number of learners, purpose of the evaluation and time and resources available.

4. Patient education can be evaluated through knowledge and attitude questionnaires, concordance with the treatment plan, satisfaction and pain- and disability-related measures.

5. Further research is needed to explore the specific strategies or combination of techniques that are effective for different groups, and build on the theoretical base underpinning effective pain education and evaluation for patients and professionals.

6. The importance of education for the public has also recently been recognised; this wider educational initiative should be fully evaluated to assess whether it is making a difference.

Keywords: pain education, evaluation, patient education, public education, undergraduates, postgraduates

Introduction

Amongst the pain community, the importance of pain education is widely accepted and recognised. It is a key part of an (often complex) treatment plan for patients, and educating the undergraduate and postgraduate healthcare workforce is an essential strategy for promoting effective practice. The importance of public education has also been highlighted recently, with campaigns from The Patients Association1 and priorities identified at the Pain Summit 2011.2 This paper explores the key principles of evaluation in pain education and highlights issues in three arenas: the education of patients, professionals and the public. Practical suggestions are provided for enhancing the evaluation process in pain education. There are many exciting and innovative educational initiatives locally, nationally and internationally, but there is a need to assess their impact. We need to evaluate whether pain education is making a difference and, if so, by how much.

In many ways, evaluation forms part of everyday practice as we examine the effectiveness of the care provided on an individual basis or explore particular groups through feedback, audit and research. The American Evaluation Association3 defines evaluation more formally as ‘assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness’. This statement reflects the many levels at which evaluation can occur and comes from one of several national and international bodies dedicated to evaluation. Evaluation is a specific discipline with professional organisations, dedicated journals and a theoretical base to underpin its practice. The focus here is much more specific: exploring the key principles of educational evaluation in order to ultimately enhance pain practice.

Purpose and key principles of evaluation

Evaluating the impact of pain education in any arena can be enhanced by using some key principles. Throughout this section, the term learner is used to describe the person benefiting from education, recognising that this could be a patient, student, qualified professional or the public. Similarly, educator is used when referring to those who design and/or facilitate the learning process.

Reflecting on the body of research and development on evaluation, Chelimsky4 identified three purposes of evaluation: accountability (measuring results), development (strengthening the intervention and empowering stakeholders) and knowledge generation (obtaining a deeper understanding of the process, similar to research) (Table 1). Evaluating pain education may have more than one purpose, but the aim needs to be clearly identified along with the various stakeholders involved, such as learners, educators and commissioners. The purpose should also be clear to these groups, and learners in particular need to know how their feedback and input will be used and whether it will benefit them, others, or both. It may be helpful to distinguish between formative evaluation, used mid-way to improve the current learners’ experience, and summative evaluation, which informs provision for future learners.5

Table 1. Three main purposes of evaluating pain education (based on Chelimsky’s framework)4.

  • Accountability: measuring results or efficiency for stakeholders. The focus is the outcome and whether change is due to the educational intervention.
    Common methods: typically quantitative, comparative evaluative methods, e.g. pre and post evaluation, quasi-experimental techniques.
    Pain education examples: assessment of knowledge and attitudes towards pain before and after teaching sessions or a pain management programme (PMP); survey of public knowledge following a national campaign.

  • Development: providing information to empower and strengthen educational practice, a community or an organisation. The focus is the process and the outcome.
    Common methods: information collected retrospectively and prospectively; case studies and stakeholder evaluations; can include performance results.
    Pain education examples: mid-session or course evaluation to inform the rest of the teaching or PMP; user feedback on professional education.

  • Knowledge: obtaining a deeper understanding of learning or change. The focus is more on the process and less on the outcome.
    Common methods: methods that explore issues to generate understanding or theory, e.g. interviews, focus groups.
    Pain education examples: examining patients’ experience of an educational programme and the factors that led to change or lack of change.

The evaluation of pain education should be planned before the learning takes place in order to select the optimum techniques and timing. The RUFDATA framework6 was developed to support those starting out and provides a useful aide memoire when planning any evaluation; an illustrative sketch of recording the answers appears after the list below.

  • What are our Reasons and Purposes for evaluation (e.g. planning, developing, accountability)?

  • What will be our Uses of our evaluation (e.g. staff development, sharing good practice)?

  • What will be the Foci for our evaluations (range of activities, connecting priority areas to the original aims)?

  • What will be our Data and Evidence for our evaluations (e.g. numerical, observational, qualitative)?

  • Who will be the Audience for our evaluations (e.g. commissioners, educators, employers)?

  • What will be the Timing for our evaluations (should coincide with decision-making cycle)?

  • Who should be the Agency conducting the evaluations (e.g. you, external evaluators)?
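
As an illustration only, the answers to these RUFDATA questions can be captured as a simple structured plan and revisited at each decision point. The Python sketch below is hypothetical; the field names and example values are assumptions drawn from the prompts above, not part of the framework itself.

    # Hypothetical sketch: recording RUFDATA answers as a structured evaluation plan.
    # Field names and example values are illustrative only.
    rufdata_plan = {
        "reasons_and_purposes": ["accountability", "development"],
        "uses": ["staff development", "sharing good practice"],
        "foci": ["knowledge and attitudes towards pain", "learner satisfaction"],
        "data_and_evidence": ["pre and post knowledge tests", "end-of-course questionnaire"],
        "audience": ["commissioners", "educators"],
        "timing": "end of module, before the next planning cycle",
        "agency": "course team (internal evaluation)",
    }

    for field, answer in rufdata_plan.items():
        print(f"{field}: {answer}")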

Many evaluations, such as feedback forms, focus on the outcome and on learner satisfaction, but this gives very limited information and does not explore how effective the pain teaching and learning have been. It also implies a passive experience rather than the learner taking an active part in the process; evaluations should encourage learners to reflect on their learning, evaluating both the process and the outcome. The intended goals are important to include (referred to as convergent evaluation)5, and participants may be asked to describe the most useful element or the extent to which the planned learning outcomes or their own goals were met. Unintended outcomes (divergent evaluation)5 are also valuable and can be explored by asking people to identify additional knowledge or skills gained, what they feel most confident about or the aspect they found most challenging.

The evaluation technique chosen should be proportionate to the pain education provided in terms of resources and time. For example, a pain education programme warrants a detailed evaluation at particular time points, whereas a one-off teaching session for patients may simply involve written or verbal feedback. Ensuring that the evaluation occurs as close as possible to the learning experience, and that it is accessible to participants in terms of readability (for written evaluations) and access (e.g. electronic evaluations), will maximise the return rate and avoid frustration.

The learner’s perspective is central to the evaluation process, and the emphasis should be on building on strengths and continuously improving pain education rather than simply proving that a teaching session, programme or learning technique works.5,7 In exploring and improving pain education, different sources of evidence can be sought and different stakeholders included. The pain educator is the next key stakeholder; self-evaluation and critical reflection are important skills when considering the quality of the learning. Peer feedback from a colleague can also be helpful, both as part of the evaluation process for individual educators and to encourage learning from each other. It is also a necessary element of university teaching through internal processes and the external examiner system.

Other stakeholders may include colleagues, managers, universities, funding, commissioning or quality assurance bodies. They may require much wider evidence as part of the evaluation including attendance, cost-effectiveness, student assignment results and completion rates, as well as impact data such as patient satisfaction and improvement measures.

Evaluation methods

The choice of method will be influenced by answers to questions in the planning phase and ultimately determined by the nature of the pain education, the number of learners, the purpose of the evaluation and the time and resources available. With a large student group, quantitative methods such as questionnaires or instant audience response systems may be favoured to capture data from the whole group for accountability purposes. Equally, the educator could interview a small number of students in order to perform a development evaluation and enhance their future learning experiences around pain. The sheer number of evaluation methods available reflects the fact that there is no single ‘best fit’8 and that a range of tools can be used to gather evidence. Table 2 highlights some core evaluation methods, their uses and resource implications. The Evaluation Cookbook,8 on which Table 2 is based, provides excellent, practical advice on designing and using these strategies, along with further options.

Table 2. Common evaluation methods (based on ref. 8).

  • Checklists: multiple choice questions on pre-determined standards. Resources: low–moderate preparation, learner and analysis time; can be delivered online.
  • Confidence logs: learner’s confidence rating at different time points. Resources: low–moderate preparation, learner and analysis time.
  • Cost-effectiveness: educator-only analysis of cost vs. outputs and of alternative learning approaches. Resources: moderate–high preparation and administration time.
  • Focus groups: group interview (6–12 people) used for development evaluation. Resources: low preparation time; moderate learner and analysis time.
  • Interviews: individual interviews used for development evaluation. Resources: moderate–high preparation, learner and analysis time.
  • Nominal group techniques: individual reflection followed by small-group reflection and voting on the top issues. Resources: low preparation time; moderate learner time.
  • Pre and post testing: assessing the impact before and after learning, usually on knowledge and attitudes. Resources: moderate–high preparation, learner and analysis time.
  • Questionnaires: online or paper survey including a range of questions. Resources: moderate preparation and analysis time; low learner time.
  • System log: tracking use of online resources. Resources: low–moderate preparation and analysis time.
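
As a minimal sketch of the ‘pre and post testing’ method in Table 2, the Python snippet below summarises paired knowledge scores from the same learners before and after a teaching session; the scores, scale and group size are invented for illustration.

    # Minimal sketch: summarising paired pre/post knowledge scores (invented data).
    from math import sqrt
    from statistics import mean, stdev

    pre = [10, 12, 9, 14, 11, 8, 13, 10]     # knowledge scores before teaching (hypothetical)
    post = [14, 15, 11, 16, 14, 12, 15, 13]  # scores for the same learners afterwards

    diffs = [after - before for before, after in zip(pre, post)]
    mean_change = mean(diffs)
    # Paired t statistic: mean change divided by its standard error.
    t_stat = mean_change / (stdev(diffs) / sqrt(len(diffs)))

    print(f"Mean change in knowledge score: {mean_change:.2f}")
    print(f"Paired t statistic: {t_stat:.2f} (df = {len(diffs) - 1})")

The same approach extends to attitude scales or confidence logs; a formal analysis would also report a confidence interval or p-value for the change.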

Pain education can be delivered to a variety of learners in different settings using different techniques, but there are core principles for an effective evaluation:

  • clearly identifying the purpose, effective planning, encouraging learners to reflect on their role in learning, self-evaluation by the educator, ensuring a continuous evaluation process and involving key stakeholders.

A number of strategies can be used, but there are some unique issues to consider in the evaluation of patient, professional and public education.

Evaluating patient education

Patient education is usually part of a treatment plan: an interprofessional, complex intervention with several interacting components.9 This presents significant challenges in evaluating and researching the educational element and its effectiveness. Outcome measures have traditionally focused on changes in knowledge, attitudes, concordance or medication use, and on impact on pain scores. There is a wide range of educational interventions available, including one-to-one counselling, group sessions, audio guides, information leaflets, books-on-prescription schemes, DVDs/podcasts and innovations such as online support groups and telecare. In reality, strategies are chosen based on the resources available and are often combined to address different learning needs and provide follow-up materials.

The evidence around patient education is slowly growing, and research studies have evaluated individual educational interventions or specific combinations of techniques. Cancer pain education is an example of an area with a fairly well-developed body of knowledge and systematic reviews on the effectiveness of education.10–12 The most recent systematic review and meta-analysis11 pooled results from 15 trials (n = 3501), suggesting improved knowledge and attitude scores (0.5 point on a 0–5 scale) and reduced average and worst pain scores (average: 1.1; worst: 0.78 on a 0–10 scale). Although these appear to be small changes, the authors highlight that education may be more effective than a co-analgesic such as paracetamol or gabapentin (based on some previous drug trials). Mixed results were found for measures of self-efficacy, and there was no effect on medication adherence or on the interference of pain with daily activities.
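
The pooled figures quoted above come from the cited review; as a hedged illustration of how such pooling is commonly performed (a fixed-effect, inverse-variance average of per-trial mean differences, which may not be the exact method used in that review), the sketch below uses invented trial data.

    # Illustrative fixed-effect inverse-variance pooling of mean differences.
    # Trial figures are invented and do not reproduce the cited meta-analysis.
    from math import sqrt

    # Each tuple: (mean difference in pain score, standard error of that difference)
    trials = [(-1.2, 0.4), (-0.8, 0.3), (-1.5, 0.6)]

    weights = [1 / se ** 2 for _, se in trials]
    pooled_md = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))

    print(f"Pooled mean difference: {pooled_md:.2f} (SE {pooled_se:.2f})")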

Systematic reviews can helpfully summarise and assess the impact of pain education, but the diverse results illustrate the complexities of researching the experience of pain. There are many methodological challenges, and current research and knowledge-synthesis reviews place an emphasis on the randomised controlled trial. This approach alone may not be enough to evaluate the full impact of pain education as part of a complex intervention. Research involving other designs lacks the homogeneity required to make reasonable comparisons between educational interventions and patient groups. There is also limited insight into the social and cultural context in which pain and patient education occur. Future research needs to focus on which groups will respond best to which interventions; this will help clinicians make informed decisions about the strategies to use with pain sufferers.

In the meantime, knowledge and attitude questionnaires, concordance with the treatment plan, satisfaction, and pain- and disability-related measures are practical ways of evaluating pain education and treatment. The choice of strategy will depend on the nature of the pain and the resources, including time, available. Our challenge for the future will be to evaluate patient education delivered through the increasing and creative use of technology, such as Internet-based support and mobile phone applications (apps); the availability and quality of these for patients may vary considerably.13

Evaluating pain education for healthcare professionals

Conferences, workshops, seminars, study days and formal university courses on pain all provide opportunities for participants to give feedback to organisers, but they may not always evaluate the impact of the learning. These educational events are real opportunities to examine whether attitudes, knowledge, skills or, in the longer term, practice have changed as a result. A number of models can help guide the evaluation of education for healthcare professionals; Kirkpatrick’s14 and Moore et al.’s15 are two frequently cited models in healthcare education and are outlined here.

Donald Kirkpatrick developed the Kirkpatrick Evaluation Model in the 1950s and subsequently published several books on the topic. The model was aimed at helping businesses evaluate their training programmes, but it has found some resonance with healthcare education. There are four levels:

  • Level 1: Reaction – the degree to which participants react favourably to the education.

  • Level 2: Learning – the degree to which participants acquire the intended knowledge, skills, attitudes, confidence and commitment based on their participation.

  • Level 3: Behaviour – the degree to which participants apply what they learned during the education when they are back on the job.

  • Level 4: Results – the degree to which specific outcomes occur as a result of the education event and subsequent reinforcement.

Moore et al.’s framework is more detailed and was proposed as a guide for the design and evaluation of continuing medical education. It consists of seven levels: at the most basic levels are participation rates and satisfaction, rising through knowledge and skill acquisition and competence in an educational setting to performance in practice and, ultimately, impact on patient and community outcomes. Table 3 gives examples of the sources of data that can be collected at each level. Most pain educators are passionate about influencing practice and improving pain management by facilitating learning in others. However, evaluating the impact of pain education at the higher levels of performance and patient and community outcomes is difficult to achieve because of the practicalities, the resource implications and the complex interplay of variables.

Table 3. Moore et al.’s framework for planning and assessing educational activities.15

  • Level 1: Participation – number of learners. Sources of data: attendance records.
  • Level 2: Satisfaction – degree to which the expectations of participants about the setting and delivery were met. Sources of data: questionnaire at the end.
  • Level 3A: Learning (declarative) – extent to which participants state what the education intended them to know. Sources of data: pre and post knowledge tests; self-report of knowledge gain.
  • Level 3B: Learning (procedural) – extent to which participants state how to do what the education intended them to know how to do. Sources of data: pre and post knowledge tests; self-report of knowledge gain.
  • Level 4: Competence – degree to which participants show, in an educational setting, how to do what the educational activity intended them to do. Sources of data: observation in an educational setting; self-report of competence and intention to change.
  • Level 5: Performance – degree to which participants do in their practice what the educational activity intended them to do. Sources of data: observation of performance in the patient care setting (patient charts, administrative databases); self-report of performance.
  • Level 6: Patient health – degree to which the health status of patients improves due to changes in the practice behaviour of participants. Sources of data: health status measures recorded in patient charts or administrative databases; patient self-report of health status.
  • Level 7: Community health – degree to which the health status of a community of patients changes due to changes in the practice behaviours of participants. Sources of data: epidemiological data and reports; community self-report.

This challenge is reflected in the current research literature. A recent review of interprofessional pain education studies16 compared evaluation outcome measures with an adapted version of Kirkpatrick’s model. The results revealed that 89.3% (n = 441) of papers explored levels 1 and 2 (satisfaction and impact on knowledge and attitudes), and none of the included studies had evaluated the benefits to patients. An examination of the available pain education literature reveals a similar trend, with papers focusing on increases in knowledge and shifts in attitude rather than on patient outcomes.

Further research is needed to investigate the effectiveness of pain education for healthcare professionals, to identify the optimal teaching and learning methods, to share good practice and increase the number of publications, and to devise creative strategies that assess the impact of education beyond the classroom and take account of those experiencing pain.

Evaluating pain education for the public

The importance of public education around pain is recognised by many patient and professional organisations. The Patients Association1 has an established campaign around pain, and public education was identified as a priority at the first UK Pain Summit in 2011.2 This latter campaign aims to increase the knowledge and skills of individuals and communities in order to reduce the impact of chronic pain. Three specific recommendations from the education and public health working groups are:

  • The media needs to create positive messages about coping and positive role models.

  • We need to understand the communities that we are trying to reach and use appropriate platforms.

  • Public information – an information campaign to focus on the commonalities of chronic pain and other long-term conditions.

A 3–5-year programme will be developed, including strategies based on these recommendations. Evaluating the impact of these campaigns is as important as the education itself and should be built into the programme of work. National pain education campaigns need a more complex evaluation that may involve specific outcome measures, surveys, analysis of media coverage or of resource use (e.g. website activity logs) and a cost–benefit analysis. After designing and implementing such an exciting campaign, it is important to demonstrate the impact it is having.
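
As one small, hypothetical example of the ‘resource use’ element mentioned above, website activity logs could be summarised by counting visits per month; the log file name and its format (each line starting with an ISO date) are assumptions for the sketch below.

    # Hypothetical sketch: counting monthly visits to a campaign website from a
    # simple access log in which each line starts with a date, e.g. "2012-03-14 /resources".
    from collections import Counter

    visits_per_month = Counter()
    with open("campaign_access.log") as log:   # assumed file name and format
        for line in log:
            if not line.strip():
                continue
            date = line.split()[0]             # e.g. "2012-03-14"
            visits_per_month[date[:7]] += 1    # bucket by "YYYY-MM"

    for month, count in sorted(visits_per_month.items()):
        print(f"{month}: {count} visits")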

Summary and conclusion

Pain education is a core activity of pain management practice and evaluation should be an integral component of any educational strategy. The core principles for an effective evaluation include identifying the purpose, planning, encouraging learners to reflect on their role in learning, self-evaluation by the educator, ensuring a continuous evaluation process and involving key stakeholders. Educators can choose from a wide range of evaluation techniques depending on the resources available and focus of the evaluation, whether it is accountability, development or knowledge generation.

Different areas of pain management practice present different challenges. Patient education can be evaluated through knowledge and attitude questionnaires, concordance with the treatment plan, satisfaction and pain- and disability-related measures. Further research is needed to explore the context of patient education and the specific strategies or combination of techniques that are effective for different groups. The theoretical base underpinning effective pain education and evaluation for professionals also needs further development. Models can be useful to guide the assessment of impact on learners but we need to move beyond satisfaction and aim for the higher levels that will investigate whether online or classroom learning is having a positive effect on behaviour in practice and patient outcomes. Finally, public campaigns around pain in the UK will offer an exciting opportunity to highlight the importance of pain management and engage relevant communities; this wider educational initiative should be fully evaluated to ensure that the programme meets the original objectives.

Making a difference is what motivates most healthcare professionals in pain management. It is only through an appropriate evaluation strategy that we can be confident that pain education has really made a difference to patients, professionals, organisations and the public.

Footnotes

Funding: This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Multiple choice questions

  1. What are the three main purposes of evaluation (based on Chelimsky4)?
    1. Knowledge, effectiveness, accountability
    2. Knowledge, development, sustainability
    3. Accountability, development, knowledge
  2. Which is the correct mnemonic that can be helpful in planning evaluation activities?
    1. RUFDATA
    2. RUDATAS
    3. REEVAL
  3. Evaluation should focus on:
    1. Outcome in most cases
    2. Process in most cases and outcome when necessary
    3. Process and outcome in all cases
  4. Pre and post testing of knowledge, attitudes and skills following pain education is a resource-intensive method.
    1. True
    2. False
  5. Kirkpatrick’s model of evaluation has four levels. Choose the correct levels.
    1. Attendance, learning, reaction, results
    2. Reaction, learning, behaviour, results
    3. Reaction, satisfaction, learning, behaviour

Answers

1: c; 2: a; 3: c; 4: a; 5: b.

References

  • 1. The Patients Association. Public attitudes to pain, www.patients-association.com/Default.aspx?tabid=93 (2010, accessed 10 February 2012).
  • 2. Chronic Pain Policy Coalition, British Pain Society, Faculty of Pain Medicine, Royal College of General Practitioners. Pain Summit 2011. Policy Connect, www.painsummit.org.uk/documents (2011, accessed 10 February 2012).
  • 3. American Evaluation Association. Mission statement, www.eval.org/aboutus/organization/aboutus.asp (accessed 10 February 2012).
  • 4. Chelimsky E. Thoughts for a new evaluation society. Evaluation 1997; 3(1): 97–109.
  • 5. Moore I. A guide to practice: evaluating your teaching. Centre for Promoting Learner Autonomy, Sheffield Hallam University, http://extra.shu.ac.uk/cetl/cpla/resources.html (2009, accessed 10 February 2012).
  • 6. Saunders M. Beginning an evaluation with RUFDATA: theorising a practical approach to evaluation planning. Evaluation 2000; 6(1): 7–21.
  • 7. Ramsden P. Learning to teach in higher education. London: Routledge, 2003.
  • 8. Oliver M, Conole G. Choosing a methodology. In: Harvey J. Evaluation cookbook. Learning Technology Dissemination Initiative, www.icbl.hw.ac.uk/ltdi/cookbook/ (1999, accessed 10 February 2012).
  • 9. Medical Research Council. Developing and evaluating complex interventions: new guidance, www.mrc.ac.uk/complexinterventionsguidance (2008, accessed 10 February 2012).
  • 10. Allard P, Maunsell E, Labbé J, Dorval M. Educational interventions to improve cancer pain control: a systematic review. J Palliat Med 2001; 4: 191–203.
  • 11. Bennett MI, Bagnall AM, Closs SJ. How effective are patient-based educational interventions in the management of cancer pain? Systematic review and meta-analysis. Pain 2009; 143(3): 192–199.
  • 12. Bennett MI, Bagnall AM, Raine G, et al. Educational interventions by pharmacists to patients with chronic pain: systematic review and meta-analysis. Clin J Pain 2011; 27(7): 623–630.
  • 13. Rayen A. App to the future. Pain News 2011 (Winter): 18–19.
  • 14. Kirkpatrick DL. Evaluating training programmes – the four levels. 3rd ed. London: Berrett-Koehler Publishers, 2006.
  • 15. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof 2009; 29(1): 1–15.
  • 16. Gillan C, Lovrics E, Halpern E, Wiljer D, Harnett N. The evaluation of learner outcomes in interprofessional continuing education: a literature review and an analysis of survey instruments. Med Teacher 2011; 33: e461–e470.
