Bulletin of the World Health Organization
2013 Oct 11;92(1):20–28. doi: 10.2471/BLT.12.116806

Evidence briefs and deliberative dialogues: perceptions and intentions to act on what was learnt


Kaelan A Moat a, John N Lavis b, Sarah J Clancy c, Fadi El-Jardali d, Tomas Pantoja e; for the Knowledge Translation Platform Evaluation study team
PMCID: PMC3865546  PMID: 24391297

Abstract

Objective

To develop and implement a method for the evaluation of “evidence briefs” and “deliberative dialogues” that could be applied to comparative studies of similar strategies used in the support of evidence-informed policy-making.

Methods

Participants who read evidence briefs and attended deliberative dialogues in Burkina Faso, Cameroon, Ethiopia, Nigeria, Uganda and Zambia were surveyed before the start of the dialogues – to collect their views on pre-circulated evidence briefs – and at the end of the dialogues – to collect their views on the dialogues. The respondents’ assessments of the briefs and dialogues and the respondents’ intentions to act on what they had learned were then investigated in descriptive statistical analyses and regression models.

Findings

Of the 530 individuals who read the evidence briefs and attended dialogues, 304 (57%) and 303 (57%) completed questionnaires about the briefs and dialogues, respectively. Respondents viewed the evidence briefs and deliberative dialogues – as well as each of their key features – very favourably, regardless of the country, issue or group involved. Overall, “not concluding with recommendations” and “not aiming for a consensus” were identified as the least helpful features of the briefs and dialogues, respectively. Respondents generally reported strong intentions to act on what they had learnt.

Conclusion

Although some aspects of their design may need to be improved or, at least, explained and justified to policy-makers and stakeholders, evidence briefs and deliberative dialogues appear to be highly regarded and to lead to intentions to act.

Introduction

Over the last decade there has been growing interest in identifying methods to ensure that policy decisions that are aimed at strengthening health systems in low- and middle-income countries are guided by the best available research evidence.1–4 As a result, several “knowledge translation” platforms, such as the Evidence-informed Policy Networks supported by the World Health Organization, have been established in countries across Africa, the Americas, Asia and the eastern Mediterranean.5–8 Currently, nearly all of these platforms are focusing their efforts – at least in part – on two distinct but interrelated strategies: the preparation of “evidence briefs for policy”8 and the convening of “deliberative dialogues” that use such briefs as their primary inputs.5

Evidence briefs are a relatively new form of research synthesis. Each starts with the identification of a priority policy issue within a particular health system. The best available global research evidence – such as systematic reviews – and relevant local data and studies are then synthesized to clarify the problem or problems associated with the issue, describe what is known about the options available for addressing the problem or problems, and identify the key considerations in the implementation of each of these options. Research evidence generally needs to be made available in a timely way if it is to stand a good chance of being used as an input in policy-making.9,10 Evidence briefs can generally be prepared in a few weeks or months and – unlike most summaries of single reviews or studies – can place the relevant data in the context of what they mean for a particular health system.

Evidence briefs are used as primary inputs for the deliberative dialogues that facilitate interactions between researchers, policy-makers and stakeholders – the latter defined in this study as administrators in health districts, institutions and nongovernmental organizations, members of professional associations and leaders from civil society. Such interactions are known to increase the likelihood that research evidence will be used in policy-making.9,10 Deliberative dialogues also provide an opportunity to consider the best available global and local research evidence alongside the tacit knowledge of the key health-system “actors” who are involved in the issue being considered or likely to be affected by a decision related to it. At the same time, allowance can be made for other country- or region-specific influences on the policy process, such as institutional constraints, pressure from interest groups and economic crises.

Taken together, briefs and dialogues address the majority of the barriers that hinder the use of research evidence – such as the common perception that the research evidence that is available is not particularly valuable, relevant or easy to use – while building on the factors found to increase the likelihood that such evidence will be used to guide policy-making.5,9–13 The results of formative evaluations of both strategies in general – as well as some of their common features – have been encouraging.14 However, there have been no systematic attempts to determine how design and content affect the usefulness of evidence briefs and deliberative dialogues in supporting the use of research evidence by policy-makers and stakeholders.15–18 There have also been few attempts to develop a method for evaluating such briefs and dialogues that can be applied across a range of countries, health system issues and groups and that includes an appropriate and tractable outcome measure.

To address this gap, we developed and administered two questionnaire-based surveys – one for evidence briefs and one for deliberative dialogues – across a range of issues and low- and middle-income countries. The main aim was to determine whether health system policy-makers, stakeholders and researchers in low- and middle-income countries viewed such knowledge translation strategies as helpful. Drawing on the “theory of planned behaviour”, we also sought to determine the respondents’ intentions to act on the research evidence contained in the evidence briefs and discussed during the deliberative dialogues and their assessment of the factors that might influence whether and how they would act on that evidence.19,20 The theory of planned behaviour was originally developed in the context of individual behaviour. However, this theory has been used successfully in the context of professional behaviour21,22 and has already shown some promise in the study of the behaviour of those involved in policy-making.23

Methods

Study participants

We conducted surveys as part of a 5-year project – the Knowledge Translation Platform Evaluation study – that is evaluating the activities, outputs and outcomes of knowledge translation platforms in 44 low- and middle-income countries.5 The present analysis draws on all data collected between the start of the project in 2009 and the initiation of this analysis: surveys of policy-makers, stakeholders and researchers who were invited to attend deliberative dialogues in Burkina Faso, Cameroon, Ethiopia, Nigeria, Uganda and Zambia after being sent evidence briefs that had been prepared – by local knowledge translation platforms – as inputs for the dialogues.24 In each study country in which an evidence brief was prepared, potential dialogue participants were identified – via a “stakeholder-mapping” exercise – by the team responsible for the local knowledge translation platform. The aim of this exercise was to identify all those policy-makers, stakeholders and researchers who were likely to be involved in or affected by decisions made during the policy process surrounding the issue on which the evidence brief was focused. Samples of the policy-makers, stakeholders and researchers identified in this manner were then sent the relevant evidence brief and invited to the corresponding dialogue.

Questionnaire development and administration

Two types of questionnaires were used to collect information from policy-makers, stakeholders and researchers: an “evidence brief” questionnaire and a “dialogue” questionnaire. Each type of questionnaire was divided into three or four sections. The first section investigated how helpful the respondent found each key feature of the brief or dialogue and the second section investigated how well the respondent felt that the brief or dialogue achieved its intended purpose. The dialogue questionnaire included a third section that contained 15 items based on “theory of planned behaviour” constructs.19 Questions about the respondent’s professional experiences formed the final section of both types of questionnaire.

The design of each questionnaire was based on the results of a pilot study, a review of the relevant literature and feedback from a three-day workshop attended by members of the teams running knowledge translation platforms in eastern Africa, Kyrgyzstan and Viet Nam. The evidence brief questionnaire was also refined using feedback from a workshop that brought together representatives of all of the knowledge translation platforms in Africa.24 In addition, the portion of the same questionnaire that related to the theory of planned behaviour was subjected to a reliability assessment.25 Both types of questionnaire were translated into French for use in countries in which English was not widely spoken. Details of the survey instruments and their development can be accessed online.26

All dialogue invitees from the six countries included in this study who were identified during the stakeholder mapping exercise were sent a package containing a letter of invitation to participate in the dialogue, a copy of the evidence brief, information about the study, a copy of the evidence brief questionnaire and a pre-stamped envelope addressed to the country team running the local knowledge translation platform.5 Participants were asked to return the completed evidence brief questionnaire in the pre-stamped envelope before arriving at the dialogue session. Invitees who did not do this but who presented at the registration desk to participate in a dialogue were asked to complete an evidence brief questionnaire before the dialogue had commenced. Each dialogue participant was handed a copy of the dialogue questionnaire at the end of the dialogue and asked to complete and return it immediately – before his or her departure. Completed questionnaires were collected by country teams and sent to the Knowledge Translation Platform Evaluation study team at McMaster University (Hamilton, Canada). All of the data from the questionnaires were then transferred into an Excel (Microsoft, Redmond, United States of America) database so that they could be compiled, compared and analysed.

Analysis

Two investigators independently coded the key features of each evidence brief and dialogue, which are listed in Table 1 and Table 2, and reconciled their coding. Although this coding was largely based on reviews of electronic copies of the briefs, dialogue summaries and reports to funders that described the dialogue process, it was finalized for each knowledge translation platform in discussions with the core members of the country team responsible for the platform. We used Excel to calculate detailed descriptive statistics for the respondents’ assessments of the evidence briefs in general, the deliberative dialogues in general and each of the key features of the briefs and dialogues that we investigated. The assessments of the various types of respondents were compared. We conducted ordinary least-squares regressions – in version 19 of the SPSS software package (SPSS Inc., Chicago, USA) – to explore associations between the respondents’ professional characteristics and their overall assessments of the briefs and dialogues as well as their assessments of how helpful they found each key feature.
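The regression setup described above can be sketched in a few lines. The sketch below uses Python and NumPy in place of SPSS; it dummy-codes the professional-role categories with “researcher” as the reference category (as in Table 5) and enters years in the current position as a continuous covariate. All data and variable names are invented for illustration.

```python
import numpy as np

# Toy data, invented for illustration: overall assessment scores,
# self-reported role categories and years in the current position.
roles  = ["policy-maker", "stakeholder", "researcher", "other",
          "policy-maker", "researcher", "stakeholder", "policy-maker"]
years  = [3, 10, 5, 2, 8, 12, 1, 6]
scores = [6, 7, 5, 6, 7, 6, 5, 7]

# Dummy-code the role categories, leaving "researcher" out as the
# reference category, and add an intercept column.
levels = ["policy-maker", "stakeholder", "other"]
X = np.column_stack(
    [np.ones(len(roles))]                                       # intercept
    + [[1.0 if r == lv else 0.0 for r in roles] for lv in levels]
    + [years]                                                   # continuous covariate
)

# Ordinary least-squares fit; beta[1:4] are the average score
# differences between each role and the researchers.
beta, *_ = np.linalg.lstsq(X, np.array(scores, dtype=float), rcond=None)
print(dict(zip(["intercept"] + levels + ["years"], np.round(beta, 3))))
```

With the real questionnaire data, the fitted coefficients would play the part of the β values reported in Table 5.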

Table 1. Respondents’ views of evidence briefs, by professional role reported,^a in a survey conducted in six African countries in 2009–2013.

Scores are presented as mean (SD).

| Focus of assessment | All roles (n = 304) | Policy-maker (n = 149) | Stakeholder (n = 72) | Researcher (n = 24) | Other (n = 14) | No role^b (n = 45) |
| --- | --- | --- | --- | --- | --- | --- |
| Evidence brief as a whole^c | 6.3 (0.8) | 6.3 (0.7) | 6.2 (1.0) | 6.2 (0.8) | 6.5 (0.5) | 5.9 (1.0) |
| Key features^d | | | | | | |
| Described the context for the issue being addressed | 6.4 (1.1) | 6.5 (0.9) | 6.4 (1.3) | 6.5 (1.0) | 6.2 (1.4) | 6.4 (1.0) |
| Described different features of the problem, including – where possible – how it affects particular groups | 6.3 (1.1) | 6.4 (0.9) | 6.2 (1.2) | 6.3 (1.1) | 5.8 (1.2) | 6.0 (1.1) |
| Described options for addressing the problem | 6.2 (1.0) | 6.3 (0.9) | 6.1 (1.1) | 6.1 (0.9) | 5.9 (1.4) | 6.0 (1.1) |
| Described what is known about the options – based on research evidence – and gaps in what is known | 6.1 (1.0) | 6.2 (0.9) | 6.1 (1.2) | 6.0 (1.1) | 5.8 (0.9) | 6.0 (0.9) |
| Described key implementation considerations | 6.2 (1.0) | 6.3 (1.0) | 6.1 (1.1) | 6.4 (0.9) | 5.9 (1.3) | 6.2 (0.8) |
| Employed systematic and transparent methods to identify, select and assess the research evidence | 6.1 (1.0) | 6.0 (2.9) | 6.1 (2.4) | 6.3 (2.2) | 6.1 (2.2) | 6.2 (2.4) |
| Took quality considerations into account^e | 6.0 (1.1) | 6.0 (3.1) | 5.9 (2.9) | 6.3 (2.8) | 5.8 (2.2) | 6.2 (1.8) |
| Took local applicability into account^e | 6.2 (1.0) | 6.2 (1.0) | 6.2 (1.0) | 6.4 (0.8) | 5.8 (1.6) | 6.3 (0.8) |
| Took equity considerations into account^e | 6.2 (1.1) | 6.1 (3.0) | 6.1 (2.5) | 6.5 (1.5) | 5.5 (2.7) | 6.5 (0.6) |
| Did not conclude with particular recommendations | 5.5 (1.6) | 5.3 (2.7) | 5.8 (2.1) | 5.9 (1.8) | 4.6 (2.2) | 5.6 (1.1) |
| Employed a “graded entry” format^f | 6.3 (1.1) | 6.3 (1.1) | 6.2 (1.0) | 6.6 (0.7) | 6.0 (1.5) | 6.4 (0.7) |
| Included a reference list | 6.4 (1.2) | 6.5 (1.0) | 6.3 (1.2) | 6.4 (1.4) | 6.1 (1.7) | 6.1 (1.7) |
| Was subjected to a review by at least one policy-maker, one stakeholder and one researcher | 6.3 (1.0) | 6.4 (3.3) | 6.1 (3.2) | 6.6 (3.4) | 6.4 (2.7) | 6.4 (2.9) |

SD, standard deviation.

^a Each respondent’s role was categorized as “policy-maker”, “stakeholder”, “researcher” or “other”. Respondents were coded as policy-makers if they chose “policy-maker” for at least one current role and as stakeholders if they reported “stakeholder” but not “policy-maker” as a current role. Those who identified themselves as “researchers” but not “policy-makers” or “stakeholders” were coded as researchers. All other respondents who reported a role were considered to have “other” roles.

^b Respondent failed to indicate a professional role.

^c Scored for achievement of aim on a Likert scale that ranged from 1 (complete failure) to 7 (complete success).

^d Scored for helpfulness on a Likert scale that ranged from 1 (very unhelpful) to 7 (very helpful).

^e When discussing the research evidence.

^f Such as a list of key messages as well as a full report.

Table 2. Respondents’ views of deliberative dialogues, by professional role reported,^a in a survey conducted in six African countries in 2009–2013.

Scores are presented as mean (SD).

| Focus of assessment | All roles (n = 303) | Policy-maker (n = 149) | Stakeholder (n = 69) | Researcher (n = 30) | Other (n = 12) | No role^b (n = 43) |
| --- | --- | --- | --- | --- | --- | --- |
| Dialogue as a whole^c | 6.4 (0.8) | 6.4 (1.5) | 6.3 (2.3) | 6.4 (1.6) | 6.5 (0.7) | 6.3 (1.9) |
| Key features^d | | | | | | |
| Addressed a high-priority policy issue | 6.6 (0.9) | 6.7 (1.5) | 6.6 (2.4) | 6.7 (1.6) | 6.8 (0.5) | 6.1 (2.0) |
| Provided an opportunity to discuss different features of the problem, including – where possible – how it affects particular groups | 6.5 (1.0) | 6.5 (1.5) | 6.6 (2.4) | 6.5 (1.8) | 6.5 (0.5) | 6.2 (1.9) |
| Provided an opportunity to discuss options for addressing the problem | 6.2 (1.1) | 6.3 (1.6) | 6.2 (2.4) | 6.3 (1.8) | 6.3 (0.7) | 6.1 (1.9) |
| Provided an opportunity to discuss key implementation considerations | 6.3 (0.9) | 6.4 (1.5) | 6.3 (2.3) | 6.6 (1.6) | 6.3 (0.6) | 5.9 (1.9) |
| Provided an opportunity to discuss who might do what differently | 6.2 (1.1) | 6.3 (1.5) | 6.2 (2.3) | 6.2 (1.8) | 5.9 (1.6) | 5.8 (1.9) |
| Was informed by a pre-circulated evidence brief | 6.3 (1.0) | 6.4 (1.7) | 6.3 (2.4) | 6.4 (1.6) | 6.5 (0.7) | 5.9 (2.1) |
| Was informed by discussion about the full range of factors that can inform how to approach a problem, possible options for addressing it, and key implementation considerations | 6.3 (1.0) | 6.4 (1.6) | 6.3 (2.4) | 6.3 (1.6) | 6.0 (1.3) | 5.9 (2.0) |
| Brought together many individuals who could be involved in – or affected by – future decisions related to the issue | 6.4 (0.9) | 6.5 (1.6) | 6.4 (2.4) | 6.6 (2.0) | 6.3 (0.8) | 6.0 (2.1) |
| Aimed for fair representation among policy-makers, stakeholders and researchers | 6.4 (0.9) | 6.5 (1.6) | 6.4 (2.4) | 6.4 (1.5) | 6.3 (0.9) | 5.9 (2.0) |
| Engaged a facilitator to assist with deliberations | 6.5 (1.0) | 6.5 (1.0) | 6.4 (1.1) | 6.5 (1.1) | 6.6 (0.5) | 6.3 (1.4) |
| Allowed for frank, off-the-record deliberations^e | 6.3 (1.1) | 6.3 (1.2) | 6.3 (1.3) | 6.7 (0.8) | 6.9 (0.3) | 6.1 (1.3) |
| Did not aim for consensus | 5.9 (1.4) | 5.7 (1.5) | 6.1 (1.3) | 6.2 (1.8) | 6.1 (1.0) | 5.9 (1.6) |

SD, standard deviation.

^a Each respondent’s role was categorized as “policy-maker”, “stakeholder”, “researcher” or “other”. Respondents were coded as policy-makers if they chose “policy-maker” for at least one current role and as stakeholders if they reported “stakeholder” but not “policy-maker” as a current role. Those who identified themselves as “researchers” but not “policy-makers” or “stakeholders” were coded as researchers. All other respondents who reported a role were considered to have “other” roles.

^b Respondent failed to indicate a professional role.

^c Scored for achievement of aim on a Likert scale that ranged from 1 (complete failure) to 7 (complete success).

^d Scored for helpfulness on a Likert scale that ranged from 1 (very unhelpful) to 7 (very helpful).

^e Deliberations followed the “Chatham House” rule.27

Respondents were asked to identify their own professional roles. Since many respondents claimed to have multiple roles, for the regression models it was necessary to categorize each respondent’s role as a policy-maker, stakeholder, researcher or “other”. Respondents were coded as policy-makers if they chose “policy-maker” for at least one of their current roles and as stakeholders if they reported “stakeholder” but not “policy-maker” as one of their current roles. Those who identified themselves as “researchers” but not “policy-makers” or “stakeholders” were coded as researchers. Respondents who did not identify themselves as a policy-maker, a stakeholder or a researcher but who marked “other” as their role were considered to have “other” roles that could not be further defined. In the regression models, “number of years in current role” was entered as a continuous variable, while “experience or training in other roles” was entered as a binary variable – with values of 1 and 0 indicating such experience or training and no such experience or training, respectively. Respondents with missing data were omitted from the corresponding regression. We used simple t-tests to compare group values for variables that could not be included in our regression analyses because of multicollinearity.
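The role-coding rules above amount to a simple precedence hierarchy (policy-maker, then stakeholder, then researcher, then “other”). A minimal sketch of that hierarchy, with an illustrative function name and input format:

```python
def code_role(reported_roles):
    """Collapse a respondent's (possibly multiple) self-reported roles
    into a single category, using the precedence described in the text:
    policy-maker > stakeholder > researcher > other.
    `reported_roles` is a set of strings; an empty set means the
    respondent did not indicate a professional role.
    """
    if "policy-maker" in reported_roles:
        return "policy-maker"
    if "stakeholder" in reported_roles:
        return "stakeholder"
    if "researcher" in reported_roles:
        return "researcher"
    if reported_roles:            # reported a role, but none of the above
        return "other"
    return None                   # no professional role indicated

# A respondent reporting both "researcher" and "policy-maker" roles
# is coded as a policy-maker:
print(code_role({"researcher", "policy-maker"}))  # policy-maker
```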

Results

In total, 530 individuals from six African countries were sent questionnaires on the evidence briefs, which addressed 17 priority issues (Table 3). Of these 530 subjects, 304 (57%) and 303 (57%) completed the questionnaires about the briefs and deliberative dialogues, respectively. Cameroon had the largest number of respondents for the evidence brief surveys (n = 99), followed by Uganda (n = 66) and Zambia (n = 46). Cameroon also had the largest number of respondents for the dialogue surveys (n = 77), followed by Uganda (n = 69) and Nigeria (n = 48). In all six study countries, the category of professional role that was most frequently self-reported in the evidence brief survey was policy-maker (49%), followed by stakeholder (24%), researcher (8%) and “other” (5%). In this survey, 45 (15%) of the respondents did not provide a role category. The category of professional role most frequently self-reported in the dialogue survey was also policy-maker (49%), followed by stakeholder (23%), researcher (10%) and “other” (4%). In this survey, 43 (14%) of the respondents did not provide a role category. Full details of the data collected on professional roles are available in Appendix A (available at: http://www.testserver5.org/moat_et_al._2013_BWHO_Appendix-A.pdf).

Table 3. Priority issues that were the focus of evidence briefs and deliberative dialogues evaluated in six African countries in 2009–2013.

| Country | Priority issue |
| --- | --- |
| Burkina Faso | Implementing strategies for the reduction of maternal mortality |
| Cameroon | Scaling up community-based health insurance^a |
| | Scaling up malaria-control interventions |
| | Improving governance for health district development |
| | Retaining health workers in rural areas |
| | Optimizing the use of antenatal clinics |
| | Improving the reception and management of patients in the accident and emergency departments of national and regional hospitals |
| | Improving the affordability of the accident and emergency departments of national and regional hospitals |
| Ethiopia | Developing human-resource capacity for implementing malaria-elimination measures |
| | Preventing postpartum haemorrhage |
| Nigeria | Strengthening health systems (this issue was addressed twice) |
| Uganda | Task shifting to optimize the roles of health workers and improve the delivery of maternal and child health care |
| | Increasing access to skilled birth attendants |
| | Improving palliative care |
| Zambia | Strengthening the health system for mental health |
| | Preventing postpartum haemorrhage |
| | Retaining human resources for health |

^a There has not been any evaluation of the deliberative dialogue about this issue.

All the briefs included in this study contained a description of the context for the issue being addressed, a description of the various features of the problem and a description of the options for addressing the problem. All the briefs also employed a “graded-entry” format – such as one comprising a list of key messages as well as a full report – and included a reference list for those who wanted to read more about the issue involved. However, only 52% of the evidence briefs investigated either explicitly took quality considerations into account when discussing the research evidence or were subjected to a merit review and only 62% explicitly took local applicability into account when discussing the research evidence.

All but two of the key features listed in Table 2 were included in all of the convened dialogues that we investigated. The exceptions were “providing an opportunity to discuss who might do what differently” and “not aiming for a consensus”, which were features of 50% and 95% of the dialogues investigated, respectively (Appendix A).

Every key feature of the evidence briefs that we investigated was viewed very favourably by all – or almost all – of the respondents (Table 1). Compared with the other key features of the evidence briefs, “not concluding with recommendations” was judged less favourably by the respondents categorized as policy-makers, stakeholders, researchers or “other”.

Similarly, all of the key features of the deliberative dialogues were generally viewed favourably by all groups of respondents (Table 2). However, “not aiming for consensus” was viewed less favourably than any other key feature, particularly by policy-makers.

Respondents in the “other” category often rated key features of the briefs and dialogues less favourably than the respondents who could be assigned to a more specific role. In general, respondents reported strong intentions to use research evidence of the type that was discussed at the deliberative dialogues; positive attitudes towards research evidence of the type discussed at the dialogues; and subjective norms in their professional life that were conducive to using research evidence of the type that was discussed at the dialogues (Table 4). Compared with the other respondents, those who did not provide a role category considered themselves to have relatively limited behavioural control and so to be less likely to act on what they had learnt from the briefs and dialogues.

Table 4. Respondents’ intentions to act on what was learnt from evidence briefs and deliberative dialogues, by professional role reported,^a in a survey conducted in six African countries in 2009–2013.

Scores are presented as mean (SD).

| Focus of assessment | Policy-maker (n = 149) | Stakeholder (n = 69) | Researcher (n = 30) | Other (n = 12) | No role^b (n = 43) |
| --- | --- | --- | --- | --- | --- |
| Future use of research evidence^c | | | | | |
|     Expected | 6.3 (0.6) | 6.2 (0.8) | 6.2 (1.3) | 6.1 (0.5) | 5.8 (1.1) |
|     Wanted | 6.4 (0.6) | 6.1 (0.8) | 6.2 (0.9) | 6.1 (0.7) | 6.0 (1.4) |
|     Intended | 6.3 (0.8) | 6.1 (0.8) | 6.2 (1.0) | 6.2 (1.9) | 6.1 (1.4) |
| Attitude to use of research evidence^d | 6.6 (0.7) | 6.5 (0.8) | 6.5 (0.8) | 6.6 (2.4) | 6.3 (1.1) |
| Subjective norms^e | 6.2 (1.4) | 6.3 (1.9) | 5.9 (1.5) | 6.2 (1.1) | 6.3 (1.9) |
| Perceived behavioural control^f | 6.2 (1.8) | 6.1 (1.8) | 6.1 (1.8) | 6.3 (1.7) | 5.5 (1.7) |

SD, standard deviation.

^a Each respondent’s role was categorized as “policy-maker”, “stakeholder”, “researcher” or “other”. Respondents were coded as policy-makers if they chose “policy-maker” for at least one current role and as stakeholders if they reported “stakeholder” but not “policy-maker” as a current role. Those who identified themselves as a “researcher” but not a “policy-maker” or “stakeholder” were coded as researchers. All other respondents who reported a role were considered to have “other” roles.

^b Respondent failed to indicate a professional role.

^c Respondents were asked to score how well they agreed with statements saying that they expected, wanted or intended to use evidence of the type discussed at the deliberative dialogue, with each score ranging from 1 (strongly disagree) to 7 (strongly agree).

^d Respondents were asked to state their attitude in terms of how harmful, bad, unpleasant or unhelpful they considered the use of research evidence to be, with each score ranging from 1 (for very harmful, very bad, very unpleasant or very unhelpful) to 7 (for very beneficial, very good, very pleasant or very helpful). The mean of the four scores for each respondent was then calculated.

^e Respondents were asked to score how well they agreed with the following statements: “Most people who are important to me in my professional life think that I should use research evidence of the type discussed at the deliberative dialogue”; “It is expected of me that I use research evidence of the type discussed at the deliberative dialogue”; and “I feel under social pressure to use research evidence of the type discussed at the deliberative dialogue”. Each score ranged from 1 (strongly disagree) to 7 (strongly agree). The mean of the three scores for each respondent was then calculated.

^f Respondents were asked to score how well they agreed with the following statements: “I am confident that I could use research evidence of the type discussed at the deliberative dialogue”; “The decision to use research evidence of the type that was discussed at the deliberative dialogue is beyond my control” (which was reverse coded to align with the other variables); and “Whether I use research evidence of the type discussed at the deliberative dialogue is entirely up to me”. Each score ranged from 1 (strongly disagree) to 7 (strongly agree). They were also asked to score how easy it would be for them to use research evidence of the type discussed at the deliberative dialogue, with each score ranging from 1 (very difficult) to 7 (very easy). The mean of the four scores for each respondent was then calculated.
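As a worked example of the scoring rule for perceived behavioural control, the sketch below reverse-codes the “beyond my control” item (so that a higher value always means greater perceived control) and averages the four 1–7 item scores. The function and argument names are illustrative, not taken from the study’s codebook.

```python
def perceived_behavioural_control(confident, beyond_control, up_to_me, ease):
    """Combine the four 1-7 item scores into one construct score:
    reverse-code the "beyond my control" item so that higher values
    consistently mean more perceived control, then average the items.
    """
    reversed_item = 8 - beyond_control   # maps 1 <-> 7, 2 <-> 6, ...
    return (confident + reversed_item + up_to_me + ease) / 4

# A respondent who strongly agrees they are confident (7), strongly
# disagrees that the decision is beyond their control (1), strongly
# agrees it is up to them (7) and finds use very easy (7):
print(perceived_behavioural_control(7, 1, 7, 7))  # 7.0
```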

Although we initially attempted to include in our regression models all of the respondent characteristics that we investigated, we had to exclude “previous experience or training” because of multicollinearity. The data analyses revealed only two differences between groups of respondents that reached statistical significance. In the regression models for the evidence briefs – in comparisons with researchers, the reference category – a self-reported professional role that fell in the “other” category was found to be a significant predictor of giving “not concluding with recommendations” a lower score for helpfulness (P = 0.028; Table 5). In the analysis of the data for the deliberative dialogues, t-tests revealed that respondents without past experience as researchers gave “not aiming for consensus” significantly lower scores for helpfulness than respondents with such experience (P = 0.015).

Table 5. Associations between respondents’ professional role and their scoring of evidence briefs and deliberative dialogues, in a survey conducted in six African countries in 2009–2013.

Values are β coefficients^a for the score^b indicated.

| Characteristic | Evidence briefs: overall | Evidence briefs: “did not conclude with recommendations” | Deliberative dialogues: overall | Deliberative dialogues: “did not aim for consensus” |
| --- | --- | --- | --- | --- |
| Role category^c | | | | |
|     Policy-maker | +0.233 | –0.602 | –0.024 | –0.513 |
|     Stakeholder | +0.165 | +0.129 | –0.074 | –0.059 |
|     Other | +0.410 | –1.255^d | +0.056 | –0.374 |
| Years in current position | +0.013 | +0.029 | +0.006 | –0.007 |

^a The regression coefficients related to each categorical variable (role) reflect the average difference in score between “researchers” – the reference category – and people in each of the roles shown in the table. A positive sign indicates that those in the role shown had a higher average score than researchers; a negative sign indicates that those in the role shown had a lower average score than researchers.

^b Overall scores were for the achievement of aim. Scores for key features – “did not conclude with recommendations” and “did not aim for consensus” – reflect respondents’ perceptions of how helpful these features were.

^c For the analysis of the respondents’ role categories, three dummy variables – one each for policy-maker, stakeholder and “other” – were created and “researcher” was used as the reference category. Respondents who failed to indicate a professional role were omitted from the regression.

^d Statistically significant (P = 0.028).
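The group comparison reported above (helpfulness of “not aiming for consensus”, rated by respondents with versus without past research experience) is an independent two-sample t-test. A minimal sketch using SciPy in place of SPSS, with invented scores:

```python
from scipy import stats

# Illustrative 1-7 helpfulness scores for "not aiming for consensus";
# these data are invented and are not the study's.
with_research_experience    = [7, 6, 6, 7, 6, 7, 5, 6]
without_research_experience = [5, 4, 6, 5, 5, 4, 6, 5]

# Welch's t-test (equal_var=False) is robust to unequal group variances.
t_stat, p_value = stats.ttest_ind(
    with_research_experience,
    without_research_experience,
    equal_var=False,
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A p-value below the conventional 0.05 threshold, as in the study’s reported comparison (P = 0.015), would indicate a statistically significant difference between the two groups.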

Discussion

Our evaluation has shown that evidence briefs and deliberative dialogues – two novel approaches to supporting the use of research evidence in policy-making – are very well received, regardless of the countries in which they are used, the health system issues that they address or the group of “actors” that is investigated. Respondents tended to view the evidence briefs and deliberative dialogues in general – as well as each of their key features – very favourably. These observations support previous recommendations that have been made about the use of these strategies in the research literature.15–17,28–31 “Not concluding with recommendations” emerged as the least helpful feature of evidence briefs from the perspective of all of the respondents taken together, whereas “not aiming for consensus” emerged as the least helpful feature of deliberative dialogues from the perspective of policy-makers. It is not clear whether these observations reflect a problem in the way the teams running the knowledge translation platforms explain the rationale for not concluding evidence briefs with recommendations and not aiming for a consensus during deliberative dialogues, or whether they represent true variations in preferences. The rationale for not concluding evidence briefs with recommendations is that any such recommendations would have to be based on the views and values of the authors of the brief – even though it is the views and values of the participants in the subsequent deliberative dialogue that are assumed to be much more important. The rationale for not aiming for consensus in the dialogues is that most dialogue participants cannot commit their organizations to a course of action without first building support within those organizations.

The policy-makers, stakeholders and researchers who had read an evidence brief as an input into a deliberative dialogue all reported strong intentions to act on what they had learnt from this process. However, those who did not report a role category were relatively unlikely to report that they intended to act on the same information. It is possible that these respondents were aware of factors beyond their control – such as the political context in which they worked – that would hamper their ability to use research evidence.

The present study is an early attempt to develop a better understanding of how two novel strategies to support the use of research evidence in policy-making – evidence briefs and deliberative dialogues – are viewed by their target audiences in low- and middle-income countries. It was also an attempt to determine whether these strategies encourage their target audiences to act – or, at least, to intend to act – on research evidence. Our evaluation covered several countries, issues and professional categories and was designed to measure an appropriate and tractable outcome: intention to act. This approach could easily be extended to more countries and issues in the future. We aimed to make our study sample as representative as possible by attempting to include data from every individual who had read an evidence brief and attended a deliberative dialogue.

Our study has two weaknesses that should be acknowledged. First, we used only a first wave of data, so our regression models were often constrained by small sample sizes, response rates were less than optimal and data for specific questions were sometimes missing. Second, we focused on the characteristics of the respondents because we lacked high-quality data about the characteristics of the context – which can vary in terms of the institutions, interests and ideas that might influence the policy process. Despite these limitations, our observations provide useful insights for those seeking to inform policy-making or to evaluate evidence briefs, deliberative dialogues and similar strategies in the future.

Acknowledgements

Several members of the Knowledge Translation Platform Evaluation study team contributed to this paper but are not listed as authors: Gbangou Adjima and Salimata Ki (Burkina Faso); Jean Serge Ndongo and Pierre Ongolo-Zogo (Cameroon); Mamuye Hadis and Adugna Woyessa (Ethiopia); Abel Ezeoha and Jesse Uneke (Nigeria); Harriet Nabudere and Nelson Sewankambo (Uganda); and Joseph Kasonde and Lonia Mwape (Zambia). JNL has dual appointments with the McMaster Health Forum, McMaster University’s Centre for Health Economics and Policy Analysis and Department of Political Science, and the Department of Global Health and Population at the Harvard School of Public Health. FE has a dual appointment with McMaster University’s Department of Clinical Epidemiology and Biostatistics.

Funding:

We thank the European Commission FP7 programme (which funded the Supporting the Use of Research Evidence in African Health Systems project), the Alliance for Health Policy and Systems Research, the International Development Research Centre (IDRC) International Research Chair in Evidence-Informed Health Policies, and the Canadian Institutes of Health Research for their financial support.

Competing interests:

None declared.

References

1. First Global Symposium on Health Systems Research [Internet]. Montreux Statement from the Steering Committee of the First Global Symposium on Health Systems Research. Geneva: World Health Organization; 2010. Available from: http://healthsystemsresearch.org/hsr2010/ [accessed 4 October 2013].
2. The Bamako call to action: research for health. Lancet. 2008;372:1855. doi: 10.1016/S0140-6736(08)61789-4.
3. The Mexico Statement: strengthening health systems. Lancet. 2004;364:1911–2. doi: 10.1016/S0140-6736(04)17485-0.
4. World report on knowledge for better health. Geneva: World Health Organization; 2004.
5. Johnson NA, Lavis JN. Procedures manual for evaluating knowledge-translation platforms in low- and middle-income countries: overview. Hamilton: McMaster University Program in Policy Decision-Making; 2010.
6. Hamid M, Bustamante-Manaog T, Truong VD, Akkhavong K, Fu H, Ma Y, et al. EVIPNet: translating the spirit of Mexico. Lancet. 2005;366:1758–60. doi: 10.1016/S0140-6736(05)67709-4.
7. Corkum S, Cuervo LG, Porrás A; EVIPNet Americas Secretariat. EVIPNet Americas: informing policies with evidence. Lancet. 2008;372:1130–1. doi: 10.1016/S0140-6736(08)61459-2.
8. Lavis JN, Panisset U. EVIPNet: Africa’s first series of policy briefs to support evidence-informed policymaking. Int J Technol Assess Health Care. 2010;26:229–32. doi: 10.1017/S0266462310000206.
9. Lavis JN, Davies H, Oxman A, Denis JL, Golden-Biddle K, Ferlie E. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy. 2005;10(Suppl 1):35–48. doi: 10.1258/1355819054308549.
10. Lavis JN, Hammill A, Gildiner A, et al. A systematic review of the factors that influence the use of research evidence by public policymakers. Hamilton: McMaster University Program in Policy Decision-Making; 2005.
11. Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7:239–44. doi: 10.1258/135581902320432778.
12. Oxman AD, Lavis JN, Lewin S, Fretheim A. SUPPORT tools for evidence-informed health policymaking (STP) 1: what is evidence-informed policymaking? Health Res Policy Syst. 2009;7(Suppl 1):S1. doi: 10.1186/1478-4505-7-S1-S1.
13. Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bull World Health Organ. 2006;84:620–8. doi: 10.2471/BLT.06.030312.
14. Lavis JN, Hamid M, Sewankambo N, et al. International dialogue on evidence-informed action. Hamilton: McMaster University; 2007.
15. Lavis JN, Permanand G, Oxman AD, Lewin S, Fretheim A. SUPPORT tools for evidence-informed health policymaking (STP) 13: preparing and using policy briefs to support evidence-informed policymaking. Health Res Policy Syst. 2009;7(Suppl 1):S13. doi: 10.1186/1478-4505-7-S1-S13.
16. Lavis JN, Boyko JA, Oxman AD, Lewin S, Fretheim A. SUPPORT tools for evidence-informed health policymaking (STP) 14: organising and using policy dialogues to support evidence-informed policymaking. Health Res Policy Syst. 2009;7(Suppl 1):S14. doi: 10.1186/1478-4505-7-S1-S14.
17. Lomas J, Culyer T, McCutcheon C, McAuley SLL. Conceptualizing and combining evidence for health system guidance. Ottawa: Canadian Health Services Research Foundation; 2005.
18. Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85:729–68. doi: 10.1111/j.1468-0009.2007.00506.x.
19. Ajzen I. The theory of planned behaviour. Organ Behav Hum Decis Process. 1991;50:179–211. doi: 10.1016/0749-5978(91)90020-T.
20. Francis J, Eccles M, Walker A, Johnson M, Grimshaw J, Foy R. Constructing questionnaires based on the theory of planned behaviour. Newcastle upon Tyne: Centre for Health Services Research, University of Newcastle; 2004.
21. Francis JJ, Eccles MP, Johnston M, Whitty P, Grimshaw JM, Kaner EFS, et al. Explaining the effects of an intervention designed to promote evidence-based diabetes care: a theory-based process evaluation of a pragmatic cluster randomised controlled trial. Implement Sci. 2008;3:50. doi: 10.1186/1748-5908-3-50.
22. Eccles MP, Hrisos S, Francis J, Kaner EF, Dickinson HO, Beyer F, et al. Do self-reported intentions predict clinicians’ behaviour: a systematic review. Implement Sci. 2006;1:28. doi: 10.1186/1748-5908-1-28.
23. Boyko JA, Lavis JN, Abelson J, Dobbins M, Carter N. Deliberative dialogues as a mechanism for knowledge translation and exchange in health systems decision-making. Soc Sci Med. 2012;75:1938–45. doi: 10.1016/j.socscimed.2012.06.016.
24. Johnson NA, Lavis JN. Procedures manual for the “Evaluating Knowledge-Translation Platforms in Low- and Middle-Income Countries” study: formative evaluation. Hamilton: McMaster University Program in Policy Decision-Making; 2009.
25. Boyko JA, Lavis JN, Dobbins M, Souza NM. Reliability of a tool for measuring theory of planned behaviour constructs for use in evaluating research use in policymaking. Health Res Policy Syst. 2011;9:29. doi: 10.1186/1478-4505-9-29.
26. McMaster University [Internet]. KTPE overview. Evaluating knowledge-translation platforms in low- and middle-income countries. Hamilton: McMaster University; 2013. Available from: http://www.researchtopolicy.org/KTPEs/KTPE-overview [accessed 4 October 2013].
27. Chatham House [Internet]. Chatham House rule. London: The Royal Institute of International Affairs; 2013. Available from: http://www.chathamhouse.org/about-us/chathamhouserule [accessed 4 October 2013].
28. Boyko J. Deliberative dialogues as a mechanism for knowledge translation and exchange. Hamilton: McMaster University; 2010.
29. The knowledge translation toolkit. Bridging the “know-do” gap: a resource for researchers. Ottawa: International Development Research Centre; 2011. Available from: http://www.idrc.ca/EN/Resources/Publications/Pages/IDRCBookDetails.aspx?PublicationID=851 [accessed 4 October 2013].
30. Communication notes: reader-friendly writing – 1:3:25. Ottawa: Canadian Health Services Research Foundation; 2009.
31. Rosenbaum SE, Glenton C, Wiysonge CS, Abalos E, Mignini L, Young T, et al. Evidence summaries tailored to health policy-makers in low- and middle-income countries. Bull World Health Organ. 2011;89:54–61. doi: 10.2471/BLT.10.075481.
