
Conducting fit‐for‐purpose food safety risk assessments

Yann Devos 1, Kevin C Elliott 2, Philip Macdonald 3, Katherine McComas 4, Lucia Parrino 5, Domagoj Vrbos 6, Tobin Robinson 7, David Spiegelhalter 8, Barbara Gallani 6

Abstract

The interplay between science, risk assessment and risk management has always been complex, and even more so in a world increasingly characterised by rapid technical innovation, new modes of communication, suspicion about authorities and experts, and demands for people to have a say in decisions that are made on their behalf. In this challenging era where scientific advice on food safety has never been in greater demand, risk managers should effectively navigate the interplay between facts and values and be able to rely on robust and fit‐for‐purpose risk assessments to aid them. The fact that societal resistance is often encountered when scientific advice on food safety operates at a distance from social values and fails to actively engage with citizens, has led to increasing emphasis on the need to advance forms of risk assessment that are more contextual, and socially sound and accountable. EFSA's third Scientific Conference explored how risk assessments could be constructed to most usefully meet society's needs and thus connect science with society, while remaining scientifically robust. Contributors to the conference highlighted the need to: (1) frame risk assessments by clear policy goals and decision‐making criteria; (2) begin risk assessments with an explicit problem formulation to identify relevant information; (3) make use of reliable risk assessment studies; (4) be explicit about value judgements; (5) address and communicate scientific uncertainty; (6) follow trustworthy processes; (7) publish the evidence and data, and report the way in which they are used in a transparent manner; (8) ensure effective communication throughout the risk analysis process; (9) involve society, as appropriate; and (10) weigh risks and benefits on request. Implementation of these recommendations would contribute to increased credibility and trustworthiness of food safety risk assessments.

Keywords: expertise, public engagement, risk, science communication, transparency, trust

1. Introduction

Risk assessment is an important scientific tool that contributes to risk analysis in the area of food safety. In the case of regulated stressors connected to food and feed production (such as genetically modified organisms, plant protection products and food additives), risk assessors evaluate the probability and seriousness of harm to human and animal health and the environment from a proposed activity (e.g. cultivating a genetically modified plant). Quantification of risk follows scientific methods for the identification, gathering and interpretation of evidence. The decision on the level of acceptable risk, and thus whether a proposed activity involving a regulated stressor ought to be permitted, is taken by risk managers, who weigh policy options to accept, minimise, reduce or reject the characterised risks against other relevant information such as the economic, social or political implications of the proposed activity. Additional measures for prevention and control of specific risks may be required. In practice, determining the acceptability of risk requires balancing specific sector needs with the broader public good. Any regulatory decision should protect the health and well‐being of citizens and the environment, while enabling innovation. Risk communication then involves dialogue between risk assessors, risk managers and other interested parties, and includes the explanation of risk assessment conclusions and the basis upon which regulatory decisions were made. Although interrelated, risk assessment, risk management and risk communication fulfil different roles in, and contribute differently to, decisions: scientific information and risk assessment can lend credibility to decisions, but they do not dictate them, because decisions are not based solely on science. Risk analysis is recognised as an iterative process, and interaction between risk managers and risk assessors is essential to the success and acceptance of the whole scientific assessment process. The assessment, management and communication of risks are therefore not to be carried out completely independently; they must continually interact (e.g. Codex Alimentarius, 2007).

The interplay between science, risk assessment and risk management has always been complex, and even more so in the so‐called ‘post‐truth’ world in which scientific facts are often dismissed or ignored. Facts are often uncertain, expertise is mistrusted and questioned, values are in dispute, discussions about risks are often polarised and politicised, stakes are high and decisions are urgent (Funtowicz and Ravetz, 1993; Sarewitz, 2004, 2007; Waltner‐Toews, 2019). In this challenging era where scientific advice on food safety has never been in greater demand, risk managers should effectively navigate the interplay between facts and values and be able to rely on robust and fit‐for‐purpose risk assessments to aid them. The need to advance forms of risk assessment that are more contextual and socially sound and accountable has been advocated, as societal resistance is often encountered when scientific advice on food safety operates at a distance from values and fails to actively engage with citizens (Robinson et al., 2016; Url, 2018; Patel, 2019).

The plenary sessions ‘Where science meets society – Putting risk assessment in context’ and ‘Staying relevant in a changing world’ at EFSA's third Scientific Conference ‘Science, Food and Society’ (Parma, Italy, 18–21 September 2018)1 explored what ‘fit‐for‐purpose food safety risk assessment’ means and which elements contribute to it. Models of science–policy interaction and how they evolved with changing values were presented briefly, as they provide the frame within which scientific advice operates. The sessions then explored how scientific advice could be constructed to most usefully meet society's needs and thus connect science with society, while remaining scientifically robust. This publication builds upon some of the presentations made and discussions held during the plenary sessions at the conference.

2. Models of science–policy interaction

Several models, ranging from the ‘modern’ model to alternative ones (i.e. precautionary, consensus, science‐policy demarcation and extended participation), have been suggested to conceptualise the complex interplay between science, risk assessment and risk management in policy processes. According to the ‘modern’ model, which is based on the classic technocratic vision, facts determine correct decisions; science informs policy by producing objective, valid and reliable knowledge – ‘speaking truth to power’ (Funtowicz and Strand, 2007). However, in recent decades, it has become increasingly clear that in many cases facts are neither fully certain in themselves, nor sufficient for acceptable policy. Moreover, more research does not necessarily increase confidence in risk assessments or trust in decisions (Jaffe, 2006). New scientific information quite often reveals previously unknown complexities, increasing the sense of uncertainty (van der Sluijs, 2005) and emphasising differences between competing perspectives (Sarewitz, 2004, 2007; Zollo, 2019). To account for the uncertain nature of facts and the inconclusiveness of information, the ‘precautionary’ model has been proposed: although still framed and expressed in terms of quantitative science and modern rationality, it invokes precaution to protect and to legitimise decisions.

In the absence of conclusive facts, scientific information becomes one among many inputs to policy processes. Various actors have their own perspectives and tend to speak many, often conflicting, truths to power. In response to this phenomenon, the ‘consensus’ model emerged as an attempt to rescue the ‘modern’ model from conflicting facts. Key elements in this model are dialogue between scientists to properly frame problems, the creation of intersubjective knowledge in multidisciplinary expert panels, and the search for robust findings (van der Sluijs et al., 1998; Funtowicz and Strand, 2007).

To ensure that political accountability rests with risk managers and is not shifted inappropriately to scientists and risk assessors, the ‘science–policy demarcation’ model advocates a demarcation between institutions (and individuals) that provide scientific advice and those where it is used. This demarcation is designed to protect science from the ‘political’ interference that would threaten its integrity (Funtowicz, 2006). It also prevents scientists from using the authority of their status as an illegitimate validation of their own perspectives when they engage in partisan advocacy on contentious policy issues. Scientists have as much right as anyone else to argue for their values; however, they ought not to claim that their scientific expertise gives special weight to those values.

In these different models, policy is modified by precaution, problems are framed by stakeholders, and scientists are protected from political interference. However, the core ideal of the ‘modern’ model – science as the sole legitimate provider of reliable knowledge, separated from society – was not fully challenged. In response to critiques of technocracy and excessive expert power, some authors have called for more ‘socially robust’ science (e.g. Jasanoff, 2003; Nowotny, 2003; Irwin, 2006), which is ‘both more democratically and more technically warranted’. The ‘extended participation’ model advocates more participatory approaches, whereby society contributes to knowledge production processes and these contributions become part of the relevant knowledge (Funtowicz and Ravetz, 1993; Funtowicz, 2006). The outcome of such knowledge production is socially more robust (Nowotny, 2003). While achieving social robustness is a complicated task, it is attainable when knowledge is credible to important actors, relevant to the needs of decision‐makers and produced in a legitimate way (Cash et al., 2003).

3. Fit‐for‐purpose food safety risk assessment

Various elements contribute to achieving fit‐for‐purpose risk assessments. In the area of food safety, contributors to EFSA's conference highlighted the need to: (1) frame risk assessments by clear policy goals and decision‐making criteria; (2) begin risk assessments with an explicit problem formulation to identify relevant information; (3) make use of reliable risk assessment studies; (4) be explicit about value judgements; (5) address and communicate scientific uncertainty; (6) follow trustworthy processes; (7) publish the evidence and data, and report the way in which they are used in a transparent manner; (8) ensure effective communication throughout the process; (9) involve society, as appropriate; and (10) weigh risks and benefits on request.

3.1. Clarifying policy goals and decision‐making criteria

In the case of regulated stressors connected to food and feed production, risk assessors use scientific information to test risk hypotheses about the likelihood and seriousness of harmful effects that may occur following a proposed activity (Devos et al., 2019). Hypotheses typically take the form that the proposed activity will pose no greater harm or risk than an existing activity (Raybould, 2006, 2007, 2010). This information may exist prior to the risk assessment or may be acquired by new studies.

A key focus for those involved in regulation is often to improve the science used for risk assessment rather than ensuring that the risk assessment is consistent with the objectives of the guiding policy. Yet the context and boundaries of risk assessments need to be clear, and they are typically framed by legal and other policy objectives (Collins et al., 2010). Risk assessment can be hindered by the absence of clear policy goals and decision‐making criteria (e.g. definition of protection goals and what constitutes harm, limits or thresholds of concern, trigger values for action or acceptability of risk, judging the sufficiency of scientific knowledge and the extent to which uncertainty should be reduced for decision‐making), which are needed to guide the interpretation of scientific information (Evans et al., 2006; Hokanson et al., 2018). Even in jurisdictions with well‐developed regulatory systems, policy goals and decision‐making criteria are often defined in general terms: broad policy goals like the protection of biodiversity or ensuring sustainable agricultural practices must still be translated into practical objectives for the risk assessment. Consequently, risk assessors are left to make a series of small policy decisions to determine how best to address very broad policy objectives and operationalise them for risk assessment (e.g. Garcia‐Alonso and Raybould, 2014; Devos et al., 2015; Faber et al., 2019).

What is regarded as harmful is subjective and rooted in social values, as are statements of policy (Sarewitz, 2004; Sanvido et al., 2012). If what constitutes harm is not defined or at least contextualised, risk assessors face an extremely difficult or impossible task, because there are no criteria by which to determine whether certain potential effects of an activity are relevant to the risk assessment. The natural sciences can help risk assessors to predict whether there could be consequences resulting from an activity, but they cannot determine whether those consequences are acceptable, nor reveal value‐based truths about what society must do and which actions to take (Evans et al., 2006).

Consequently, risk managers must interpret the objectives of policy and regulations to define harm and thus decide what would constitute harmful effects. This will provide a useful framework in which risk assessors can operate, recognising that there will always be some areas of uncertainty. Alternatively, risk assessors can elaborate different management options from which risk managers can select the most suitable one(s) (EFSA, 2016).

Once definitions of ‘harm’, ‘benefit’ and ‘acceptability’ are in place, science can estimate the probability and severity of any harmful effects (i.e. assess the risk) and the probability and value of any beneficial effects (i.e. assess the opportunity) (Raybould and Macdonald, 2018). In this context, it is important to consider whether the proposed activity may lead to new harms, or only to different ways of causing harm that already result from current practice.

Yet, consensus on what constitutes harm is presently often lacking (Sanvido et al., 2012), underlining the need for further dialogue between risk assessors and risk managers to clarify how the risk assessment can address policy goals and decision‐making criteria.

3.2. Using problem formulation to identify relevant information for risk assessment

It has been asserted that problem formulation is key to identifying relevant information for the risk assessment of regulated stressors (Devos et al., 2019). Robust risk assessments begin with an explicit problem formulation (EFSA, 2015), which involves, among other steps: (1) formally devising plausible pathways to harm that describe how a proposed activity could be harmful; (2) formulating risk hypotheses about the likelihood and severity of such events; (3) identifying the information that will be useful to test the risk hypotheses; and (4) developing a plan to acquire new data for hypothesis testing should tests with the available information be insufficient for decision‐making (Raybould, 2006, 2007, 2010). Existing or newly acquired information that provides rigorous tests of such hypotheses may be regarded as relevant for risk assessment. On the other hand, information that provides weak tests of these hypotheses, or tests hypotheses unrelated to the acceptability of risk, can be seen as less important or irrelevant for risk assessment.
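Purely as an illustration of how these four steps can be kept explicit and connected, the sketch below records a problem formulation as a structured object. The class and field names are hypothetical, not an EFSA schema, and the example content is invented for a generic GM‐plant case:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemFormulation:
    """Illustrative record of one pathway to harm and its planned tests."""
    pathway_to_harm: str                 # step 1: plausible causal chain to a defined harm
    risk_hypothesis: str                 # step 2: testable claim about likelihood/severity
    existing_evidence: list = field(default_factory=list)  # step 3: information to test it
    data_acquisition_plan: str = ""      # step 4: new data, if existing tests are weak

pf = ProblemFormulation(
    pathway_to_harm=("cultivation of the GM plant -> exposure of a valued "
                     "non-target arthropod -> population decline"),
    risk_hypothesis=("exposure at field-realistic levels causes no greater "
                     "mortality than the conventional comparator"),
    existing_evidence=["laboratory dose-response study",
                       "literature on related traits"],
    data_acquisition_plan="semi-field trial only if laboratory tests are inconclusive",
)
print(pf.risk_hypothesis)
```

Keeping the four elements in one record makes it easy to check that every planned study actually tests a stated hypothesis, which is the point of the hypothesis‐driven approach described here.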

Hypotheses about the acceptability of risk play a vital role in establishing the correct relationship between policy and science in risk assessment. They ensure that: (1) the assessment focuses on predicting relevant harmful effects and excludes others as less important or irrelevant; (2) existing information is used effectively; and (3) new data are collected with a clear purpose. Without this relationship, risk assessments would attempt to characterise all the possible effects that might result from a proposed activity, which is unnecessary and unfeasible. Scientifically, such exhaustive characterisation is the test of a null hypothesis of no effect or of no change from an existing acceptable activity that the new activity seeks to replace. Raybould and Macdonald (2018) argue that this ‘data‐driven’ approach to risk assessment has significant disadvantages. First, it suggests that more data will inevitably yield a more precise outcome. Second, decisions are made in response to statistically significant differences that may be of unknown biological relevance and some of which are likely to be spurious. According to Raybould and Macdonald (2018), this leads to a third problem in that decisions based on such practices are likely to be controversial because regulatory policy appears to be made post hoc in response to whatever variables show statistically significant effects, not after careful consideration of societal needs.

Applying problem formulation to risk assessment helps to maximise the usefulness of risk assessment studies for decision‐making through an iterative process, because: (1) harm is defined explicitly from the start; (2) the construction of risk hypotheses is guided by policy rather than an exhaustive attempt to address any possible differences; (3) existing information is used effectively; (4) new data are collected with a clear purpose; (5) risk is characterised against well‐defined criteria of hypothesis corroboration or falsification; and (6) risk assessment conclusions can be communicated clearly.

3.3. Making use of reliable risk assessment studies

Risk assessment studies can vary in quality. It is therefore important to assess whether a risk assessment study was carried out in such a way that it minimises the probability of erroneous (i.e. false negative and false positive) or inconclusive results (Begley, 2013; Moermond et al., 2017). Furthermore, high confidence in study results is a precondition for the acceptance of data across jurisdictions and may facilitate the sharing of useful information among risk assessors. Consequently, the testing of relevant risk hypotheses in support of risk assessment should strive to be as rigorous as hypothesis testing in any other branch of science; it needs to comply with quality standards to increase confidence in the results and add certainty to the conclusions.
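To make ‘false negatives and false positives’ concrete, the following sketch uses textbook normal‐approximation arithmetic (Python standard library only; the effect size, sample sizes and significance level are illustrative, not regulatory values) to show how the two error probabilities of a simple two‐group comparison depend on sample size:

```python
from statistics import NormalDist

def error_rates(effect_size: float, n_per_group: int, alpha: float = 0.05):
    """Approximate error rates of a two-sided, two-sample z-test.

    The false positive rate equals alpha by construction; the false
    negative rate is 1 - power, assuming the true standardised effect
    (Cohen's d) equals effect_size.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)                 # two-sided critical value
    z_effect = effect_size * (n_per_group / 2) ** 0.5  # expected z under the alternative
    power = 1 - nd.cdf(z_crit - z_effect)
    return alpha, 1 - power

for n in (10, 50, 200):
    fp, fn = error_rates(effect_size=0.4, n_per_group=n)
    print(f"n={n:>3} per group: false positive ~ {fp:.2f}, false negative ~ {fn:.2f}")
```

With a modest effect and only 10 samples per group, the false negative rate exceeds 0.8: an underpowered study is likely to be inconclusive even when conducted honestly.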

3.4. Managing value judgements

Scientific research and risk assessments on topics related to food safety and the environment are subject to numerous value‐laden judgements. In other words, scientists and risk assessors are forced to make choices that are not settled by logic and the available evidence but that can have ethically or socially important consequences (e.g. on the economy and the environment) depending on how they are made (Elliott, 2017). These judgements include choices about how to design studies; how to collect, analyse and interpret data; how to weigh information from multiple studies; how to extrapolate (and interpolate) beyond the available data; and how to frame and communicate findings (Douglas, 2016). It is important to recognise that these judgements can have significant social consequences even if scientists and risk assessors are not purposely aiming to support particular values when making them. Thus, it is typically unrealistic to think that the value‐ladenness of science can be eliminated or ignored (Douglas, 2009; Brown, 2013; Elliott and Richards, 2017). Instead, it is better to manage value‐laden judgements in science and risk assessment through efforts that promote reflection, openness and dialogue.

In his book, A Tapestry of Values: An Introduction to Values in Science, Elliott (2017) argues for three principles that can help scientists handle the value‐ladenness of their work in a responsible fashion: transparency, representativeness and engagement. With respect to transparency, he argues that scientists should be as clear as possible about their ‘data, methods, models, and assumptions so that others can identify the ways in which their work supports or is influenced by particular values’ (Elliott, 2017). Ideally, transparency allows others to understand how the results of a scientific analysis could have been different if important judgements were made differently. With respect to representativeness, he claims that value‐laden judgements should generally be made in a manner that represents major social and ethical priorities: ‘When clear, widely recognised ethical principles are available, they should be used to guide the values that influence science’ (Elliott, 2017). When ethical principles are less settled, Elliott argues that scientists should take into account broad societal priorities when making judgements that are not settled by the available evidence. With respect to engagement, he argues that scientists, risk assessors, and members of the public can interact in a number of ways in order to help identify important value‐laden judgements and deliberate over how best to handle them (Elliott, 2017). For example, scientists can incorporate members of the public in community‐based participatory research projects (e.g. Suryanarayanan et al., 2018); social scientists can elicit views from members of the public about scientific issues (e.g. Davies et al., 2009); scholars from different disciplinary backgrounds or employment sectors can collaborate to help uncover important assumptions (Hartley and Kokotovich, 2018); and citizen groups and scholars can scrutinise the laws, regulatory requirements, standards, and other institutional policies that steer value‐laden judgements in science and risk assessment (e.g. Wickson and Forsberg, 2015; Elliott, 2016).

As discussed further in the subsequent sections, several kinds of engagement efforts have been implemented in the past and could prove useful in the future to uncover value‐laden judgements that merit further scrutiny in the practices of institutions like EFSA (Patel, 2019; Smith et al., 2019). First, some scholars have experimented with the approach of ‘embedding’ a social scientist or humanist in a scientific laboratory for a period of time in order to help identify and explore background assumptions or other judgements (Schuurbiers and Fisher, 2009). Similarly, humanists or social scientists could be placed in Scientific Panels responsible for decision‐making at regulatory institutions in an effort to help identify important judgements being made. Another way to identify and promote discussion about these judgements would be to use an instrument like the Toolbox Dialogue, which consists of a set of philosophical statements about science (e.g. about the role of values, the nature of scientific models, and the goals of research). When science teams use the Toolbox Dialogue, their members are asked to indicate the extent to which they agree with the statements, and then they meet to talk about their answers in order to stimulate discussion about their assumptions and presuppositions (Eigenbrode et al., 2007). In other cases, it might be fruitful to convene focus groups of researchers or risk assessors to identify important judgements and explore their views about them. This approach could be especially fruitful in response to debated or controversial issues like endocrine disruption, where there appear to be important disagreements within the scientific community that lead to differing policy recommendations (see e.g. Elliott and Resnik, 2014).

Several additional strategies for moving forward in a productive fashion to address value‐laden judgements in regulatory science and risk assessment have been suggested (Elliott, 2019). First, Elliott suggests that decision‐makers should become more comfortable with scientific disagreement, finding ways to respect different positions on value‐laden judgements and formulate policy despite inconclusive evidence (Sarewitz, 2004; Pielke, 2007). Second, Elliott argues that those engaged in regulatory science should explore creative ways to clarify and communicate about the important value‐judgements being made. In some cases, scientists and risk assessors are already aware of important judgements, and they just need to find ways to communicate about them more effectively. In other cases, the value‐laden judgements that lead to different results or interpretation may be more difficult to identify, and thus more extensive efforts to uncover them will be needed. The Consortium Linking Academic and Regulatory Insights on BPA Toxicity (CLARITY‐BPA) provides one example of what these sorts of efforts could look like (e.g. Schug et al., 2013). Third, Elliott suggests that institutional processes for setting standards and guidelines for regulatory science and risk assessment should be scrutinised to ensure that they are as fair as possible. Given that these standards and guidelines specify how a wide range of value‐laden judgements should be made, Elliott argues that all interested and affected parties should have opportunities to influence the processes for creating and revising them.

3.5. Addressing and communicating scientific uncertainty

In an era of misinformation and questioning of expertise, open communication of uncertainty has never been so vital. This presents a strong challenge to those who value quantitative and scientific evidence, and who would also like to be trusted – EFSA's tagline is ‘Trusted Science for Safe Food’.

But Onora O'Neill has made the fundamental point that organisations that say they want to be trusted are missing something vital (Royal Society, 2012). Trust is something that is offered to us; it must be earned, and it is earned by demonstrating trustworthiness. Transparency is generally recognised as an important component of trustworthiness, but simply releasing vast amounts of indigestible information is not helpful. O'Neill has closely examined the idea of transparency within the context of open data and, under the term ‘intelligent transparency’, identified four important features (Royal Society, 2012). Information should be:

  • Accessible: People should be able to get at it;

  • Comprehensible: People should be able to understand it;

  • Useable: It should suit their needs; and

  • Assessable: Interested parties should, if necessary, be able to examine the workings and assess its quality.

Risk assessment in a societal context therefore requires trustworthy communication, and this means acknowledging uncertainty and ‘showing your working’ to those who want to see it. Such uncertainty is not only about the future, but is also ‘epistemic’: it concerns our ignorance, or what we don't know. We may not know single facts, such as what caused an outbreak of food poisoning. We may not know statistics that are (in theory) directly measurable, such as the consumption of particular foods, or ‘virtual’ quantities that can only be inferred, such as the true average impact of moderate alcohol consumption. Finally, we may be uncertain about scientific knowledge, such as whether glyphosate is a carcinogen.

EFSA recognises the importance of epistemic uncertainty and is making strenuous efforts to assess and communicate it appropriately (EFSA, 2018, 2019). But official institutions can be poor at acknowledging what they don't know. Take the BBC claim in February 2018 that ‘UK unemployment falls by 3,000 to 1.44 million’ (BBC, 2018). This was based on an official release from the Office for National Statistics, and only careful searching of its website reveals that this fall of 3,000 has a ‘95% confidence interval of plus or minus 77,000’ (ONS, 2018). In other words, we actually have no idea whether unemployment has risen or fallen; we just know it has not changed much.
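The arithmetic of that example is worth making explicit. Using the figures quoted above, a few lines suffice to show that the reported fall is swamped by its margin of error:

```python
# Figures quoted above: a reported change of -3,000 with a
# 95% confidence interval of plus or minus 77,000.
change, margin = -3_000, 77_000
low, high = change - margin, change + margin
print(f"95% interval for the change: [{low:+,}, {high:+,}]")
if low < 0 < high:
    # The interval spans zero, so the direction of change is undetermined;
    # the data support only 'no large change'.
    print("Direction of change undetermined.")
```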

In contrast, some agencies boldly proclaim their uncertainty. For many years, the Monetary Policy Committee of the Bank of England has communicated its judgements about future UK inflation and growth using fan charts, in which 30%, 60% and 90% intervals are shown for the succeeding three years (BoE, 2017). Using the same format, it also shows its epistemic uncertainty about past inflation and growth, which are still subject to revision and hence not known precisely. In a recent development, the estimated margins of error around the numbers of migrants to and from the UK have been communicated using a form of fan chart (ONS, 2019).
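For readers unfamiliar with the format, a fan chart stacks central predictive intervals of increasing width around a median path. The sketch below shades 30%, 60% and 90% bands computed from a synthetic forecast ensemble; the data, drift and volatility are invented for illustration (numpy and matplotlib assumed available), and this is not the Bank of England's actual model:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
horizon = np.arange(13)  # quarters ahead

# Synthetic forecast ensemble: 5,000 random walks around a drifting mean.
paths = 2.0 + 0.05 * horizon + np.cumsum(
    rng.normal(0.0, 0.25, size=(5000, horizon.size)), axis=1)

fig, ax = plt.subplots()
for coverage, shade in [(0.90, 0.2), (0.60, 0.4), (0.30, 0.6)]:
    # Central interval with the given coverage, at each forecast horizon.
    lo, hi = np.quantile(paths, [(1 - coverage) / 2, (1 + coverage) / 2], axis=0)
    ax.fill_between(horizon, lo, hi, color="crimson", alpha=shade,
                    label=f"{int(coverage * 100)}% interval")
ax.plot(horizon, np.median(paths, axis=0), color="black", lw=1, label="median")
ax.set_xlabel("quarters ahead")
ax.set_ylabel("inflation, % (synthetic)")
ax.legend()
plt.show()
```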

Such margins of error might be termed direct expressions of uncertainty about the quantity of interest, but not all uncertainties can be quantified in this way. Uncertainty can arise through lack of understanding, or lack of evidence, which precludes quantification. Many institutions have developed scales that summarise the quality of the underlying evidence, which we might term indirect uncertainty. The most popular is probably the GRADE scale used in many areas of health research, such as the Cochrane Collaboration, in which the evidence supporting an estimated treatment effect is given a ‘rating’ from 1 to 4, corresponding to ‘very low’, ‘low’, ‘moderate’ and ‘high’‐quality evidence.2 The Intergovernmental Panel on Climate Change (IPCC) also makes extensive use of a scale of low/moderate/high confidence in the science underlying any claim (Mastrandrea et al., 2010). In the UK, a network of ‘What Works’ centres is developing toolkits to communicate the effectiveness of a range of policies in different domains, and both the Education (EEF, online) and Policing (College of Policing, online) What Works centres make use of ordered scales summarising the quality of the underlying evidence.
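Because such scales are ordered, they are trivial to represent and attach to individual claims in software. A minimal sketch follows; the label wording mirrors the GRADE levels quoted above, while the class and the filtering rule are purely illustrative:

```python
from enum import IntEnum

class EvidenceQuality(IntEnum):
    """Ordered 1-4 scale mirroring the GRADE labels quoted above."""
    VERY_LOW = 1
    LOW = 2
    MODERATE = 3
    HIGH = 4

claim = {"statement": "Treatment X reduces symptom duration",
         "evidence": EvidenceQuality.LOW}

# Ordering supports simple communication rules, e.g. flagging weakly
# supported claims for an explicit caveat.
if claim["evidence"] < EvidenceQuality.MODERATE:
    print(f"'{claim['statement']}': communicate with a low-confidence caveat")
```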

The vital question remains: can ‘experts’ honestly communicate their uncertainty without losing trust and credibility? Early empirical research suggests that if uncertainty can be communicated with confidence, preferably as a range, then trust in the source is not diminished. Institutions might thus consider ‘ratings’ for the quality of the evidence underlying their claims. And finally, they should be unapologetic about acknowledging their uncertainty.

3.6. Following trustworthy processes

Stakeholders and the public will judge the trustworthiness of risk assessments based on how honestly, competently and consistently risk and uncertainty are communicated to them, and on whether the assessment aligns with their own perception of the risk. It is therefore important to get agreement from the major stakeholders and society on the process to be used for a risk assessment before commencing the assessment. Post‐publication reaction to a process will inevitably be coloured by the result or conclusion that the process led to, rather than the strength of the process itself. Carrying out consultation on every process each time a risk assessment is initiated is not currently within the capacity of the EU food safety system, but the establishment of a limited number of processes on which society agrees should be feasible. Such agreement upfront could assist in defusing results‐based criticism (‘I like your process when it gives me the result I want, but discredit it when I don't like the result’).

Experts responsible for risk assessment should be selected in a transparent manner on the basis of their expertise, experience, and their independence with regard to the interests involved. The procedures used to select these experts should be documented and include a public declaration of any potential conflict of interest. This declaration should also identify and detail their individual expertise, experience and independence. Expert bodies and consultations should ensure effective participation of experts from different parts of the world, including experts from developing countries.

International consensus is also an important lever for building trust in a process. The opposite is, of course, very damaging: different groups of scientists seemingly picking different processes at random, and reaching different conclusions, inevitably bring the trustworthiness of science into doubt (Patel, 2019).

3.7. Publishing the evidence and data used

Lack of transparency and openness in the risk assessment process can increase anxiety by creating the impression that risk assessors know things that they are not willing to reveal, which may fuel distrust. Transparency and openness are promoted by: (1) thoroughly and systematically documenting and reporting all the steps of the risk assessment, from the methods applied to the interim and final results, discussion and conclusions, including all assumptions and decisions made; and (2) guaranteeing accessibility to data, results and all relevant supplementary information, as appropriate and possible, without violating confidentiality issues.

To be transparent, a risk assessment should be as understandable, appraisable and reproducible as possible by interested third parties. This permits others to identify errors, to support, reject or refine conclusions and to reuse data for further analysis (Royal Society, 2012). Although thorough documentation and reporting allows reproducibility of the risk assessment process (methods and data), it does not necessarily enable an exact reproduction of the final conclusions, which always involves expert judgement and interpretation of results.

Opening up risk assessment data is not an unqualified good. There are legitimate boundaries of openness which must be maintained (Royal Society, 2012). Consequently, transparency is not always fully achievable given the confidentiality of certain information.

The advent of a new era of openness in food safety has enormous potential (Cavalli et al., 2019). The benefits associated with open data, such as an increase in transparency, innovation, possibilities for global networking and a higher standard of scientific studies, may result in better support for, and acceptance of, the decision‐making process. This probably outweighs its perceived risks (e.g. potential misuse or misinterpretation of data, data overload leading to more elitism in who can handle the data volume). Considering that open data (publicly accessible and readily interpretable) is a key enabler for transparency and accountability, EFSA is moving from data‐on‐demand to a proactive data‐by‐default approach. EFSA is making much of its data and evidence publicly accessible via its scientific data warehouse,3 knowledge junction4 and repositioned EFSA Journal on Wiley,5 as well as on the EU Open Data Portal.6 EFSA's published outputs are now available as JATS XML, the international standard for journal articles. EFSA is in the process of piloting migration from market registration dossiers in PDF format towards electronic dossier submission and automatic publication of non‐confidential information using structured formats based, insofar as possible, on existing international standards to enable data access and reuse.
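Because JATS is plain XML, published metadata can be read with ordinary tooling. Below is a minimal sketch using only the Python standard library on a hypothetical, heavily simplified JATS fragment; real EFSA Journal files are far richer and element paths may differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified JATS front matter for illustration only.
jats = """<article>
  <front>
    <article-meta>
      <title-group>
        <article-title>Conducting fit-for-purpose food safety risk assessments</article-title>
      </title-group>
      <article-id pub-id-type="doi">10.2903/j.efsa.2019.e170707</article-id>
    </article-meta>
  </front>
</article>"""

root = ET.fromstring(jats)
title = root.findtext(".//article-title")
doi = root.findtext(".//article-id[@pub-id-type='doi']")
print(f"{title} (doi:{doi})")
```

Structured formats of this kind are what make a ‘data‐by‐default’ approach machine‐actionable: third parties can extract, index and reuse content without scraping PDFs.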

3.8. Ensuring effective communication throughout the process

Efforts to address current and future communication challenges on science and society must first recognise the irrefutable evidence that the landscape of communication has changed and will continue to change, with implications at the individual, group, societal and global levels (Bucchi, 2019; Smith et al., 2019; Zollo, 2019). The internet and the rise of social media have opened a veritable Pandora's box, providing unparalleled access to thousands of sources of information about any given topic. To meet this challenge, effective communication must reconcile itself to the reality that former, outdated models of communication, which sought to convince audiences of the primacy of a single scientific authority, are competing with a plethora of other sources that may be as convincing and more trusted, due to aspects beyond the control of any one source.

Any honest effort to communicate risk today must recognise that ‘controlling’ the field of messages is a bygone concept, if it ever truly described how communication took place. It must also acknowledge that audiences are active seekers – and producers – of information, most often from contemporary disintermediated and user‐generated channels. Recognition and reconciliation of these realities can lead to rewards if communicators accept this knowledge and adopt more interactive communication approaches that respond to audience needs and incorporate stakeholder values into decision‐making.

The communication landscape of today and the future offers many options to engage with audiences in meaningful and creative ways. From telepresence robots that are first responders to environmental disasters, to social media sites that influence healthy behaviours, to mobile devices that monitor pain management, and virtual reality headsets that allow people to experience new worlds, the list of these technologies and the potential they afford is profound.

Actions to communicate more effectively should consider the way citizens form opinions and consume public information (Smith et al., 2019). Moreover, such actions should be implemented with a view to increasing accountability and trustworthiness. In the area of food safety, the last pan‐European research on public awareness, perception and expectation was conducted in 2010. Consequently, there is a crucial gap to bridge, particularly considering the sociopolitical changes of the past decade, which have influenced consumer preferences and behaviour (Patel, 2019). EFSA has committed to embedding research on public awareness, perception and expectation at the EU level, as part of its social science roadmap. The findings, in turn, will inform the risk analysis process across the EU.

Scientific language, while powerful at conveying information, is often unhelpful for communicating with the wider public in a world of competing information and limited attention spans. The ‘elitist scientific vocabulary’ (as it is often described) can increase confusion rather than meeting the target audience's needs. Consequently, to improve science messaging, easy‐to‐understand, timely and meaningful information is required, delivered in visually attractive formats – all features of communication should be adapted to ‘speak’ to the fast and intuitive decision‐making model. To reduce the amount of competing and fragmented information, risk assessors should partner with peer institutions, risk managers and trustworthy actors from civil society to deliver a clear and coherent message that addresses the ‘bigger picture’.

Yet, communication is not enough by itself. It must be coupled with inclusion of the public in the risk analysis process. From the outset – problem formulation – those guiding the risk assessment should seek to understand societal values, while maintaining the independence of the assessment itself. Only such an approach can achieve the objective of trustworthy scientific advice.

3.9. Engaging with society as appropriate

Engagement has been proposed as a mode of science communication that could lead to better trusted decision‐making by addressing both facts and values in an open two‐way dialogue (e.g. Löfstedt, 2004). Moving towards deliberatively engaging society about societal issues could contribute to improving the quality, legitimacy and sustainability of decision‐making processes. So far, risk communication has typically focused on communicating facts instead of values as a one‐way process to deliver information, attempting to convince audiences of the legitimacy and authority of an expert‐informed assessment. However, it has been argued that the public should not be seen as passive recipients of scientific advice, but rather as active participants (Patel, 2019; Smith et al., 2019). Listening to the problems faced by the public can help scientists and risk assessors to frame an issue in a way that is relevant to the people it affects and provide useful evidence and scientific advice.

Public engagement in the process of risk assessment and communication can be foreseen at different stages but is considered most relevant during problem formulation where there is a need to identify and frame the right scientific assessment questions. However, it remains to be explored when it is appropriate to engage with the public and how to open this process to a broader range of people all the way through the assessment process (Robinson et al., 2016).

Engagement on all the assessments handled by EFSA (around 500 a year) may be counterproductive, given the limited capacity of many interested parties to contribute and the resource limitations on EFSA's side in managing such an additional workload. Some prioritisation therefore seems essential, and the prioritisation itself would ideally involve public engagement regarding the choice of assessments and other activities (e.g. setting standards, methods or procedures) on which engagement would be possible. This process could be aided by the development of some simple criteria to guide the discussion.

Since EFSA's second Scientific Conference, engagement throughout the science process has increased, including first examples of consulting on the terms of reference of new mandates, consultation on the methodology to be applied in specific risk assessments, and the more traditional consultation on draft reports before final adoption. To these engagement activities one can also add the regular process of open calls for data, which help capture the knowledge base on which subsequent assessments are built.

A closer look at the assessment questions could seek to understand: (1) the nature of the topic (potential concern for public, animal or plant health; urgent request or a new or unknown risk); (2) knowledge and perceptions (high visibility based on media exposure, known societal concerns or diverging views); and (3) particular interest or concern for institutional partners and stakeholders (EU/national authorities, consumer organisations, NGOs, market impact, etc.). Such assessment, which takes into consideration society's concerns and the interests of different targeted audiences, could provide an initial estimate of the socioeconomic dimension of a specific question and point to dedicated engagement and risk communication opportunities.

Various public engagement mechanisms have been developed and tested to allow effective public and stakeholder engagement, and significant analytical effort is being devoted to understanding and addressing their effectiveness against target audiences and objectives (Rowe and Frewer, 2005; Walls et al., 2011; see also Action Catalogue7). Some of these mechanisms may be applied to the field of regulatory science (Smith et al., 2019). However, in an area where there are no one‐size‐fits‐all solutions, appropriate engagement strategies in risk assessment and communication must consider: (1) context awareness, i.e. understanding the position of different stakeholders; (2) a balanced representation of all the constituencies having a stake in the subject of the assessment, while looking critically at whether these can be considered representatives of the public at large; (3) clarity and transparency on the process and the points at which contributions can inform the risk analysis while maintaining scientific independence; and (4) how to systemically ensure this approach without reducing it to a set of outreach activities.

Enhancing collaboration with and openness to interested parties and external knowledge communities could positively contribute to addressing more complex food safety‐related questions. It could also enhance scientific scrutiny, reproducibility, and overall could produce better quality and more trusted outputs (Robinson et al., 2016). To effectively achieve this goal, information and communications technology options need to be developed to facilitate the secure and efficient exchange of data and ideas for use in crowdsourcing (Noel‐Storr, 2019). It also remains to be seen how such input can practically be integrated into scientific assessment processes without undermining either the quality or robustness of the science.

Greater involvement and participation could also introduce potential risks, such as the disproportionate influence of a limited number of actors or the loss of control by risk assessment bodies over the content of an output (Robinson et al., 2016). It is therefore important to demonstrate the relative freedom from bias in the scientific assessment.

3.10. Weighing risks and benefits on request

Because the main objective of risk‐based legislation is to ensure a high level of protection, it focuses on the assessment of risks only and does not explicitly consider whether a proposed activity meets wider socioeconomic and ecological aspirations and other policy goals. However, debates on the risks of regulated stressors connected to food and feed production have often intersected with a wider debate about how these stressors can contribute to sustainable development goals (e.g. Devos et al., 2014; Url, 2018). As discussions become framed more broadly, a more holistic approach to assessing and addressing risks has been suggested, where risks are weighed against benefits (Tijhuis et al., 2012; Boobis et al., 2013; Vidry et al., 2013; Boué et al., 2015). It has also been suggested that decisions should be made about the acceptability of new technologies in the context of other risks as well as the costs of not using these technologies (Tait and Barker, 2011). To ensure that acceptable risk is properly contextualised, risk managers need to be properly informed of the potential benefits and the consequences of not adopting the proposed activity.

Apart from the difficulty of integrating a risk–opportunity or sustainability assessment into the risk assessment of a regulated stressor, such an assessment should not be restricted to the market approval of a single regulated stressor; it ought to be conducted for classes of regulated stressors, not for particular ones.

To date, the remit of risk assessors is limited to certain regulated stressors (and potential combinations of those plus other stressors), but does not consider the consequences of limiting these tools and the impact that other products or tools, which would be used instead, might have. Unless EFSA takes up such a role through self‐tasking, or is mandated to do so, it is theoretically left to the risk managers at the European Commission and in the EU Member States. However, experience in the EU shows that a holistic risk assessment (including risk–benefit considerations) is not properly established. The pivotal role of thorough, in‐depth risk assessments goes without saying. Yet, when it comes to decision‐making, this may not be enough, and the consequences of, and the risks associated with, alternatives need to be evaluated adequately too.

4. Conclusions

EFSA's third Scientific Conference explored what fit‐for‐purpose food safety risk assessment means and which elements contribute to it. An important and recurrent message conveyed at the conference is the need to advance forms of risk assessment that are more contextual and socially sound and accountable, while remaining scientifically robust. Ways that scientific advice could be constructed to most usefully meet society's needs and ensure better engagement were discussed.

Contributors to the conference argued that a fit‐for‐purpose risk assessment should begin with a clear articulation of policy that provides a context and boundaries for the risk assessment and helps guide risk managers to make decisions that align with societal goals. In the absence of clear policy, risk assessors are left to make a series of small policy decisions with varying degrees of success. While policy shapes the objectives of the risk assessment, science informs the process. This ensures that the outcomes of the risk assessment are robust, repeatable and defensible. A key focus for those involved in regulation is often to improve the science used for risk assessment rather than ensuring that the risk assessment is consistent with the objectives of the guiding policy. Modern analytical methods can produce prodigious amounts of data that can lead risk assessors down unproductive avenues with the mistaken impression that more data will inevitably yield a more precise outcome. In this context, a risk assessment to determine whether an activity is acceptable becomes a loop of data generation and scientific inquiry where the original purpose becomes lost and the risk manager is left to reach a decision without guidance. A key factor for risk assessors and risk managers is to recognise that the decision to regulate an activity is a socioeconomic one and consequently the policy guiding the risk assessment is an articulation of those socioeconomic values.

Risk assessments are subject to numerous value judgements. While it is tempting to try to prevent ethical and social value considerations from influencing these judgements, it is typically unrealistic to think that these influences can be eliminated or ignored; attempting to do so often prevents the necessary reflection about values and can result in conflict and distrust. Contributors to the conference recognised the need to manage value‐laden judgements in risk assessment through efforts that promote reflection, openness and engagement. Likewise, it is essential to acknowledge and communicate risks and unavoidable scientific uncertainty in a transparent and trustworthy way. Early empirical research suggests that if uncertainty can be communicated with confidence, preferably as a range, then trust in the source is not diminished. It was noted that intelligent transparency requires information to be accessible, comprehensible, useable and assessable.

As risk assessment is performed in a world that is complex and more connected than ever before, it must be informed by the societal context in which it operates, using multidisciplinary social‐science research, while providing platforms for the representative participation of those who are the ultimate beneficiaries of its conclusions. Communication requires professional input, listening to audiences, addressing misunderstandings, and testing all outputs for comprehension and usefulness. Only by doing so can risk assessment meet the demands of the public to generate and communicate knowledge that safeguards society.

Abbreviations

BPA

bisphenol A

CLARITY‐BPA

Consortium Linking Academic and Regulatory Insights on BPA Toxicity

EEF

Education Endowment Foundation

IPCC

Intergovernmental Panel on Climate Change

ONS

Office for National Statistics

Suggested citation: Devos Y, Elliott KC, Macdonald P, McComas K, Parrino L, Vrbos D, Robinson T, Spiegelhalter D and Gallani B, 2019. Conducting fit‐for‐purpose food safety risk assessments. EFSA Journal 2019;17(S1):e170707, 16 pp. 10.2903/j.efsa.2019.e170707

Acknowledgements: The European Food Safety Authority (EFSA) and authors wish to thank the participants of the plenary sessions ‘Where science meets society: Putting risk assessment in context’ and ‘Staying relevant in a changing world’ at EFSA's third Scientific Conference ‘Science, Food and Society’ (Parma, Italy, 18–21 September 2018) for their active and valuable contribution to the discussion. We also thank Silvio O Funtowicz, Simone Gabbi and Fern Wickson for their contributions to this publication, and Hans Verhagen for carefully proofreading it.

Disclaimer: The views or positions expressed in this article do not necessarily represent in legal terms the official position of the European Food Safety Authority (EFSA). EFSA assumes no responsibility or liability for any errors or inaccuracies that may appear. This article does not disclose any confidential information or data. Mention of proprietary products is solely for the purpose of providing specific information and does not constitute an endorsement or a recommendation by EFSA for their use.

Approved: 7 May 2019


References

  1. BBC (British Broadcasting Corporation), 2018. UK unemployment falls to 1.44 million. 24 January 2018 [Internet]. Available online: https://www.bbc.co.uk/news/business-42802526
  2. Begley CG, 2013. Six red flags for suspect work. Nature, 497, 433–434. [DOI] [PubMed] [Google Scholar]
  3. BoE (Bank of England), 2017. Inflation report fan charts, November 2017. Available online: https://www.bankofengland.co.uk/-/media/boe/files/inflation-report/2017/fan-charts-nov-2017
  4. Boobis A, Chiodine A, Hoekstra J, Lagiou P, Przyrembel H, Schlatter J, Schütte K, Verhagen H and Watzl B, 2013. Critical appraisal of the assessment of benefits and risks for foods, ‘BRAFO Consensus Working Group’. Food and Chemical Toxicology, 55, 659–675. [DOI] [PubMed] [Google Scholar]
  5. Boué G, Guillou S, Antignac J‐P, Le Bizec B and Membré J‐M, 2015. Public health risk‐benefit assessment associated with food consumption – a review. European Journal of Nutrition & Food Safety, 5, 32–58. [Google Scholar]
  6. Brown M, 2013. Values in science beyond underdetermination and inductive risk. Philosophy of Science, 80, 829–839. [Google Scholar]
  7. Bucchi M, 2019. Facing the challenges of science communication 2.0: quality, credibility and expertise. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Cash DW, Clark WC, Alcock F, Dickson NM, Eckley N, Guston DH, Jäger J and Mitchell RB, 2003. Knowledge systems for sustainable development. Proceedings of the National Academy of Sciences of the United States of America, 100, 8086–8091. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Cavalli E, Gilsenan M, Van Doren J, Grahek‐Ogden D, Richardson J, Abbinante F, Cascio C, Devalier P, Brun N, Linkov I, Marchal K, Meek B, Pagliari C, Pasquetto I, Pirolli P, Sloman S, Tossounidis T, Waigmann E, Schünemann H and Verhagen H, 2019. Managing evidence in food safety and nutrition. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Codex Alimentarius , 2007. Working principles for risk analysis for food safety or application by governments. CAC/GL 622007. Available online: http://www.fao.org/3/a-a1550t.pdf
  11. College of Policing , online. Crime Reduction Toolkit. Available online: https://whatworks.college.police.uk/toolkit/Pages/Toolkit.aspx
  12. Collins H, Weinel M and Evans R, 2010. The politics and policy of the third Wave: new technologies and society. Critical Policy Studies, 4, 185–201. [Google Scholar]
  13. Davies S, McNaghten P and Kearnes M, 2009. Reconfiguring Responsibility: Lessons for Public Policy (Part 1 of the Report on Deepening Debate on Nanotechnology). University of Durham, Durham, UK. [Google Scholar]
  14. Devos Y, Sanvido O, Tait J and Raybould A, 2014. Towards a more open debate about values in decision‐making on agricultural biotechnology. Transgenic Research, 23, 933–943. [DOI] [PubMed] [Google Scholar]
  15. Devos Y, Romeis J, Luttik R, Maggiore A, Perry JN, Schoonjans R, Streissl F, Tarazona JV and Brock TCM, 2015. Optimising environmental risk assessments – accounting for biodiversity and ecosystem services helps to translate broad policy protection goals into specific operational ones for environmental risk assessments. EMBO Reports, 16, 1060–1063. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Devos Y, Craig W, Devlin RH, Ippolito A, Leggatt RA, Romeis J, Shaw R, Svendsen C and Topping CJ, 2019. Using problem formulation for fit‐for‐purpose pre‐market environmental risk assessments of regulated stressors. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Douglas H, 2009. Science, policy, and the value‐free ideal. University of Pittsburgh Press, Pittsburgh. [Google Scholar]
  18. Douglas H, 2016. Values in science In: Humphreys P. (ed.). The Oxford Handbook of the Philosophy of Science. Oxford University Press, New York, NY: pp 609–630. [Google Scholar]
  19. EEF (Education Endowment Foundation), online. Teaching and Learning Toolkit. Available online: https://educationendowmentfoundation.org.uk/evidence-summaries/teaching-learning-toolkit#closeSignup
  20. EFSA (European Food Safety Authority), 2015. Scientific report on principles and process for dealing with data and evidence in scientific assessments. EFSA Journal 2015;13(5):4121, 35 pp. 10.2903/j.efsa.2015.4121
  21. EFSA (European Food Safety Authority), 2016. Guidance to develop specific protection goals options for environmental risk assessment at EFSA, in relation to biodiversity and ecosystem services. EFSA Journal 2016;14(6):4499, 50 pp. 10.2903/j.efsa.2016.4499
  22. EFSA (European Food Safety Authority), 2018. Guidance on uncertainty analysis in scientific assessments. EFSA Journal 2018;16(1):5123, 39 pp. 10.2903/j.efsa.2018.5123
  23. EFSA (European Food Safety Authority), 2019. Guidance on communication of uncertainty in scientific assessments. EFSA Journal 2019;17(1):5520, 73 pp. 10.2903/j.efsa.2019.5520
  24. Eigenbrode SD, O'Rourke M, Wulfhorst JD, Althoff DM, Goldberg CS, Merrill K, Morse W, Nielsen‐Pincus M, Stephens J, Winowiecki L and Bosque‐Pérez NA, 2007. Employing philosophical dialogue in collaborative science. BioScience, 57, 55–64.
  25. Elliott KC, 2016. Standardized study designs, value judgments, and financial conflicts of interest in research. Perspectives on Science, 24, 529–551.
  26. Elliott KC, 2017. A tapestry of values: an introduction to values in science. Oxford University Press, New York.
  27. Elliott KC, 2019. Managing value‐laden judgements in regulatory science and risk assessment. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society.
  28. Elliott KC and Resnik DB, 2014. Science policy and the transparency of values. Environmental Health Perspectives, 122, 647–650.
  29. Elliott KC and Richards T, 2017. Exploring inductive risk: case studies of values in science. Oxford University Press, New York.
  30. Evans J, Wood G and Miller A, 2006. The risk assessment–policy gap: an example from the UK contaminated land regime. Environment International, 32, 1066–1071.
  31. Faber JH, Marshall S, Van den Brink PJ and Maltby L, 2019. Priorities and opportunities in the application of the ecosystem services concept in risk assessment for chemicals in the environment. Science of the Total Environment, 651, 1067–1077.
  32. Funtowicz SO, 2006. Why knowledge assessment? In: Pereira AG, Vaz SG and Tognetti S (eds.). Interfaces between Science and Society. Greenleaf Publishing, Sheffield, pp. 138–145.
  33. Funtowicz SO and Ravetz JR, 1993. Science for the post‐normal age. Futures, 25, 739–755.
  34. Funtowicz SO and Strand R, 2007. Models of science and policy. In: Traavik T and Lim LC (eds.). Biosafety First: Holistic Approaches to Risk and Uncertainty in Genetic Engineering and Genetically Modified Organisms. Tapir, Trondheim, Norway, pp. 263–278.
  35. Garcia‐Alonso M and Raybould A, 2014. Protection goals in environmental risk assessment: a practical approach. Transgenic Research, 23, 945–956.
  36. Hartley S and Kokotovich A, 2018. Disentangling risk assessment: new roles for experts and publics. In: Nerlich B, Hartley S, Raman S and Smith A (eds.). Science and the Politics of Openness: Here Be Monsters. Manchester University Press, Manchester, pp. 176–194.
  37. Hokanson KE, Ellstrand N and Raybould A, 2018. The integration of science and policy in regulatory decision‐making: observations on scientific expert panels deliberating GM crops in centers of diversity. Frontiers in Plant Science, 9, 1157.
  38. Irwin A, 2006. The politics of talk: coming to terms with the ‘new’ scientific governance. Social Studies of Science, 36, 299–320.
  39. Jaffe G, 2006. Regulatory slowdown on GM crop decisions. Nature Biotechnology, 24, 748–749.
  40. Jasanoff S, 2003. Technologies of humility: citizen participation in governing science. Minerva, 41, 223–244.
  41. Löfstedt RE, 2004. Risk communication and management in the 21st Century. International Public Management Journal, 7, 335–346.
  42. Mastrandrea MD, Field CB, Stocker TF, Edenhofer O, Ebi KL, Frame DJ, Held H, Kriegler E, Mach KJ, Matschoss PR, Plattner G‐K, Yohe GW and Zwiers FW, 2010. Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. Intergovernmental Panel on Climate Change (IPCC). Available online: https://wg1.ipcc.ch/AR6/documents/AR5_Uncertainty_Guidance_Note.pdf
  43. Moermond CTA, Beasley A, Breton R, Junghans M, Laskowski R, Solomon K and Zahner H, 2017. Assessing the reliability of ecotoxicological studies: an overview of current needs and approaches. Integrated Environmental Assessment and Management, 13, 640–651.
  44. Noel‐Storr AH, 2019. Working with a new kind of team: harnessing the wisdom of the crowd in trial identification. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society.
  45. Nowotny H, 2003. Democratising expertise and socially robust knowledge. Science and Public Policy, 30, 151–156.
  46. ONS (Office for National Statistics), 2018. UK labour market: January 2018 [web page]. Estimates of employment, unemployment, economic inactivity and other employment related statistics for the UK. Available online: https://www.ons.gov.uk/releases/uklabourmarketstatisticsjan2018
  47. ONS (Office for National Statistics), 2019. Migration statistics quarterly report: February 2019 [web page]. Available online: https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/internationalmigration/bulletins/migrationstatisticsquarterlyreport/february2019
  48. Patel, 2019. Understanding people. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society.
  49. Pielke R, 2007. The honest broker: making sense of science in policy and politics. Cambridge University Press, Cambridge.
  50. Raybould A, 2006. Problem formulation and hypothesis testing for environmental risk assessments of genetically modified crops. Environmental Biosafety Research, 5, 119–125.
  51. Raybould A, 2007. Ecological versus ecotoxicological methods for assessing the environmental risks of transgenic crops. Plant Science, 173, 589–602.
  52. Raybould A, 2010. The bucket and the searchlight: formulating and testing risk hypotheses about the weediness and invasiveness potential of transgenic crops. Environmental Biosafety Research, 9, 123–133.
  53. Raybould A and Macdonald P, 2018. Policy‐led comparative environmental risk assessment of genetically modified crops: testing for increased risk rather than profiling phenotypes leads to predictable and transparent decision‐making. Frontiers in Bioengineering and Biotechnology, 6, 43. 10.3389/fbioe.2018.00043
  54. Robinson T, Germini A, Deluyker H, Hardy A and Liem D, 2016. Conference conclusions: shaping the future of food safety, together. EFSA Journal 2016;14(S1):s0510, 9 pp.
  55. Rowe G and Frewer LJ, 2005. A typology of public engagement mechanisms. Science, Technology & Human Values, 30, 251–290.
  56. Royal Society, 2012. Science as an open enterprise. The Royal Society Science Policy Centre report 02/12. The Royal Society, London. Available online: https://royalsociety.org/topics-policy/projects/science-public-enterprise/report/
  57. Sanvido O, Romeis J, Gathmann A, Gielkens M, Raybould A and Bigler F, 2012. Evaluating environmental risks of genetically modified crops – ecological harm criteria for regulatory decision‐making. Environmental Science & Policy, 15, 82–91.
  58. Sarewitz D, 2004. How science makes environmental controversies worse. Environmental Science & Policy, 7, 385–403.
  59. Sarewitz D, 2007. Liberating science from politics. American Scientist, 94, 104–106.
  60. Schug TT, Heindel JJ, Camacho L, Delclos KB, Howard P, Johnson AF, Aungst J, Keefe D, Newbold R, Walker NJ and Zoeller RT, 2013. A new approach to synergize academic and guideline‐compliant research: the CLARITY‐BPA research program. Reproductive Toxicology, 40, 35–40.
  61. Schuurbiers D and Fisher E, 2009. Lab‐scale intervention. EMBO Reports, 10, 424–427.
  62. van der Sluijs JP, 2005. Uncertainty as a monster in the science policy interface: four coping strategies. Water Science and Technology, 52, 87–92.
  63. van der Sluijs J, van Eijndhoven J, Shackley S and Wynne B, 1998. Anchoring devices in science for policy: the case of consensus around climate sensitivity. Social Studies of Science, 28, 291–323.
  64. Smith A, Parrino L, Vrbos D, Nicolini G, Bucchi M, Carr M, Chen J, Dendler L, Krishnaswamy K, Lecchini D, Löfstedt R, Patel M, Reisch L, Verloo D, Vos E, Zollo F and Gallani B, 2019. Communicating to and engaging with the public in regulatory science. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society.
  65. Suryanarayanan S, Kleinman DL, Gratto C, Toth A, Guedot C, Groves R, Piechowski J, Moore B, Hagedorn D, Kauth D, Swan H and Celley M, 2018. Collaboration matters: honey bee health as a transdisciplinary model for understanding real‐world complexity. BioScience, 68, 990–995.
  66. Tait J and Barker G, 2011. Global food security and the governance of modern biotechnologies. EMBO Reports, 12, 763–768.
  67. Tijhuis MJ, de Jong N, Pohjola MV, Gunnlaugsdottir H, Hendriksen M, Hoekstra J, Holm F, Kalogeras N, Leino O, van Leeuwen R, Luteijn JM, Magnusson SH, Odekerken G, Rompelberg C, Tuomisto JT, Ueland O, White BC and Verhagen H, 2012. State of the art in benefit‐risk analysis: food and nutrition. Food and Chemical Toxicology, 50, 5–25.
  68. Url B, 2018. Don't attack science agencies for political gain. Nature, 553, 381.
  69. Vidry S, Hoekstra J, Hart A, Watzl B, Verhagen H, Schütte K, Boobis A and Chiodini A, 2013. Benefit‐Risk Analysis for Foods (BRAFO) – executive project summary. European Journal of Nutrition & Food Safety, 3, 146–153.
  70. Walls J, Rowe G and Frewer L, 2011. Stakeholder engagement in food risk management: evaluation of an iterated workshop approach. Public Understanding of Science, 20, 241–260.
  71. Waltner‐Toews D, 2019. Responding to globalised foodborne disease: risk assessment as post‐normal science. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society.
  72. Wickson F and Forsberg E‐M, 2015. Standardising responsibility: the significance of interstitial spaces. Science and Engineering Ethics, 21, 1159–1180.
  73. Zollo F, 2019. Dealing with digital misinformation: a polarised context of narratives and tribes. EFSA Journal, Special Issue July 2019, Third EFSA Conference on Science, Food and Society.
