Forensic Science International: Synergy. 2024 Apr 29;8:100470. doi: 10.1016/j.fsisyn.2024.100470

Understanding ‘error’ in the forensic sciences: A primer

Kristy A Martire a, Jason M Chin b, Carolyn Davis c, Gary Edmond d, Bethany Growns e, Stacey Gorski f, Richard I Kemp a, Zara Lee g, Christopher M Verdon h, Gabrielle Jansen i, Tanya Lang c, Tess MS Neal j, Rachel A Searston k, Joshua Slocum g, Stephanie Summersby l, Jason M Tangen m, Matthew B Thompson n,o, Alice Towler m, Darren Watson p, Melissa V Werrett q, Mariam Younan a, Kaye N Ballantyne l
PMCID: PMC11240290  PMID: 39005839

Abstract

This paper distils seven key lessons about ‘error’ from a collaborative webinar series between practitioners at Victoria Police Forensic Services Department and academics. It aims to provide the common understanding of error necessary to foster interdisciplinary dialogue, collaboration and research. The lessons underscore the inevitability, complexity and subjectivity of error, as well as opportunities for learning and growth. Ultimately, we argue that error can be a potent tool for continuous improvement and accountability, enhancing the reliability of forensic sciences and public trust. It is hoped the shared understanding provided by this paper will support future initiatives and funding for collaborative developments in this vital domain.


In 2010, Mnookin and colleagues called for a research culture in the forensic sciences [1]. More than a decade later, this call retains its relevance and urgency [2]. Although strides have been taken in promoting and enhancing this research culture, innovative and pragmatic strategies for leveraging and improving research are still required to meet the formidable challenges facing the forensic sciences and their use in criminal justice processes [[3], [4], [5], [6]]. The calculation and communication of error rates is one such challenge [7].

Error rates are a central feature of ongoing research and debate in the forensic sciences [8]. The emphasis on error is partially due to the evidentiary standards in the United States, where both the Daubert Standards [9] and US Federal Rules of Evidence (Rule 702) [10] require that expert evidence is derived from reliable principles and methods. It is also partly due to authoritative scientific interventions insisting that engagement with error is an important part of the forensic science mission [11,12].

In recent years, studies to compute and report error rates for various forensic science disciplines and techniques have become more common [4]. A brief examination of the current literature illustrates the nuanced and intricate nature of these efforts. However, the volume and complexity of new knowledge pose a significant challenge for forensic scientists grappling with heavy caseloads and backlogs [13,14]. How can they stay informed, engage meaningfully, and maintain a critical perspective on a topic that is not only vital but also rapidly evolving?

In response to this emergent challenge, we initiated a collaborative webinar series between academics and practitioners. Jointly organized by the Office of the Chief Forensic Scientist, Victoria Police Forensic Services Department and the Evidence-based Forensics Initiative [15], the series was attended by a diverse mix of academics and forensic scientists, who read and co-presented a selection of contemporary papers related to error rates in forensic sciences. The papers are listed alphabetically in Table 1. These papers were not intended to be an exhaustive list of all relevant papers, nor were all papers directly or solely on the topic of error rates. Instead, participants nominated relevant, interesting and accessible papers that were ultimately chosen to elicit discussions and reflections on themes surrounding error in the forensic sciences and to prompt diverse views from participants.

Table 1.

Papers reviewed by the EBFI OCFS collaborative webinar series (alphabetical order).

1 Carr, S., Piasecki, E., & Gallop, A. (2020). Demonstrating reliability through transparency: A scientific validity framework to assist scientists and lawyers in criminal proceedings. Forensic Sci Int, 308, 110110. https://doi.org/10.1016/j.forsciint.2019.110110
2 Dror, I. E., & Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification, 56(4), 600–616.
3 Hicklin, R. A., Winer, K. R., Kish, P. E., Parks, C. L., Chapman, W., Dunagan, K., Richetelli, N., Epstein, E. G., Ausdemore, M. A., & Busey, T. A. (2021). Accuracy and reproducibility of conclusions by forensic bloodstain pattern analysts. Forensic Science International, 325, 110856. https://doi.org/10.1016/j.forsciint.2021.110856
4 Kloosterman, A., Sjerps, M., & Quak, A. (2014). Error rates in forensic DNA analysis: Definition, numbers, impact and communication. Forensic Science International: Genetics, 12, 77–85. https://doi.org/10.1016/j.fsigen.2014.04.014
5 Mattijssen, E. J. A. T., Witteman, C. L. M., Berger, C. E. H., Brand, N. W., & Stoel, R. D. (2020). Validity and reliability of forensic firearm examiners. Forensic Science International, 307, 110112. https://doi.org/10.1016/j.forsciint.2019.110112
6 Murrie, D. C., Gardner, B. O., Kelley, S., & Dror, I. E. (2019). Perceptions and estimates of error rates in forensic science: A survey of forensic analysts. Forensic Science International, 302, 109887. https://doi.org/10.1016/j.forsciint.2019.109887
7 Nightingale, S. J., & Farid, H. (2020). Assessing the reliability of a clothing-based forensic identification. Proc Natl Acad Sci U S A, 117(10), 5176–5183. https://doi.org/10.1073/pnas.1917222117
8 Smith, C. A., & Thompson, M. B. (2019). Performance claims in forensic science expert opinion evidence. The University of Queensland Law Journal, 38(2), 261–277. https://doi.org/10.3316/ielapa.031676069765392
9 van Straalen, E. K., de Poot, C. J., Malsch, M., & Elffers, H. (2020). The interpretation of forensic conclusions by criminal justice professionals: The same evidence interpreted differently. Forensic Science International, 313, 110331. https://doi.org/10.1016/j.forsciint.2020.110331
10 Wilson-Wilde, L., Romano, H., & Smith, S. (2019). Error rates in proficiency testing in Australia. Australian Journal of Forensic Sciences, 51(sup1), S268–S271. https://doi.org/10.1080/00450618.2019.1569154

Each co-presentation explained the methodology and results of the assigned paper, before providing commentary and critique from both academic and practitioner perspectives. These presentations and the discussions that followed revealed diverging perspectives, provided methodological insights, suggested alternative interpretations, foregrounded implicit and sometimes faulty assumptions, and ultimately assisted us to develop a shared understanding of error in the forensic sciences. This shared understanding developed gradually across the course of the webinar series and was built through the exploration of several recurring themes. These themes can be distilled into seven key lessons about ‘error’.

  1. There are different perspectives on what constitutes an error; therefore, Error is subjective.

  2. There are different ways to compute or estimate the same error; therefore, Error is multidimensional.

  3. All complex systems involve error; therefore, Error is unavoidable.

  4. Some approaches to error management are more effective than others; therefore, Error is cultural.

  5. Performance can be improved by attending to error; therefore, Error is educational.

  6. Successful communication of error is challenging; therefore, Error is misunderstood.

  7. Error management goes beyond the boundaries of any individual discipline; therefore, Error is transdisciplinary.

In this paper we provide a succinct summary of these lessons on error to facilitate future discussions, collaborations and research endeavours between practitioners and academics. We acknowledge that the lessons listed in this primer are not exhaustive and that many - if not all - of these issues will have been raised by scholars and forensic science practitioners at other times and in other contexts. However, we believe there is value in bringing these key ideas together in a readily accessible format as a launching point for all those with a stake in the provision of high-quality forensic science services (e.g., legal professionals, law enforcement agencies, forensic science practitioners, justice researchers, policy makers and the general public). In doing so, we hope that this paper will facilitate the exchange of ideas and make a positive contribution to the research culture within the forensic sciences, fostering a deeper understanding to improve the management and communication of error.

1. Error is subjective

Determining when a mistake constitutes an error can be challenging [16,17] because there is limited agreement about what counts as an error [3,18,19]. If a forensic scientist errs on a specific decision but the error does not change the final opinion provided, or is detected before release (e.g., a near miss), is it a type of error we need to be concerned with [17]? It is therefore crucial to recognize that discussions about error or error rates may involve different perspectives and assumptions about what constitutes an error [[20], [21], [22]].

For example, Murrie et al [23] used three different types of error to illustrate the concept of error rates: wrongful convictions, erroneous conclusions by examiners, and incidents of laboratory contamination and procedural failures. Dror & Charlton [24] offer a different conceptualisation, composed of three broad categories: 1) human error including intentional, negligent and competency error; 2) instrumentation and technology errors; and 3) fundamental methodological errors including those that flow from the human mind and cognition. Finally, Kloosterman et al [17] consider seven types of error - ranging from clerical to contamination – along with their potential and actual impacts.

Forensic scientists, legal practitioners, quality assurance managers, and forensic laboratory managers may also have distinct priorities and definitions of error based on their respective roles and objectives. Forensic scientists may be most interested in how often their conclusions align with ground truth (practitioner-level error e.g., individual proficiency testing results; [25,26]). Quality assurance managers may wish to know how often a technical review fails to detect a procedural mistake (case-level error). Forensic laboratory managers may want metrics on how often their systems produce misleading reports (departmental-level error). Legal practitioners may be interested in how often an incorrect result from a forensic science technique contributes to a wrongful conviction (discipline-level error), or whether there was an error in the specific case. Different types of studies may be required to examine these different outcomes (e.g., white-box versus black-box studies; [12]), and any one of these outcomes might be defined as an error and used as the basis for an error-rate calculation. Yet, it is often unclear which, if any, error rate is appropriate in a particular context, required by the stakeholder, or available in the instant case. For example, concerns have been raised about the appropriateness and utility of proficiency tests as measures of either individual competence or discipline-level error in the forensic sciences [[27], [28], [29], [30]]. Collaborative Testing Services Inc. (CTS) – one of the major proficiency test providers – has also formally stated that it is inappropriate to use their test results as a means to calculate error rates [27].

Discussions of errors and error rates can only be meaningful if there is a shared understanding of the specific error being described and its likely functional consequences.

2. Error is multidimensional

Along with the different types of potential errors identified in Lesson 1, there are also different ways to estimate and quantify them [7,16,20,22,23,28,31]. Not only can different errors be measured in different ways, but the same error can also be measured, regulated and described in different ways [32]. For example, errors made by an individual examiner when associating a sample with a source could be described in terms of false positive and false negative errors (per [23,33]), the proportion of incorrect associations (proportion incorrect), the rate of incorrect associations out of all associations (misclassification rate), or the proportion of correct associations out of all associations (accuracy). Error rates can also be communicated in terms of positive predictive values (PPV), negative predictive values (NPV), sensitivity and specificity [34].
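To make the distinctions between these metrics concrete, the following sketch computes each of them from a single hypothetical confusion matrix, using standard definitions. All counts are invented for illustration and are not drawn from any study cited above.

```python
# Hypothetical confusion matrix for an examiner's source-association
# decisions; all counts are illustrative assumptions.
tp = 90   # correct associations (same-source pairs called "same")
fn = 10   # missed associations (same-source pairs called "different")
tn = 85   # correct exclusions (different-source pairs called "different")
fp = 15   # false associations (different-source pairs called "same")

total = tp + fn + tn + fp

false_positive_rate = fp / (fp + tn)      # errors among different-source pairs
false_negative_rate = fn / (fn + tp)      # errors among same-source pairs
proportion_incorrect = (fp + fn) / total  # all errors over all judgements
accuracy = (tp + tn) / total              # all correct over all judgements
sensitivity = tp / (tp + fn)              # true positive rate
specificity = tn / (tn + fp)              # true negative rate
ppv = tp / (tp + fp)                      # correct, given an association made
npv = tn / (tn + fn)                      # correct, given an exclusion made

print(f"FPR={false_positive_rate:.2f}, FNR={false_negative_rate:.2f}")
print(f"proportion incorrect={proportion_incorrect:.3f}, accuracy={accuracy:.3f}")
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```

Each metric carries a different denominator, which is why the same set of decisions can yield several different "error rates".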

Error is multidimensional, and as a consequence there is no single or accepted way to characterise the likelihood of an error, and no single or accepted way to compute an "error rate" [35,36]. A further challenge arises in those forensic disciplines where claims about abilities and levels of performance, which ideally would be subjected to empirical testing, remain ambiguous, hindering effective quantification [36].

There is also significant debate about how inconclusive opinions should be treated when quantifying errors in the forensic sciences [[37], [38], [39], [40]]. Whether or not an inconclusive judgement is a mistake at all can depend on the context in which the judgement is being provided (e.g., a laboratory study of perceptual capacity versus an examination of characteristics of reported opinions for court), the type of judgment being provided (e.g., an expert opinion or a finding of fact), and the quality and quantity of information available for examination (e.g., degraded versus intact samples) among other things.
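The practical consequence of this debate can be shown with a brief numerical sketch. Using a hypothetical set of responses with known ground truth (all counts below are invented for illustration), the computed "error rate" changes severalfold depending on how inconclusives are treated.

```python
# Hypothetical response counts for examinations with known ground truth;
# all numbers are illustrative assumptions, not data from any study.
correct = 80
incorrect = 5
inconclusive = 15

# Treating inconclusives as neither correct nor incorrect
# (excluded from the denominator):
rate_excluding_inconclusives = incorrect / (correct + incorrect)

# Treating inconclusives as errors
# (included in both numerator and denominator):
rate_counting_inconclusives = (incorrect + inconclusive) / (
    correct + incorrect + inconclusive
)

print(f"Excluding inconclusives: {rate_excluding_inconclusives:.1%}")
print(f"Counting inconclusives as errors: {rate_counting_inconclusives:.1%}")
```

The same underlying performance supports error rates ranging from roughly 6 % to 20 % in this sketch, which is why the treatment of inconclusive responses must always be stated explicitly alongside any reported rate.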

Ultimately, flexibility in how error is measured also creates the flexibility to choose estimates that are most appropriate for the context(s) of use [17]. For example, positive predictive value has been suggested as a useful performance metric for courts because it offers an estimate of how often an analyst provides an accurate opinion when they make an identification decision in the absence of ground truth information [12]. Sensitivity and specificity, on the other hand, might be used to convey estimates of error if the goal is to understand the validity of a forensic methodology in general [25,41,42]. However, unconstrained flexibility in how error is estimated means that choices can be made to tell the best story for less virtuous reasons [43]. For example, “cherry-picking” the false identification rate from one particular study to give the impression of a lower error rate would be a misleading use of such information.
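The context-dependence of these choices can be illustrated numerically: unlike sensitivity and specificity, positive predictive value depends on the base rate of same-source pairs among those examined. The figures below are hypothetical and chosen only to make the dependence visible.

```python
# Fixed hypothetical performance characteristics of a technique.
sensitivity = 0.90   # P(association | same source)
specificity = 0.85   # P(exclusion | different source)

def ppv(prevalence: float) -> float:
    """Probability an association is correct, given the base rate of
    same-source pairs among the comparisons examined (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# When half the compared pairs truly share a source, most associations
# are correct; at a 5 % base rate, the same technique produces mostly
# false associations.
print(f"PPV at 50% base rate: {ppv(0.50):.2f}")  # 0.86
print(f"PPV at 5% base rate:  {ppv(0.05):.2f}")  # 0.24
```

A study-derived sensitivity and specificity thus cannot be translated into a courtroom-relevant PPV without an assumption about the base rate, which is one reason the choice of metric must suit the context of use.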

Resolving issues surrounding the quantitation of error rates is also more challenging for some disciplines than others. Despite this complexity, openness about the potential for error - even in the absence of precisely defined error rates - is better than a lack of acknowledgment. Rather than disregarding or underplaying uncertainties in error estimation, a transparent approach that acknowledges these intricacies should be adopted. This approach calls for a willingness to acknowledge the limits of one's own knowledge and understanding. Merging this humility with a genuine effort to transparently convey the discipline's current state of knowledge about the quantification of error has the potential to mitigate misconceptions and foster wider trust and understanding [44]. It might also stimulate research and procedural reform to reduce the incidence of error. For example, where the potential for error is high, or may have significant consequences, research may be focused on finding more objective means of obtaining the same information. Alternatively, targeted training and feedback, along with a reduction in the strength of opinions expressed, may reduce the incidence and possible consequences of an error.

Error is multidimensional, and the estimation and expression of the “error rate” must be suited to the judgement, circumstances, audiences, and available information to be relevant and meaningful.

3. Error is unavoidable

Error is an unavoidable part of all complex systems, affecting both instrumental and human elements [23,24,45,46]. Even validated systems cannot guarantee 100 % accuracy in every application [20] and are susceptible to failures, deviations from intended outcomes, inaccuracies, and even the unexpected [47]. Therefore, it is crucial for forensic science stakeholders and criminal justice partners to recognize the potential for error, understand its causes and occurrence, and establish safeguards and redundancies to minimize the risk of error leading to miscarriages of justice [48].

It is important to acknowledge that many forensic science organizations have long recognized the presence and inevitability of error, and implemented quality management systems in an attempt to minimize or manage it [3,16,49]. Accredited laboratories are required to have frameworks in place to minimize, detect, and rectify errors, as well as procedures to address results that do not conform to expectations [[17], [49], [50], [51]]. These frameworks help to prevent errors from entering legal systems [52,53]. However, unless a quality system adapts to identify, mitigate, and manage all the risks of error that arise in a complex and evolving system, there will be some undetected and unpreventable errors [17,54].

Furthermore, conceptualizations of error in the forensic sciences may also need to expand. Traditionally, forensic scientists, particularly those in analytical disciplines, have primarily focused on measurement uncertainty and described error in terms of the limits of detection and quantitation. While these parameters are crucial for quantitative results and opinions, it is essential to recognize that error encompasses more than just measurement uncertainty. Comprehensive consideration, measurement, and disclosure of error are necessary for all types of opinions, including qualitative, quantitative, analytical, and cognitive. A robust quality system requires knowledge of the risk and nature of potential errors beyond just uncertainty to function effectively and efficiently.

More generally, science is a discipline which valorizes transparency as a vital self-correcting mechanism [55,56]. One form of scientific integrity (embodied in Mertonian norms such as universalism and organised scepticism) involves helping others to identify the weaknesses in scientific methods and knowledge [57,58]. Scientists should therefore want to discuss their methods, claims [36,59,60], uncertainties, limitations and potential for error [17,61]. An active dialogue around error is a fundamental part of the scientific process, supporting the validation of findings and enhancing overall knowledge.

Forensic scientists must be open to talking about errors. Failure to do so suggests that errors do not exist, or that it is acceptable to hide them. Neither is true.

4. Error is cultural

Cultures best suited to managing the risks of error embrace Lesson 3: error is unavoidable. Fostering a positive error management culture involves promoting open discussions focused on learning and improvement, steering away from blame or punishment [3,17,62,63]. Unfortunately, these types of risk management culture are rare in criminal justice systems [64,65].

The significant consequences of errors within justice systems often result in defensive, punitive actions against individuals and laboratories [24,[66], [67], [68]]. These actions include disciplinary measures, termination, and loss of accreditation [65,66,69]. Furthermore, forensic scientists who speak frankly in court about the potential for error can have their professionalism questioned and may be subject to personal attack. These consequences understandably erode willingness to openly address and acknowledge errors, reducing the likelihood that they will be detected and managed. Laboratories with high-profile errors may also experience reputational damage, media attention, and legal scrutiny long after root causes have been corrected [17,70].

Although some types of errors are intentional and avoidable and therefore should have serious negative consequences - for example, dry-labbing, withholding critical information, intentionally misrepresenting results, or other forms of scientific fraud and misconduct [66] - a reactive and punitive approach to all errors is ineffective [64]. Blame-based approaches create an unhelpful culture of silence, secrecy and fear among forensic scientists that prevents system improvement and increases the risk that errors will continue long after they might otherwise have been resolved [71]. Adoption of a risk management approach, such as that outlined in ISO 31000, can be beneficial in reducing punitive approaches and developing a culture that readily monitors, mitigates, and learns from errors for the purposes of improvement.

Nevertheless, legal systems cannot and should not ignore risks associated with system errors, just because they are inevitable. Existing legal safeguards - while far from perfect - can be used to explore the possibility of error before they contribute to miscarriages of justice and irreversible harms [72,73]. For example, case conferences, pre-trial hearings, cross-examination and the use of opposing experts all have the potential to highlight weaknesses, uncertainties and potential mistakes that could alter the course of a prosecution. In practice, these opportunities are often missed by legal practitioners [74,75].

Shifting towards a positive error culture that emphasizes learning and prevention is crucial for organizations within the criminal justice system to effectively manage and mitigate errors.

5. Error is educational

Discussions about error in the forensic sciences often overlook the significant benefits errors can bring to the learning process. Rather than being strictly avoided, making and addressing errors in controlled training environments can provide valuable insights for forensic scientists and laboratories and contribute to a continuous learning environment.

Research spanning a variety of fields and subjects (e.g., medicine, aviation and nuclear power) demonstrates the positive impact of feedback on performance improvement [[76], [77], [78], [79], [80]]. Although research on the benefits of feedback in the forensic context is limited [81,82], there is little reason to expect different outcomes or to doubt its vital importance [65,83]. Analyzing errors made in cases and during training allows forensic laboratories to understand the conditions in which errors are most likely to occur and make necessary adjustments to methods and processes [16,65,84]. Additionally, the availability of corrective and timely feedback empowers forensic scientists to adapt their practices, enhancing the accuracy of their decisions over time [65].

To ensure continuous learning and growth, training programs that provide safe opportunities for error-based learning and correction should be frequent, challenging, and aligned with the complexities encountered in day-to-day casework [11,84,85]. This approach provides forensic scientists with ample opportunities to refine their skills throughout their careers while enabling laboratories to identify specific areas and contexts where errors may arise within their existing systems. Embracing errors as valuable learning experiences fosters the positive error culture from Lesson 4, ultimately leading to more accurate and reliable forensic evidence.

Forensic scientists need the time and opportunity to make and rectify errors across the course of their careers in low-stakes environments without fear of blame or punishment.

6. Error is misunderstood

Communicating error is challenging for many reasons, one of which is the complex nature of the information being conveyed [86,87]. Forensic scientists draw conclusions about the key evidence based on techniques and concepts from science and statistics which are difficult for non-experts to comprehend [[88], [89], [90]]. Effective translation of the technical information needed to understand the uncertainties and error rates associated with applied forensic science techniques requires an appreciation of the limits of human memory, language, and cognition [87].

Forensic scientists are also expected to communicate the possibility of error to diverse audiences, including other practitioners, legal professionals, jurors and law enforcement personnel [61,91]. Each end-user group has varying levels of familiarity and understanding of forensic science which requires tailored messaging to suit the knowledge and needs of the audience at hand. Striking a balance between scientific accuracy and comprehensibility is difficult in this context but essential to ensure that miscommunication about error is minimised for all involved [61,87]. The challenges associated with communication are further complicated by uncertainty in the responsibility for translating and simplifying evidence for lay audiences. Is simplification the responsibility of forensic scientists or is it a legal question requiring judicial guidance?

Despite the importance of clear communication, there is little widely accepted and empirically derived guidance that forensic scientists can rely on to ensure effective comprehension by their audience [88]. Approaches which convey error in numerical and statistical formats (e.g., probabilities, frequencies, incidence rates, likelihood ratios, etc.) are challenging for lay people to grasp [89,[92], [93], [94]], and verbal expressions of the magnitude or risk of error often produce highly variable interpretations [95,96]. As a result, key questions about how best to present information about error and error rates remain unanswered, leaving forensic scientists to propose their own pragmatic solutions [17].

Addressing the challenge of communicating errors in forensic science demands the continuous refinement of strategies and guidelines to ensure accurate understanding in varying contexts of application.

7. Error is transdisciplinary

The terms multidisciplinary, interdisciplinary, and transdisciplinary describe ways in which disciplines collaborate to devise integrative solutions for a particular topic or issue [97]. Disciplinary research focuses solely on one discipline, while multidisciplinary research involves many disciplines examining a topic from different perspectives, often maintaining their disciplinary boundaries. On the other hand, interdisciplinary research involves collaborative efforts where disciplines work together to address the same issue. Successful interdisciplinary approaches may yield transdisciplinary perspectives, where concepts and theories from diverse disciplines blend into a comprehensive framework that diminishes the relevance of original disciplinary boundaries [[98], [99], [100], [101]].

Transdisciplinary collaboration between forensic scientists and academics is not just useful but essential for effectively addressing the issue of error in the forensic sciences [8,11,60]. The convoluted nature of error already discussed requires transdisciplinary consideration, as it goes beyond the boundaries of any individual discipline. Error involves internal and external systems of quality assurance and management, legal considerations, human judgement and decision-making factors, as well as statistically and conceptually complex methods of estimation and description.

A transdisciplinary approach can generate uniquely innovative and impactful scientific advancements and enduring solutions to applied problems [102,103]. For those of us who recognize the potential value of transdisciplinarity for understanding and reducing error in the forensic sciences, how should we go about it? Patience and open-mindedness are imperative for the success of transdisciplinarity such that “each team member needs to become sufficiently familiar with the concepts and approaches of his and her colleagues as to blur the disciplinary bounds and enable the team to focus on the problem as part of broader phenomena: as this happens, discipline authorization fades in importance, and the problem and its context guide an appropriately broader and deeper analysis” ([101], p. 1344).

Without collaboration between forensic scientists, scientists (especially statisticians and cognitive scientists), lawyers, and others, important aspects of error are likely to be misunderstood or overlooked. For example, academics may design studies that are high in technical and analytical sophistication but are disconnected from the most pressing issues facing forensic scientists, thereby potentially limiting the practical utility of the research. By working together, forensic scientists and academics from multiple disciplines can bridge the gap between theory and practice, ensuring that scholarship is informative, impactful, and applicable to real-world forensic settings.

The OCFS-EBFI webinar series is one example of a successful transdisciplinary collaboration between academics and forensic science practitioners. The webinar series provided a platform for open and honest communication between practitioners and academics to critically explore, refine, integrate and update their perspectives on error in forensic science [104].

Transdisciplinary collaboration is essential for effectively addressing the complex nature of error in forensic science.

8. Conclusion

In presenting these seven key lessons on error in the forensic sciences, we set out to provide a primer for forensic scientists and academics interested in the topic. We sought to lay a foundation for dialogue, collaborative initiatives, and research endeavours among forensic practitioners and academics, and in doing so to contribute positively to the research culture in the forensic sciences.

The insights distilled from critical analysis and discussion of ten papers on this topic underscored the inevitability of error in forensic sciences and emphasized the need for it to be recognized, understood, and constructively tackled, rather than feared and concealed. To this end, there is a pressing need to shift towards a culture that is more open about error, values transparency and disclosure, promotes learning from mistakes, and encourages open discussions about error.

Equally crucial is the requirement for clear and effective communication about the nature of error, how it is defined and measured, and its potential consequences. This is of paramount importance in ensuring clarity among the variety of stakeholders across justice systems. Furthermore, the collaboration between academics and forensic scientists emerged as a key mechanism to facilitate a more comprehensive approach to these issues.

By applying these lessons, error can be leveraged as a potent tool for continuous learning and improvement in the forensic sciences. This will enhance not only the reliability and credibility of the forensic sciences within the justice system but also increase public trust in the forensic sciences and justice systems. The foundation laid in this paper, we believe, can be used to support a wide range of initiatives aimed at exploring and mitigating error, and can serve as a platform for seeking funding to foster collaborative professional development and learning activities in this vital domain.

Funding

The OCFS-EBFI Webinar Series was supported by funding from the Office of the Chief Forensic Scientist, Victoria Police Forensic Services Department.

R.K. partially supported by ARC Discovery project (DP220100585).

K.A.M. supported by ARC Discovery project (DP220102412).

T.M.S.N. was supported by a PLuS Alliance Fellowship from Arizona State University and the University of New South Wales, as well as a Fulbright Scholarship from the Australian-American Fulbright Commission. This manuscript is not an official Department of State publication, and the views and information presented here are the authors' and do not represent the Fulbright Commission or the host country's government or institutions.

R.A.S. was supported by an ARC Industry Fellowship (IE230100380).

A.T. was supported by an ARC DECRA (DE210100357).

J.M.T., R.A.S., G.E. and M.B.T. were supported by an ARC Linkage Project (LP170100086).

Generative AI disclosure

During the preparation of this work the authors used OpenAI ChatGPT-4 to harmonize text. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

CRediT authorship contribution statement

Kristy A. Martire: Writing – review & editing, Writing – original draft, Supervision, Project administration, Investigation, Funding acquisition, Conceptualization. Jason M. Chin: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Carolyn Davis: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Gary Edmond: Writing – review & editing, Investigation, Conceptualization. Bethany Growns: Writing – review & editing, Writing – original draft, Conceptualization. Stacey Gorski: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Richard I. Kemp: Writing – review & editing, Investigation, Conceptualization. Zara Lee: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Christopher M. Verdon: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Gabrielle Jansen: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Tanya Lang: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Tess M.S. Neal: Writing – review & editing, Writing – original draft, Conceptualization. Rachel A. Searston: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Joshua Slocum: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Stephanie Summersby: Writing – review & editing, Writing – original draft, Project administration, Investigation, Conceptualization. Jason M. Tangen: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Matthew B. Thompson: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Alice Towler: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Darren Watson: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Melissa V. Werrett: Writing – review & editing, Writing – original draft, Investigation, Conceptualization. Mariam Younan: Project administration, Conceptualization. Kaye N. Ballantyne: Writing – review & editing, Writing – original draft, Supervision, Project administration, Investigation, Conceptualization.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests:

Kristy A. Martire reports financial support was provided by Victoria Police Forensic Services Department. Richard I. Kemp, Rachel A. Searston, Alice Towler, Jason M. Tangen, Gary Edmond, Matthew B. Thompson and Kristy Martire report financial support was provided by the Australian Research Council. Tess M. S. Neal reports financial support was provided by a PLuS Alliance Fellowship and the Australian-American Fulbright Commission. Carolyn Davis, Stacey Gorski, Zara Lee, Christopher M. Verdon, Gabrielle Jansen, Tanya Lang, Joshua Slocum, Stephanie Summersby, Darren Watson, Melissa V. Werrett and Kaye N. Ballantyne report a relationship with Victoria Police Forensic Services Department that includes employment. Jason M. Chin is an Editor at Forensic Science International: Synergy. All other authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Mnookin J.L., Cole S.A., Dror I.E., Fisher B.A. The need for a research culture in the forensic sciences. UCLA Law Rev. 2010;58:725. https://heinonline.org/HOL/P?h=hein.journals/uclalr58&i=731
2. Cino J.G. Roadblocks: cultural and structural impediments to forensic science reform. Houst. Law Rev. 2019;57:533. https://heinonline.org/HOL/P?h=hein.journals/hulr57&i=553
3. Earwaker H., Nakhaeizadeh S., Smit N.M., Morgan R.M. A cultural change to enable improved decision-making in forensic science: a six phased approach. Sci. Justice. 2020;60(1):9–19. doi: 10.1016/j.scijus.2019.08.006.
4. Koehler J.J., Mnookin J.L., Saks M.J. The scientific reinvention of forensic science. Proc. Natl. Acad. Sci. USA. 2023;120(41). doi: 10.1073/pnas.2301840120.
5. Mnookin J.L. The uncertain future of forensic science. Daedalus. 2018;147(4):99–118. doi: 10.1162/daed_a_00523.
6. Weyermann C., Willis S., Margot P., Roux C. Towards more relevance in forensic science research and development. Forensic Sci. Int. 2023;111592. doi: 10.1016/j.forsciint.2023.111592.
7. Dror I.E. The error in “error rate”: why error rates are so needed, yet so elusive. J. Forensic Sci. 2020;65(4):1034–1039. doi: 10.1111/1556-4029.14435.
8. Airlie M., Robertson J., Krosch M.N., Brooks E. Contemporary issues in forensic science—worldwide survey results. Forensic Sci. Int. 2021;320. doi: 10.1016/j.forsciint.2021.110704.
9. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
10. Federal Rules of Evidence, Pub. L. No. 93–595, § 702.
11. National Research Council. Strengthening Forensic Science in the United States: A Path Forward. National Academies Press; 2009.
12. President's Council of Advisors on Science and Technology. Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods. 2016.
13. Almazrouei M.A., Morgan R.M., Dror I.E. Stress and support in the workplace: the perspective of forensic examiners. Forensic Sci. Int.: Mind and Law. 2021;2. doi: 10.1016/j.fsiml.2021.100059.
14. Houck M.M. Backlogs are a dynamic system, not a warehousing problem. Forensic Sci. Int.: Synergy. 2020;2:317–324. doi: 10.1016/j.fsisyn.2020.10.003.
15. Evidence Based Forensics Initiative. Jan 2023. Retrieved 12/12/2023 from https://www.evidencebasedforensics.com/
16. Budowle B., Bottrell M.C., Bunch S.G., Fram R., Harrison D., Meagher S., Oien C.T., Peterson P.E., Seiger D.P., Smith M.B., Smrz M.A., Soltis G.L., Stacey R.B. A perspective on errors, bias, and interpretation in the forensic sciences and direction for continuing advancement. J. Forensic Sci. 2009;54(4):798–809. doi: 10.1111/j.1556-4029.2009.01081.x.
17. Kloosterman A., Sjerps M., Quak A. Error rates in forensic DNA analysis: definition, numbers, impact and communication. Forensic Sci. Int.: Genetics. 2014;12:77–85. doi: 10.1016/j.fsigen.2014.04.014.
18. Hicklin R.A., Winer K.R., Kish P.E., Parks C.L., Chapman W., Dunagan K., Richetelli N., Epstein E.G., Ausdemore M.A., Busey T.A. Accuracy and reproducibility of conclusions by forensic bloodstain pattern analysts. Forensic Sci. Int. 2021;325. doi: 10.1016/j.forsciint.2021.110856.
19. Koehler J.J. Fingerprint error rates and proficiency tests: what they are and why they matter. Hastings Law J. 2007;59(5):1077–1100. https://heinonline.org/HOL/P?h=hein.journals/hastlj59&i=1117
20. Christensen A.M., Crowder C.M., Ousley S.D., Houck M.M. Error and its meaning in forensic science. J. Forensic Sci. 2014;59(1):123–126. doi: 10.1111/1556-4029.12275.
21. Georgiou N., Morgan R.M., French J.C. The shifting narrative of uncertainty: a case for the coherent and consistent consideration of uncertainty in forensic science. Aust. J. Forensic Sci. 2023;55(6):781–797. doi: 10.1080/00450618.2022.2104370.
22. Smith A.M., Neal T.M.S. The distinction between discriminability and reliability in forensic science. Sci. Justice. 2021;61(4):319–331. doi: 10.1016/j.scijus.2021.04.002.
23. Murrie D.C., Gardner B.O., Kelley S., Dror I.E. Perceptions and estimates of error rates in forensic science: a survey of forensic analysts. Forensic Sci. Int. 2019;302. doi: 10.1016/j.forsciint.2019.109887.
24. Dror I.E., Charlton D. Why experts make errors. J. Forensic Ident. 2006;56(4):600–616.
25. Thompson M.B., Tangen J.M., McCarthy D.J. Expertise in fingerprint identification. J. Forensic Sci. 2013;58(6):1519–1530. doi: 10.1111/1556-4029.12203.
26. Wilson-Wilde L., Romano H., Smith S. Error rates in proficiency testing in Australia. Aust. J. Forensic Sci. 2019;51(sup1):S268–S271. doi: 10.1080/00450618.2019.1569154.
27. Koehler J.J. Proficiency tests to estimate error rates in the forensic sciences. Law Probab. Risk. 2013;12(1):89–98. doi: 10.1093/lpr/mgs013.
28. Koehler J.J. Forensics or fauxrensics? Ascertaining accuracy in the forensic sciences. Ariz. State Law J. 2017;49(4):1369–1416.
29. Koertner A.J., Swofford H.J. Comparison of latent print proficiency tests with latent prints obtained in routine casework using automated and objective quality metrics. J. Forensic Ident. 2018;68(3):379–388.
30. Max B., Cavise J., Gutierrez R.E. Assessing latent print proficiency tests: lofty aims, straightforward samples, and the implications of nonexpert performance. J. Forensic Ident. 2019;69(3):281–298.
31. Jackson G.P. Error terror in forensic science: when spectroscopy meets the courts. Spectroscopy. 2016;31(11):2–4.
32. Zwaan L., Singh H. The challenges in defining and measuring diagnostic error. Diagnosis. 2015;2(2):97–103.
33. Nightingale S.J., Farid H. Assessing the reliability of a clothing-based forensic identification. Proc. Natl. Acad. Sci. USA. 2020;117(10):5176–5183. doi: 10.1073/pnas.1917222117.
34. Trevethan R. Sensitivity, specificity, and predictive values: foundations, pliabilities, and pitfalls in research and practice. Front. Public Health. 2017;5:307. doi: 10.3389/fpubh.2017.00307.
35. Grunau G., Linn S. Commentary: sensitivity, specificity, and predictive values: foundations, pliabilities, and pitfalls in research and practice. Front. Public Health. 2018;6:256. doi: 10.3389/fpubh.2018.00256.
36. Smith C.A., Thompson M.B. Performance claims in forensic science expert opinion evidence. Univ. Queensl. Law J. 2019;38(2):261–277. doi: 10.3316/ielapa.031676069765392.
37. Biedermann A., Kotsoglou K.N. Forensic science and the principle of excluded middle: “Inconclusive” decisions and the structure of error rate studies. Forensic Sci. Int.: Synergy. 2021;3. doi: 10.1016/j.fsisyn.2021.100147.
38. Dorfman A.H., Valliant R. Inconclusives, errors, and error rates in forensic firearms analysis: three statistical perspectives. Forensic Sci. Int.: Synergy. 2022;5. doi: 10.1016/j.fsisyn.2022.100273.
39. Dror I.E., Scurich N. (Mis)use of scientific measurements in forensic science. Forensic Sci. Int.: Synergy. 2020;2:333–338. doi: 10.1016/j.fsisyn.2020.08.006.
40. Morrison G.S. A plague on both your houses: the debate about how to deal with ‘inconclusive’ conclusions when calculating error rates. Law Probab. Risk. 2022;21(2):127–129. doi: 10.1093/lpr/mgac015.
41. Bossuyt P.M., Reitsma J.B., Bruns D.E., Gatsonis C.A., Glasziou P.P., Irwig L., Lijmer J.G., Moher D., Rennie D., de Vet H.C.W., Kressel H.Y., Rifai N., Golub R.M., Altman D.G., Hooft L., Korevaar D.A., Cohen J.F. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. Radiology. 2015;277(3):826–832. doi: 10.1148/radiol.2015151516.
42. Thompson M.B., Tangen J.M. Generalization in fingerprint matching experiments. Sci. Justice. 2014;54(5):391–392. doi: 10.1016/j.scijus.2014.06.008.
43. Andrade C. HARKing, cherry-picking, p-hacking, fishing expeditions, and data dredging and mining as questionable research practices. J. Clin. Psychiatr. 2021;82(1). doi: 10.4088/JCP.20f13804.
44. Elliott K.C. A taxonomy of transparency in science. Can. J. Philos. 2022;52(3):342–355. doi: 10.1017/can.2020.21.
45. Dekker S. Drift into Failure: From Hunting Broken Components to Understanding Complex Systems. CRC Press; 2016.
46. Frese M., Keith N. Action errors, error management, and learning in organizations. Annu. Rev. Psychol. 2015;66(1):661–687. doi: 10.1146/annurev-psych-010814-015205.
47. Reason J. Human error: models and management. BMJ. 2000;320(7237):768. doi: 10.1136/bmj.320.7237.768.
48. Edmond G. The admissibility of forensic science and medicine evidence under the Uniform Evidence Law. Crim. Law J. 2014;38(3):136–158.
49. Heavey A.L., Turbett G.R., Houck M.M., Lewis S.W. Management and disclosure of quality issues in forensic science: a survey of current practice in Australia and New Zealand. Forensic Sci. Int.: Synergy. 2023;7. doi: 10.1016/j.fsisyn.2023.100339.
50. ISO/IEC 17025:2017. General Requirements for the Competence of Testing and Calibration Laboratories. International Organisation for Standardisation; 2017.
51. ISO/IEC 21043:2018. Forensic Sciences. International Organisation for Standardisation; 2018.
52. Doyle S. A review of the current quality standards framework supporting forensic science: risks and opportunities. WIREs Forensic Science. 2020;2(3). doi: 10.1002/wfs2.1365.
53. Du M. Analysis of errors in forensic science. Journal of Forensic Science and Medicine. 2017;3(3):139–143. https://journals.lww.com/jfsm/fulltext/2017/03030/analysis_of_errors_in_forensic_science.6.aspx
54. Heavey A.L., Turbett G.R., Houck M.M., Lewis S.W. Toward a common language for quality issues in forensic science. WIREs Forensic Science. 2022;4(4). doi: 10.1002/wfs2.1452.
55. Aczel B., Szaszi B., Sarafoglou A., Kekecs Z., Kucharský Š., Benjamin D., Chambers C.D., Fisher A., Gelman A., Gernsbacher M.A., Ioannidis J.P., Johnson E., Jonas K., Kousta S., Lilienfeld S.O., Lindsay D.S., Morey C.C., Munafò M., Newell B.R., …, Wagenmakers E.-J. A consensus-based transparency checklist. Nat. Human Behav. 2020;4(1):4–6. doi: 10.1038/s41562-019-0772-6.
56. Vazire S., Holcombe A.O. Where are the self-correcting mechanisms in science? Rev. Gen. Psychol. 2021;26(2):212–223. doi: 10.1177/10892680211033912.
57. Feynman R.P. Cargo cult science. In: Williams J., editor. The Art and Science of Analog Circuit Design. Newnes; 1998. pp. 55–61.
58. Merton R.K. The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press; 1973.
59. Martire K.A., Edmond G. Rethinking expert opinion evidence. Melb. Univ. Law Rev. 2017;40(3):967–998. https://search.informit.org/doi/pdf/10.3316/informit.979608274688542
60. Martire K.A., Kemp R.I. Considerations when designing human performance tests in the forensic sciences. Aust. J. Forensic Sci. 2018;50(2):166–182. doi: 10.1080/00450618.2016.1229815.
61. Carr S., Piasecki E., Gallop A. Demonstrating reliability through transparency: a scientific validity framework to assist scientists and lawyers in criminal proceedings. Forensic Sci. Int. 2020;308. doi: 10.1016/j.forsciint.2019.110110.
62. van Dyck C., Frese M., Baer M., Sonnentag S. Organizational error management culture and its impact on performance: a two-study replication. J. Appl. Psychol. 2005;90(6):1228–1240. doi: 10.1037/0021-9010.90.6.1228.
63. Weinzimmer L.G., Esken C.A. Learning from mistakes: how mistake tolerance positively affects organizational learning and performance. J. Appl. Behav. Sci. 2017;53(3):322–348. doi: 10.1177/0021886316688658.
64. Busey T., Sudkamp L., Taylor M.K., White A. Stressors in forensic organizations: risks and solutions. Forensic Sci. Int.: Synergy. 2022;4. doi: 10.1016/j.fsisyn.2021.100198.
65. Eldridge H., Stimac J., Vanderkolk J. The benefits of errors during training. Forensic Sci. Int.: Synergy. 2022;4. doi: 10.1016/j.fsisyn.2021.100207.
66. Bonventre C.L. Wrongful convictions and forensic science. WIREs Forensic Science. 2021;3(4). doi: 10.1002/wfs2.1406.
67. Maloney E. Two more problems and too little money: can Congress truly reform forensic science? Minn. J. Law Sci. Technol. 2013;14(2):923–949. https://heinonline.org/HOL/P?h=hein.journals/mipr14&i=922
68. Sofronoff W. Commission of Inquiry into Forensic DNA Testing in Queensland. 2022.
69. Alexander K.L. National accreditation board suspends all DNA testing at D.C. crime lab. Wash. Post. 2015. https://www.washingtonpost.com/local/crime/national-accreditation-board-suspends-all-dna-testing-at-district-lab/2015/04/26/2da43d9a-ec24-11e4-a55f-38924fca94f9_story.html
70. Casarez N.B., Thompson S.G. Three transformative ideals to build a better crime lab. Ga. State Univ. Law Rev. 2017;34(4):1007–1072. https://heinonline.org/HOL/P?h=hein.journals/gslr34&i=1049
71. Thompson W.C. Beyond bad apples: analyzing the role of forensic science in wrongful convictions. Sw. U. L. Rev. 2008;37:1027. https://heinonline.org/HOL/P?h=hein.journals/swulr37&i=1037
72. Edmond G. Actual innocents? Legal limitations and their implications for forensic science and medicine. Aust. J. Forensic Sci. 2011;43(2–3):177–212. doi: 10.1080/00450618.2011.555419.
73. Edmond G., Found B., Martire K., Ballantyne K., Hamer D., Searston R., Thompson M., Cunliffe E., Kemp R., San Roque M., Tangen J., Diosa-Villa R., Ligertwood A., Hibbert D., White D., Ribeiro G., Porter G., Towler A., Roberts A. Model forensic science. Aust. J. Forensic Sci. 2016;48(5):496–537.
74. Edmond G. Forensic science and the myth of adversarial testing. Curr. Issues Crim. Justice. 2020;32(2):146–179.
75. Edmond G., San Roque M. The cool crucible: forensic science and the frailty of the criminal trial. Curr. Issues Crim. Justice. 2012;24(1):51–68. doi: 10.1080/10345329.2012.12035944.
76. Baudry L., Leroy D., Thouvarecq R., Chollet D. Auditory concurrent feedback benefits on the circle performed in gymnastics. J. Sports Sci. 2006;24(2):149–156. doi: 10.1080/02640410500130979.
77. Hattie J., Timperley H. The power of feedback. Rev. Educ. Res. 2007;77(1):81–112. doi: 10.3102/003465430298487.
78. Trehan A., Barnett-Vanes A., Carty M.J., McCulloch P., Maruthappu M. The impact of feedback of intraoperative technical performance in surgery: a systematic review. BMJ Open. 2015;5(6). doi: 10.1136/bmjopen-2014-006759.
79. White D., Kemp R.I., Jenkins R., Burton A.M. Feedback training for facial image comparison. Psychonomic Bull. Rev. 2014;21(1):100–106. doi: 10.3758/s13423-013-0475-3.
80. Wisniewski B., Zierer K., Hattie J. The power of feedback revisited: a meta-analysis of educational feedback research. Front. Psychol. 2020;10:3087. doi: 10.3389/fpsyg.2019.03087.
81. Almazrouei M.A., Dror I.E., Morgan R.M. Organizational and human factors affecting forensic decision-making: workplace stress and feedback. J. Forensic Sci. 2020;65(6):1968–1977. doi: 10.1111/1556-4029.14542.
82. Nittis M., Stark M. Evidence based practice: laboratory feedback informs forensic specimen collection in NSW. J. Forensic Leg. Med. 2014;25:38–44. doi: 10.1016/j.jflm.2014.04.008.
83. Houck M.M. Tigers, black swans, and unicorns: the need for feedback and oversight. Forensic Sci. Int.: Synergy. 2019;1:79–82. doi: 10.1016/j.fsisyn.2019.04.002.
84. National Commission on Forensic Science. Views of the Commission: Facilitating Research on Laboratory Performance. 2016. Retrieved from https://www.justice.gov/ncfs/file/888586/dl
85. Davis D., O'Brien M.A.T., Freemantle N., Wolf F.M., Mazmanian P., Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282(9):867–874. doi: 10.1001/jama.282.9.867.
86. Howes L.M. The communication of forensic science in the criminal justice system: a review of theory and proposed directions for research. Sci. Justice. 2015;55(2):145–154. doi: 10.1016/j.scijus.2014.11.002.
87. Howes L.M., Kemp N. Discord in the communication of forensic science: can the science of language help foster shared understanding? J. Lang. Soc. Psychol. 2016;36(1):96–111. doi: 10.1177/0261927X16663589.
88. Eldridge H. Juror comprehension of forensic expert testimony: a literature review and gap analysis. Forensic Sci. Int.: Synergy. 2019;1:24–34. doi: 10.1016/j.fsisyn.2019.03.001.
89. Martire K.A., Edmond G. How well do lay people comprehend statistical statements from forensic scientists? In: Handbook of Forensic Statistics. 2020. pp. 201–224.
90. van Straalen E.K., de Poot C.J., Malsch M., Elffers H. The interpretation of forensic conclusions by criminal justice professionals: the same evidence interpreted differently. Forensic Sci. Int. 2020;313. doi: 10.1016/j.forsciint.2020.110331.
91. Howes L.M., Kirkbride K.P., Kelty S.F., Julian R., Kemp N. The readability of expert reports for non-scientist report-users: reports of forensic comparison of glass. Forensic Sci. Int. 2014;236:54–66. doi: 10.1016/j.forsciint.2013.12.031.
92. Gigerenzer G., Hertwig R., Van Den Broek E., Fasolo B., Katsikopoulos K.V. “A 30% chance of rain tomorrow”: how does the public understand probabilistic weather forecasts? Risk Anal. 2005;25(3):623–629. doi: 10.1111/j.1539-6924.2005.00608.x.
93. Thompson W.C., Newman E.J. Lay understanding of forensic statistics: evaluation of random match probabilities, likelihood ratios, and verbal equivalents. Law Hum. Behav. 2015;39(4):332–349. doi: 10.1037/lhb0000134.
94. Thompson W.C., Schumann E.L. Interpretation of statistical evidence in criminal trials: the prosecutor's fallacy and the defense attorney's fallacy. In: Expert Evidence and Scientific Proof in Criminal Trials. Routledge; 2017. pp. 371–391.
95. Budescu D.V., Por H.-H., Broomell S.B., Smithson M. The interpretation of IPCC probabilistic statements around the world. Nat. Clim. Change. 2014;4(6):508–512. doi: 10.1038/nclimate2194.
96. Martire K.A., Watkins I. Perception problems of the verbal scale: a reanalysis and application of a membership function approach. Sci. Justice. 2015;55(4):264–273. doi: 10.1016/j.scijus.2015.01.002.
97. Neal T.M.S., PytlikZillig L.M., Shockley E., Bornstein B.H. Inspiring and advancing the many-disciplined study of institutional trust. In: Shockley E., Neal T.M.S., PytlikZillig L.M., Bornstein B.H., editors. Interdisciplinary Perspectives on Trust: Towards Theoretical and Methodological Integration. Springer International Publishing; 2016. pp. 1–16.
98. Adams J., Light R. Mapping interdisciplinary fields: efficiencies, gaps and redundancies in HIV/AIDS research. PLoS One. 2014;9(12). doi: 10.1371/journal.pone.0115092.
99. Mitchell P.H. What's in a name? Multidisciplinary, interdisciplinary, and transdisciplinary. J. Prof. Nurs. 2005;21(6):332–334. doi: 10.1016/j.profnurs.2005.10.009.
100. Pennington D.D., Simpson G.L., McConnell M.S., Fair J.M., Baker R.J. Transdisciplinary research, transformative learning, and transformative science. Bioscience. 2013;63(7):564–573. doi: 10.1525/bio.2013.63.7.9.
101. Rosenfield P.L. The potential of transdisciplinary research for sustaining and extending linkages between the health and social sciences. Soc. Sci. Med. 1992;35(11):1343–1357. doi: 10.1016/0277-9536(92)90038-R.
102. Boyack K.W., Klavans R., Börner K. Mapping the backbone of science. Scientometrics. 2005;64(3):351–374. doi: 10.1007/s11192-005-0255-6.
103. Manton K.G., Gu X.-L., Lowrimore G., Ullian A., Tolley H.D. NIH funding trajectories and their correlations with US health dynamics from 1950 to 2004. Proc. Natl. Acad. Sci. USA. 2009;106(27):10981–10986. doi: 10.1073/pnas.0905104106.
104. Morrissey J., Stodter A., Sherratt F., Cole M.D. Partnership between academics and practitioners – addressing the challenges in forensic science. Sci. Justice. 2023;63(1):74–82. doi: 10.1016/j.scijus.2022.11.005.
