Injury Epidemiology. 2025 Oct 21;12:68. doi: 10.1186/s40621-025-00625-6

Validating open-source data on fatal police shootings against self-reports from a national sample of police agencies

Christopher S Koper 1, Gretchen Baas 2, Bruce G Taylor 3, Weiwei Liu 3, Jackie Sheridan-Johnson 3
PMCID: PMC12538985  PMID: 41121442

Abstract

Objective

Because of limitations to government data on police-related violence, researchers commonly use open-source data as the best approximation for studying the prevalence, causes, and prevention of police killings and other police-related violence in the United States. However, the comprehensiveness and accuracy of these open sources are not well known.

Methods

We compared fatal police shootings in three commonly used open sources to self-reports from a national sample of 573 U.S. police agencies from 2015 to 2019. Using ANOVA and regression methods, we assessed patterns of agreement and discrepancy by open source, year, and agency characteristics. We also examined media reports to assess factors contributing to overcounts in open sources.

Results

Annual open-source counts were higher or lower than self-reports for 5%-9% of agencies depending on year and open source. Discrepancies varied between open sources but not consistently across years. Discrepancies were more likely and greater in magnitude for large agencies and state police, with less consistent evidence of regional variation. Overcounts in open sources appear linked to incidents involving multiple police agencies, multiple shooters, officer deaths, unclear causes of death, and agency misidentification.

Conclusions

Open-source data on fatal police shootings are largely accurate but should be used cautiously, particularly in agency-level analyses. Coordinated efforts by police agencies, open-source compilers, and other researchers could potentially improve the accuracy of data on fatal police shootings.

Supplementary Information

The online version contains supplementary material available at 10.1186/s40621-025-00625-6.

Introduction

A lack of data hampers efforts to understand and prevent police-related violence in the United States, as most police agencies are not required to publicly report information on incidents in which their officers use lethal or non-lethal physical force against civilians (use-of-force incidents). National data systems, including the Federal Bureau of Investigation’s Supplemental Homicide Reports, the federal Bureau of Justice Statistics’ Arrest-Related Deaths program, and the Centers for Disease Control’s National Vital Statistics System, track incidents of fatal police force but undercount them considerably due to non-reporting and classification errors [1–7]. To provide more complete data on police-related violence, several media and non-government entities have developed alternative databases on police killings, and sometimes non-fatal use-of-force incidents, using open and crowdsourced information gathered from online media, police, and court documents. Research on police killings has shown that open-source databases capture many incidents not included in national reporting systems, with some estimates suggesting that the latter undercount fatal police use-of-force incidents by roughly half compared to open sources [2, 3, 8–12]. Further, research has shown systematic patterns of underreporting in national reporting systems which can bias analyses of fatal police violence across jurisdictions [3]. Such findings have strengthened calls to establish better voluntary use-of-force reporting systems such as the Federal Bureau of Investigation’s National Use of Force Data Collection program, which collects data on police force incidents involving serious or fatal injuries as well as firearm discharges [13]. Nevertheless, only 59% of law enforcement agencies contributed to this system in 2024 [14].

For these reasons, researchers have increasingly used open-source data on police violence, particularly killings by police, to examine the prevalence of these events and to investigate how their occurrence relates to the characteristics of communities, organizations, situations, and individuals (including those who use force and those against whom force is used) [15–27]. They have also used open sources to assess the effectiveness of police reform measures, including federal consent decrees, civilian oversight, and adoption of body-worn cameras, in reducing fatal police shootings [17, 22, 28].

Nevertheless, the comprehensiveness and accuracy of open-source databases on police-related violence are not well known. Published studies validating open-source databases on fatal police shootings against police agency records have only been conducted in large cities; two studies compared multiple open-source databases to police records in five cities and one compared a single open-source database to police records aggregated from 166 cities [29–31]. These analyses suggest that open sources are largely accurate but can both undercount and overcount police killings at the agency level, with notable variation across sites and open-source databases. Our study extends this line of methodological inquiry to a national level by comparing fatal police shootings (the most widely tracked and analyzed form of police-related violence) in three prominent open-source databases to self-reports from a national sample of 573 U.S. police agencies, representing agencies and jurisdictions of varying size, over five years. We examine discrepancies between the open source and agency reports, assess patterns in discrepancies across open sources and years, and examine agency characteristics associated with discrepancies. We also examine factors that likely contribute to particular types of miscounts in open-source data based on media reports cited in the open sources.

Methods

Data

We examined three of the most extensive open-source databases on fatal police force incidents: (1) Mapping Police Violence, (2) Fatal Encounters, and (3) the Washington Post. Mapping Police Violence includes incidents in which officers fatally injured civilians by any means, including cases that involved both on-duty and off-duty officers. Similarly, Fatal Encounters includes fatal incidents of all types that were caused by police, on- or off-duty, or that occurred in their presence. The Washington Post database includes only fatal police shootings and excludes those that involved off-duty officers or deaths of people in police custody. Further information about the persons and organizations maintaining these databases and the methods used to gather and review cases for each data source is available on their respective websites [2, 32–34]. We collected all cases from each data source that involved fatal shootings by police and aggregated these data by agency and year for the period 2015 to 2019. We included cases that listed multiple causes of death (e.g., gunshot and Taser) if one of the causes involved use of a firearm.
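The filtering and aggregation step described above can be sketched as follows. The record layout, field names, and agency names are invented for illustration and do not reflect the actual schemas of the three databases:

```python
# Hypothetical open-source case records; fields are illustrative only.
from collections import Counter

cases = [
    {"agency": "Springfield PD", "state": "IL", "year": 2017, "cause": "Gunshot"},
    {"agency": "Springfield PD", "state": "IL", "year": 2017, "cause": "Gunshot, Taser"},
    {"agency": "Shelby County SO", "state": "TN", "year": 2018, "cause": "Vehicle"},
]

def is_fatal_shooting(case):
    # Keep cases listing multiple causes of death (e.g., "Gunshot, Taser")
    # as long as one of the causes involves a firearm.
    return "gunshot" in case["cause"].lower()

# Aggregate qualifying cases by agency, state, and year.
counts = Counter(
    (c["agency"], c["state"], c["year"]) for c in cases if is_fatal_shooting(c)
)
print(counts[("Springfield PD", "IL", 2017)])  # 2 qualifying cases
```

The vehicle-only fatality is excluded, while the mixed gunshot-and-Taser case counts as a fatal shooting, mirroring the inclusion rule in the text.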

Fatal shooting counts from the open sources were compared to self-reported counts of fatal officer shootings provided by police agencies in the National Survey of Force Used During Law Enforcement Encounters conducted by the National Opinion Research Center (NORC). This survey, part of a wide-ranging study examining the use of force by police and violence against police in the United States, was sent to a stratified random sample of 4,000 police agencies selected from the 2017 National Directory of Law Enforcement Agencies [35]. Because police shootings are rare, larger agencies (those with 25 or more officers) were oversampled at rates that increased with agency size. The survey was administered using multiple methods and waves from January 2021 through February 2022 and yielded responses from 800 agencies (a 20.2% response rate). We used the full sample of responding agencies to produce national estimates of fatal police shootings (weighted to be nationally representative and adjusted for non-response) but focused our agency-level validation analysis on the 573 agencies that provided complete data on their annual counts of fatal shootings from 2015 to 2019. Table 1 displays the sizes, regions, and types of agencies included in the primary and full samples. Agencies in the primary and full samples had very similar characteristics overall; however, relative to excluded agencies, those in the primary sample were significantly more likely to be small (50 or fewer officers), less likely to be very large (250 or more officers), more likely to be in the Midwest, and more likely to be municipal rather than county agencies (differences were in the range of 5 to 10 percentage points).
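The weighted national estimates rest on standard design-based survey estimation: each responding agency's count is scaled by its sampling weight (the inverse of its selection probability, adjusted for non-response), so oversampled large agencies carry small weights and small agencies carry large ones. A minimal sketch with invented counts and weights:

```python
# Design-weighted national total; counts and weights are invented for the
# sketch and are not taken from the NORC survey.
agencies = [
    {"fatal_shootings": 2, "weight": 4.0},   # large agency, heavily oversampled
    {"fatal_shootings": 0, "weight": 25.0},  # small agency, lightly sampled
    {"fatal_shootings": 1, "weight": 10.0},
]

# Each agency stands in for `weight` similar agencies in the population.
national_estimate = sum(a["fatal_shootings"] * a["weight"] for a in agencies)
print(national_estimate)  # 18.0
```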

Table 1.

Characteristics of study agencies from U.S. national sample, 2015–2019

Primary sample (n = 573) Full sample (n = 800)
N (%) N (%)
Agency size
 0–24 85 (14.8) 107 (13.4)
 25–49 106 (18.5) 134 (16.8)
 50–99 140 (24.4) 189 (23.6)
 100–249 145 (25.3) 212 (26.5)
 250+ 97 (16.9) 158 (19.8)
Region
 Northeast 90 (15.7) 124 (15.5)
 Midwest 138 (24.1) 173 (21.6)
 South 222 (38.7) 317 (39.6)
 West 123 (21.5) 186 (23.3)
Agency type
 Municipal 460 (80.3) 622 (77.8)
 County 100 (17.5) 156 (19.5)
 State 13 (2.3) 22 (2.8)

Full sample includes all agencies responding to the NORC National Survey of Force Used During Law Enforcement Encounters. Primary sample includes responding agencies that provided complete data on fatal officer shootings from 2015 to 2019 and were used in the agency-level analysis 

We matched responding agencies from the NORC survey to each of the open-source databases by agency name and state and conducted extensive verification checks to ensure that discrepancies in shooting counts between agency self-reports and open sources were not caused by misspellings of agency names, non-standardized agency names (for example, “Sheriff’s Office” versus “Sheriff’s Department” or “State Police” versus “State Patrol”), or other forms of agency misidentification. We also verified agencies in the NORC survey that were not linked to any fatal shootings in the open-source databases.
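The kind of name standardization involved in this matching can be illustrated with a small sketch. The synonym rules and agency names here are hypothetical, and the study's actual verification checks were more extensive than string normalization:

```python
import re

# Hypothetical normalization rules for common naming variants.
SYNONYMS = {
    "sheriff's department": "sheriff's office",
    "state patrol": "state police",
}

def normalize(agency, state):
    # Lowercase, collapse whitespace, and map known naming variants
    # to a single canonical form; key on (name, state).
    key = re.sub(r"\s+", " ", agency.strip().lower())
    for variant, canonical in SYNONYMS.items():
        key = key.replace(variant, canonical)
    return (key, state.upper())

# Variant spellings of the same agency resolve to the same key.
assert normalize("Adams County Sheriff's Department", "co") == \
       normalize("Adams County  Sheriff's Office", "CO")
```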

Analytic plan

We calculated fatal shooting discrepancies for each agency, year, and open-source database by subtracting the agency’s self-reported fatal shooting count from the open-source count (i.e., agency X’s count in open-source A and year T minus agency X’s self-reported count in year T). We conducted descriptive analyses of how much the open sources undercounted or overcounted fatal police shootings relative to the agency counts by year and used one-way analysis-of-variance (ANOVA) to determine whether discrepancies with agency counts varied across open data sources and across years for each open data source. We also conducted logistic and linear regression analyses to determine whether the occurrence and absolute magnitude, respectively, of self-report and open-source discrepancies varied across agencies, focusing on agency type, size, and geographical region. We judged the statistical significance of the ANOVA and regression analyses based on a probability level of p ≤.05.
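The discrepancy measure is simple per-agency arithmetic; a minimal sketch with made-up counts, following the convention above (open-source count minus self-reported count, so positive values are open-source overcounts):

```python
# Invented agency-year counts for one open source and one survey year.
self_report = {("Agency X", 2017): 3, ("Agency Y", 2017): 0}
open_source = {("Agency X", 2017): 4, ("Agency Y", 2017): 0}

# Discrepancy = open-source count minus agency self-reported count.
discrepancies = {
    key: open_source.get(key, 0) - self_report[key] for key in self_report
}
overcounts = sum(1 for d in discrepancies.values() if d > 0)
undercounts = sum(1 for d in discrepancies.values() if d < 0)
print(discrepancies)  # positive values = open-source overcounts
```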

Our study is exploratory, and we did not have strong a priori hypotheses about expected differences in the accuracy of the open sources. Based on their inclusion criteria, Fatal Encounters and Mapping Police Violence seem more likely to have comprehensive coverage of police shootings but also to overcount them relative to the Washington Post, which seems more likely to undercount them [31, 36]. However, observed patterns are also likely to depend on the thoroughness and accuracy of procedures used by each organization to search for and confirm reports. Prior studies comparing reports from Fatal Encounters and the Washington Post to police reports in some cities have shown that both open sources miss some incidents reported by police but, for reasons that are not always clear, include additional incidents not reported by police [29–31]. Similar agency-level analyses have not been conducted with Mapping Police Violence.

Regarding agency characteristics, we expected that reporting discrepancies may be more likely and greater in magnitude for larger agencies, which generally have higher numbers of police shooting incidents. However, we also anticipated that media reporting of such incidents may be better in larger, metropolitan areas.

Finally, we conducted a qualitative assessment of factors that may contribute to overcounts of fatal police shootings in open sources. This portion of the study was limited to cases occurring in 2019. When an open-source database reported more fatal shootings than did a police agency for that year, we examined the media stories linked to the agency in the open-source database, searching them for factors that might explain the overcount (such as reporting errors, ambiguous causes of death, or the involvement of multiple agencies in an incident). For this purpose, we examined all reported incidents associated with 20 agencies in Mapping Police Violence, 23 agencies in Fatal Encounters, and 16 agencies in the Washington Post database.

Results

National estimates of fatal police shootings from the full NORC sample ranged from 1579 in 2017 to 1825 in 2015 (Table 2). National counts of these incidents from the open sources were 513 to 831 lower than the survey estimates for each year but mostly fell within the 95% confidence intervals of the survey estimates (an exception was the Washington Post count in 2019). Mapping Police Violence and Fatal Encounters produced very similar numbers and were closer to the survey estimates. Washington Post estimates were consistently lower.

Table 2.

National estimates of fatal police shootings in U.S. by data source and year, 2015–2019

Year National survey [95% CI] Mapping Police Violence Fatal Encounters Washington Post
2015 1825 [991–2656] 1031 1047 994
2016 1625 [801–2450] 1017 1046 958
2017 1579 [809–2349] 1046 1066 981
2018 1705 [896–2513] 1102 1136 992
2019 1792 [1047–2538] 1049 1083 999

National survey estimates based on the NORC National Survey of Force Used During Law Enforcement Encounters (full sample, n = 800). CI = confidence interval

In the agency-level analysis, open-source counts of fatal shootings matched agency self-reports for 91% to 95% of agencies in each year and open data source (Table 3). Each open source had discrepancies with 21% of agencies across the full 5-year period (though not all the same agencies). Discrepancies between self-reports and open sources were close to zero on average and generally ranged from 1 to 4 in absolute value. Open-source databases produced both overcounts and undercounts relative to agency self-reports, but the former were slightly more common, particularly for Mapping Police Violence and Fatal Encounters. The magnitude of the discrepancies relative to agency counts varied; for example, open-source overcounts of two or more (the largest discrepancies observed) involved agencies with self-reported shootings ranging from zero to more than 20.

Table 3.

Discrepancies in open source and agency self-reports of fatal police shootings by open source and year, U.S. sample, 2015–2019

Open source and year Discrepancies: Mean [Min, Max], SD Cases with higher open-source value: N, % Cases with lower open-source value: N, %
Washington Post
 2015 0.01 [−3, 3] 0.35 25 4.4 21 3.7
 2016 0.02 [−1, 2] 0.25 22 3.8 12 2.1
 2017 − 0.03 [−2, 3] 0.35 16 2.8 29 5.1
 2018 − 0.02 [−6, 2] 0.45 23 4.0 26 4.5
 2019 − 0.01 [−2, 2] 0.31 16 2.8 23 4.0
Mapping Police Violence
 2015 0.00 [−3, 4] 0.40 23 4.0 23 4.0
 2016 0.02 [−2, 2] 0.29 25 4.4 13 2.3
 2017 0.01 [−2, 2] 0.28 22 3.8 17 3.0
 2018 0.01 [−3, 2] 0.35 26 4.5 19 3.3
 2019 0.02 [−2, 3] 0.30 20 3.5 8 1.4
Fatal Encounters
 2015 0.03 [−2, 4] 0.36 25 4.4 16 2.8
 2016 0.04 [−1, 2] 0.28 31 5.4 10 1.7
 2017 0.02 [−2, 2] 0.27 24 4.2 13 2.3
 2018 0.01 [−3, 2] 0.33 27 4.7 17 3.0
 2019 0.03 [−2, 3] 0.29 23 4.0 7 1.2

Based on 573 agencies that provided self-reports of fatal shootings in the NORC National Survey of Force Used During Law Enforcement Encounters

Discrepancies with agency self-reports did not vary significantly over time for any of the open-source databases, but discrepancies did vary across open-source databases for each year (Tables A1 and A2 in Supplementary Materials). Most notably, discrepancies calculated from the Washington Post database differed significantly from those calculated from Mapping Police Violence and Fatal Encounters from 2017 to 2019 (Table A3 in Supplementary Materials). The Washington Post was more likely to show fewer shootings relative to agency self-reports in those years, while Mapping Police Violence and Fatal Encounters were more likely to show higher counts than those in agency self-reports.

Logistic regression models revealed patterns in shooting count discrepancies across agencies that were very similar for each open-source database (Table 4). Across all open sources, the odds of discrepancies between agency self-reports and open sources were substantially higher for agencies with 250 or more officers (the odds for smaller agencies were up to 99% lower) and up to 73% lower for agencies in the Midwest in comparison to those from the South. The odds of discrepancies were also up to 970% higher for state police agencies in comparison to municipal ones, though this pattern reached statistical significance only for Fatal Encounters and the Washington Post. Linear regression models also showed that the absolute magnitude of differences in shooting counts between open sources and agency self-reports was consistently higher for larger agencies and state agencies (Table A4 in Supplementary Materials).

Table 4.

Regression models predicting occurrence of discrepancies between open source and agency self-reports of fatal police shootings, U.S. sample, 2015–2019

Variable (Reference group) Mapping Police Violencea Fatal Encountersb Washington Postc
AOR [95% CI] AOR [95% CI] AOR [95% CI]
Agency size (250+)
 0–24 0.01 [0.00, 0.09]*** 0.02 [0.00, 0.12]*** 0.01 [0.00, 0.10]***
 25–49 0.03 [0.01, 0.10]*** 0.05 [0.02, 0.15]*** 0.02 [0.00, 0.07]***
 50–99 0.08 [0.04, 0.19]*** 0.10 [0.04, 0.22]*** 0.05 [0.02, 0.12]***
 100–249 0.15 [0.08, 0.31]*** 0.19 [0.09, 0.37]*** 0.18 [0.09, 0.35]***
Region (South)
 Northeast 0.65 [0.24, 1.75] 0.41 [0.14, 1.23] 0.49 [0.16, 1.54]
 Midwest 0.30 [0.11, 0.80]* 0.35 [0.14, 0.87]* 0.27 [0.09, 0.79]*
 West 1.44 [0.76, 2.72] 1.30 [0.69, 2.45] 1.84 [0.94, 3.58]
Agency type (Municipal)
 County 0.59 [0.29, 1.20] 0.96 [0.50, 1.86] 0.92 [0.46, 1.83]
 State 7.86 [0.88, 70.29] 10.70 [1.21, 94.55]* 10.03 [1.11, 90.94]*

Multivariate logistic regression models with 573 agencies that provided self-reports of fatal shootings in the NORC National Survey of Force Used During Law Enforcement Encounters. AOR = adjusted odds ratio. CI = confidence interval. a Nagelkerke R-squared: 0.409; b Nagelkerke R-squared: 0.390; c Nagelkerke R-squared: 0.470. *p < .05, **p < .01, ***p < .001
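The percentage interpretations of the adjusted odds ratios quoted in the text follow directly from the AORs in Table 4, since an AOR of r corresponds to a (r − 1) × 100% change in the odds relative to the reference group:

```python
# Convert adjusted odds ratios (from Table 4) into percentage changes in odds.
def pct_change_in_odds(aor):
    return (aor - 1.0) * 100.0

# State vs. municipal agencies, Fatal Encounters model (AOR = 10.70).
print(round(pct_change_in_odds(10.70)))  # 970 (i.e., 970% higher odds)
# Midwest vs. South, Washington Post model (AOR = 0.27).
print(round(pct_change_in_odds(0.27)))   # -73 (i.e., 73% lower odds)
```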

Finally, our examination of open-source overcounts in 2019 illuminated several factors that help to explain the differences between agency self-reports and open sources. These factors may not fully account for the differences, nor are they mutually exclusive (multiple issues could apply to the same agency).

One common issue was the involvement of multiple agencies in a fatal shooting incident. In such cases, it was often unclear from media accounts which agency’s officers fired the fatal shots. Multi-agency incidents may have contributed to as many as 35% of the agency-level overcounts in Mapping Police Violence, 26% of those in Fatal Encounters, and 12.5% of those in the Washington Post. (Notably, state police agencies were commonly involved in such incidents, contributing to their large effect sizes in the regression models described above.)

Other potential sources of misattribution in open sources included incidents in which suspects killed themselves or others, reports in which it was not clear if or how people died (including incidents in which victims may have been shot by persons other than police), shootings of police officers by other officers, and clear instances of agency misidentification (i.e., discrepancies between the agencies identified in open sources and the linked media reports). Such ambiguities and errors were apparent for incidents involving 20% to 30% of agencies that reported lower shooting counts, depending on the open source used for comparison.

However, some police agencies appeared likely to have underreported their shootings based on media accounts that clearly identified their involvement in one or more incidents within the applicable timeframe. Though difficult to estimate precisely, this seemed likely to have occurred for at least 20% to 38% of the agencies examined across the different open-source comparisons.
Of note, approximately one-third of agencies reporting fewer shootings in each open-source comparison were involved in incidents that other police agencies were investigating, though it is not clear if or how this contributed to observed differences.

Discussion

Our study represents the most extensive effort to date to validate open-source data on fatal police shootings against data reported by police agencies. Our examination of three of the most extensive and commonly used open-source databases on police violence found that national counts of fatal police shootings from these sources were consistently lower than those based on self-reports from a national sample of police agencies over five years, but they were generally within the range of sampling error for the national self-report estimates. This suggests that open sources may undercount fatal police shootings at the national level overall, but this cannot be determined with certainty.

Comparisons at the agency level revealed that open sources and agency self-reports were largely in agreement but differed for 5% to 9% of agencies annually, depending on the year and open data source. Open-source reports both undercounted and overcounted fatal police shootings relative to agency self-reports, though undercounts were more common from the Washington Post and overcounts were more common from Mapping Police Violence and Fatal Encounters. Although reporting errors by police agencies may have contributed to the observed results, the study suggests that open sources fail to capture some shootings by police and also overcount shootings for some agencies due to errors, ambiguous circumstances, and other aspects of how the open-source databases are compiled (e.g., some sources include cases in which police were present even if police did not cause the reported death). These findings, which are broadly similar to those of earlier studies that have compared selected open sources (including Fatal Encounters and the Washington Post) to police records in cities [29–31], reinforce and extend prior efforts to validate open-source data on fatal police shootings. Our results are also consistent with prior work suggesting that the Washington Post undercounted fatal police shootings relative to Mapping Police Violence and Fatal Encounters during our study period (2015–2019), despite substantial consistency between the sources overall [36].

Our study also extends prior research by showing that discrepancies between open-source and police reports of fatal police shootings were more likely and greater in magnitude for large agencies and state police. Agencies operating across larger and more populated areas encounter a greater number and variety of violent incidents, increasing the potential for miscounts. Further, they are more likely to be involved in multi-agency incidents, which create ambiguity in the attribution of police shootings, due to geographic proximity with other agencies (which increases the likelihood of officers from multiple agencies responding to the same incident) as well as participation in multi-agency investigations and task forces. Shootings may also sometimes be erroneously attributed to agencies, like state police, that are involved in investigations of shootings by other agencies.

Our findings are subject to several limitations. Although our sample of police agencies used for the agency-level matching analysis was large and national in scope, it may not be representative of other U.S. police agencies. Agencies willing to participate in the NORC survey may differ systematically from non-participants in ways that influence their levels and reporting of fatal force cases, thus introducing self-selection bias into the results. Further, we could not determine the extent to which observed discrepancies were due to errors in the open sources, police reports, or both. While our study assumes that agency self-reports are the benchmark for accuracy, some agencies may undercount (or occasionally overcount) fatal shootings, whether intentionally or due to inconsistent record-keeping. Because there is no external validation of the agencies’ self-reports, any systematic misreporting on their part would go undetected.

We focused on fatal police shootings in three prominent open-source databases; thus, our results may not generalize to other current or older open sources documenting fatal police shootings (e.g., The Guardian, Gun Violence Archive, SPOTLITE, and the Death-Modulated Fatal Shootings [DM-FS] database) or to other forms of police violence recorded in open-source data. It is also possible that more recent data from the open sources we examined are more or less accurate than those used in this study, which were 6–10 years old. (Notably, Mapping Police Violence went through a leadership change in 2022 that may affect the comparability of its current data with those used here.) Nevertheless, data from the period of this study are still commonly used in historical and longitudinal studies of police violence.

Additionally, we could not match open-source and agency records at the incident level. This precluded more refined assessments of open-source and agency self-report accuracy, as well as any examination of situational or actor characteristics that may have influenced the results. Our ability to examine agency characteristics and policies that contributed to the observed patterns was also limited. For example, agencies’ reporting practices may be linked to factors like the availability of support staff, the presence of civilian oversight bodies, the characteristics of local leaders, or other aspects of shooting cases (such as whether a case is under investigation or the subject of a lawsuit) [31]. Similarly, we could not readily identify the agency, policy, sampling, and/or media factors (e.g., variation in the prevalence and coverage of news outlets across areas) that may explain the geographical patterns found in some analyses.

Conclusions

Notwithstanding these limitations, our study has significant implications for research and evaluation regarding police-related violence. Our results suggest that open-source data on fatal police shootings should be used cautiously, particularly in studies that use this information at the agency level. Miscounts in these data and patterns in those miscounts could potentially bias the results of analyses examining characteristics of police agencies and localities linked to fatal police shootings. Researchers using open-source data for agency-level studies are thus urged to make careful assessments of the accuracy and pertinence of these data at the incident level, preferably using confirmatory information from agencies under study when possible. These concerns may also extend to other forms of police violence documented in open sources (such as non-firearm fatalities and non-fatal shootings). However, such assessments are beyond the scope of our study.

It is likely that researchers will continue for some time to rely on open-source databases to study police-related violence given the superiority of these sources to most other national data collections and their historical span. Although the National Violent Death Reporting System appears to have similarly comprehensive coverage of police-related fatalities, it was only recently expanded to all 50 states, and its coverage of these events has only been compared to that of open sources for 27 states [37]. Our study provides additional evidence for the utility of open sources for studying police-related violence but also illuminates caveats that should be considered in their use.

A further implication is that data on police violence could be improved if police agencies, open-source compilers, and other researchers collaborated to develop common methods and standards for reporting incidents, sharing records, and reconciling differences across sources. A coordinated effort among these entities could potentially reduce discrepancies in reports of police-related violence and foster a more robust, validated body of data. In the absence of stronger requirements or incentives from the federal or state governments for reporting police use-of-force cases, this could be an important step in strengthening public trust, guiding reform efforts, and shaping future policies aimed at preventing police-related violence.

Supplementary Information

Supplementary Material 1 (22.2KB, docx)

Author contributions

CSK led the conceptualization and design of the analysis and served as the primary author of the manuscript. He was also an advisor to the National Survey of Force Used During Law Enforcement Encounters project. GB conducted the analyses and contributed to writing and editing the manuscript. BGT co-directed the National Survey of Force Used During Law Enforcement Encounters project, provided feedback on the study analyses, and contributed to editing the manuscript. WL co-directed the National Survey of Force Used During Law Enforcement Encounters project, provided feedback on the study analyses, and contributed to editing the manuscript. JSJ managed data collection and analysis for the National Survey of Force Used During Law Enforcement Encounters project, provided feedback on the study analyses, and contributed to editing the manuscript.

Funding

The study was supported by funding from the National Collaborative on Gun Violence Research.

Data availability

Data from Fatal Encounters, Mapping Police Violence, and the Washington Post are available from the websites of those organizations (see references). Data from the National Survey of Force Used During Law Enforcement Encounters are available (in de-identified form) from the Open Science Framework (https://osf.io/4ruac).

Declarations

Ethics approval and consent to participate

The Institutional Review Board of the National Opinion Research Center approved the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Barber C, Azrael D, Cohen A, et al. Homicides by police: comparing counts from the National Violent Death Reporting System, vital statistics, and Supplementary Homicide Reports. Am J Public Health. 2016;106(5):922–7. 10.2105/AJPH.2016.303074.
  • 2. Finch BK, Beck A, Burghart DB, Johnson R, Klinger D, Thomas K. Using crowd-sourced data to explore police-related deaths in the United States (2000–2017): the case of Fatal Encounters. Open Health Data. 2019;6(1). 10.5334/ohd.30.
  • 3. Finch BK, Thomas K, Beck AN, Burghart DB, Klinger D, Johnson RR. Assessing data completeness, quality, and representativeness of justifiable homicides in the FBI’s Supplementary Homicide Reports: a research note. J Quant Criminol. 2022;38(1):267–93. 10.1007/s10940-021-09493-x.
  • 4. Klinger DA. On the problems and promise of research on lethal police violence: a research note. Homicide Stud. 2012;16(1):78–96. 10.1177/1088767911430861.
  • 5. Loftin C, Wiersema B, McDowall D, Dobrin A. Underreporting of justifiable homicides committed by police officers in the United States, 1976–1998. Am J Public Health. 2003;93(7):1117–21. 10.2105/AJPH.93.7.1117.
  • 6. Loftin C, McDowall D, Xie M. Underreporting of homicides by police in the United States, 1976–2013. Homicide Stud. 2017;21(2):159–74. 10.1177/1088767917693358.
  • 7. Planty M, Burch AM, Banks D, Couzens L, Blanton C, Cribb D. Arrest-related deaths program: data quality profile. U.S. Department of Justice; 2015.
  • 8. Feldman JM, Gruskin S, Coull BA, Krieger N. Quantifying underreporting of law-enforcement-related deaths in United States vital statistics and news-media-based data sources: a capture–recapture analysis. PLoS Med. 2017;14(10):e1002399. 10.1371/journal.pmed.1002399.
  • 9. Feldman JM, Gruskin S, Coull BA, Krieger N. Killed by police: validity of media-based data and misclassification of death certificates in Massachusetts, 2004–2016. Am J Public Health. 2017;107(10):1624–6. 10.2105/AJPH.2017.303940.
  • 10. GBD 2019 Police Violence US Subnational Collaborators. Fatal police violence by race and state in the USA, 1980–2019: a network meta-regression. Lancet. 2021;398(10307):1239–55. 10.1016/S0140-6736(21)01609-3.
  • 11. Williams HE, Bowman SW, Jung JT. The limitations of government databases for analyzing fatal officer-involved shootings in the United States. Crim Justice Policy Rev. 2019;30(2):201–22. 10.1177/0887403416650927.
  • 12. Zimring FE. How many killings by police? Univ Chic Leg Forum. 2016;2016:691–710. https://heinonline.org/HOL/P?h=hein.journals/uchclf2016&i=697. Accessed March 14, 2025.
  • 13. Federal Bureau of Investigation. National use-of-force data collection. Accessed June 2, 2025. https://www.fbi.gov/how-we-can-help-you/more-fbi-services-and-information/ucr/use-of-force
  • 14. Federal Bureau of Investigation. Participation. FBI Crime Data Explorer. Accessed June 2, 2025. https://cde.ucr.cjis.gov/LATEST/webapp/#/pages/le/uof
  • 15. Bui AL, Coates MM, Matthay EC. Years of life lost due to encounters with law enforcement in the USA, 2015–2016. J Epidemiol Community Health. 2018;72(8):715–8. 10.1136/jech-2017-210059.
  • 16. Cesario J, Johnson DJ, Terrill W. Is there evidence of racial disparity in police use of deadly force? Analyses of officer-involved fatal shootings in 2015–2016. Soc Psychol Personal Sci. 2019;10(5):586–95. 10.1177/1948550618775108.
  • 17. Goh LS. Going local: do consent decrees and other forms of federal intervention in municipal police departments reduce police killings? Justice Q. 2020;(5). 10.1080/07418825.2020.1733637.
  • 18. Gray AC, Parker KF. Race, structural predictors, and police shootings: are there differences across official and unofficial accounts of lethal force? Crime Delinq. 2019;65(1):26–45. 10.1177/0011128718788044.
  • 19. Hemenway D, Azrael D, Conner A, Miller M. Variation in rates of fatal police shootings across US states: the role of firearm availability. J Urban Health. 2019;96(1):63–73. 10.1007/s11524-018-0313-z.
  • 20. Johnson TL, Johnson NN, Sabol WJ, Snively DT. Law enforcement agencies’ college education hiring requirements and racial differences in police-related fatalities. J Police Crim Psychol. 2022;37(3):681–98. 10.1007/s11896-022-09534-6.
  • 21. Kivisto AJ, Ray B, Phalen PL. Firearm legislation and fatal police shootings in the United States. Am J Public Health. 2017;107(7):1068–75. 10.2105/AJPH.2017.303770.
  • 22. Koslicki WM, Willits D, Simckes M. The ‘civilizing effect’ and ‘deterrence spectrum’ revisited: results of a national study of body-worn cameras on fatal police force. Polic Soc. Published online September 14, 2023. 10.1080/10439463.2023.2213804.
  • 23. Nagin DS. Firearm availability and fatal police shootings. Ann Am Acad Pol Soc Sci. 2020;687(1):49–57. 10.1177/0002716219896259.
  • 24. Nix J, Campbell BA, Byers EH, Alpert GP. A bird’s eye view of civilians killed by police in 2015. Criminol Public Policy. 2017;16(1):309–40. 10.1111/1745-9133.12269.
  • 25. Ross CT. A multi-level Bayesian analysis of racial bias in police shootings at the county-level in the United States, 2011–2014. PLoS ONE. 2015;10(11):e0141854. 10.1371/journal.pone.0141854.
  • 26. Schwartz GL, Jahn JL. Mapping fatal police violence across U.S. metropolitan areas: overall rates and racial/ethnic inequities, 2013–2017. PLoS ONE. 2020;15(6):e0229686. 10.1371/journal.pone.0229686.
  • 27. Weisheit R, Harmon MG, Ingram J, Cottle C. The geography of police killings utilising crowdsourced data. Howard J Crime Justice. 2022;61(2):127–47. 10.1111/hojo.12443.
  • 28. Ali MU, Wright II. Use of performance information and external accountability: the role of citizen oversight in mitigating the motivated evaluation of body-worn camera evidence. Am Rev Public Admin. 2024;54(6):590–616. 10.1177/02750740241229998.
  • 29. Baćak V, Mausolf JG, Schwarz C. How comprehensive are media-based data on police officer–involved shootings? J Interpers Violence. 2021;36(17–18):NP10055–65. 10.1177/0886260519860897.
  • 30. Ozkan T, Worrall JL, Zettler H. Validating media-driven and crowdsourced police shooting data: a research note. J Crime Justice. 2018;41(3):334–45. 10.1080/0735648X.2017.1326831.
  • 31. Clark TS, Glynn AN, Owens ML. Deadly force: police shootings in urban America. Princeton University Press; 2025.
  • 32. Campaign Zero. Mapping Police Violence. Accessed June 2, 2025. https://mappingpoliceviolence.org/
  • 33. Burghart DB. Fatal Encounters. Accessed June 2, 2025. https://fatalencounters.org/
  • 34. Washington Post. Fatal Force. December 31, 2024. Accessed June 2, 2025. https://www.washingtonpost.com/graphics/investigations/police-shootings-database/
  • 35. Taylor BG, Liu W, Sheridan J. Prevalence and correlates of violence against law enforcement officers in the United States: a national portrait. In: Staller MS, Koerner S, Zaiser B, editors. Police conflict management, Vol. 1: challenges and opportunities in the 21st century. Palgrave Macmillan; 2023. pp. 111–39.
  • 36. Comer BP, Ingram JR. Comparing Fatal Encounters, Mapping Police Violence, and Washington Post fatal police shooting data from 2015–2019: a research note. Crim Justice Rev. 2023;48(2):249–61. 10.1177/07340168211071014.
  • 37. Conner A, Azrael D, Lyons VH, Barber C, Miller M. Validating the National Violent Death Reporting System as a source of data on fatal shootings of civilians by law enforcement officers. Am J Public Health. 2019;109(4):578–84. 10.2105/AJPH.2018.304904.


Supplementary Materials

Supplementary Material 1 (22.2KB, docx)


