Abstract
Comparative drug policy analysis (CPA) is alive and well, and the emergence of robust alternatives to strict prohibition provides exciting research opportunities. As a multidisciplinary practice, however, CPA faces several methodological challenges. This commentary builds on a recent review of CPA by Ritter and colleagues to argue that the practice is hampered by a hazy definition of policy that leads to confusion in the specification and measurement of the phenomena being studied. This problem is aided and abetted by the all-too-common omission of theory from the conceptualization and presentation of research. Drawing on experience from the field of public health law research, this commentary suggests a distinction between empirical and non-empirical CPA, a simple taxonomic model of CPA policy-making, mapping, implementation and evaluation studies, a narrower definition of and rationale for “policy” research, a clear standard for measuring policy, and an expedient approach (and renewed commitment) to using theory explicitly in a multidisciplinary practice. Strengthening CPA is crucial if the practice is to have the impact on policy that good research can have.
Keywords: policy research, public health law research, methodology, policy surveillance, theory
Introduction
Policy is a device for scaling a practice within a legal or institutional framework. At the core of policy research is the question of what effects a practice codified in a policy produces. If the policy has positive effects, at reasonable cost and without significant side effects, research can spur the policy’s refinement and wider adoption. If a policy causes harm, or provides too few benefits to justify the costs of enforcement, research can help speed its modification or repeal. Comparative research has been used to assess the impact of policies on health (Burris & Anderson, 2013). Alcohol research, particularly in the road safety domain, has demonstrated the potential for rigor and impact in this tradition. In a 2016 review of 62 comparative policy analysis studies published since 2010, Ritter et al. report that the practice lacks “a clear definition of what counts as CPA” and consensus on methods of “policy specification” (Ritter, Livingston, Chalmers, Berends, & Reuter, 2016) (“Ritter et al.”). In an unpublished portion of the study, which I read as a peer reviewer and allude to with permission, the authors also noted the absence of explicit theory in nearly half the papers reviewed.
There is good reason to be talking now about the state of CPA. Though prohibitionist policies remain widespread, there are also many signs of declining faith in that model, leading to more openness to policy innovation and research that tests the current approach or evaluates innovations. This includes not just outright legalization of some drugs, and substantial abandonment of criminalization for others, but also policies of non-enforcement and the creation of safe spaces for services and treatment (Csete et al., 2016). There has always been “considerable room for manoeuvre” in the international conventions (Bewley-Taylor & Jelsma, 2011), and that space grows as support for prohibition declines.
As the policy environment features more legalization, or at least changes in policy and practice that make drugs a little less illegal, drug research gradually looks more like alcohol research, with greater interjurisdictional variation, better access to data and more opportunities for research funding. Caroline Chatwin suggests that this is a time of opportunity to promote policy innovation as a virtue and evaluation of innovation as a primary role of national and international drug control and health agencies (Chatwin, 2016). From the epistemologically Machiavellian point of view, even jurisdictions that adhere to rigid prohibition can assist in the identification of positive innovation by serving as die-hard counterfactuals.
Yet there are those bumps in the road of CPA highlighted by Ritter et al.: lack of a clear definition and taxonomy of CPA, problems in the specification of the policy under study, and a failure to exploit theory as a way to strengthen research and its utility across disciplines. I confronted these same obstacles in my work providing funding and technical assistance in the overlapping field of public health law research (PHLR)—“the scientific study of the relation of law and legal practices to population health” (Burris, et al., 2010; A. Wagenaar & Burris, 2013). This paper draws on the PHLR experience to suggest ways to address these key challenges in CPA.
The Importance of Defining and Classifying CPA
Ritter et al. start with a crucial observation: “Comparative policy analysis is a diverse set of activities, undertaken by many different disciplines, all with their own approaches – it is not a unified field of study” (Ritter, et al., 2016)(p40). It is a good thing that contributors to CPA do not use the same theoretical frameworks, methods or designs. It is good that they may be interested in different aspects of policy phenomena, different dimensions of implementation processes, or different outcomes. Unfortunately, the other side of a diverse multi-disciplinary practice is that we cannot assume that practitioners care about what participants in different disciplines are doing, or have any interest in defining themselves as comparative policy researchers. A coherent and consistent practice of “comparative drug policy research” may be as much a wish as a description.
We must see Ritter et al., and other, similar papers (Burris, Mays, Douglas Scutchfield, & Ibrahim, 2012; Burris, et al., 2010; Gilson & Raphaely, 2008; Walt, et al., 2008), as efforts to make that wish for confrontation, complementarity and coherence among a diverse group of scholars come true. Setting reasonable and transparent boundaries around the practice, and describing the population of studies within those boundaries, is essential. From there, one can compare the research, identifying strengths and weaknesses in design and execution, and, ultimately, assemble a picture of what we know about policy phenomena that is the sum of parts that might not otherwise be summed.
Ritter et al. propose a definition of CPA as a study that “explicitly examined an alcohol and/or drug policy” in a comparison of two or more states (Ritter, et al., 2016) (p41). But what sort of “analysis”? Ritter et al. make the important distinction between research that primarily examines a policy (included in the definition of CPA) and research that merely has policy implications -- say, an epidemiological study of drug-related harms across countries -- or that uses policy as a control variable (neither of which is CPA). Ritter et al. do not address what we found in PHLR to be an equally important criterion: the distinction between work that uses a recognized and explicit empirical method and work that does not. This criterion is deliberately broad, to separate empirical work in all its diversity (from history through systematic reviews to randomized controlled trials) from work in which lawyers analyze legal rules or commentators offer views on what policy ought to be or can be expected to achieve. Empirical research differs from normative analysis and commentary in so many ways that it is impossible to assess them in a single framework. In our public health law research work, we distinguished between PHLR (the empirical work) and “legal scholarship” (the commentary and non-empirical analysis) (Burris, et al., 2010). Non-empirical work can illuminate CPA, but for purposes of discussing methods and results of scientific evaluation, it is useful to draw a similar distinction here. From here on in this paper, I am confining CPA to empirical research.
If the biggest problem were failing to distinguish empirical work from other forms of commentary, we would be in decent shape. But we also tend to ignore the biggest question of all: what, exactly, does the P in CPA stand for? The typical definition of policy is so capacious that almost anything can creep across its fuzzy border. The Centers for Disease Control and Prevention (CDC), for example, defines policy as “a law, regulation, procedure, administrative action, incentive, or voluntary practice of governments and other institutions” (Office of the Associate Director for Policy, 2015). Policy is the thing we are all supposed to care about, and in evaluation research it is the primary thing to be measured. If we are not on the same general page about that, it will be hard to thrive as a field.
Any definition of key concepts in a field like CPA will inevitably have fuzzy borders, and any scheme for classifying studies can easily be dismissed as arbitrary or analytically imperfect. Policy is a useful concept for many purposes and in colloquial use precisely because it is broad. But when we are in science mode, concerned with causal models, theories, measurement and inference -- and trying to incorporate methods, tools and results from many different disciplinary perspectives -- this fuzziness is a big problem that manifests in ways both more and less obvious. A good place to start this discussion is with Ritter et al.’s classification of the five “way[s] in which ‘policy’ (the unit of study) is identified, measured and/or coded” (p.42): “policy classification,” “policy index score,” “implied policy differences,” “data-driven policy coding,” and “descriptive policy differences.”
“Policy classification,” the simplest and most often employed approach, indicates presence versus absence of a broad type of policy, such as medical marijuana, cross-sectionally (Cerdá, Wall, Keyes, Galea, & Hasin, 2012) or longitudinally (Bachhuber, Saloner, Cunningham, & Barry, 2014). “Policy index scores,” used to rate or rank policies, encompass multiple important policy components within a single measure. The components may be explicit elements of the law, like the size of fines or the activities prohibited, or more elaborate constructs like “stringency” or “comprehensiveness.” In the “implied policy differences” approach, policies are broadly characterized (e.g. “restrictive” vs. “permissive”) without explication of specific, observed policy differences, hence Ritter et al.’s use of “implied.” For example, a comparative study of U.S. and Australian “policy” on adolescent alcohol use characterized Washington, United States as a “zero-tolerance” state and Victoria, Australia as a “harm-minimization” one based on secondary sources (McMorris, Catalano, Kim, Toumbourou, & Hemphill, 2011). “Data-driven policy coding” is described as using non-legal data, either inputs (like enforcement staffing) or outputs (like tickets), as the policy measure, rather than relying on the policy “as written” to classify and compare policy. For example, rather than use a jurisdiction’s marijuana law, a study might use the possession arrest rate as its measure of “decriminalization policy” (Vuolo, 2013). The “descriptive policy” approach involves collecting, organizing and describing policies in detail. For example, Pardo provides a narrative review of three policies in which their chief characteristics are captured and compared in a table, without coding to transform the observed words into structured data (Pardo, 2014).
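These strategies can be made concrete with a minimal sketch, here in Python, with hypothetical jurisdictions, coded features and weights invented purely for illustration. It shows the two approaches most dependent on the formal text: a binary “policy classification” variable and a composite “policy index score” aggregated from explicit elements of the law.

```python
from dataclasses import dataclass

@dataclass
class CodedPolicy:
    """One jurisdiction's formal policy, coded from the text of the law."""
    jurisdiction: str
    has_medical_marijuana_law: bool  # "policy classification": present vs. absent
    max_fine_usd: float              # explicit element of the law
    home_cultivation_allowed: bool   # explicit element of the law
    dispensaries_licensed: bool      # explicit element of the law

def stringency_index(p: CodedPolicy) -> float:
    """Toy 'stringency' index in [0, 1]; the components and weights are
    illustrative assumptions, not a validated instrument."""
    components = [
        min(p.max_fine_usd / 1000.0, 1.0),           # larger fines read as more stringent
        0.0 if p.home_cultivation_allowed else 1.0,  # prohibitions add stringency
        0.0 if p.dispensaries_licensed else 1.0,
    ]
    return sum(components) / len(components)

policies = [
    CodedPolicy("State A", True, 500.0, True, True),
    CodedPolicy("State B", True, 2000.0, False, False),
    CodedPolicy("State C", False, 0.0, False, False),
]
for p in policies:
    print(p.jurisdiction, int(p.has_medical_marijuana_law),
          round(stringency_index(p), 2))
```

The sketch makes plain that an index is nothing more than a transparent aggregation of coded features of the formal policy; disputes about “stringency” become disputes about observable components and their weights.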
As Ritter et al.’s taxonomy demonstrates, what the field calls “policy” is usually a combination (if not a mashup) of several distinct elements: the observable form of the rule (the “law on the books”), implementation practices and outputs and, in some studies, the social constructions of the “policy.” The premise of Ritter et al.’s taxonomy of policy specification methods is that all these studies are looking at the same thing, “policy,” just in different ways. This classification served the purpose of review, but its focus on methods of policy specification rather than categories of CPA inquiry limits its utility as a model or conceptual interface for the field, one that would allow us all, from our different backgrounds, to see that we are interested in the same or related things and to share methods and evidence with each other. I offer a different picture, adapted from PHLR.
A New Definition of CPA Based on a Model of Its Subject
I suggest that empirical papers in CPA fall for the most part into four groups: 1) policymaking studies, which investigate the determinants of policy and the policy-making process; 2) mapping studies, which measure policy prevalence, characteristics and/or change over time; 3) implementation studies, which investigate the process of putting a policy into practice and its proximate outcomes; and 4) evaluation studies, which try to assess the impact of policies on the world. This taxonomy captures the main phenomena people study in CPA, and also accommodates the difference in who typically does the research in each area and from what theoretical standpoint. Political scientists are more likely to conduct policymaking studies than epidemiologists, who in turn may be more interested in associating health outcomes with laws on the books than digging into how laws on the books are transformed as they become laws on the streets, the province of implementation researchers. Mapping studies are often the province of lawyers.
These are distinguishable kinds of research, but they are related across a causal and knowledge continuum. Implementation data are often integrated in evaluation studies (and, as Ritter et al. note, often stand in for the formal policy as the independent variable). Sometimes, though not often, the impact of policymaking processes or characteristics may be part of an implementation or evaluation study (for example, when legislative history is hypothesized to predict implementation). Mapping research produces the data that constitute the outcome for policy-making studies and the independent variable in implementation and evaluation studies. So we can have, in broad terms, a model like Figure 1.
Figure 1.
A Model of Comparative Drug and Alcohol Policy Analysis
The pivotal box in Figure 1 is not “policy” but “formal policy” -- a statute, regulation, order, ordinance, guideline, standard operating procedure, contractual clause or other formalizing text purporting to define the required course or standard of action. Such an observable rule is usually present in CPA, or could be. Insisting on an observable rule in CPA is key to arriving at a narrower definition of policy for the practice, so I will justify the choice on a number of bases, starting with how much simpler it makes CPA classification when combined with the four kinds of studies just identified.
Starting with laws on the books -- the observable policy manifestation -- allows us to distinguish policymaking, implementation and impact from the policy itself. The “policy” is the formally adopted, written expression and instrument. It can be measured and mapped across space and time. It is an outcome of policymaking; it defines the actions to be implemented; it is the independent variable in evaluations of policy impact. The overall policy process works through changes in behaviors and environments to produce (or not) the desired policy outcomes.
Taking the written form of the policy seriously in CPA starts with observing the differences between formal policies in detail, as well as key features like effective dates that, along with theory, help define the time points and resolution for observing effects (Alexander C. Wagenaar & Komro, 2013). Mapping studies use transparent and reproducible methods to capture these details (Anderson, Tremper, Thomas, & Wagenaar, 2013; Tremper, Thomas, & Wagenaar, 2010); the results are not only illuminating in themselves but are also, by virtue of scientific methods, readily used in evaluation studies (Patrick, Fry, Jones, & Buntin, 2016). Detailed coding of the characteristics of the formal policy is also essential to creating an index, whether the index is based solely on features observable in the text (like the size of a fine) or incorporates implementation measures (like the number of fines assessed).
In light of Figure 1, what Ritter et al. describe as “using data as the policy measure” is seen as the common and reliable technique of using implementation measures like tickets or arrests as a proxy for the formal policy in an evaluation study. Ritter et al.’s finding that this is “rare” in their data should concern us, because it reflects either a shortage of valid, accessible enforcement data in CPA domains, or a missed opportunity for better evaluation. What Ritter et al. call the “descriptive approach” might better be understood simply as thorough observation of the formal policy (legal mapping using qualitative methods) and/or its implementation. Bakke and Endal, for example, used qualitative methods to observe and code in detail the key characteristics of draft national alcohol policies in four sub-Saharan countries as part of a policy-making study of industry influence (Bakke & Endal, 2010). Interestingly, none of the mapping studies in this category take the form of comprehensive multi-jurisdictional coding of primary features of the formal policy to create data for quantitative research, despite the frequency of these kinds of studies (Marynak, et al., 2014; Sanders-Jackson, Gonzalez, Zerbe, Song, & Glantz, 2013) and the fact that CPA has the most extensive set of publicly available legal datasets of any field of work, at least in the US, including the Alcohol Policy Information System (APIS) (National Institute on Alcohol Abuse and Alcoholism, 2011), the Prescription Drug Abuse Policy System (PDAPS) (Legal Science, 2016) and the CDC STATE System (Centers for Disease Control and Prevention, 2015).
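A short sketch may help show how coded effective dates turn a mapping study into analysis-ready longitudinal data. The states, dates and variable names below are hypothetical, and the pandas library is assumed; the general pattern is to carry each observed rule forward until the next coded change.

```python
import pandas as pd

# One row per observed change in the formal policy ("law on the books"),
# with the effective year captured by the mapping study.
changes = pd.DataFrame({
    "state": ["A", "A", "B"],
    "year": [2005, 2012, 2009],    # effective year of each change
    "policy_in_force": [1, 0, 1],  # 1 = adopted, 0 = repealed
})

# Expand to a full state-by-year panel; years before any coded change default to 0.
grid = pd.MultiIndex.from_product(
    [["A", "B", "C"], range(2000, 2016)], names=["state", "year"]
).to_frame(index=False)

panel = grid.merge(changes, on=["state", "year"], how="left").sort_values(["state", "year"])
panel["policy_in_force"] = (
    panel.groupby("state")["policy_in_force"].ffill().fillna(0).astype(int)
)
print(panel[panel["state"] == "A"].to_string(index=False))
```

Coded once in this form, the same panel can serve as the outcome in a policy-making study or the independent variable in an evaluation.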
We should now see that all of these strategies depend on observing and coding in some way the formal rule or its implementation. If we are studying the policy-making process, policy specification produces the outcome measures. If we are evaluating outcomes, it produces the independent (and mediating) variables. If the study concerns implementation, proper mapping of the law on the books defines the policy to be implemented. By contrast, the weakness of the “implied policy differences” strategy, in which “evidence is not necessarily provided for the characterization of the policy,” is even more obvious, since such implications can only be validly drawn from detailed examination of the formal policies and their implementation. From this perspective, an “implied policy difference” is merely an unspecified index. The important question is not so much which strategy is chosen, but whether the investigator is capturing the details of the formal policy and its implementation in a rigorous and transparent way.
A recognition that the specification issue is a little simpler than suggested in Ritter et al. is particularly important given the history of policy measurement in CPA and the emergence of new tools and technologies for researchers in the Internet age. In our work in PHLR, we built on seminal enterprises like APIS to advance the concept and practice of policy surveillance, the ongoing scientific tracking and analysis of policies of public health significance (Burris, Hitchcock, Ibrahim, Penn, & Ramanathan, 2016). We supported work to describe rigorous scientific methods for measuring law (Anderson, et al., 2013; Tremper, et al., 2010), and developed software designed for coding legal text to make resources like APIS easier to develop and maintain (Legal Science LLC, 2015). As APIS, which has been used in over 100 studies ("Peer-Reviewed Publications Using APIS Data," 2015), demonstrates, law measured properly once need not be measured again by other researchers, and the resulting data can be used in study after study. The same methods and tools that have worked well in the U.S. domestic setting are now possible, if not overdue, in international CPA, where the additional labor of collecting formal policies makes collaborative, re-usable open-source policy datasets even more valuable.
At this point, it is possible to offer a revised definition of CPA. CPA can be defined as an empirical study of the development, characteristics, implementation or effects of a drug or alcohol policy across more than one jurisdiction. That leaves only the question of what we mean by policy.
The “Policy” Problem – Defining the Independent Variable
As the reader may well have noticed, the model in Figure 1 supposes that “policy” does not include “settled practices” or “routine functions” or any other phenomenon in an organization that cannot be tied to a rule or other rule-like, formal statement of the actions required or systematically encouraged. We got here on an argument from simplicity, but there are several more important reasons to insist that CPA confine its primary gaze to formalized policies.
Such a restriction does not change much at the core of CPA. Most important policies are actually laws, regulations or organizational rules at least written down in a handbook or memo, even in private organizations. By contrast, when there is no explicit policy form, we easily bleed into a conceptual twilight zone in which research on any practice that occurs regularly or could be adopted as a rule can be “policy research.” Consider, for example, a study of the impact of liberalized social health insurance eligibility on drug users’ access to treatment, similar to a study in the Ritter et al. sample (Zur & Mojtabai, 2013). It studies the effect of an observable change in a formal rule on access to treatment among poor drug users. Compare that with a study of the effect of treatment access on drug-user health. The latter study clearly has policy implications, and may entail observation of a standard practice of treating drug users, but it does not tell us what policy mechanism (single-payer insurance, employer-based insurance, government subsidy) or implementation model is the best way to generalize treatment for drug users. If we do not draw the line, then any research on a repeated behavior that could become a policy becomes policy research, and Ritter et al.’s definitional proviso on “policy implications” is lost.
It is not merely a lawyer’s bias to insist that the written form matters. The written rule is the rule-makers’ best effort to define the purpose and mechanism of the policy: who will do what to what end. The operative feature is set out in the law: a blood alcohol concentration of x, an excise tax of y, a scheme for who may obtain a license to sell recreational marijuana. It is true that implementation typically “transforms” laws on the books, but even that truth can be overstated. We have seen many times in CPA that using law on the books as the independent variable can produce valid evaluation results, because implementation does not vary enough across sites or from the rule to make a big difference: in tax research, for example, the stated rate is often used as the independent variable (A. C. Wagenaar, Maldonado-Molina, & Wagenaar, 2009). And of course the law on the books is the best outcome measure for a policy-making study and the starting point for implementation research on how the policy is transformed in practice.
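A minimal sketch can illustrate the logic of using the stated rule itself as the independent variable. The data below are synthetic, and the two-way fixed-effects specification (estimated here with the statsmodels package) is one common choice for such panels, not a description of the cited studies’ actual methods.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for s in "ABCDEF":
    base = rng.normal(10.0, 1.0)  # stable state-level differences in mortality
    for y in range(2000, 2015):
        # The statutory ("on the books") tax rate: some states raise it in 2007.
        tax = 0.2 + (0.1 if (s in "ABC" and y >= 2007) else 0.0)
        rows.append({"state": s, "year": y, "tax_rate": tax,
                     "deaths": base - 4.0 * tax + rng.normal(0.0, 0.3)})
df = pd.DataFrame(rows)

# State and year fixed effects absorb stable cross-state differences and common
# shocks; identification comes from within-state changes in the stated rate.
model = smf.ols("deaths ~ tax_rate + C(state) + C(year)", data=df).fit()
print(round(model.params["tax_rate"], 2))  # recovers the built-in effect near -4
```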
But the most important reason for insisting that the policy in CPA be in a tangible, law-like form is that the character of a law or law-like rule is what makes policy important to study comparatively in the first place. Policy is the primary means through which a practice (beneficial or harmful) is deliberately put into wide use. When we make something a policy, our first claim, and therefore a starting hypothesis for policy evaluation research, is that the practice will now take place more widely, more thoroughly, more consistently and more effectively. If our research shows that the policy works or does not, the most expeditious and appropriate response is to adopt or revoke the formal policy. If our implementation research shows problems in translating law on the books into law on the streets, changing the law on the books will often be one immediate, plausible response. Research on “things an individual or organization consistently does” is the least compelling form of policy inquiry, because policy research is not simply about the effect of doing good things, but about the effect of making those practices the rule.
Having made the case that CPA should root its definition of policy in formal, written enactments, I confess the limitations that come with my initial acknowledgement of fuzzy boundaries. There are instances in which an organization will adopt a consistent practice without formalizing it in a written rule, often to avoid admitting publicly to what it is doing. A police department might, for example, systematically instruct its officers at roll-call not to arrest people for simple possession of marijuana, even though the law in the district persists in making possession a crime. Tacit decriminalization is certainly a practice of interest to drug policy researchers, and could be observed in arrest statistics or by qualitative methods. Strictly speaking, I could classify a study of such a police department’s behavior as research on implementation of the laws criminalizing marijuana, but I will forbear. The study of how a police department decided to systematically cease marijuana enforcement, how it implemented this decision within the organization, and/or its outcomes would come within the fuzzy borders of my definition of policy, because the phenomenon would strike many of us as a policy in every respect except written form. My claim has been that putting formal, written policy at the core of CPA will help clarify basic questions of specification and taxonomy, and help draw an important, if fuzzy, line between research that seeks to understand the nature, operation and effects of policy and research on practices that might be embodied in policies or have policy implications.
The Problem of Missing Theory
In an unpublished portion of their study, Ritter and colleagues examined the CPA literature’s deployment of theory in conceptualizing the relationship between the policy under study and its impacts. In a preliminary analysis, the authors found that almost half of the papers in their sample did not specify any theory, and many of the remaining articles used an economic theory, often in a cursory way. Although these findings did not make it into the final paper, the problem has been noted as common in similar reviews of health policy research (Gilson & Raphaely, 2008), and is evident in many of the papers in the published Ritter et al. review.
The basic methodological argument for building research on explicit theory is about as controversial as the dietary argument for eating your vegetables, and, it seems, just about as effective in influencing practice. As illustrated in Figure 1, policy works by influencing behaviors and environments. Theories of how policy does so help identify effects to measure, suggest when effects might appear and how they might evolve over time, and indicate what sorts of intended and unintended effects should be observed. Theory helps investigators understand the number and kind of intermediate steps that must occur before an effect on health outcomes is expected, and shapes the selection of statistical models by presenting hypothesized distributions of effects across groups, time, and space (A. Wagenaar & Burris, 2013).
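As a small illustration of how theory shapes measurement and model selection, the sketch below codes the same hypothetical policy two ways: as an abrupt step at the effective date and as a theorized two-year phase-in. Both functional forms are assumptions for illustration; in a real study the lag structure would come from the theory of the mechanism at work.

```python
def step_exposure(year: int, effective_year: int) -> float:
    """Abrupt coding: the policy either is (1.0) or is not (0.0) in force."""
    return 1.0 if year >= effective_year else 0.0

def phased_exposure(year: int, effective_year: int, phase_in_years: int = 2) -> float:
    """Gradual 'dose' coding: hypothesized implementation ramps up over phase_in_years."""
    if year < effective_year:
        return 0.0
    return min((year - effective_year + 1) / phase_in_years, 1.0)

for y in range(2008, 2014):  # hypothetical policy effective in 2010
    print(y, step_exposure(y, 2010), phased_exposure(y, 2010))
```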
I want to add another reason, one of special salience to practices like CPA: explicit use of theory is essential to help researchers from the many disciplines of CPA share each other’s findings and build on each other’s work. Research today is inevitably multi-disciplinary and must become transdisciplinary, which consists in the true integration of theories, methods and tools from the constituent disciplines (Balsiger, 2004; Stokols, Hall, Taylor, & Moser, 2008). The impact of a drug control law can and should be expected to operate in complex ways that may vary over time, populations and enforcement strategies. More often than not, laws have multiple routes of effect, and understanding the whole range of mechanisms and how they operate in particular contexts is necessary to maximize beneficial effects while minimizing deleterious ones. Many theories overlap, with similar mechanisms of action described in quite different terms across disciplines. At other times, the differing terms suggest subtle but important differences in understanding how a law works to affect health. Taken together, these theories provide a rich menu of theoretical options and practical tools for the CPA researcher -- provided they are explicitly set out.
To promote this form of disciplinary integration in PHLR, we used the concept of “mechanism of legal effect,” the process through which a policy works through environmental or behavioral change to influence an outcome (Burris & Wagenaar, 2013). If we reject the idea that policy is fundamentally different from other modes of social influence, we have a plethora of theories we can draw on to understand policy effects. Both deterrence and economic theorists posit that people will behave rationally given what they know about the policy and the consequences of disobedience. Labeling theory posits that law can work by defining proscribed behaviors as “wrong” and the people who engage in them as “criminals.” Procedural justice theory focuses on the internal motivation to comply, and how it is influenced by the perceived fairness of legal authorities. Epidemiological models posit reciprocal processes of change in social and physical environments and individual behaviors to reduce risk exposures. Well-established behavioral theories like the Theory of Planned Behavior and the Theory of Triadic Influence (TTI) define the many psychosocial factors that produce an intention to behave in a certain way and, ultimately, the behavior itself, and are readily applied to understanding how law and policy can influence behavior through social normative processes (Burris & Wagenaar, 2013).
Like the simple model in Figure 1, the mechanism of legal effect is a conceptual interface among disciplines. If both the economist and the social psychologist understand that they are studying how a policy influences behavior, the fact that one uses deterrence and the other the TTI becomes a facilitator rather than a barrier to cross-disciplinary understanding. Attention to the mechanism of legal effect can help build a broader, transdisciplinary understanding of how policy influences structures, behaviors and environments. Because much CPA is necessarily observational, data on how a policy works support causal inference by providing evidence of plausible mechanisms to explain an observed association. Once we are confident that a policy is causing an effect, research on how it does so provides important guidance on ways to influence the magnitude of the effect, reduce unintended consequences or produce the effect more efficiently. Understanding of how law works can also guide legislators and regulators in crafting innovative interventions aimed at newly recognized problems (Anderson & Burris, 2014).
Conclusion
How drug policies are made, their characteristics and distribution, how they are implemented and what impact they have are crucial social questions with profound implications for social justice and welfare. Building on a thorough review of recent CPA studies, I have proposed the adoption of a few basic definitions and standards for the practice. CPA is pragmatically defined as the empirical study of the development, characteristics, implementation or effects of a drug or alcohol policy across more than one jurisdiction. By “policy” we normally mean a law, regulation, order or other written, rule-like specification of the conduct required within a legal or institutional framework. Research that studies practices that could be embodied in a policy, or that otherwise has policy implications, should not be considered CPA. Research that analyzes policies in a normative or other non-empirical way should also, for practical reasons, be distinguished from empirical CPA. Measuring or mapping policy is best approached as an empirical project, requiring transparent, reliable methods to produce detailed observations of the characteristics of policy that can be used in evaluation research. Explicit use of theory, combined with greater attention to the mechanisms through which policies have their effects, will help integrate the work and results of researchers from the many disciplines of CPA.
In Ulysses, Leopold Bloom, pressed to define “nation,” offers “the same people living in the same place.” “By God, then,” a mocking bystander retorts, “If that’s so I’m a nation, for I’m living in the same place for the past five years” (Joyce, 2002, p. 317). Definitions are hard. Ritter and colleagues in their review show that we can define a field of CPA around the topics being studied and the methods being used. This paper has added to the conversation. It is for readers to continue it, because this effort at self-definition for the practice of CPA is vitally important. Taking the practice seriously builds cohesion and promotes collaboration among those engaged in it, and can help them gain funding and professional advancement. More important, reaching a shared understanding of what constitutes CPA, and of the methods and tools used in conducting it, is essential to enhancing the rigor and usefulness of our research. Doing what we do better increases the chances that our work will be valuable to scholars beyond the boundaries of CPA, and influence the course of the policies we study.
Highlights.
Comparative Drug and Alcohol Policy Analysis (CPA) can be defined as the empirical study of the development, characteristics, implementation or effects of a drug or alcohol policy across more than one jurisdiction.
CPA generally falls into four types: policymaking studies, mapping studies, implementation studies, and evaluation studies.
It is important to distinguish empirical CPA from commentary, legal analysis and other studies that do not use a recognized empirical method to collect and/or analyze data.
“Policy” in CPA should be understood to have an observable manifestation in the form of a law, regulation, institutional rule or standard operating procedure.
It is the character of a policy as a rule-like statement that can be widely adopted and consistently applied that makes policy of interest in CPA.
Use of theory in research strengthens causal inference, facilitates sharing of knowledge across disciplines, and supports translation of knowledge into practice.
Acknowledgments
Work on this paper was supported in part by the National Institute on Drug Abuse (Contract Number HHSN271201500081C). The author thanks Sarah B. Klieger, MPH, for advice and assistance. The opinions expressed in the paper are solely those of the author.
Footnotes
AUTHOR DECLARATION
Work on this paper was supported in part by NIDA, as stated in the paper. The author is a founder of Legal Science, LLC, which is a provider of software for legal mapping. I confirm that there are no known conflicts of interest associated with this publication and there has been no other financial support for this work that could have influenced its outcome. I confirm that there are no other persons who satisfied the criteria for authorship but are not listed. I confirm that we have given due consideration to the protection of intellectual property associated with this work and that there are no impediments to publication, including the timing of publication, with respect to intellectual property. In so doing I confirm that I have followed the regulations of my institution concerning intellectual property. I understand that the Corresponding Author is the sole contact for the Editorial process (including Editorial Manager and direct communications with the office). I am responsible for submissions of revisions and final approval of proofs. I confirm that I have provided a current, correct email address.
References
- Anderson E, Burris S. Researchers and Research Knowledge in Evidence-Informed Policy Innovation. In: Voon T, Mitchell AD, Liberman J, editors. Regulating Tobacco, Alcohol and Unhealthy Foods: The Legal Issues. Routledge; Abingdon, UK: 2014. pp. 36–63.
- Anderson E, Tremper C, Thomas S, Wagenaar AC. Measuring Statutory Law and Regulations for Empirical Research. In: Wagenaar A, Burris S, editors. Public Health Law Research: Theory and Methods. John Wiley & Sons; San Francisco: 2013. pp. 237–260.
- Bachhuber MA, Saloner B, Cunningham CO, Barry CL. Medical cannabis laws and opioid analgesic overdose mortality in the United States, 1999–2010. JAMA Internal Medicine. 2014;174:1668–1673. doi:10.1001/jamainternmed.2014.4005.
- Bakke Ø, Endal D. Vested interests in addiction research and policy. Alcohol policies out of context: drinks industry supplanting government role in alcohol policies in sub-Saharan Africa. Addiction. 2010;105:22–28. doi:10.1111/j.1360-0443.2009.02695.x.
- Balsiger PW. Supradisciplinary research practices: History, objectives and rationale. Futures. 2004;36:407–421.
- Bewley-Taylor D, Jelsma M. Fifty Years of the 1961 Single Convention on Narcotic Drugs: A Reinterpretation. Series on Legislative Reform of Drug Policies. Transnational Institute; Amsterdam: 2011.
- Burris S, Anderson E. Legal Regulation of Health-Related Behavior: A Half Century of Public Health Law Research. Annual Review of Law and Social Science. 2013;9:95–117.
- Burris S, Hitchcock L, Ibrahim JK, Penn M, Ramanathan T. Policy Surveillance: A Vital Public Health Practice Comes of Age. Journal of Health Politics, Policy & Law. 2016;41:1151–1167. doi:10.1215/03616878-3665931.
- Burris S, Mays GP, Douglas Scutchfield F, Ibrahim JK. Moving from intersection to integration: public health law research and public health systems and services research. Milbank Quarterly. 2012;90:375–408. doi:10.1111/j.1468-0009.2012.00667.x.
- Burris S, Wagenaar A. Integrating Diverse Theories for Public Health Law Evaluation. In: Wagenaar A, Burris S, editors. Public Health Law Research: Theory and Methods. John Wiley & Sons; San Francisco: 2013. pp. 193–214.
- Burris S, Wagenaar AC, Swanson J, Ibrahim JK, Wood J, Mello MM. Making the case for laws that improve health: a framework for public health law research. Milbank Quarterly. 2010;88:169–210. doi:10.1111/j.1468-0009.2010.00595.x.
- Centers for Disease Control and Prevention. State Tobacco Activities Tracking and Evaluation (STATE) System. Retrieved July 1, 2015, from http://www.cdc.gov/tobacco/statesystem.
- Cerdá M, Wall M, Keyes KM, Galea S, Hasin D. Medical marijuana laws in 50 states: Investigating the relationship between state legalization of medical marijuana and marijuana use, abuse and dependence. Drug & Alcohol Dependence. 2012;120:22–27. doi:10.1016/j.drugalcdep.2011.06.011.
- Chatwin C. UNGASS 2016: Insights from Europe on the development of global cannabis policy and the need for reform of the global drug policy regime. International Journal of Drug Policy. 2016. doi:10.1016/j.drugpo.2015.12.017.
- Csete J, Kamarulzaman A, Kazatchkine M, Altice F, Balicki M, Buxton J, Cepeda J, Comfort M, Goosby E, Goulão J, Hart C, Kerr T, Lajous AM, Lewis S, Martin N, Mejía D, Camacho A, Mathieson D, Obot I, Ogunrombi A, Sherman S, Stone J, Vallath N, Vickerman P, Zábranský T, Beyrer C. Public health and international drug policy. The Lancet. 2016;387:1427–1480. doi:10.1016/S0140-6736(16)00619-X.
- Gilson L, Raphaely N. The terrain of health policy analysis in low and middle income countries: a review of published literature 1994–2007. Health Policy and Planning. 2008;23:294–307. doi:10.1093/heapol/czn019.
- Joyce J. Ulysses: A Reproduction of the 1922 First Edition. Dover Publications; Mineola, NY: 2002.
- Legal Science LLC. Prescription Drug Abuse Policy System (PDAPS). Retrieved October 16, 2015, from www.PDAPS.org.
- Legal Science LLC. MonQcle. Retrieved December 2, 2015, from http://www.monqcle.com/.
- Marynak K, Holmes CB, King BA, Promoff G, Bunnell R, McAfee T. State laws prohibiting sales to minors and indoor use of electronic nicotine delivery systems - United States, November 2014. MMWR Morbidity and Mortality Weekly Report. 2014;63:1145–1150.
- McMorris BJ, Catalano RF, Kim MJ, Toumbourou JW, Hemphill SA. Influence of Family Factors and Supervised Alcohol Use on Adolescent Alcohol Use and Harms: Similarities Between Youth in Different Alcohol Policy Contexts. Journal of Studies on Alcohol and Drugs. 2011;72:418–428. doi:10.15288/jsad.2011.72.418.
- National Institute on Alcohol Abuse and Alcoholism. Alcohol Policy Information System (APIS). Retrieved November 29, 2014, from http://www.alcoholpolicy.niaaa.nih.gov.
- Office of the Associate Director for Policy. Definition of Policy. Retrieved September 13, 2015, from http://www.cdc.gov/policy/analysis/process/definition.html.
- Pardo B. Cannabis policy reforms in the Americas: A comparative analysis of Colorado, Washington, and Uruguay. International Journal of Drug Policy. 2014;25:727–735.
- Patrick SW, Fry CE, Jones TF, Buntin MB. Implementation of Prescription Drug Monitoring Programs Associated with Reductions in Opioid-Related Death Rates. Health Affairs. 2016. doi:10.1377/hlthaff.2015.1496.
- Peer-Reviewed Publications Using APIS Data. Retrieved March 3, 2015, from https://alcoholpolicy.niaaa.nih.gov/peer-reviewed_publications_using_apis_data.html.
- Ritter A, Livingston M, Chalmers J, Berends L, Reuter P. Comparative policy analysis for alcohol and drugs: Current state of the field. International Journal of Drug Policy. 2016;31:39–50. doi:10.1016/j.drugpo.2016.02.004.
- Sanders-Jackson A, Gonzalez M, Zerbe B, Song AV, Glantz SA. The pattern of indoor smoking restriction law transitions, 1970–2009: laws are sticky. American Journal of Public Health. 2013;103:e44–51. doi:10.2105/AJPH.2013.301449.
- Stokols D, Hall KL, Taylor BK, Moser RP. The Science of Team Science: Overview of the Field and Introduction to the Supplement. American Journal of Preventive Medicine. 2008;35:S77–S89. doi:10.1016/j.amepre.2008.05.002.
- Tremper C, Thomas S, Wagenaar AC. Measuring Law for Evaluation Research. Evaluation Review. 2010;34:242–266. doi:10.1177/0193841X10370018.
- Vuolo M. National-level drug policy and young people’s illicit drug use: A multilevel analysis of the European Union. Drug & Alcohol Dependence. 2013;131:149–156. doi:10.1016/j.drugalcdep.2012.12.012.
- Wagenaar A, Burris S, editors. Public Health Law Research: Theory and Methods. John Wiley & Sons; San Francisco: 2013.
- Wagenaar AC, Komro KA. Natural Experiments: Research Design Elements for Optimal Causal Inference Without Randomization. In: Wagenaar A, Burris S, editors. Public Health Law Research: Theory and Methods. John Wiley & Sons; San Francisco: 2013. pp. 307–324.
- Wagenaar AC, Maldonado-Molina MM, Wagenaar BH. Effects of alcohol tax increases on alcohol-related disease mortality in Alaska: time-series analyses from 1976 to 2004. American Journal of Public Health. 2009;99:1464–1470. doi:10.2105/AJPH.2007.131326.
- Walt G, Shiffman J, Schneider H, Murray SF, Brugha R, Gilson L. ‘Doing’ health policy analysis: methodological and conceptual reflections and challenges. Health Policy and Planning. 2008;23:308–317. doi:10.1093/heapol/czn024.
- Zur J, Mojtabai R. Medicaid Expansion Initiative in Massachusetts: Enrollment Among Substance-Abusing Homeless Adults. American Journal of Public Health. 2013;103:2007–2013. doi:10.2105/AJPH.2013.301283.

