ILAR Journal. 2021 Aug 9;60(3):424–433. doi: 10.1093/ilar/ilab024

Neuroethics and Animals: Report and Recommendations From the University of Pennsylvania Animal Research Neuroethics Workshop

Adam J Shriver , Tyler M John
PMCID: PMC8767460  PMID: 34370840

Abstract

Growing awareness of the ethical implications of neuroscience in the early years of the 21st century led to the emergence of the new academic field of “neuroethics,” which studies the ethical implications of developments in the neurosciences. However, despite the acceleration and evolution of neuroscience research on nonhuman animals, the unique ethical issues connected with neuroscience research involving nonhuman animals remain underdiscussed. This is a significant oversight given the central place of animal models in neuroscience. To respond to these concerns, the Center for Neuroscience and Society and the Center for the Interaction of Animals and Society at the University of Pennsylvania hosted a workshop on the “Neuroethics of Animal Research” in Philadelphia, Pennsylvania. At the workshop, expert speakers and attendees discussed ethical issues arising from neuroscience research involving nonhuman animals, including the use of animal models in the study of pain and psychiatric conditions, animal brain-machine interfaces, animal–animal chimeras, cerebral organoids, and the relevance of neuroscience to debates about personhood. Based on those discussions, this paper highlights important emerging ethical issues and presents the authors’ recommendations for research in the United States, loosely following the format of the 2 Gray Matters reports on neuroethics published by the Presidential Commission for the Study of Bioethical Issues.

Keywords: animal research ethics, animal welfare, bioethics, cerebral organoids, chimeras, machine-brain interface, neuroethics, personhood

INTRODUCTION

Growing awareness of the ethical implications of neuroscience in the early years of the 21st century led to the emergence of the new academic field of “neuroethics.” Academic meetings and symposia, new journals, emerging university centers and programs, and an official society have provided venues for discussion of neuroethical issues. The importance of this new field was demonstrated in 2013, when President Obama announced the launch of the National Institutes of Health (NIH) BRAIN Initiative, created with a mission of revolutionizing our understanding of the brain, and simultaneously called on the Presidential Commission for the Study of Bioethical Issues to create a working group specifically to address the complicated ethical issues that arise from our rapidly increasing neuroscientific knowledge.

Two volumes entitled Gray Matters resulted from this examination of neuroethical issues by the working group. The first volume, released in May of 2014, examined “the integration of ethics into neuroscience research across the life of a research endeavour.”1 The second volume, released in March of 2015, focused on “a set of controversial topics that illustrate the ethical tensions and societal implications of advancing neuroscience and technology: cognitive enhancement, consent capacity, and neuroscience and the legal system.”2 The 2 volumes of the Presidential Commission for the Study of Bioethical Issues report surveyed a broad and complex landscape of ethical issues, presented a concise set of illustrative examples, provided insightful analysis, and included concrete, practical recommendations throughout. The reports have provided and will continue to provide valuable guidance for neuroscience researchers and policymakers for years to come. As an example, the Journal of Neuroscience recently published a set of 8 guiding principles for the BRAIN Initiative from the NIH Neuroethics Working Group along with an accompanying commentary.3,4

The vast majority of initial BRAIN Initiative funding was targeted at developing animal models of brain function, and the NIH recently announced a call for applications for creating new colonies of marmosets for “transgenic and chimeric neuroscience research.”5 But despite the rapid acceleration and evolution of neuroscience research on nonhuman animals, neither the 2 Gray Matters volumes nor the recent 8 guiding principles make any reference to ethical issues connected with neuroscience research involving nonhuman animals. This is a significant oversight given the central place of animal models in neuroscience. As such, there is a strong need for exploration of neuroethical issues involving animals.

Though several reports and guidance documents exist that are focused on the use of animals in neuroscience research, these documents have primarily focused on applying familiar ethical frameworks (such as the “3Rs” of reduction, replacement, and refinement) to neuroscience research rather than emphasizing new ethical challenges unique to neuroscience. But just as it has been a central theme from neuroethics that our increased abilities to interpret and influence human brains have given rise to unique ethical questions that differ from traditional questions in bioethics,6 our increased ability to study and manipulate animal brains also raises new questions and concerns that go beyond previous discussions of animal ethics. These new questions have not yet been addressed in a systematic and comprehensive manner, though the need for neuroethics to include explicit discussion of issues involving nonhuman animals is increasingly being recognized in special journal issues,7 statements from journal editors,8 and edited volumes.9

To respond to these concerns, the Center for Neuroscience and Society (CNS) and the Center for the Interaction of Animals and Society at the University of Pennsylvania hosted a workshop on the “Neuroethics of Animal Research” in Philadelphia, Pennsylvania, in June 2016. This workshop featured expert speakers and participants in neuroscience, ethics, psychology, veterinary medicine, and epidemiology as well as representatives from the NIH BRAIN Workgroup, the Committee on Guidelines for the Care and Use of Mammals in Neuroscience and Behavioral Research, the pharmaceutical industry, and an animal protection organization. The workshop was focused on identifying emerging and under-discussed ethical questions arising from neuroscience research involving nonhuman animals and was organized by Martha Farah, Director of the CNS, Adam Shriver, Visiting Research Fellow at the CNS, and James Serpell, Director of the Center for the Interaction of Animals and Society.

After brief introductory and stage-setting remarks from Martha Farah and Adam Shriver, the 1-day workshop focused on 5 topics presenting new ethical questions. The morning session was focused on the use of animal models in neuroscience research on pain and psychiatric conditions and featured presentations from Kenneth Sufka, Professor of Psychiatry at the University of Mississippi, Joseph Garner, Associate Professor of Comparative Medicine at Stanford, and Larry Carbone, Director of the Animal Care and Use Committee at UCSF. The afternoon was focused on “ontology-bending” new technologies that challenged our previous conceptions of “human,” “animal,” “person,” and “machine.” This session featured presentations from Andy Schwartz, Professor of Neurobiology at Pittsburgh, Evan Balaban, Professor of Psychology at McGill University, Helena Hogberg, Research Associate at Johns Hopkins University, Robert Streiffer, Professor of Bioethics and Ethical Theory at Wisconsin, and Kristin Andrews, the York University Research Chair in Animal Minds. General discussion was included throughout the day’s presentations.

This report provides an overview of the ethical issues that arise in neuroscience research on nonhuman animals based on the workshop presentations and discussion. This includes a discussion of the use of animal models of pain and human psychiatric disorders, an analysis of recent technological breakthroughs relating to neuroscience research on nonhuman animals, and an overview of the ways that neuroscience interventions can challenge traditional categories of moral patienthood. Based on these discussions, we include analysis here of key ethical questions related to neuroscience research on nonhuman animals. From these discussions, we (the authors of this paper) propose recommendations for ethical conduct of neuroscience research on nonhuman animals moving forward. These recommendations should be considered the authors’ own and may not represent the views of other workshop participants or those of the funding or hosting institutions. Though the workshop took place several years ago, the topics covered remain important ethical issues.

ANIMAL MODELS OF PAIN AND PSYCHIATRIC CONDITIONS

Neuroscience research has progressed tremendously over the past half century, but has also seen serious and expensive failings in attempts to use animal models of human mental states such as pain10 as well as animal models of treatments for psychiatric conditions.11 These topics were discussed at the workshop by Drs Kenneth Sufka, Joseph Garner, and Larry Carbone.

Regarding pain, many of the most common assays used in pain research on nonhuman animals (such as hot plate, von Frey filament, and tail flick tests) produce observable reactions that are spinally mediated and thus do not depend on brain activation. Pain is defined as having sensory and affective components,12 and the affective component of pain is arguably the most clinically significant,13 yet the most common tests in animals do not capture affect. Moreover, most assays test acute pain, but in humans the most serious clinical challenges involve chronic pain.14 As Kenneth Sufka stated succinctly during his presentation, “I have yet to figure out why we need to develop an analgesic for humans who leave their hands on a hot stove for too long.” Furthermore, even when innovative new models of chronic pain are developed, they are often “validated” using the same measures of pain that are spinally mediated rather than brain dependent. A recent review explains that “[f]rustration is mounting over the limited success of the field in translating the veritable explosion of basic scientific data collected over the past few decades using animal models into truly new, effective and safe clinical analgesics.”10

For this reason, Sufka15,16 and others17–19 have developed conditioned place preference measures of pain. In these tests, animals are exposed to different stimuli in specific locations, and their willingness to approach or avoid those locations in the future is measured. Conditioned place aversion appears to rely on activity in the brain regions that are crucial for pain affect and chronic pain symptoms in humans. Moreover, that animals choose to avoid areas they associate with negative events intuitively seems to better capture the negative valence that makes pain undesirable for humans. The facial grimace scale20 also appears to be a promising method for evaluating pain affect in mice. Nevertheless, much pain research still uses older models that do not appear to capture the most clinically significant aspects of pain, rendering them poor predictors of analgesic efficacy. The primary purpose of pain research on animals is to find effective treatments of pain in humans, but most effective analgesics are still “based on mechanisms of action that have been recognized for some time,” and many new compounds designed as painkillers are failing clinical trials.21 As stated in a recent review, “Many are frustrated with the lack of translational progress in the pain field, in which huge gains in basic science knowledge obtained using animal models have not led to the development of many new clinically effective compounds.”10 This failure has only become more significant as the opioid crisis continues to cause massive harms due, in part, to the inability to find effective analgesics with less addictive properties than opioids.
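To make the logic of such measures concrete, the following is a minimal sketch of how a conditioned place preference score might be computed from chamber-occupancy times. The simple difference score and all data values are illustrative assumptions, not a description of any published protocol.

```python
# Minimal sketch of scoring a conditioned place preference (CPP) test.
# All values are hypothetical; real studies use video tracking and
# counterbalanced chamber assignments.

def cpp_score(pre_seconds, post_seconds):
    """Shift in time spent in the treatment-paired chamber (seconds).

    Positive values suggest conditioned place preference (e.g., pain
    relief experienced in that chamber was rewarding); negative values
    suggest conditioned place aversion.
    """
    return post_seconds - pre_seconds

# Example: an animal in ongoing pain spends more time in the chamber
# previously paired with an analgesic.
baseline_s = 280.0  # time in paired chamber before conditioning
test_s = 410.0      # time in paired chamber after conditioning
print(f"CPP score: {cpp_score(baseline_s, test_s):+.0f} s")  # prints +130 s
```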

As explained by Joseph Garner at the workshop, there are arguably even more severe translation problems when it comes to the modeling of human psychiatric conditions and brain-based diseases, particularly in preclinical trials of drugs intended to treat those conditions. Only about 1 in 10 drugs that succeed in preclinical animal testing goes on to succeed in human clinical trials.22 Because the primary justification given for invasive, non-therapeutic animal research is that the potential benefits outweigh the potential harms, this “90% devaluation” of the animal tests places serious pressure on the ethical justification for such research. Though the failure of human trials is not entirely a result of a failure to translate, translation failure is nevertheless a major part of the explanation. These high failure rates mean not only that many animals suffer needlessly but also that limited financial resources are lost.
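A back-of-the-envelope calculation illustrates why this failure rate matters for harm-benefit reasoning. Apart from the roughly 1-in-10 translation rate cited above, all quantities below are hypothetical placeholders.

```python
# Back-of-the-envelope illustration of the "90% devaluation":
# if only ~1 in 10 drugs that pass animal testing succeeds in humans,
# the expected benefit entered into a harm-benefit analysis must be
# discounted by that translation probability.

p_translate = 0.10             # ~1-in-10 clinical success rate cited above
benefit_if_successful = 100.0  # benefit of a successful drug (arbitrary units)
harm_to_animals = 15.0         # harm imposed by the preclinical program (same units)

expected_benefit = p_translate * benefit_if_successful  # = 10.0
print(f"Expected benefit: {expected_benefit}")
print(f"Favorable harm-benefit ratio? {expected_benefit > harm_to_animals}")  # False
```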

In 2010, a major shift occurred when researchers began to question the validity of the animal models being used. One review found that of approximately 200 genetically modified mouse models of Alzheimer’s disease, none had been replicated in human studies.23 Another review published the same year found that of 500 studies reporting effective treatments for ischemic strokes, only 2 successfully translated to humans.24 Specifically in regard to animal models of human psychiatric disorders, Nestler and Hyman wrote: “Many of the symptoms used to establish psychiatric diagnoses in humans (for example, hallucinations, delusions, sadness and guilt) cannot be convincingly ascertained in animals.”11 Researchers, and particularly the pharmaceutical industry, have started seriously considering the possibility that the animal models being used might be unreliable.

The discussion of concerns about translation models continues today. A recent proposed roadmap for Parkinson’s research stated, “Currently, no animal model faithfully reproduces all the key clinical features of PD.”25 A valuable overview of some of the issues with the validity and reproducibility of animal models used in neuroscientific research can be found in Johnson,26 and a more general overview of recent evidence relating to the quality, validity, and value of pre-clinical animal research regardless of scientific discipline can be found in Pound.27

Garner has developed an extensive critique of current models. At the workshop, he argued that treatments that appear to work in animal models have a poor track record in humans in part because too many research protocols are “modeling the model” rather than “modeling the disease.” That is, they are continuing with a “model” of a disorder that is known to inaccurately capture a similar disorder in humans. Many behaviors or assays in animals taken to be models of particular disorders of humans are inadequate. For example, all of the models of Parkinson’s in animals involve dopaminergic cell loss. However, in humans, there are symptoms predictive of Parkinson’s that occur well before any dopaminergic cell loss. Thus, Garner argues, the model is only capturing a symptom, rather than an underlying cause, of the disorder. The forced-swim test is another well-known model with clear welfare issues that is a questionable “model” of multiple human psychiatric conditions28,29 but is nonetheless widely used, though this may finally be starting to change.30

Why do so many inadequate models persist in the search for human treatments? Garner offered several explanations, as follows.

The Incentive Structure of Academic Funding and Publishing. Pharmaceutical companies have moved away from some of the least translatable animal models more quickly than academic researchers. These companies lose money from poor choices and thus are incentivized to stop using non-predictive models. Academic researchers face different incentives: NIH R01 grants are a central feature of university research funding, R01 grants are renewable, and there is a 5 to 10 times greater chance of having an R01 grant renewed than of getting a new R01 grant funded. Because grants are often tied to models, researchers are incentivized to continue using the same models, as they are much more likely to have grants using the same model renewed than to have new grants using different models approved. Moreover, if many journal reviewers are invested in an old model of a disorder because of their own background, this can impede the adoption of new models.

A Lack of Clinical Knowledge. Researchers who study models of disorders in animals do not necessarily see humans in a clinical context with that disorder. Many models are based on descriptions from the Diagnostic and Statistical Manual of Mental Disorders, but these criteria only partially and imperfectly capture the expression of disorders. Thus, a lack of experience with humans with a particular condition can lead researchers to miss cues that the model is not truly getting at the same disorder, cues that would be apparent from a more holistic understanding of the disorder in humans.

A Lack of Comparative Knowledge. Many researchers are not experts in the biology and behavior of the animals they are working with. This can lead to mistakes in interpretation or in treatment.

Excessive Homogeneity Among Research Subjects. The goal of human clinical research is to test treatments on a diverse, heterogeneous portion of the population to ensure that a treatment is effective for humans generally. In much animal research, the approach is dramatically different: genetically similar animals are kept in identical housing at identical temperatures on identical feeding schedules, and this narrow range of subjects can produce false indications that a particular treatment is effective (or ineffective). A deliberately heterogenized design, sketched below, is one way to address this concern.
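The following is a minimal sketch of the kind of heterogenized cohort design this critique points toward: subjects are spread across varied genetic and environmental backgrounds rather than held uniform. The specific factors and levels are hypothetical.

```python
# Minimal sketch of a deliberately heterogenized cohort design.
# Factor names and levels are hypothetical.
import itertools
import random

random.seed(1)
factors = {
    "strain": ["C57BL/6", "BALB/c"],
    "housing_temp_C": [20, 24],
    "enrichment": ["standard", "enriched"],
}
conditions = list(itertools.product(*factors.values()))  # 8 combinations

# Spread 24 subjects evenly across the combinations, so any treatment
# effect must hold up across varied genetic and environmental backgrounds.
subjects = [f"mouse_{i:02d}" for i in range(24)]
random.shuffle(subjects)
for subject, condition in zip(subjects, itertools.cycle(conditions)):
    print(subject, dict(zip(factors, condition)))
```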

Ethical Issues With Animal Models

One problem raised repeatedly throughout the workshop was that the putative ethical justification for animal research is based on an assumed harm-benefit analysis in which harms to nonhuman animals are outweighed by the potential benefits that would accrue to humans (or, in some cases, to other animals). Though important steps have been taken with the creation of the ARRIVE Guidelines31 and the AALAS–FELASA Working Group reports on harm-benefit analysis,32 at present there is no governmental requirement for performing a genuine harm-benefit analysis for particular research programs in the United States. Funding agencies determine whether research is worth funding, and institutional animal care and use committees (IACUCs) are tasked with minimizing harms, but there is no requirement for directly weighing harms against expected benefits; strictly speaking, a harm-benefit analysis is never required at any point in the research planning process.

Some have attempted to argue that because funding agencies evaluate potential benefits and IACUCs are asked to minimize harms, a reliable harm-benefit analysis does exist in the US system, but this is based on a misunderstanding of harm-benefit analysis. If the expected harms and benefits of research are never directly evaluated against one another, then there can be no basis for thinking that the benefits of research are comparatively more significant than the harms, no matter how selective funding agencies are and no matter how judicious IACUCs are. This point is codified in the requirements for institutional review boards that oversee research on humans, which are asked both to minimize risk and to ensure a favorable risk–benefit ratio. Merely minimizing risk, even where research has clear benefits, is not sufficient for ensuring a favorable ratio of benefits to risks.

Moreover, the 2015 NIH decision to eliminate most chimpanzee research on the grounds that such research was unnecessary is evidence that the current process for approving research is not robust enough to rule out animal models that are not justified by a true harm-benefit analysis. Both funding agencies and IACUCs arguably performed their roles correctly in evaluating previous chimpanzee research; nevertheless, when an independent commission looked at the research, it concluded that most of the research on chimpanzees was not necessary. Thus, despite the challenges of assessing harms and potential benefits from research, it nevertheless should be the case that there is a point in the process where the harms and expected benefits of the research are directly weighed against one another, and this analysis should be presented in a clear and transparent manner.

Nevertheless, a favorable harm-benefit analysis by no means guarantees ethical research standards. Workshop participants, bioethicists, and researchers generally were divided over what additional ethical restrictions exist on experimentation on nonhuman animals. Larry Carbone noted that many bioethicists believe that harming others without their consent is virtually always wrong and that this constraint may extend to nonhuman animals.33,34 Workshop participant David DeGrazia, along with Jeff Sebo, has argued that morally responsible animal research must give each animal a worthwhile life, cause no unnecessary harms, and meet the basic needs of each animal in addition to having a favorable harm-benefit ratio, and that even satisfying these constraints may not be a guarantee of ethical research.35

It is widely recognized that keeping animals in restrictive conditions, performing invasive procedures on them against their will, and ultimately ending their lives causes harms to those animals. Even the strongest proponents of animal research recognize these as harms but claim that they are justified by the potential benefits of animal research for humans. However, if thousands of animals are used in failed models lacking predictive validity, this research causes harm to animals without producing any benefits. Harming nonhuman animals without producing any benefits is not justifiable. Moreover, relying on inefficient models imposes direct costs on human interests: the money spent on them could be redirected to productive ends, whether in the pursuit of scientific advancement or toward other societal priorities.

IN VITRO BRAINS (OR BRAINS-ON-A-CHIP)

Current models in toxicology testing rely on testing toxins on large numbers of animals, usually mice. This is referred to as in vivo testing and is currently considered the “gold standard” of toxicology testing by the US Food and Drug Administration (FDA). Such research requires thousands of animals to be exposed to aversive experiences and killed. Alternative “in vitro” tests have been developed that examine the effects of toxins on cells that are not in living organisms but rather in artificial environments.36 In an optimistic assessment at a 2017 Senate Appropriations Committee hearing, NIH Director Francis Collins suggested that in 10 years, no animals would be used in toxicity testing and the field would move entirely to in vitro models.37

Unfortunately, as explained by workshop participant Helena Hogberg, because the FDA treats in vivo models as its gold standard for translational research, alternative, non-animal models must typically be validated against animal models to gain federal approval. Because mouse models are imperfect models of human disease, this means that new alternatives must mirror an imperfect model to be implemented in toxicity research. Under the FDA’s current regulatory standards, an alternative model could in principle capture human disease better than mouse models and still fail to gain approval, and no model could in principle receive a stronger evaluation than a mouse model. FDA standards therefore make it effectively impossible to develop models of human disease that are more accurate than mouse models. Researchers at the Center for Alternatives to Animal Testing at Johns Hopkins hope to identify toxicity pathways and mechanisms that will allow in vitro models to become the new standards for testing (thus greatly reducing the number of animals harmed), but this will require FDA endorsement and acceptance.
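A toy simulation can make this validation ceiling vivid: if candidate models are scored by agreement with an imperfect mouse reference, even a model that tracks human outcomes perfectly scores worse against the reference than the reference scores against itself. The data below are synthetic and purely illustrative, not an analysis of any actual validation program.

```python
# Toy simulation of the validation ceiling: a candidate model that tracks
# human outcomes perfectly still scores worse against an imperfect mouse
# reference than the mouse reference scores against itself. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
human = rng.normal(size=1000)                     # "true" human toxicity outcomes
mouse = human + rng.normal(scale=1.0, size=1000)  # imperfect animal reference
candidate = human.copy()                          # hypothetical perfect human model

def agreement(a, b):
    """Pearson correlation as a crude stand-in for a validation score."""
    return np.corrcoef(a, b)[0, 1]

print(f"candidate vs human (what matters):   {agreement(candidate, human):.2f}")  # 1.00
print(f"candidate vs mouse (what is scored): {agreement(candidate, mouse):.2f}")  # ~0.7
print(f"mouse vs mouse (reference ceiling):  {agreement(mouse, mouse):.2f}")      # 1.00
```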

As part of this search for alternative and better methods of toxicology testing, researchers have been developing “organ-on-a-chip” models that can be used to assess toxicity at the molecular, cellular, and organ levels. Models have been developed for livers, eyes, and other organs. Efforts are also underway to create a “human-on-a-chip” that connects different organ models for a more holistic representation of the interaction of different systems within humans. At Johns Hopkins University’s Center for Alternatives to Animal Testing, researchers are designing brain-on-a-chip models to assess neurodevelopmental toxicology. The motivation for this work is that 1 in 6 children in the United States has 1 or more developmental disorders, many of which are neurodevelopmental. The National Research Council estimated that 3% of developmental disorders are due exclusively to chemical exposure and 28% are due at least partly to chemical exposure.38

Researchers have developed a 3-dimensional partial model of rat brains that contains neurons, astrocytes, and microglia. These neurons exhibit spontaneous electrical activity, although they receive no sensory stimuli. The models trace the development of cells from induced pluripotent stem cells to neural precursor cells and finally into neurons, thus demonstrating the influence of various chemicals in the cells’ environment. The phenotypes of these cells have been shown to mimic the behavior of neurons in humans with various neurological disorders such as Parkinson’s disease.

Ethical Issues With In Vitro Brains

If these models are successful, they could replace the thousands of nonhuman animals who are harmed and killed in toxicity testing each year. Animals in toxicity tests can live extremely compromised lives, experiencing acute and chronic pain due to experimentation. The benefits of this change would therefore be quantitatively and qualitatively immense, with the potential to spare thousands of animals pain, distress, and death in toxicity tests. The US Environmental Protection Agency (EPA) recently decided to allocate $4.25 million to funding research on alternatives, including $849,000 for alternative models of neurotoxicity.39 Additional funding could accelerate this research, spare many animals from needless harm, and help lead to the replacement of animal models as the gold standard of neurotoxicology testing.

However, one possible concern with neural arrays is whether groups of neurons, when they reach a certain level of complexity and organization, could produce sentience and therefore attain some moral status. This might initially sound implausible, but most neuroscience researchers agree that our minds and other animals’ minds ultimately consist of nothing more than coordinated neural activity, so at some point arrays of neurons organized in particular ways do attain moral status. Given the small number and limited functional connections of neurons involved in current 3D models, it seems unlikely that they presently exhibit any morally significant sentience. We should nevertheless ask whether, as the models become more complex, they will eventually reach a point at which sentience is a genuine concern. Many researchers have expressed the idea that neural models raise no ethical concerns because they are not “connected to the outside world.” This can be read in 2 ways: (1) researchers might believe that, as a matter of empirical fact, when neurons are not connected to sensory inputs they fail to develop patterns of firing that are constitutive of thought or feelings; or (2) researchers might believe that a connection to the external world is required to have semantics, that is, representation of the external world, in addition to syntax, a set of rules that govern how thoughts are related to one another.40 Both of these readings, if true, may imply that neural models are not sentient and are not objects of moral concern.

Though there seems to be widespread belief that the absence of input negates the possibility of moral patient status, there are at least some reasons for caution. First, the absence of a nociceptive signal does not always entail the absence of pain. For example, phantom limb pain occurs in patients who have lost their limbs and the nociceptors that are normally responsible for pain in the limbs. In other words, pain can occur in the absence of a signal coming from the peripheral nervous system. However, unlike in the neural array models, in the case of phantom limbs, there was a previous connection to external signals, raising the possibility that the cases are not relevantly similar. Second, spontaneous patterns of activation can occur in self-contained groups of neurons. In the absence of an ability to decode neural patterns into content, we cannot decisively rule out the possibility that this organized activity has some representational content. Though some initial texts have started to consider these challenges,41,42 this is an issue that requires future exploration.

ANIMAL BRAIN-MACHINE INTERFACES (BMI)

The development of BMIs has already shown tremendous potential to help humans with serious medical conditions. Patients who are paralyzed or have lost the ability to control a limb have been able to move mechanical prostheses via technology that reacts to the neural firing patterns that would normally control natural body parts. The same technology has the potential to be used by patients with more severe conditions such as locked-in syndrome.

As described by workshop participant Andrew Schwartz, a key step in the development of BMI technology was using single- and multi-unit neural recordings in cortical areas associated with motor control to monitor how certain neurons fired when monkeys made arm movements.43 After these patterns were detected and decoded, researchers inserted electrodes capable of controlling prosthetic limbs into the monkeys’ brains, and the monkeys were then able to control the mechanical limbs with their brains. This research made current human BMIs possible; such interfaces would have been unlikely to develop without these animal models, given our current inability to study the behavior of individual neurons non-invasively.
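As a rough illustration of the decoding step, the sketch below fits a linear (ridge regression) map from synthetic firing rates to arm velocity. It is a simplified stand-in for, not a reconstruction of, the population-vector and related decoders used in the actual experiments.

```python
# Sketch of a linear brain-machine decoding step: fit a ridge-regression
# map from firing rates to 2D arm velocity on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons = 2000, 50

velocity = rng.normal(size=(n_samples, 2))  # hand velocity (x, y) per time bin
tuning = rng.normal(size=(2, n_neurons))    # each neuron's directional tuning
rates = velocity @ tuning + rng.normal(scale=0.5, size=(n_samples, n_neurons))

# Ridge regression: weights W mapping firing rates -> velocity
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons), rates.T @ velocity)

decoded = rates @ W  # in a BMI, this signal would drive the prosthesis
error = np.mean(np.linalg.norm(decoded - velocity, axis=1))
print(f"mean decoding error: {error:.3f}")
```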

BMI technology has also been put to use in controlling animal behaviors in some well-publicized cases. The “roborat” was an example of a rat whose movements were controlled by stimulation of the reward centers of the brain.44 This technology is being tested by the US military as a way to use animals with cameras attached to them to gain information about otherwise inaccessible locations. The developers of the roborat claim that because reward centers of the brain are the mechanism guiding the rat’s behavior, the experience is unlikely to be unpleasant.

However, some attendees at the meeting questioned this claim, noting that motor control disorders in humans, which cause involuntary movements, are known to be extremely aversive. This could perhaps be empirically explored in more detail by, for example, looking at cognitive bias tasks45 or assessing facial features46 during control tasks.

Neuron-machine technology has also been put to educational, or at least commercial, use. A group called Backyard Brains developed a “do-it-yourself kit” for creating what they call a “roboroach.”47 Using this kit, children dissect the antennae off insects and then insert electrodes that can be used to control the animals’ movements. The company suggests that the roboroach teaches children about the value of science.

In addition to BMI, another class of interesting findings could perhaps be called “machine-brain interaction.” DeMarse and Dockendorf reported the use of cultured rat cortical neurons to control the flight of a simulated aircraft,48 and Reger et al inserted a portion of a lamprey’s nervous system into a small robot as part of the control of its movements.49 In these cases, neurons are used as necessary parts of machines.

Ethical Issues With Animal BMIs

Though some question whether animal models will be needed for neuroscience research going forward, the development of BMI technologies beneficial to humans would likely not have occurred without extensive invasive experiments on nonhuman animals. Thus, in a harm-benefit analysis of the use of animals in this type of medical research, the benefits to paralyzed and immobile humans, particularly as the technology continues to evolve, can be included in the benefits column. This application of BMI is in many ways a standard case of using animal models to improve human medical conditions, and so standard epistemic and ethical considerations relevant to animal research apply.

For the military and educational applications of BMI, however, new ethical issues arise. In cases where an animal’s movements are controlled by outside sources, questions arise as to how aversive the experience is. In humans, we would also regard the loss of liberty and autonomy resulting from external control as a significant moral concern, even for young humans whose agency may be no more developed than that of monkeys or rats, and this raises similar concerns for nonhuman animals. Although the monkeys and rats used in BMI experiments are doubtless sentient, the cockroach used in the roboroach kit is, on many accounts of sentience, less likely to have positive or negative experiences that resemble those of humans. Consequently, the expected harms of the research may be reduced in this regard. On the other hand, it is not clear that the kit provides educational value that could not be achieved through other means, so its benefits are also questionable. Moreover, one might worry that using animals this carelessly could foster a general disregard for animal welfare and so lead indirectly to additional welfare concerns.50

Finally, in the cases of using neural arrays to guide or influence machinery, the main concern relates to the worries raised above about in vitro brains and the minimal conditions needed for organized neural activity to count as morally significant. Again, it seems intuitively unlikely that the neural arrays in the machine-brain interface cases above have anything approaching sentience, and even if the arrays did have some aspects of conscious experience, there would be no reason to suppose that the phenomenology would have any positive or negative affective valence. However, an additional wrinkle compared with the in vitro neural cells is that neurons in machine-brain interfaces do have a connection to “the outside world.” That is, in contrast to claims that isolated neural arrays could have “syntax, but not semantics,” the firing patterns of these neural arrays are better candidates for representing features of the outside world. Thus, there might be additional reasons for caution about their creation.

CHIMERAS

Chimeras are organisms that contain genetic material from 2 or more different sources. The chimeras of interest for neuroethics are organisms whose brains contain genetic material from multiple sources. We use “neural chimeras” to refer to such animals who have neurons from multiple genetic sources. For such chimeras, nomenclature is often used referencing “X-Y chimeras,” where X and Y refer to different types of organisms. For example, a “human-animal chimera” refers to a chimera that contains cells with a human origin and cells with a nonhuman animal origin.

Modern technology allows donor cells to be inserted early in development, while cells are still at the neural crest stage. For bird species, this means that the insertions can be performed in eggs, so no invasive surgery on sentient organisms is required. By tracking how cells from the neural crest develop into brain cells, researchers can selectively intervene such that cells in particular regions of the developed brain originate exclusively from the donor organism while cells in other brain regions originate from the host embryo. Thus, targeted precision in the combination of neural material is possible.

At the workshop, Evan Balaban drew on his own research51 to illustrate the power of neural chimeras for demonstrating the role of neurons in behavior. Male chickens and quail each have their own characteristic vocalizations. In separate experimental arms, Balaban and colleagues inserted quail cells into the cellular progenitors of neurons that compose the midbrain, the brainstem, and the forebrain of chicken embryos, respectively. In the condition where quail neurons were in the midbrain, but not in the other 2 conditions, the resulting chicken-quail neural chimeras made characteristic quail vocalizations along with the characteristic quail head bob. This research demonstrated both the utility of neural chimeras for producing behavioral changes and a technique for using neural chimeras to investigate mechanisms. Such techniques could be used to further investigate the role of neurons in behavior.

In another study, mice with a human FOXP2 gene made slightly different vocalizations and showed subtle anatomical and physiological differences in brain regions used for speech and other movement.52 This shows that a human gene can influence brain development in a mouse and exert behavioral effects. Although single genes rarely cause large effects by themselves, other methods of merging animal brains with larger-scale functional elements, either biological or electronic, could produce significant psychological changes, increasing an organism’s cognitive capacities.

The possibility of “humanizing” other animals has been the subject of much controversy; the NIH has had a moratorium on funding research that introduces certain human cells into nonhuman animal embryos but has proposed lifting it. However, there are currently no special restrictions on animal–animal chimeras.

Ethical Issues With Chimeras

Inserting cells from one organism into another host organism has the potential to alter the cognitive and affective capacities of the host animal. It seems very likely that as technology becomes more sophisticated, this technique could increase the intelligence of organisms and could alter their capacity for positive and negative feelings. Because the cognitive life of an organism and the species of an organism are often tied to their moral status, the moral status of a neural chimera—and correspondingly the wrongness of subjecting that chimera to involuntary experimentation, suffering, and death that would follow from this status—cannot be assumed to be equivalent to the moral status of a typical adult member of the host species.

If we combine neural cells from 2 different organisms to create a neural chimera, this raises the question: what ethical standards should we apply to the new organism? Should we apply the standards we would normally hold for the more protected of the 2 organisms? If so, in cases of nonprimate-primate chimeras, current regulations would require that we account for the psychological well-being of the chimera, because this is what is currently required for the care of primates. And in cases of human-animal chimeras, current regulations would impose very high standards on such research, possibly even requiring consent, which would make the permissibility of such research unlikely. The human-mouse chimera discussed above, for example, would require full institutional review board approval applying all of the ordinary standards for human subjects research.

Of course, inserting a small number of human neurons into a nonhuman animal is unlikely to transform the cognition of the animal in any significant way, so a more sophisticated approach would not apply blanket ethical rules to all chimeras based on the species of origin but would instead attempt to assess the moral significance of each alteration. This leads to difficult questions about how to assess the moral status of the created being. It should be noted that although there has been much discussion of human-animal chimeras in the bioethics literature already, animal–animal chimeras raise related but underexplored questions about the moral status of the new organism.

The ethicist Robert Streiffer argued at the workshop that the moral status of neural chimeras depends entirely on the chimera’s cognitive capacities. If a neural chimera has the cognitive capacities of an adult chimpanzee, then that animal has the same moral status as an adult chimpanzee. If it has the cognitive capacities of an adult beagle, then it has the same moral status as an adult beagle. Such an account is revisionary for animal research guidelines, for it implies that organisms with the same cognitive capacities should have the same protections. Thus, organisms who have the cognitive capacities of young humans (perhaps including mice) should have the same protections as young humans. Streiffer’s argument parallels an approach that enjoys significant support in the animal ethics literature: the species of an animal is not itself morally significant, and animals who are the same in virtually every way except for species membership have equal moral status.

PERSONHOOD

Previous discussions of animal research neuroethics have focused on how new technologies in the neurosciences open up new and interesting ethical questions. However, a different set of questions in neuroethics focuses on how new knowledge from the neurosciences might inform existing ethical debates. For example, the role of neuroscience in determining the sentience of various species has been discussed extensively in the literature. At our workshop, we chose to focus on how neuroscience might inform a different set of morally salient characteristics: those making up the concept of “personhood.”

As noted by the first speaker, Kristin Andrews, there are various different senses of “personhood” that are sometimes confused in debates on the topic. In the law, personhood signifies legal standing and is contrasted exclusively with “mere things.”53 Thus, in the law, lawsuits can be filed on behalf of a person whose rights have been violated. However, anything the law classifies as a mere thing, including, at the present moment, all nonhuman animals, has no such standing and, in fact, has no more legal standing than inanimate objects. Some advocacy groups, such as the Nonhuman Rights Project, are currently working to see courts recognize that some nonhuman animals can meet the legal standards for personhood.54

In contrast, psychological personhood refers to a suite of psychological properties that provide the basis for suggesting that a given entity has a cognitively sophisticated point of view. These properties can include such capacities as metacognition, a sense of self, moral behavior, and linguistic capacities, but in general there is not widespread agreement as to what the characteristics of psychological personhood should be.

Finally, the notion of most direct relevance for our workshop is the notion of moral personhood, which refers to having a particular moral status that is worthy of a suite of protections that generally go beyond the protections we provide for the “merely sentient.” The moral notion of personhood is sometimes, but not always, linked to the notion of moral agency; on this conception, to be a person means to be a being capable of acting on moral considerations, and this ability is suggested to make the being worthy of an extra level of moral considerability.

As detailed by Andrews, the set of capacities that make up the psychological notion of personhood, which often informs the moral notion, can be divided into different categories. These categories include rationality, autonomy, subjectivity, personality, a narrative sense of self, and relationships with others. It is presumably for these specific domains, rather than for personhood considered as a whole, that neuroscience could be most relevant. There are no conclusive findings yet, but future neuroscience may inform debates about whether nonhuman animals have person-like capacities. For example, it might turn out that spindle cells enable certain advanced forms of cognition, in which case they might be thought of as biomarkers for those functions. Stanislas Dehaene has argued that certain features of human neural organization have given rise to the capacity for more abstract thought. And certainly those notions of personhood that favor language capacity are likely to find particular properties of neural organization that uniquely support this capacity.

In general, however, most attendees of the workshop agreed that the notion of “personhood” itself was not especially helpful for most moral debates concerning the use of animals in research. Focusing on personhood in the legal context makes some sense given that the law is highly dependent on precedent and that, according to current precedent, “persons” and “things” are logically exhaustive categories. However, in the moral context, it is likely that specific traits associated with personhood, rather than the combined cluster of traits vaguely referred to as “personhood,” will be most relevant for ethical questions. For example, whether there should be special provisions for social housing or interaction seems to depend primarily on an organism’s level of social attachment rather than on broader questions of the organism’s “personhood.” Similarly, whether an organism has a broader suite of possible interests and desires made possible by a sense of self or by autonoetic consciousness seems to be a question that does not turn on particular definitions of personhood. Thus, workshop participants considered the individual capacities suggested to make up the notion of personhood to be important and worthy of neuroscientific investigation, but they questioned the relevance of personhood itself.

Ethical Issues Regarding Personhood

The neural bases of specific attributes that could make nonhuman animals worthy of consideration beyond that of “mere sentience,” including the capacity for language, autonoetic consciousness, episodic memory, metacognition, assent, consent, dissent, and complex social relationships, are currently unclear. Discussions of personhood are at present unlikely to specifically inform or guide our research guidelines, but many of the capacities associated with personhood can provide valuable insights into moral questions.

RECOMMENDATIONS

The workshop raised a number of significant ethical issues arising due to advances in animal neuroscience research. As the authors of this report, our primary aim was to summarize the deliberations of the workshop. However, an additional contribution we make is to include policy recommendations for the United States that aim at resolving some of the ethical issues discussed at the workshop. These recommendations should not be confused with recommendations arising out of the workshop itself. Although workshop participants largely agreed about the ethical issues arising, the recommendations themselves were not discussed at the workshop and are the opinions of the authors alone.

Recommendation 1: Develop Guidelines for Evaluating Animal Models of Psychiatric Conditions

The workshop panel on animal models raised serious concerns about the reliability of current animal models of pain and psychiatric disorders. Given that the tests that produce models of these conditions are almost assuredly aversive, current paradigms may be causing a significant amount of unnecessary harm to nonhuman animals. The critique of animal models from Garner and other researchers is not without controversy but raises serious issues that should be systematically addressed. We recommend that a panel of experts be convened through a well-respected organization, such as a Presidential Bioethics Commission, the National Academy of Medicine, or the National Academy of Sciences, to critically evaluate current methods used for modeling human psychiatric conditions in animals. As part of the report, a conference or workshop should be convened that represents proponents of some of the strongest critiques of these animal models as well as those who can argue ably on behalf of current procedures. In contrast to some recent panels convened by the NIH, it is critical that any such panel include significant representation from professional ethicists who can clearly articulate and fairly represent different positions in contemporary ethical debate.

Recommendation 2: Allocate Greater Funding to In Vitro Neurological Models

More government funding should be devoted to developing 3D models of neural function, with the aim of replacing animal models of toxicity as the gold standard. This has the potential to save thousands of animals from pain, distress, and death in toxicity tests. Moreover, it might be possible for sophisticated neural models to replace animal testing in other domains as well. We applaud the EPA’s recent decision to allocate $4.25 million to funding research on alternatives, including $849,000 for alternative models of neurotoxicity, and recommend that government funding for these initiatives continue to be greatly increased.

As part of this effort, funding should be provided to shift away from treating animal models as the gold standard of neurotoxicology testing. Hogberg’s presentation revealed a federal regulatory shortcoming: the FDA evaluates alternative toxicity tests by their degree of conformity to existing mouse models. Because mouse models are imperfect models of human disease, this prevents the approval of models that capture human disease as well as or better than mouse models but that match disease manifestations in mice less closely. The FDA and other regulatory bodies should replace animal models as their gold standard with other biomarkers. Because many of these models are still in development, mouse models cannot be replaced overnight. If FDA regulatory policy stays ahead of the development of such models, however, the agency can ensure that promising new models are evaluated by their degree of conformity to human disease rather than to disease manifestations in mice, and it can incentivize researchers to produce models of disease relying on human biomarkers. We support current efforts by the Interagency Coordinating Committee on the Validation of Alternative Methods and hope they are continued and accelerated.

Recommendation 3: Conduct Welfare Research on the External Control of Animals

Research should be conducted to assess whether external control of animals’ behavior leads to aversive states. This could be performed by, for example, looking at cognitive bias tasks or assessing facial features, though in general a variety of assessments should be used to ensure that any possible effect is detected (a sketch of one such judgment-bias analysis follows below). If external control is found to affect the welfare of animals, this needs to be taken into account in assessments of the harms and benefits of similar research in the future. Moreover, because we can be relatively confident that at least some forms of external control of behavior could be extremely aversive, guidelines should be established that place limits on the forms of external control that are permitted. For example, guidelines could permit only reward circuitry, rather than punishment circuitry, for controlling behavior in vertebrates.
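As a sketch of what such a welfare assessment could look like, the code below scores a hypothetical judgment-bias task, in which responses to an ambiguous cue (intermediate between a reward-predicting and a non-reward-predicting cue) index affective state. All response rates are invented for illustration.

```python
# Sketch of scoring a judgment-bias ("cognitive bias") task.
# Response rates are hypothetical.

def optimism_index(responses):
    """Rescale the 'go' rate for the ambiguous probe so that
    0 = responds as to the negative cue, 1 = responds as to the positive cue."""
    lo, hi = responses["negative"], responses["positive"]
    return (responses["ambiguous"] - lo) / (hi - lo)

control = {"positive": 0.92, "negative": 0.08, "ambiguous": 0.55}
externally_controlled = {"positive": 0.90, "negative": 0.10, "ambiguous": 0.30}

print(f"control:               {optimism_index(control):.2f}")                # ~0.56
print(f"externally controlled: {optimism_index(externally_controlled):.2f}")  # 0.25
# A lower index would suggest a more 'pessimistic', possibly aversive, state.
```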

As with all pain research, researchers should not perform independent, potentially aversive interventions where it is feasible instead to (1) measure indicators of discomfort during interventions that would already have been conducted for other experimental purposes, or (2) measure indicators of discomfort in animals who are already experiencing potentially aversive external behavioral control. We think that at least 1 of these 2 options will often be feasible.

CONCLUSION

Participants at the “Neuroethics of Animal Research” workshop from the Center for Neuroscience and Society and the Center for the Interaction of Animals and Society at the University of Pennsylvania raised significant ethical questions about new research rendered possible by advances in neuroscience. Herein, we have distilled these concerns and presented some discussion points relevant to decisions about how to improve the ethical standards of neuroscience research in nonhuman animals.

Technology is accelerating our ability to study and manipulate the brains of animals, and these new manipulations raise new ethical questions. More so than in the case of humans, cyborg and chimeric animals are not just thought experiments for the future; they have already arrived. At the same time, all signs indicate that societal concern for animal welfare is increasing. This can be seen in the NIH’s decision to phase out chimpanzee research and in numerous agricultural companies’ decisions to shift away from the most intensive confinement conditions. Significantly, recent polling indicates that concerns about the use of animals in invasive research are growing. It is important to ensure that the advancement of neuroscience is responsive to public concerns about the welfare of nonhuman animals. This calls for the constant improvement of research standards and the termination of research that is demonstrated to be unethical as well as the continuous monitoring of new ethical issues that arise from neuroscientific advances. The Penn Neuroethics of Animal Research Workshop has yielded findings that should be used toward these ethically urgent ends.

ACKNOWLEDGMENTS

The workshop was made possible by grants from the Alternatives Research Development Foundation (ARDF) and the University of Pennsylvania School of Arts and Sciences Conference Support Grant.

Potential conflicts of interest

All authors: No reported conflicts.

References

  • 1. Presidential Commission for the Study of Bioethical Issues . Gray Matters: Topics at the Intersection of Neuroscience, Ethics, and Society. Vol 1. Bioethics Research Library. https://bioethicsarchive.georgetown.edu/pcsbi/node/3543.html. Published May  2014. Accessed August 9, 2016. [Google Scholar]
  • 2. Presidential Commission for the Study of Bioethical Issues . Gray Matters: Topics at the Intersection of Neuroscience, Ethics, and Society. Vol 2. Bioethics Research Library. https://bioethicsarchive.georgetown.edu/pcsbi/node/4704.html. Published March  2015. Accessed August 9, 2016. [Google Scholar]
  • 3. Greely  HT, Grady  C, Ramos  KM  et al.  Neuroethics guiding principles for the NIH BRAIN initiative. J Neurosci. 2018; 38(50):10586–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Bianchi  DW, Cooper  JA, Gordon  JA  et al.  Neuroethics for the National Institutes of Health BRAIN initiative. J Neurosci. 2018; 38(50):10583–5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Department of Health and Human Services . BRAIN Initiative: marmoset colonies for neuroscience research. Department of Health and Human Services. Available at: https://grants.nih.gov/grants/guide/rfa-files/RFA-MH-20-145.html. Published June  2019. Accessed August 1, 2020. [Google Scholar]
  • 6. Farah  MJ. Neuroethics: the ethical, legal, and societal impact of neuroscience. Annu Rev Psychol. 2012; 63(1):571–91. [DOI] [PubMed] [Google Scholar]
  • 7. Buller  T, Shriver  A, Farah  M. Guest editorial: broadening the focus. Camb Q Healthc Ethics. 2014; 23(2):124–8. [DOI] [PubMed] [Google Scholar]
  • 8. Johnson  LS, Maslen  H. Toward a less anthropocentric neuroethics. The neuroethics blog. 2019. http://www.theneuroethicsblog.com/2019/04/toward-less-anthropocentric-neuroethics.html. Published April  Accessed June 6, 2020. [Google Scholar]
  • 9. Johnson  LS, Fenton  A, Shriver  A. Neuroethics and Nonhuman Animals. Springer International Publishing; 2020. [Google Scholar]
10. Mogil JS. Animal models of pain: progress and challenges. Nat Rev Neurosci. 2009; 10(4):283–94.
11. Nestler EJ, Hyman SE. Animal models of neuropsychiatric disorders. Nat Neurosci. 2010; 13(10):1161.
12. Merskey H. Pain terms: a list with definitions and notes on usage. Recommended by the IASP Subcommittee on Taxonomy. Pain. 1979; 6:249–52.
13. Shriver A. Minding mammals. Philos Psychol. 2006; 19(4):433–42.
14. Tracey I, Woolf CJ, Andrews NA. Composite pain biomarker signatures for objective assessment and effective treatment. Neuron. 2019; 101(5):783–800.
15. Sufka KJ. Conditioned place preference paradigm: a novel approach for analgesic drug assessment against chronic pain. Pain. 1994; 58(3):355–66.
16. Roughan JV, Coulter CA, Flecknell PA, et al. The conditioned place preference test for assessing welfare consequences and potential refinements in a mouse bladder cancer model. PLoS One. 2014; 9(8):e103362.
17. Vierck CJ, Hansson PT, Yezierski RP. Clinical and pre-clinical pain assessment: are we measuring the same thing? Pain. 2008; 135(1):7–10.
18. Morgan D, Carter CS, DuPree JP, et al. Evaluation of prescription opioids using operant-based pain measures in rats. Exp Clin Psychopharmacol. 2008; 16(5):367.
19. Gregory NS, Harris AL, Robinson CR, et al. An overview of animal models of pain: disease models and outcome measures. J Pain. 2013; 14(11):1255–69.
20. Langford DJ, Bailey AL, Chanda ML, et al. Coding of facial expressions of pain in the laboratory mouse. Nat Methods. 2010; 7(6):447–9.
21. Hewitt DJ, Hargreaves RJ, Curtis SP, et al. Challenges in analgesic drug development. Clin Pharmacol Ther. 2009; 86(4):447–50.
22. Garner JP. The significance of meaning: why do over 90% of behavioral neuroscience results fail to translate to humans, and what can we do to fix it? ILAR J. 2014; 55(3):438–56.
23. Zahs KR, Ashe KH. ‘Too much good news’–are Alzheimer mouse models trying to tell us how to prevent, not cure, Alzheimer's disease? Trends Neurosci. 2010; 33(8):381–9.
24. Van der Worp HB, Howells DW, Sena ES, et al. Can animal models of disease reliably inform human studies? PLoS Med. 2010; 7(3):e1000245.
25. Merchant KM, Cedarbaum JM, Brundin P, et al. A proposed roadmap for Parkinson’s disease proof of concept clinical trials investigating compounds targeting alpha-synuclein. J Parkinsons Dis. 2019; 9(1):31–61.
26. Johnson LS. The trouble with animal models in brain research. In: Johnson LS, Fenton A, Shriver A, eds. Neuroethics and Nonhuman Animals. Springer; 2020. p. 271–86.
27. Pound P. Problems and prospects. In: The Routledge Handbook of Animal Ethics. Routledge; 2019. Chapter 18.
28. Yankelevitch-Yahav R, Franko M, Huly A, et al. The forced swim test as a model of depressive-like behavior. J Vis Exp. 2015; (97):e52587. doi:10.3791/52587.
29. Commons KG, Cholanians AB, Babb JA, et al. The rodent forced swim test measures stress-coping strategy, not depression-like behavior. ACS Chem Neurosci. 2017; 8(5):955–60.
30. Reardon S. Depression researchers rethink popular mouse swim tests. Nature. 2019; 571(7766):456–8.
31. Kilkenny C, Browne WJ, Cuthill IC, et al. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010; 8(6):e1000412.
32. Laber K, Newcomer CE, Decelle T, et al. Recommendations for addressing harm–benefit analysis and implementation in ethical evaluation–report from the AALAS–FELASA working group on harm–benefit analysis–part 2. Lab Anim. 2016; 50(1 Suppl):21–42.
33. Walker RL. Human and animal subjects of research: the moral significance of respect versus welfare. Theor Med Bioeth. 2006; 27(4):305–31.
34. Kantin H, Wendler D. Is there a role for assent or dissent in animal research? Camb Q Healthc Ethics. 2015; 24(4):459–72.
35. DeGrazia D, Sebo J. Necessary conditions for morally responsible animal research. Camb Q Healthc Ethics. 2015; 24(4):420–30.
36. Leite PE, Pereira MR, Harris G, et al. Suitability of 3D human brain spheroid models to distinguish toxic effects of gold and poly-lactic acid nanoparticles to assess biocompatibility for brain drug delivery. Part Fibre Toxicol. 2019; 16(1):1–20.
37. Senate Appropriations Committee. Hearing on FY2017 National Institutes of Health Budget Request. Available at: https://www.appropriations.senate.gov/hearings/hearing-on-fy2017-national-institutes-of-health-budget-request. Published April 2016. Accessed January 5, 2020.
38. National Research Council. Scientific Frontiers in Developmental Toxicology and Risk Assessment. Washington, DC: National Academies Press; 2000.
39. Center for Alternatives to Animal Testing. EPA awards nearly $850,000 to Johns Hopkins CAAT to advance research on alternatives to animal testing. CAATwalk Newsletter. Available at: https://us4.campaign-archive.com/?u=066d5d7abe2de2d5e04d214bf&id=0da656da82#epa. Published September 2019. Accessed September 2, 2020.
40. Davidson D. Knowing one's own mind. Proceedings and Addresses of the American Philosophical Association. 1986; 60:441–58.
41. Shepherd J. Ethical (and epistemological) issues regarding consciousness in cerebral organoids. J Med Ethics. 2018; 44(9):611–2.
42. Lavazza A, Massimini M. Cerebral organoids and consciousness: how far are we willing to go? J Med Ethics. 2018; 44(9):613–4.
43. Golub MD, Yu BM, Schwartz AB, et al. Motor cortical control of movement speed with implications for brain-machine interface control. J Neurophysiol. 2014; 112(2):411–29.
44. Talwar SK, Xu S, Hawley ES, et al. Rat navigation guided by remote control. Nature. 2002; 417(6884):37–8.
45. Neave HW, Daros RR, Costa JH, et al. Pain and pessimism: dairy calves exhibit negative judgement bias following hot-iron disbudding. PLoS One. 2013; 8(12):e80556.
46. Langford DJ, Bailey AL, Chanda ML, et al. Coding of facial expressions of pain in the laboratory mouse. Nat Methods. 2010; 7(6):447–9.
47. Clark L. In defence of the cockroach: RoboRoach Kickstarter ignores ethics. Wired Magazine. Available at: https://www.wired.co.uk/article/roboroach-kickstarter. Published June 2013. Accessed December 15, 2019.
48. DeMarse TB, Dockendorf KP. Adaptive flight control with living neuronal networks on microelectrode arrays. Proc IEEE Int Jt Conf Neural Netw. 2005; 3:1548–51.
49. Reger BD, Fleming KM, Sanguineti V, et al. Connecting brains to robots: an artificial body for studying the computational properties of neural tissues. Artif Life. 2000; 6(4):307–24.
50. John T, Sebo J. Consequentialism and nonhuman animals. In: Portmore D, ed. The Oxford Handbook of Consequentialism. Oxford: Oxford University Press; 2020.
51. Balaban E. Changes in multiple brain regions underlie species differences in a complex, congenital behavior. Proc Natl Acad Sci. 1997; 94(5):2001–6.
52. Enard W, Gehre S, Hammerschmidt K, et al. A humanized version of Foxp2 affects cortico-basal ganglia circuits in mice. Cell. 2009; 137(5):961–71.
53. Wise S. Rattling the Cage: Toward Legal Rights for Animals. Boston, MA: Da Capo Press; 2014.
54. Andrews K, Comstock GL, Crozier GK, et al. Chimpanzee Rights: The Philosophers’ Brief. Taylor and Francis; 2018.
