1. THE SCIENCE OF MORALITY
One of the emerging subdisciplines of the cognitive sciences is the science of morality. Advanced techniques in neuroscience, such as neuroimaging, together with sophisticated pharmacological, psychological and economic experiments, have begun to shed light on the neural and psychological underpinnings of moral judgement and behaviour. Such research has created great controversy. Some neuroscientists have argued for ‘brain-based’ ethics (Gazzaniga 2005), claiming that moral decisions have to be compatible with our knowledge of the human brain, or even directly inferred from it. Neuroscientists have already claimed that their research has dramatic implications for the practice and substance of ethics. It has been argued, for example, that neuroscientific findings show that political debate is conducted largely at the emotional level (Westen 2007) or that they undermine the common ethical practice of appealing to intuitions (Sunstein 2005; Singer 2005). Moreover, it has been claimed that such research undermines common moral views, exposing Kantian ethics as a ‘mere confabulation’ based on gut reactions, and supports utilitarianism (Singer 2005; Greene 2008). Some ethical positions have been criticized as ‘neurally implausible’ (Casebeer and Churchland 2003; Churchland 2011).
Moral Enhancement
Although these claims are at this point speculative, science is likely to reshape our conceptions of justified morality. Indeed, it might even offer means of conforming to morality. In a recent series of articles and a book (Persson and Savulescu Forthcoming; Persson and Savulescu 2011a, b, and c; Persson and Savulescu 2010; Persson and Savulescu 2008), we have argued that there is an urgent need to explore the possibility of using the emerging science of morality to develop means of enhancing moral dispositions. The argument goes roughly like this.
For most of the time the human species has existed, human beings have lived in comparatively small and close-knit societies, with primitive technology that enabled them to affect only their most immediate environment. Their moral psychology adapted to make them fit to live in these conditions. This moral psychology is ‘myopic’, restricted to concern about people in the neighbourhood and the immediate future. But through science and technology, humans have radically changed their living conditions, while their moral psychology has remained fundamentally the same throughout this technological and social evolution, which continues at an accelerating speed. Human beings now live in societies with millions of citizens and with an advanced scientific technology which enables them to exercise an influence that extends all over the world and far into the future. This is leading to increasing environmental degradation and to harmful climate change. The advanced scientific technology has also equipped human beings with nuclear and biological weapons of mass destruction which might be used by states in wars over dwindling natural resources or by terrorists. Liberal democracies cannot overcome these problems by developing novel technology. What is needed is an enhancement of the moral dispositions of their citizens, an extension of their moral concern beyond a small circle of personal acquaintances to include those existing further in the future. The expansion of our powers of action as a result of technological progress must be balanced by a moral enhancement on our part. Otherwise, we argued, our civilization is itself at risk. It is doubtful whether this moral enhancement could be accomplished by means of traditional moral education. There is therefore ample reason to explore the prospects of moral enhancement by biomedical means.
In the first part of this paper, we will summarise the science that indicates that moral enhancement itself may be a realistic prospect. In the second part of this paper, we will examine whether moral bioenhancement is compatible with individual freedom and autonomy.
The New Science of Behavioural Control
Could we, through our knowledge of biology, strategically influence people’s moral dispositions and behaviour? There are reasons to believe that we could. Historically, drugs and surgery, such as lobotomy, were used in attempts to control behaviour. But today, sophisticated and ever more powerful cognitive science is providing new and more effective means of influencing human choices. Psychological research is affording strategies to influence choice: a range of unconscious stimuli can affect choice through priming (Kiesel et al. 2006). One prominently discussed technique is the ‘nudge’ strategy, which harnesses knowledge about ‘cognitive biases’ that may influence voluntary choice (Thaler and Sunstein 2008). These ideas are affecting health policy (Chakrabortty 2008).
A number of commonly employed antidepressants and antihypertensives (Terbeck et al. under review b) affect moral behaviour as a side effect. Indeed, a number of drugs are already prescribed specifically for their choice-altering effects, effects which are relevant to moral behaviour: the anti-alcohol-abuse drug disulfiram, the weight-loss drug orlistat, and anti-libidinal agents used to reduce sexual re-offending. Neuropsychology is beginning to provide more robust evidence for biological correlates of morally relevant traits, e.g., aggression, trust and empathy. Ramachandran and colleagues have begun to identify neural loci of empathic responses in humans and animals (Ramachandran and Oberman 2006). This research may lead to pharmacological interventions to improve empathy, cooperation and trust (e.g. De Dreu et al. 2011). Indeed, our own empirical research has already shown that propranolol can reduce implicit racial bias (Terbeck et al. under review a) and produce less utilitarian judgement (Terbeck et al. under review b).
There are non-pharmacological means of influencing moral behaviour. Work by Niels Birbaumer and colleagues on neurofeedback techniques has shown promise in the rapid training of new emotional responses (Sitaram et al. 2007; Sitaram et al. 2009; Caria et al. 2010), and neurofeedback has been suggested as a possible treatment for psychopathy (Sitaram et al. 2007).
Other possible techniques for influencing choices include transcranial magnetic stimulation, deep-brain stimulation, transcranial direct current stimulation (Cohen Kadosh et al. 2010) and optogenetics, which offers the prospect of profound behavioural modification through the combination of genetic modification and optical stimulation. These technologies can directly modify behaviours, perhaps even addictive behaviour (Carter et al. 2009).
Indeed, transcranial magnetic stimulation can affect choice without subjects’ awareness (Brasil-Neto et al. 1992). Studies of ego depletion have demonstrated that self-control is a limited resource; the more temptation a subject has resisted in the recent past, the more likely they are to give in to a subsequent temptation (Baumeister et al. 1998; Baumeister 2002). This suggests a low-tech and easily implemented way of influencing choice: control the number of tempting stimuli to which people are exposed.
Neuroscience promises to explain addiction, which itself can contribute to immoral behaviour. Addicts have difficulty delaying gratification, choosing a smaller immediate reward over a larger delayed reward. Boettiger et al. (2007) found that this impulsivity was associated with greater activity in the parietal and prefrontal cortex, activity that is in turn strongly predicted by some genotypes. But environmental influences are also important in explaining addiction. Stress is a risk factor, apparently because of its effects on mesocorticolimbic dopamine (Wang et al. 2005). Primate studies seem to indicate that low social status also increases vulnerability to addiction through its effects on dopamine expression (Morgan et al. 2002).
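The impulsivity described above is often quantified with delay-discounting tasks, in which subjects choose between a smaller immediate reward and a larger delayed one. The following minimal Python sketch is an illustration only: the studies cited do not commit to the particular hyperbolic discounting model or the parameter values assumed here.

```python
def discounted_value(amount, delay_days, k):
    """Subjective value of a delayed reward under simple hyperbolic
    discounting; a higher discount rate k corresponds to greater
    impulsivity. The model and the numbers are illustrative assumptions."""
    return amount / (1 + k * delay_days)

# A steep discounter (k = 0.5) prefers 50 units now to 100 in a month;
# a shallow discounter (k = 0.01) prefers to wait.
for k in (0.5, 0.01):
    now = discounted_value(50, 0, k)
    later = discounted_value(100, 30, k)
    print(k, "take the smaller reward now" if now > later else "wait for the larger reward")
```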
Oxytocin and Serotonin
One substance with effects on moral behaviour is the hormone and neurotransmitter oxytocin. Oxytocin is naturally elevated by sex and touching, but it can also be elevated by nasal spray. It facilitates birth and breastfeeding in humans and other mammals, but it also appears to mediate maternal care, pair bonding, and other pro-social attitudes, like trust, sympathy and generosity (Insel and Fernald 2004). When oxytocin is administered via nasal spray, it crosses into the brain. Several commonly used drugs are also thought to affect the release or metabolism of oxytocin. For example, the combined oral contraceptive pill, currently used by over 100 million women worldwide, is associated with elevated baseline oxytocin levels and is believed to increase oxytocin secretion (Stock et al. 1994; Silber et al. 1987). Similarly, glucocorticoids, widely used to treat asthma and other disorders of inflammation, are thought to modulate both the release of oxytocin and the expression of oxytocin receptors in some parts of the brain (Link et al. 1993; Liberzon and Young 1997).
Kosfeld and collaborators investigated the relationship between oxytocin and trust in a simple game of cooperation (Kosfeld et al. 2005). Research subjects were divided into pairs, and the first member of the pair (the ‘investor’) was asked to choose an amount of money to give to the second member (the ‘trustee’), knowing that the trustee would receive three times the amount given. The trustee then chose an amount of money to return to the investor. The initial payment can thus be viewed as a signal of trust, while the return payment can be interpreted as an indication of trustworthiness and gratitude. A greater level of trust signalled by the investor increases the total amount of money to be allocated between the two players, but the investor benefits from this only to the extent that the trustee is trustworthy and grateful. Prior to playing the game, participants were randomised to receive a nasal spray containing either oxytocin or placebo. Investors administered oxytocin exhibited significantly more trusting behaviour – that is, they entrusted the trustee with a significantly greater amount of money.
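To make the payoff structure of this game concrete, here is a minimal sketch in Python. The endowment and the specific amounts are illustrative assumptions, not figures from the Kosfeld study, and the trustee's own starting endowment is ignored for simplicity.

```python
def trust_game(investment, returned, endowment=12):
    """Payoffs in a simplified investor/trustee trust game.

    The investor starts with `endowment` units and sends `investment`
    to the trustee; the experimenter triples the transfer. The trustee
    then sends `returned` units back. All numbers are illustrative.
    """
    assert 0 <= investment <= endowment
    tripled = 3 * investment                      # the transfer is tripled
    assert 0 <= returned <= tripled
    investor_payoff = endowment - investment + returned
    trustee_payoff = tripled - returned
    return investor_payoff, trustee_payoff

# Full trust met with fair reciprocation leaves both better off than no trust.
print(trust_game(investment=12, returned=18))    # (18, 18)
print(trust_game(investment=0, returned=0))      # (12, 0)
```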
In a similar game to that used by Kosfeld et al., Zak and collaborators found that receipt of a signal of trust by the trustee is associated with a spike in oxytocin levels, and that the degree of trustworthiness exhibited by the trustee is positively and significantly correlated with the oxytocin level (Zak et al. 2004). Thus, in a population with universally elevated oxytocin levels, increased trust seems to be matched by increased trustworthiness.
However, oxytocin’s effects on trusting and other pro-social behaviour towards others appear to be sensitive to the group membership of those others. Carsten De Dreu and associates (2010; 2011) presented participants who had been randomised to receive either oxytocin or placebo via nasal spray with moral dilemma scenarios in which one individual would have to be sacrificed in order to save a greater number (De Dreu et al. 2011). Participants administered oxytocin were significantly more likely to sacrifice a different-race individual in order to save a group of race-unspecified others than they were to sacrifice a same-race individual in the same circumstances. In participants administered placebo, the likelihood of sacrificing an individual did not significantly depend on the racial group of the individual. This suggests that the pro-social effects of oxytocin may be limited to in-group members.
Further experiments by De Dreu’s group indicated that oxytocin can also reduce pro-social behaviour towards out-group members where this helps one’s in-group. Administration of oxytocin prior to participating in a group-based financial game induced ‘tend and defend’ reactions: it increased trust and co-operation within groups, but also increased non-cooperation with (though not offensive aggression against) members of other groups when this helped to protect one’s in-group (De Dreu et al. 2010).
This work supports the hypothesis that the pro-social effects of oxytocin are more aptly characterised as ‘pro-in-group’ effects, since the hormone can in fact induce anti-social behaviour when this conduces to the interests of one’s in-group. Thus, it might be that a higher level of oxytocin amplifies the intensity of trust and reciprocity within an already favoured group rather than extending their range to out-groups. Since in-group favouritism seems to drive class and racial discrimination, which in extreme cases manifests itself in genocide and terrorism, administration of oxytocin would not by itself be an effective cure for these evils.
Another neurotransmitter implicated in moral behaviour is serotonin. Selective serotonin reuptake inhibitors (SSRIs) are commonly prescribed for depression, anxiety and obsessive-compulsive disorder. They help govern activities such as eating, sleeping and sexual activity. Millions of people worldwide use these drugs. SSRIs work by slowing the reabsorption of serotonin, a neurotransmitter crucially involved in mood, thereby making more of it available to stimulate receptors. But SSRIs also seem to make subjects more fair-minded and willing to cooperate. Tse and Bond (2002) had subjects play the Dictator game – a game in which a dictator decides how a certain sum of money is to be divided between him or her and another participant – and found that subjects administered the SSRI citalopram divided the sum more fairly than controls. Conversely, depletion of the serotonin precursor tryptophan, which lowers serotonin levels, leads to lower rates of cooperation in the Prisoner’s Dilemma game (Wood et al. 2006). The effect was only evident for subjects whose tryptophan was depleted in the first round of testing, suggesting that serotonin contributes to establishing a cooperative pattern of response rather than to maintaining it.
In the Ultimatum game, subjects are divided into a proposer and a responder. The proposer proposes a division of a reward (e.g., money), such that the proposer gets one share and the responder gets the other. The responder can accept the proposed division, in which case each party receives the proposed share, or reject it, in which case neither party obtains anything. Normal human subjects typically reject offers they regard as strongly unfair, despite the fact that rejection decreases their payoff (in a one-shot game). What is regarded as unfair differs from culture to culture (Oosterbeek, Sloof and van de Kuilen 2004). Crockett and colleagues (2008) found that depletion of tryptophan led to increased rates of rejection of unfair offers relative to controls. This suggests that SSRIs may make subjects easier to exploit by modulating their assessment of what counts as (unacceptably) unfair. However, it is not clear how an increased rate of rejection of unfair offers is to be interpreted: is it really a manifestation of a heightened sense of fairness, or perhaps rather of a greater aversion to harming others (the proposers), as Crockett and colleagues (2010) suggest? But in any case it is clear that modifications of such brain systems by drugs like SSRIs have moral consequences.
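For readers unfamiliar with the game, the following minimal Python sketch lays out the one-shot payoff structure just described. The responder's rejection threshold is a hypothetical parameter standing in for her culturally variable sense of what is unacceptably unfair; the amounts are invented.

```python
def ultimatum_game(total, offer, rejection_threshold):
    """One-shot Ultimatum game payoffs.

    The proposer offers `offer` out of `total` to the responder. If the
    offer falls below the responder's `rejection_threshold` (a stand-in
    for what she regards as unacceptably unfair), she rejects and both
    players get nothing; otherwise the proposed split stands.
    """
    assert 0 <= offer <= total
    if offer < rejection_threshold:
        return 0, 0                       # rejection: neither party gets anything
    return total - offer, offer           # acceptance: (proposer share, responder share)

# A lower threshold means more unfair offers are accepted, and so more
# scope for exploitation of the responder.
print(ultimatum_game(total=10, offer=2, rejection_threshold=3))   # (0, 0)
print(ultimatum_game(total=10, offer=2, rejection_threshold=1))   # (8, 2)
```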
While the science of influencing moral dispositions is still in its infancy, it seems likely that science will afford ever more powerful means of influencing choice, including moral choice. In our collected work, we have argued that such science should be prioritised and aggressively pursued, such is the need for moral enhancement. But one objection is raised time and again, both in discussion and in print: such moral enhancement would compromise our freedom. It is this objection which we will now attempt to spell out more clearly, and then address.
2. MORAL ENHANCEMENT AND FREEDOM
John Harris has recently mounted an objection to moral bioenhancement that he claims originates from Milton. Harris writes,
Famously, in Book III of Paradise Lost Milton reports God saying to his “Only begotten Son” that if man is perverted by the “false guile” of Satan he has only himself to blame:
……………whose fault?
Whose but his own? Ingrate, he had of me
All he could have; I made him just and right,
Sufficient to have stood, though free to fall.
When God says of man that “he had of me all he could have” he qualifies this in two ways. Firstly by the vainglorious claim “I made him just and right”, and second by a wonderful analysis of freedom: “sufficient to have stood, though free to fall”. Milton’s God was certainly overestimating her role in making humankind just, right and all the rest, but nature, or more particularly, evolution, has done most of this for us. We have certainly evolved to have a vigorous sense of justice and right, that is, with a virtuous sense of morality. God was, of course, speaking of the fall from Grace, when congratulating herself on making man “sufficient to have stood though free to fall”, she was underlining the sort of existential freedom … which allows us the exhilaration and joy of choosing (and changing at will) our own path through life. And while we are free to allow others to do this for us and to be tempted and to fall, or be bullied, persuaded or cajoled into falling, we have the wherewithal to stand if we choose. So that when Milton has God say mankind “had of me all he could have”, he is pointing out that while his God could have made falling impossible for us, even God could not have done so and left us free. Autonomy surely requires not only the possibility of falling but the freedom to choose to fall, and that same autonomy gives us self-sufficiency; “sufficient to have stood though free to fall. (Harris 2011)
Harris goes on to claim, “… Milton’s insight is the crucial role of personal liberty and autonomy: that sufficiency to stand is worthless, literally morally bankrupt, without freedom to fall” and “our freedom to fall is ‘precious.’”
Harris’ claims are extreme, perhaps hyperbolic. According to a more moderate version of this objection, moral enhancement is wrong because it restricts the freedom to do wrong and undermines autonomy. He implies that moral enhancement would somehow make it impossible to act immorally. We ask first whether this is so and secondly, if it were impossible to act immorally, whether this would be a bad thing, all things considered.
Acting Morally
To be morally enhanced is to have those dispositions which make it more likely that you will arrive at the correct judgement of what it is right to do and more likely that you will act on that judgement. It is disputed what the right thing to do is and how we would arrive at the right course of action. What constitutes moral enhancement will therefore depend on the account one accepts of right action.
For argument’s sake, consider a simple moral theory: utilitarianism. According to utilitarianism, the right action is that action which maximizes utility. For simplicity, consider preference utilitarianism, which holds that the right action is the action which satisfies maximally the preferences of everyone affected by the action, where the preferences of everyone count equally.
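As a toy illustration of this criterion, the sketch below simply picks the action that maximises total preference satisfaction summed over everyone affected, counting each person's preferences equally. The actions and the numerical satisfaction scores are invented for illustration; preference utilitarianism itself says nothing about how such scores would be measured.

```python
# Hypothetical preference-satisfaction scores (higher = preferences better
# satisfied) for each affected person, under each available action.
satisfaction = {
    "action 1": {"A": 9, "B": 1},
    "action 2": {"A": 6, "B": 8},
    "action 3": {"A": 2, "B": 9},
}

def preference_utilitarian_choice(satisfaction):
    """Return the action that maximises total preference satisfaction,
    counting everyone's preferences equally."""
    return max(satisfaction, key=lambda action: sum(satisfaction[action].values()))

print(preference_utilitarian_choice(satisfaction))   # 'action 2' (total 14)
```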
To be an enhanced utilitarian will require, amongst many other things:
Cognitive enhancement – to accurately estimate the consequences of action and the impact on people’s preferences.
Impulse control – to enable one to act on one’s judgements of right action.
Willingness to sacrifice one’s own preference satisfaction for the satisfaction of others’ preferences.
The last requirement is important. It is characteristic of morality, as opposed to prudence or self-interest, that it requires the sacrifice of one’s own interests for the sake of others, or at least for the sake of some moral code. Utilitarianism is very demanding as a moral theory – even if someone else gains slightly greater preference satisfaction than you do, you should act so as to satisfy their preferences rather than your own. To take an extreme example, if your life will be prolonged by 10 years by a medical treatment and someone else’s will be prolonged by 11 years, you should ensure that they are treated in preference to you, if their preferences are even slightly stronger.
While few people are utilitarians, it is a general feature of all moralities that they require a degree of self-sacrifice and altruism – when and how much is required depends on the particular theory. But it is a prerequisite of moral action that one should sacrifice or constrain one’s own self-interest for the sake of some moral code or for the benefit of others.
For example, a variant of consequentialism is “easy-rescue” consequentialism. This states that when the harm to A of doing some act F is small and the benefit of it to another, B, is great, then A should do F. Such a form of consequentialism would not be demanding, and all would stand to benefit from it in the long term. However, it would require short-term sacrifices of self-interest.
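Stated as a bare decision procedure, easy-rescue consequentialism looks like the sketch below. The numerical thresholds for what counts as a "small" harm and a "great" benefit are illustrative assumptions; the principle itself leaves them open.

```python
def easy_rescue_requires(harm_to_agent, benefit_to_other,
                         small_harm=1.0, great_benefit=10.0):
    """Easy-rescue consequentialism as a decision rule: A should do F
    whenever the harm to A of doing F is small and the benefit to B is
    great. The thresholds are illustrative, not part of the principle."""
    return harm_to_agent <= small_harm and benefit_to_other >= great_benefit

# Giving 50p for a night's shelter: tiny cost to the giver, large benefit.
print(easy_rescue_requires(harm_to_agent=0.5, benefit_to_other=50))   # True
print(easy_rescue_requires(harm_to_agent=5.0, benefit_to_other=50))   # False
```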
A willingness to sacrifice one’s own interests is thus a feature of even undemanding moralities. Yet it is something which, like all human characteristics, varies from person to person. Some will be less inclined to make sacrifices, will make them less often, or will make only sacrifices of very small magnitude.
Increasing the willingness to sacrifice one’s own interests for the benefit of others is a moral enhancement, on any account of morality.
One basic form of morality that is uncontroversial is altruism.
Altruism
A common, specific form of self-sacrifice is altruism.1 Altruism involves the sacrifice of one’s own interests for the welfare of others (as opposed to sacrifice for some non-welfarist moral goal).
Various factors predictably increase altruistic self-sacrifice. For example, if one derives pleasure from altruistic self-sacrifice, this will increase one’s willingness to sacrifice one’s interests. The praise or esteem of others increases altruistic self-sacrifice, as does treatment of depression. All of these are moral enhancers in one sense. There is a lot to be learnt from religions in this regard, as their goal has been to engineer self-sacrifice. Drugs, ritual, dance, induction ceremonies and so on have all been used to increase the self-sacrifice of members of groups. However, it is possible to manipulate not only the situational and social determinants of self-sacrifice, but also the biological ones.
Alongside altruism, a sense of justice is a central moral disposition. Both have a biological basis (Persson and Savulescu 2011b)2. Björn Wallace and associates have found a striking correlation between identical twins (who share the same genes) in what they consider to be fair and unfair in Ultimatum games; there is no such correlation in the case of fraternal twins (2007, 15631–4). This indicates that the human sense of fairness has a genetic basis.
According to Simon Baron-Cohen (2003, 114), there is also a striking correlation in respect of altruism in identical twins. If there is a genetic basis to some trait such as a sense of justice or altruism, this opens the door to future biological manipulation of that trait. Even if control of that trait is impossible, changing the strength or nature of the disposition even to a small degree can have a moral effect.
It is plausible to think that women have a greater capacity for altruism in general than men. Baron-Cohen (2003) argues that women have a greater capacity for empathy than men. We have argued that empathy is a capacity to imagine vividly what it is like to be another, to think, perceive and feel as they do (Persson and Savulescu 2011b). Thus, empathy, as we conceive it, does not involve any motivational component. On Baron-Cohen’s conception, empathy is merely a component of altruism, as we understand it, since we take altruism to include also a motivational component of sympathetic concern about how others feel, a concern that they feel good rather than suffer.
Baron-Cohen notes that empathy can act as a ‘brake on aggression’ (2003, 35). Thus, we should expect that a lesser male capacity for empathy could go with the greater display of male aggression, which is borne out by the statistics of crimes like murder (see e.g. Baron-Cohen 2003, 36). Baron-Cohen does not maintain that women are not aggressive at all. His claim is rather that female aggression tends to take the subtler forms of backstabbing, social exclusion, etc., instead of direct physical assault, and these subtler forms of aggression presuppose mindreading (2003, 35). If women have a lower tendency to harm others overall, it seems that in principle we could make men more moral by biomedical methods by making them more like women, or rather, more like those men who are already more like women in respect of empathy and aggression.
It is clear from these examples that some forms of moral bioenhancement would not limit freedom or autonomy: women are not less free than men because by biological nature they are more altruistic and less aggressive.
Moral enhancement is easily embroiled in debates on free will and determinism. Moral enhancement, the worry goes, somehow determines our actions and removes freedom of the will.
However, slightly deeper reflection reveals that debates about the truth of determinism are not relevant to the acceptability of moral enhancement. Suppose, first, that our freedom is compatible with its being fully determined whether or not we shall do what we take to be good and right. Then a judicious use of effective techniques of moral bioenhancement will not reduce our freedom; it will simply make it the case that we are more often, perhaps always, determined to do what we take to be good. We would then act as a morally perfect person now acts.
Suppose, on the other hand, that we are free only because, by nature, we are not fully determined to do what we take to be good. Then moral bioenhancement cannot be fully effective because its effectiveness is limited by the indeterministic freedom that we possess. So, irrespective of whether determinism or indeterminism reigns in the realm of human action, moral bioenhancement will not curtail our freedom.
However, some critics of moral bioenhancement seem to think that it would turn us into mindless robots who do not act for reasons. John Harris writes that moral bioenhancement will ‘make the freedom to do immoral things impossible, rather than simply making the doing of them wrong and giving us moral, legal and prudential reasons to refrain’ (2011, 7). But, in our view, those who had undergone moral bioenhancement would act for the same reasons as those of us who are most moral today do, and the sense in which it is ‘impossible’ that they do what they regard as immoral will be the same for the morally enhanced as for the garden-variety virtuous person: it is psychologically or motivationally out of the question. To conclude, people who are morally good and always try to do what they regard as right are not necessarily less free than those who sometimes fail to do so (Persson and Savulescu 2011).
Violence and Aggression
The opposite of promoting another’s interests is damaging another’s interests. Traits which increase harm to others cause immoral behaviour. The paradigm is psychopathic personality disorder, but other personality disorders, such as antisocial personality disorder, borderline personality disorder and narcissistic personality disorder, can cause great harm to those who come into contact with these individuals. The reduction of these tendencies is thus a moral enhancement.3
Personality disorder affects 5–10% of the population, placing heavy demands on psychiatric, social and forensic services (NIMH 2003): 64% of male and 50% of female offenders have personality disorder (NOMS 2011). Traits include criminal behaviour, addiction, self-harm, violence, selfishness, recklessness, impulsivity, lack of empathy and remorse, poor anger management, and willingness to exploit others. Personality disorder has an inherently moral component: its traits are moral failings that harm self and others (Charland 2004; Pickard 2009; 2011a).
Alongside genetic predisposition (Lang and Vernon 2001), the strongest predictor of personality disorder is psychosocial adversity in the early environment. Personality disorder is associated with parental psychopathology, institutional care, and sexual, emotional and physical abuse (Paris 2001). The chaotic and violent behaviour and emotional instability diagnostic of personality disorder mirror this early environment. People with personality disorder frequently did not have the opportunity to learn moral skills (Pickard 2011b).
There is increasing evidence that personality disorders can be treated pharmacologically and psychologically. Antidepressants are recommended for depressive symptoms and impulsivity (NIMH 2003), and sedatives for short-term crises (NICE 2009). There are also specific psychological therapies: cognitive-behavioural therapy (Davidson 2008), dialectical behavioural therapy (Dimeff and Linehan 2001), STEPPS (Blum et al. 2008), mentalization-based therapy (Fonagy et al. 2004) and therapeutic communities (Lees et al. 1999). These develop theory-of-mind skills and self-control, as well as promoting personal and social responsibility (Pickard 2011a). Such psychiatric interventions are acting as moral enhancers (Pearce and Pickard 2009).
Other Traits Necessary for Moral Behaviour
There are other traits which are necessary for moral behaviour. Willingness to co-operate with other people is one. As we have seen, SSRIs increase willingness to co-operate. Another trait is impulse control. If one cannot withstand temptation and delay gratification, one will be less likely to sacrifice one’s own interests for some moral goal. Drugs which increase impulse control can thus contribute to more moral behaviour. Ritalin, Adderall and other drugs improve impulse control in children with attention deficit disorder, and in doing so reduce violence and antisocial behaviour.
Of course, both these traits could be used for nefarious purposes and to increase immoral behaviour, making someone, for example, a more effective criminal or Nazi: more willing to co-operate with other Nazis and better able to control impulses such as the impulse to help suffering people, as Himmler claimed. But the combination of these traits, particularly the inclusion of altruism involving empathy and sympathy, would preclude such disastrous outcomes.
Beggar in the street
John is a professor of mathematics at the University of Oxford. Every day, he passes a beggar sitting in front of his college. The woman is in rags and asks for 50p for shelter that night. John always averts his gaze and walks as far as possible away from her. He never gives her any money. John is relatively wealthy and prefers to buy expensive bottles of claret from the College cellar for himself and his friends.
John takes a drug which makes him more interested in the suffering of others, more empathetic, more capable of vividly imagining what it would be like to be in another person’s shoes. The drug is like a pair of “moral spectacles”, clarifying his vision of the other. He looks at the beggar, reflects more about her suffering and so decides to give the beggar an apple. He does not give money because he believes the beggar will use it imprudently.
In this case, there is the right sort of connection between deliberation and judgement. John acts for reasons, as much as anyone acts for reasons. He has simply come to see things the right way. John’s giving the apple was not unfree – it was virtuous. Imagine that John, once he took the drug, always behaved in the morally correct way. He would not be unfree. He would be the most virtuous person.
Perhaps the most likely group in which moral bioenhancement will be effective is children during their early development. By giving them drugs or other biological manipulation we may be able to increase their ability to learn to behave morally, just as cognitive enhancement may enable them one day to gain knowledge more effectively. Such enhanced moral education will of course require conventional moral education in terms of learning correct values, acting on one’s values, etc. But the two together may be more effective than each alone.
Consider fostering the trait of willingness to consider the suffering of others and respond sympathetically to it in a child. It might be objected that engineering this trait biologically restricts the child’s options in the future, their so-called “open future”.
But we do this all the time through education, stories, literature and punishment. Why should it make a difference if we do this using knowledge from cognitive science? It is precisely because we want to foster the development of this disposition that we employ these techniques.
Enhancement is not cheating. Cognitive bioenhancement does not by itself give knowledge – acquiring knowledge still requires effort and learning. Likewise, moral bioenhancement will not by itself produce moral behaviour; that too requires effort and learning. But it may make such behaviour easier and more likely.
Obliterating Immoral Behaviour: The God Machine
It is 2050. The science of morality is far advanced. Some say it is complete. The field of ‘optogenetics’ took off in 2020. It is now possible to genetically engineer human embryos during early development. Genes can now be inserted and modified so that the activity of single neurons can be both measured and manipulated externally.
GMNs, or genetically modified neurons, contain ‘nanosignalers’ which indicate when activity is occurring in a single neuron. GMNs emit light ‘signatures’ and can be controlled by light in precisely the same range, which is not visible to the human eye. These light signatures are picked up by a ubiquitous light-based communications network that replaced the old mobile phone network. Information is transmitted to bioquantum computers that are trillions of times as intelligent and fast as the most powerful supercomputer from earlier in the millennium.
The Great Moral Project was completed in 2045. It involved the construction of the most powerful, self-learning, self-developing bioquantum computer ever built: the God Machine. The God Machine would monitor the thoughts, beliefs, desires and intentions of every human being. It was capable of modifying these within nanoseconds, without the conscious recognition of any human subject.
The God Machine was designed to give human beings near complete freedom. It only ever intervened in human action to prevent great harm, injustice or other deeply immoral behaviour from occurring. For example, murder of innocent people no longer occurred. As soon as a person formed the intention to murder, and it became inevitable that this person would act to kill, the God Machine would intervene and the would-be murderer would ‘change his mind.’ The God Machine would not intervene in trivial immoral acts, like minor instances of lying or cheating. It was only when a threshold insult to some sentient being’s interests was crossed that the God Machine would exercise its almighty power.
Nowadays, the God Machine rarely intervenes in this world. As a part of the Great Moral Project, people have also been morally enhanced by biomedical and other means. Their altruism and sense of justice are now so strong that they almost never decide or choose to act immorally.
Human beings can still autonomously choose to be moral, since if they choose the moral action, the God Machine will not intervene. Indeed, they are free to be moral. They are only unfree to do grossly immoral acts, like killing or raping. This is seen as preferable to physical incarceration, which physically restricts the freedom of the immoral. While people were not free to act immorally in the ‘old days’ either, since the law prohibited it on pain of punishment, the installation of the God Machine means that it has become literally impossible to do these things. It is seen as preferable that would-be murderers “change their minds” rather than that an innocent person be killed and the murderer then incarcerated for life. And the would-be murderer never knows that her intentions have been changed by an authority outside of herself. It seems to her that she has “changed her mind” spontaneously – she experiences a life of complete freedom, though she is not free. Although any intention to kill or rape is immediately changed, this is put down to the efficacy of moral education. It seems “from the inside” that she has simply developed an aversion to killing an innocent person. And no one is ever killed.
People understood that the God Machine existed and suspected that it did indeed intervene, though no one knew how often. Some were so deeply attracted to complete freedom that they enlisted in extra Moral Enhancement courses, which provided advanced Cognitive Behavioural Therapy to help them rid themselves of all evil intentions and desires. These Freedom Lovers retained their complete freedom.
There had been quite a bit of controversy over what should be classified as “grossly immoral action” which should be within the God Machine’s purview. Should cheating in exams be extinguished? Marital infidelity? The Machine decided that only those acts which would have resulted in imprisonment of a person should be prevented. Thus prisons were abolished.
It is, perhaps, this kind of world which objectors to moral enhancement like Harris fear. Human beings are no longer ‘free to fall’, or at least not free to fall big time.4 But it might be wondered what is so bad about such a world after all. Those who value and want to be free can be free, or at least as free as humans can ever be. And everyone is much better off for the absence of evil. There is no physical incarceration, and no great harm is wrought by one human being on another. Why not create the God Machine, as a fail-safe device which kicks in when moral enhancement has not been effective enough?
It is important to recognise that although moral enhancement exists in such a world by biomedical and conventional means, the God Machine is not itself a moral enhancement. It prevents people from acting immorally (though they can still form immoral intentions).
Autonomy is the power to make well-grounded, rational decisions and to act in accordance with them. There is one way in which the God Machine would not compromise autonomy even if it did prevent people from acting immorally: if people voluntarily chose to be connected to it. Voluntarily connecting to the God Machine would then be an example of a precommitment contract, the paradigm example of which is Ulysses and the Sirens.
Ulysses and the Sirens
The story of Ulysses and the Sirens provides an example of what can be called an obstructive or irrational desire which goes against his best judgement. Ulysses was to pass “the Island of the Sirens, whose beautiful voices enchanted all who sailed near. [They] … had girls’ faces but birds’ feet and feathers … [and] sat and sang in a meadow among the heaped bones of sailors they had drawn to their death”, so irresistible was their song. Ulysses desired to hear this unusual song, but at the same time wanted to avoid the usual fate of sailors who succumbed to this desire. So he plugged his men’s ears with bees’ wax and instructed them to bind him to the mast of his ship. He told them: “if I beg you to release me, you must tighten and add to my bonds.” As he passed the island, “the Sirens sang so sweetly, promising him foreknowledge of all future happenings on earth.” Ulysses shouted to his men to release him. However, his men obeyed his previous orders and only lashed him tighter. They passed safely (Graves 1960).5
Before sailing to the Island of the Sirens, Ulysses made a considered evaluation of what was best for him. Thinking clearly, with all the facts before him, he formed a plan which would enable him to both hear the song of the Sirens and live. His order that he should remain shackled was an expression of his autonomy.
In the grip of the Sirens’ song, Ulysses’ strongest desire was that his men release him. But it was an irrational desire. At the time, this may have been his only desire. The song of the Sirens was irresistible. We see in this case how it is necessary to frustrate some of a person’s desires, even his strongest desires, if we are to respect his autonomy.6
Involuntary Incarceration?
The objector to moral enhancement by means of the God Machine might respond, “It is fine to voluntarily connect to the God Machine and prevent oneself from acting deeply immorally. What is wrong is connecting people either against their will or without their consent. It would be wrong to connect a child to such a machine.”
But would it be wrong to connect a young child to the God Machine? After all, we would physically restrain a child if we knew he were about to commit murder. We employ all sorts of moral education to shape the very desires of children. This would only remove the most immoral of those desires, leaving the child free to develop during childhood without the taint of murder or other serious, punishable harm on his hands. And without imprisonment. On coming of age, children could be given the choice of whether to remain connected or to disconnect.
The Value of the Freedom to Fall
If there is anything wrong with the God Machine, it seems that at most it is wrong to connect competent adults against their will. Of course, criminals would be unlikely to voluntarily connect themselves to the God Machine, so that, in order to eradicate crime, some people would need to be involuntarily connected. However, this number might not be great if there were also extensive moral enhancement in the manner of improving altruism and the sense of justice, coupled with improved impulse control to prevent backsliding from one’s considered judgements.
But for those in whom the God Machine did intervene, the freedom to fall would have been removed. And it would not constitute a moral enhancement of the person connected – it would indeed subjugate their moral identity to that of the Machine by substituting immoral intentions with moral intentions.
Freedom, however, is only one value. In the world of the God Machine, there would be no serious crime. There would be great benefits to other people. In the absence of perfectly effective moral enhancement, the loss of freedom in one domain of our lives – to commit evil deeds – would be worth the benefits. We would be otherwise free. Even in those cases in which the God Machine does undermine autonomy, the value of human well-being and respect for the most basic rights outweighs the value of autonomy. This is not controversial. As Mill wrote,
That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. (Mill 1859)
What more moral way to prevent harm to others than to cause a person to change his mind?
We are not free to commit serious crime even now – the law prohibits it on pain of punishment. What we weren’t free to do, the God Machine makes strictly impossible. If this is a loss, it would be outweighed by the fact that there are no victims suffering from serious crimes.
Conclusion
Moral bioenhancement is occurring in small ways already when drugs like SSRIs are taken for psychiatric indications. However, there has been no strategic programme to use knowledge from the science of morality to deliberately and effectively improve moral disposition and behaviour. But such enhancement seems possible and, in many ways, desirable.
In this paper, we have addressed the objection that moral bioenhancement is wrong because it would compromise the freedom to act immorally and undermine personal autonomy, the ‘precious’ ‘freedom to fall’. But enhancement of moral dispositions like altruism and a sense of justice would not undermine freedom of choice – it would not make people less free than those who are most moral today. Even if our freedom of choice consists in our choices not being fully causally determined, it cannot be undercut by moral enhancement – rather, this freedom means that there are limits to the efficacy of moral enhancement, by whatever means, traditional or biomedical.
We have argued that there might be interventions, such as the God Machine, that do indeed produce more moral behaviour but that do so by controlling the moral agent, subjugating that person to the will of another and removing the freedom to act immorally. Such interventions and such control are not plausibly moral enhancements of that person – they rather undermine autonomy by substituting moral for immoral intentions. Nonetheless, they might be justified if they prevent grave suffering.
Moral enhancements which increase altruism, including empathetic imagination of the suffering and interests of others coupled with a sympathetic response to this, together with greater preparedness to sacrifice one’s own interests, greater willingness to co-operate, and better impulse control, would not undermine freedom or autonomy. Indeed, improved impulse control would enhance autonomy.
There are clearly some kinds of moral bioenhancement that do not compromise freedom. Indeed, some ways of enhancing the dispositions necessary for morality would increase freedom and autonomy. In the most extreme cases, where technology is able to remove the freedom to act in gravely immoral ways, the loss of such freedom could be outweighed by the suffering such behavioural modification prevents.
Footnotes
1. Altruism is concern for others for their own sake. It doesn’t necessarily involve self-sacrifice, though it often does. If on my death-bed I’m concerned that you find a treasure, I’m altruistic, though I may not engage in self-sacrifice, since I can’t benefit from the treasure myself.
2. Material in this section is drawn from Persson and Savulescu (2011b).
3. Thanks to Hanna Pickard for contributing paragraphs on Personality Disorder.
4. Notice that they are ‘free to choose to fall,’ and this is what Harris is concerned about in the quotation given at the outset. They aren’t able to translate this choice into action. However, given moral enhancement, very few, perhaps nobody, would make this choice.
5. All quotations in this paragraph are from this work.
6. The notion that some desires can frustrate the expression of our autonomy is also described by Young (1986, especially 9, 14, 50, 56), Frankfurt (1975, especially 68–71) and Watson (1975, especially 109–110, 117). The last two writers use the term freedom rather than autonomy. Feinberg gives a detailed list of the kinds of states which can interfere with autonomy (1973).
References
- Baron-Cohen Simon. The Essential Difference: Male and Female Brains and the Truth about Autism. Basic Books; New York: 2003.
- Baumeister Roy F., Bratslavsky Ellen, Muraven Mark, Tice Dianne M. Ego-Depletion: Is the Active Self a Limited Resource? Journal of Personality and Social Psychology. 1998;74:1252–1265. doi: 10.1037//0022-3514.74.5.1252.
- Baumeister Roy F. Ego Depletion and Self-Control Failure: An Energy Model of the Self’s Executive Function. Self and Identity. 2002;1:129–136.
- Blum Nancee, St. John Don, Pfohl Bruce, Stuart Scott, McCormick Brett, Allen Jeff, Arndt Stephan, Black Donald W. Systems Training for Emotional Predictability and Problem Solving (STEPPS) for Outpatients With Borderline Personality Disorder: A Randomized Controlled Trial and 1-Year Follow-Up. American Journal of Psychiatry. 2008;165:468–478. doi: 10.1176/appi.ajp.2007.07071079.
- Boettiger Charlotte A., Mitchell Jennifer M., Tavares Venessa C., Robertson Margaret, Joslyn Geoff, D’Esposito Mark, Fields Howard L. Immediate Reward Bias in Humans: Fronto-Parietal Networks and a Role for the Catechol-O-Methyltransferase 158Val/Val Genotype. The Journal of Neuroscience. 2007;27(52):14383–14391. doi: 10.1523/JNEUROSCI.2551-07.2007.
- Brasil-Neto Joaquim P., Pascual-Leone Alvaro, Valls-Sole Josep, Cohen Leonardo G., Hallett Mark. Focal Transcranial Magnetic Stimulation and Response Bias in a Forced-Choice Task. Journal of Neurology, Neurosurgery, and Psychiatry. 1992;55:964–966. doi: 10.1136/jnnp.55.10.964.
- Caria Andrea, Sitaram Ranganatha, Veit Ralf, Begliomini Chiara, Birbaumer Niels. Volitional Control of Anterior Insula Activity Modulates the Response to Aversive Stimuli: A Real-Time Functional Magnetic Resonance Imaging Study. Biological Psychiatry. 2010;68(5):425–432. doi: 10.1016/j.biopsych.2010.04.020.
- Carter Adrian, Hall Wayne, Nutt David. The Treatment of Addiction. In: Carter Adrian, Capps Benjamin, Hall Wayne, editors. Addiction Neurobiology: Ethical and Social Implications. Office for Official Publications of the European Communities; Luxembourg: 2009. pp. 29–50.
- Casebeer William, Churchland Patricia S. The Neural Mechanisms of Moral Cognition. Biology and Philosophy. 2003;18:169–194.
- Chakrabortty Aditya. From Obama to Cameron: Why Do So Many Politicians Want a Piece of Richard Thaler? The Guardian. 8 July 2008.
- Charland Louis. Moral Treatment and the Personality Disorders. In: Radden Jennifer, editor. The Philosophy of Psychiatry: A Companion. Oxford University Press; Oxford: 2004. pp. 64–77.
- Churchland Patricia S. Braintrust: What Neuroscience Tells Us about Morality. Princeton University Press; Princeton: 2011.
- Cohen Kadosh Roi, Soskic Sonja, Iuculano Teresa, Kanai Ryota, Walsh Vincent. Modulating Neuronal Activity Produces Specific and Long-Lasting Changes in Numerical Competence. Current Biology. 2010;20:2016–20. doi: 10.1016/j.cub.2010.10.007.
- Crockett Molly J., Clark Luke, Tabibnia Golnaz, Lieberman Matthew D., Robbins Trevor W. Serotonin Modulates Behavioral Reactions to Unfairness. Science. 2008;320:1739. doi: 10.1126/science.1155577.
- Crockett Molly J., Clark Luke, Hauser Marc D., Robbins Trevor W. Serotonin Selectively Influences Moral Judgment and Behavior Through Effects on Harm Aversion. Proceedings of the National Academy of Sciences. 2010;107(40):17433–8. doi: 10.1073/pnas.1009396107.
- Davidson Kate. Cognitive Therapy for Personality Disorder. 2nd edition. Routledge; London: 2008.
- De Dreu Carsten K. W., Greer Lindred L., Handgraaf Michel J. J., Shalvi Shaul, Van Kleef Gerben A., Baas Matthijs, Ten Velden Femke S., Van Dijk Eric, Feith Sander W. W. Neuropeptide Oxytocin Regulates Parochial Altruism in Intergroup Conflicts among Humans. Science. 2010;328:1408–11. doi: 10.1126/science.1189047.
- De Dreu Carsten K. W., Greer Lindred L., Van Kleef Gerben A., Shalvi Shaul, Handgraaf Michel J. J. Oxytocin Promotes Human Ethnocentrism. Proceedings of the National Academy of Sciences. 2011;108(4):1262–6. doi: 10.1073/pnas.1015316108.
- de Waal Frans. The Age of Empathy. Souvenir Press; London: 2010.
- Dimeff Linda, Linehan Marsha. Dialectical Behavioural Therapy in a Nutshell. The California Psychologist. 2001;34:10–13.
- Feinberg Joel. Social Philosophy. Prentice-Hall; Englewood Cliffs, NJ: 1973.
- Fonagy Peter, Gergely Gyorgy, Jurist Elliot L., Target Mary. Affect Regulation, Mentalization, and the Development of the Self. Karnac; London: 2004.
- Frankfurt Harry. Freedom of the Will and the Concept of a Person. In: Christman John, editor. The Inner Citadel: Essays on Individual Autonomy. Oxford University Press; New York: 1989. pp. 63–76.
- Gazzaniga Michael S. The Ethical Brain. Dana Press; New York: 2005.
- Graves Robert. The Greek Myths. Volume 2. Penguin; London: 1960.
- Greene Joshua D. The Secret Joke of Kant’s Soul. In: Sinnott-Armstrong Walter, editor. Moral Psychology. Volume III: The Neuroscience of Morality. MIT Press; Cambridge, MA: 2008. pp. 35–79.
- Harris John. Moral Enhancement and Freedom. Bioethics. 2011;25:102–111. doi: 10.1111/j.1467-8519.2010.01854.x.
- Insel Thomas R., Fernald Russell D. How the Brain Processes Social Information: Searching for the Social Brain. Annual Review of Neuroscience. 2004;27:697–722. doi: 10.1146/annurev.neuro.27.070203.144148.
- Kiesel Andrea, Wagener Annika, Kunde Wilfried, Hoffmann Joachim, Fallgatter Andreas J., Stöcker Christian. Unconscious Manipulation of Free Choice in Humans. Consciousness and Cognition. 2006;15:397–408. doi: 10.1016/j.concog.2005.10.002.
- Kosfeld Michael, Heinrichs Markus, Zak Paul J., Fischbacher Urs, Fehr Ernst. Oxytocin Increases Trust in Humans. Nature. 2005;435(7042):673–6. doi: 10.1038/nature03701.
- Lang Kerry L., Vernon Philip A. Genetics. In: Livesley W. John, editor. Handbook of Personality Disorders: Theory, Research and Treatment. Guilford Press; New York: 2001. pp. 177–195.
- Lees Janine, Manning Nick, Rawlings Barbara. Therapeutic Community Effectiveness: A Systematic International Review of Therapeutic Community Treatment for People with Personality Disorders and Mentally Disordered Offenders. York Publishing; York: 1999.
- Liberzon Israel, Young Elizabeth A. Effects of Stress and Glucocorticoids on CNS Oxytocin Receptor Binding. Psychoneuroendocrinology. 1997;22(6):411–22. doi: 10.1016/s0306-4530(97)00045-0.
- Link H., Dayanithi G., Gratzl M. Glucocorticoids Rapidly Inhibit Oxytocin-Stimulated Adrenocorticotropin Release from Rat Anterior Pituitary Cells, Without Modifying Intracellular Calcium Transients. Endocrinology. 1993;132:873–877. doi: 10.1210/endo.132.2.8381078.
- Mill John Stuart. On Liberty. 1859. pp. 21–22.
- Morgan Drake, Grant Kathleen A., Gage H. Donald, Mach Robert H., Kaplan Jay R., Prioleau Osric, Nader Susan H., Buchheimer Nancy, Ehrenkaufer Richard, Nader Michael. Social Dominance in Monkeys: Dopamine D2 Receptors and Cocaine Self-Administration. Nature Neuroscience. 2002;5:169–74. doi: 10.1038/nn798.
- National Institute of Mental Health in England (NIMH(E)). Personality Disorder: No Longer a Diagnosis of Exclusion. NIMH(E); London: 2003.
- National Offender Management Service (NOMS). Working with Personality Disordered Offenders: A Practitioner’s Guide. NOMS; London: 2011.
- National Institute of Clinical Excellence (NICE). Borderline Personality Disorder: Treatment and Management. NICE; London: 2009.
- Oosterbeek Hessel, Sloof Randolph, van de Kuilen Gijs. Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis. Experimental Economics. 2004;7(2):171–188.
- Paris Joel. Psychosocial Adversity. In: Livesley W. John, editor. Handbook of Personality Disorders: Theory, Research and Treatment. Guilford Press; New York: 2001. pp. 231–241.
- Pearce Steve, Pickard Hanna. The Moral Content of Psychiatric Treatment. British Journal of Psychiatry. 2009;195:281–282. doi: 10.1192/bjp.bp.108.062729.
- Persson Ingmar, Savulescu Julian. The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. Journal of Applied Philosophy. 2008;25(3):162–177.
- --- Unfit for the Future? The Need for Moral Enhancement. Oxford University Press; Oxford: Forthcoming.
- --- The Turn for Ultimate Harm: A Reply to Fenton. Journal of Medical Ethics. 2011a;37:441–444. doi: 10.1136/jme.2010.036962.
- --- Getting Moral Enhancement Right: The Desirability of Moral Enhancement. Bioethics. 2011b. doi: 10.1111/j.1467-8519.2011.01907.x.
- --- Unfit for the Future? Human Nature, Scientific Progress and the Need for Moral Enhancement. In: Savulescu Julian, Ter Meulen Ruud, Kahane Guy, editors. Enhancing Human Capacities. Wiley-Blackwell; Oxford: 2011c.
- --- Moral Transhumanism. Journal of Medicine and Philosophy. 2010;35(6):656–669. doi: 10.1093/jmp/jhq052.
- Pickard Hanna. Mental Illness is Indeed a Myth. In: Broome Matthew R., Bortolotti Lisa, editors. Psychiatry as Cognitive Neuroscience. Oxford University Press; Oxford: 2009. pp. 83–101.
- --- Responsibility Without Blame: Empathy and the Effective Treatment of Personality Disorder. Philosophy, Psychiatry, Psychology. 2011a;18(3):209–224. doi: 10.1353/ppp.2011.0032.
- --- What is Personality Disorder? Philosophy, Psychiatry, Psychology. 2011b;18(3):171–184.
- Ramachandran Vilayanur S., Oberman Lindsay M. Broken Mirrors: A Theory of Autism. Scientific American. 2006;295:62–9. doi: 10.1038/scientificamerican1106-62.
- Silber Märta, Almkvist Ove, Larsson Bertil, Stock S., Uvnäs-Moberg Kerstin. The Effect of Oral Contraceptive Pills on Levels of Oxytocin in Plasma and on Cognitive Functions. Contraception. 1987;36:641–650. doi: 10.1016/0010-7824(87)90037-0.
- Singer Peter. Ethics and Intuitions. Journal of Ethics. 2005;9:331–52.
- Sitaram Ranganatha, Caria Andrea, Veit Ralf, Gaber Tilman, Rota Giuseppina, Kuebler Andrea, Birbaumer Niels. fMRI Brain-Computer Interface: A Tool for Neuroscientific Research and Treatment. Computational Intelligence and Neuroscience. 2007. Article ID 25487. doi: 10.1155/2007/25487.
- Sitaram Ranganatha, Caria Andrea, Birbaumer Niels. Hemodynamic Brain-Computer Interfaces for Communication and Rehabilitation. Neural Networks. 2009;22(9):1320–1328. doi: 10.1016/j.neunet.2009.05.009.
- Stock S., Karlsson R., von Schoultz B. Serum Profiles of Oxytocin During Oral Contraceptive Treatment. Gynecological Endocrinology. 1994;8(2):121–6. doi: 10.3109/09513599409058033.
- Sunstein Cass. Moral Heuristics. Behavioral and Brain Sciences. 2005;28:531–542. doi: 10.1017/S0140525X05000099.
- Terbeck Sylvia, Kahane Guy, McTavish Sarah, Savulescu Julian, Cowen Philip, Hewstone Miles. Beta-Adrenergic Blockade Reduces Implicit Negative Racial Bias. Under review a.
- --- Emotion in Moral Decisionmaking: Beta Adrenergic Blockade Increases Deontological Moral Judgments. Under review b.
- Thaler Richard H., Sunstein Cass. Nudge: Improving Decisions about Health, Wealth, and Happiness. Yale University Press; New Haven: 2008.
- Tse Wai S., Bond Alyson J. Serotonergic Intervention Affects Both Social Dominance and Affiliative Behaviour. Psychopharmacology. 2002;161:324–330. doi: 10.1007/s00213-002-1049-7.
- Wallace Björn, Cesarini David, Lichtenstein Paul, Johannesson Magnus. Heritability of Ultimatum Game Responder Behaviour. Proceedings of the National Academy of Sciences. 2007;104(40):15631–4. doi: 10.1073/pnas.0706642104.
- Wang Bin, Shaham Yavin, Zitzman Dawnya, Azari Soraya, Wise Roy A., You Zhi-Bing. Cocaine Experience Establishes Control of Midbrain Glutamate and Dopamine by Corticotropin-Releasing Factor: A Role in Stress-Induced Relapse to Drug Seeking. Journal of Neuroscience. 2005;25:5389–96. doi: 10.1523/JNEUROSCI.0955-05.2005.
- Watson Gary. Free Agency. In: Christman John, editor. The Inner Citadel: Essays on Individual Autonomy. Oxford University Press; New York: 1989. pp. 109–22.
- Westen Drew. The Political Brain: The Role of Emotion in Deciding the Fate of the Nation. PublicAffairs; New York: 2007.
- Wood Richard M., Rilling James K., Sanfey Alan G., Bhagwagar Zubin, Rogers Robert D. Effects of Tryptophan Depletion on the Performance of an Iterated Prisoner’s Dilemma Game in Healthy Adults. Neuropsychopharmacology. 2006;31(5):1075–84. doi: 10.1038/sj.npp.1300932.
- Young Robert. Personal Autonomy: Beyond Negative and Positive Liberty. Croom Helm; London: 1986.
- Zak Paul, Kurzban Robert, Matzner William. The Neurobiology of Trust. Annals of the New York Academy of Sciences. 2004;1032:224–227. doi: 10.1196/annals.1314.025.