Abstract
In recent decades, increasing attention has been paid to the topic of responsibility in technology development and engineering. The discussion of this topic is often guided by questions related to liability and blameworthiness. Recent discussions in engineering ethics call for a reconsideration of this traditional quest for responsibility: rather than on alleged wrongdoing and blaming, some authors argue, the focus should shift to more socially responsible engineering. The present paper explores the different approaches to responsibility in order to see which one is most appropriate to apply to engineering and technology development. Using the example of the development of a new sewage water treatment technology, the paper shows how different approaches to ascribing responsibilities have different implications for engineering practice in general, and R&D or technological design in particular. A tension was found between the demands that follow from these different approaches, most notably between efficacy and fairness. Although the consequentialist approach with its efficacy criterion turned out to be the most powerful, it was also shown that the fairness of responsibility ascriptions should somehow be taken into account. It is proposed to look for alternative, more procedural ways to approach the fairness of responsibility ascriptions.
Keywords: Responsibility, Liability, Efficacy, Fairness, Informed consent, No harm principle, Technological risk, Engineering practice, Consequentialism
Introduction
In recent decades, increasing attention has been paid to the topic of responsibility in technology development and engineering.1 The topic is often raised in the context of disasters due to technological failure, such as the Bhopal disaster (Castleman and Purkavastha 1985; Bisarya and Puri 2005), the explosion of the Challenger (Vaughan 1996; Davis 1998; Harris et al. 2005), and the sinking of the Herald of Free Enterprise (Richardson and Curwen 1995; Berry 2006). The discussion of responsibility then typically focuses on questions related to liability and blameworthiness.2 Asking these questions might suggest that there is one, unambiguous definition of responsibility. This is far from true, however. In moral philosophy, few concepts are more slippery than that of responsibility (Miller 2001, p. 455). What the questions of liability and blameworthiness share is that the question of responsibility is asked after some undesirable event has occurred. However, the ascription of responsibility can also refer to something that ought to happen in the future: being responsible then means that an agent has been assigned a certain task or set of obligations to see to it that a certain state of affairs is brought about (or prevented). In that latter case, responsibility is often ascribed from a consequentialist perspective.3 As a third approach, one could also distinguish the question of responsibility from the perspective of the rights of potential victims, which often focuses on the question of who should put a situation right (e.g., by compensating for certain damage).
Recent discussions in engineering ethics call for a reconsideration of the traditional quest for responsibility. Rather than on alleged wrongdoing and blaming, the focus should shift to more socially responsible engineering, in which "to maximize the service to the larger society" should become the ethical norm (Durbin 2008, p. 230). Responsibility as blameworthiness should therefore be replaced by, or complemented with, the notion of engineering as a responsible practice (Pritchard 2001). Until the late 1990s, however, scholarly literature on engineering ethics seemed to be biased towards the blame-oriented or merit-based perspective on responsibility rather than this more forward-looking perspective (Pritchard 2001, p. 391; Durbin 1997).
Similarly, both in the general field of moral philosophy and more specifically in the field of engineering ethics, there has also been a call to shift the focus of ethics from an abstract outsider's perspective towards the practice in which moral deliberation takes place. For example, in the general field of moral philosophy, Alasdair MacIntyre and Michael Walzer argue for an insider's perspective when trying to improve a practice.4 In the field of engineering ethics, philosophers such as Michael Pritchard, Mike Martin, Vivian Weil and Michael Davis are firm proponents of taking an insider's perspective on engineering and its ethical issues. Michael Davis, for instance, argues that the discussion of responsibility is too much about 'holding others responsible' instead of 'assuming responsibility' (Davis 2009).
This shift from an outsider’s perspective towards an insider’s perspective might have implications for the topic of responsibility as well. The present paper aims at exploring three main approaches to responsibility in order to see which one is most appropriate to apply in engineering and technology development, where I take appropriateness to mean two things:
the approach should reflect people’s basic intuitions of when it is justified to ascribe responsibility to someone. An approach that contravenes these basic intuitions will probably be deemed unfair. Whether such an approach should depart from abstract principles and work top-down to considered judgments about particular cases, or depart from these considered judgments and work bottom-up to more general principles is still open for discussion. It is important, though, that people recognize that the responsibility ascription is justified.
the approach should inform the direction of technology development and therewith improve technological design. In order for this to be so, it should be possible to apply the approach to specific contextualized moral issues that are raised by specific technological and scientific developments rather than to more general abstract issues. This second requirement follows from recent discussions within engineering ethics, and ethics concerning New and Emerging Science and Technology (NEST) in particular, in which it is argued that the ethical and social aspects of new technologies should be addressed at an early stage of technology development in order to adapt technology to society’s needs (Van de Poel 2008; Swierstra and Rip 2007).5
The outline of this paper is as follows. I will first discuss three different perspectives for ascribing responsibility: a merit-based perspective, a rights-based perspective and a consequentialist perspective. After a brief intermezzo on forward-looking and backward-looking responsibilities, I will apply the three perspectives to the example of the development of a new sewage water treatment technology. A comparison of the three approaches will show that the consequentialist perspective is especially suited for distributing responsibilities, since it is most akin to engineering practice and (therefore) offers the best opportunities for improving technological design. The paper ends with recommendations for further developing the field of engineering ethics by incorporating insights from political philosophy.
Three Perspectives for Ascribing Responsibility
In this section, I discuss three approaches or perspectives for ascribing responsibility: a merit-based perspective, a rights-based perspective and a consequentialist perspective.6 Although the latter is common in non-philosophical discussions (for example in organizational and management literature), the philosophical literature is mainly focused on responsibility as blameworthiness (i.e., the merit-based perspective).7 Being the most common approach in philosophical literature, I start the present overview with this merit-based perspective.
A Merit-Based Perspective on Responsibility
In the philosophical literature on moral responsibility, the aim of ascribing responsibility is mostly retributivist. In the traditional view, being morally responsible means that the person is an appropriate candidate for reactive attitudes, such as blame or praise (Strawson 1974; Fischer and Ravizza 1993; Miller 2004). Being morally responsible (i.e., being eligible for reactions of praise and blame) is not the same as being causally responsible. One can imagine a situation where a person did indeed causally contribute to a certain outcome but is not eligible for moral evaluation, and hence not for reactive attitudes of praise or blame (e.g., in case of positive outcomes due to sheer luck, or negative outcomes which one could not reasonably avoid). In both cases it is not warranted to praise or blame the person for the outcome. Hence, since moral responsibility on the view elaborated above is related to reactive attitudes, which may have consequences for the well-being of an agent, the ascription of moral responsibility is only warranted if these reactive attitudes and their consequences are merited or deserved (see Zimmerman 1988; Wallace 1994; Watson 1996; Magill 2000; Eshleman 2008). This is usually translated into certain conditions that have to be met before it is fair to ascribe responsibility to someone. In the remainder, I call this the fairness criterion of responsibility ascriptions. Although academics disagree on the precise formulation, the following conditions together capture the general notion of when it is fair to hold an agent morally responsible for (the consequences of) their actions (see Feinberg 1970; Hart and Honoré 1985; Bovens 1998; Fischer and Ravizza 1998; Corlett 2006):
Moral agency: the responsible actor is an intentional agent concerning the action. This means that the agent must have adequate possession of his or her mental faculties at the moment of engaging in the action. Young children and people whose mental faculties are permanently or temporarily disturbed will not be (fully) held responsible for their behavior. However, to put oneself knowingly and voluntarily into a situation of limited mental capacity (by drinking alcohol or taking drugs for example) does not (in general) exempt one from being responsible for the consequences of one’s behavior. Some people phrase this condition in terms of intention, meaning that the action was guided by certain desires or beliefs.
Voluntariness or freedom: the action resulting in the outcome was voluntary, which means that the actor is not responsible for actions performed under compulsion or external pressure, or when hindered by other circumstances outside the actor's control. The person must be in a position to determine his own course of action (cf. condition 1), and to act accordingly.
Knowledge of the consequences: the actor knew, or could have known, the outcome. Ignorance due to negligence, however, does not exempt one from responsibility.
Causality: the action of the actor contributed causally to the outcome; in other words, there has to be a causal connection between the agent’s action or inaction and the damage done.
Transgression of a norm: the causally contributory action was faulty, which means that the actor in some way contravened a norm.
Note that especially the first two conditions are closely interrelated. Being an intentional agent means that one has the opportunity to put one's will into effect and that one is free from external pressure or compulsion (Thompson 1980; Lewis 1991; May and Hoffman 1991). With regard to the fifth condition, extensive debate has been going on as to what counts as a norm. In daily life the norm can be much vaguer than in criminal law, where the norm must be explicitly formulated beforehand (the nullum crimen, nulla poena sine praevia lege poenali principle).8
A Rights-Based Perspective on Responsibility: The No Harm Principle
A second approach for ascribing responsibilities within the field of science and technology is based on the individual right of people to be safeguarded from the consequences of another person’s actions (the so-called no harm principle). This implies that “actions are right if and only if: either there are no (possible) consequences for others; or those who will experience the (possible) consequences have consented to the actions after having been fully informed of the possible consequences” (Zandvoort 2005b, p. 46). The aim of this approach is remedial: it refers to the duty or obligation to put a situation right (Miller 2004). In practice this rights-based approach translates into two requirements for decision making regarding the development, production and use of technology (Zandvoort 2008). The first is the (legal) requirement of strict liability, which holds that actors are unconditionally required to repair or fully compensate for any damage to others that may result from their actions, regardless of culpability or fault (Honoré 1999; Van Velsen 2000; Vedder 2001; Zandvoort 2005a). Hence, the question of responsibility is reduced to the question ‘who caused the particular outcome’ (causal responsibility). As such, blame is not the guiding concept in ascribing responsibility.9 The second requirement relates to the principle of informed consent, which holds that “for all activities that create risks for others, all who are subjected to the risks must have given their informed consent to the activities and the conditions under which the activities are performed” (Zandvoort 2008, p. 4).
Instead of fairness towards potential wrongdoers, this approach focuses on fairness towards potential victims. Given the importance of informed consent, the engineering ethics literature on this approach to responsibility therefore focuses on the conditions under which consent can be gained and its implications for, e.g., risk communication and risk assessment.
A Consequentialist Perspective on Responsibility
The third perspective for ascribing responsibility is the consequentialist perspective. In the consequentialist perspective, responsibility is ascribed for instrumental reasons rather than retributivist (merit-based) or remedial (rights-based) reasons. The most important question when ascribing responsibility is then not whether the reactive response triggered by the responsibility ascription is warranted but whether the reactive response would likely lead to a desired outcome, such as improved behavior by the agent (Eshleman 2008).10 Where fairness is the main criterion for the merit-based perspective and informed consent the basis for the rights-based approach, efficacy is the criterion for consequentialist responsibility ascriptions, which means that they should contribute to the solution of the problem at hand (Nihlén Fahlquist 2006a, 2009). According to a strict consequentialist view, the responsibility ascription that yields the best consequences is the morally optimal responsibility ascription. Responsibilities, in this view, do not take specific actions of persons as their object but rather have the character of obligations to see to it that a certain state of affairs is brought about (or prevented). As such, responsibilities are outcome- and result-oriented (Van den Hoven 1998, p. 107).
In the case of engineering and technology development, this consequentialist perspective could be taken to imply that for a technology to be "right," in the sense that it is from a societal point of view desirable or at least acceptable that the technology is being developed, potential implications for society (e.g., human health and the environment) should be taken into account during the design phase. In other words, for every potential implication, whether this is a risk or some other problematic issue, someone should be ascribed the responsibility to address this issue. This does not mean that all risks should be completely excluded—a requirement which is impossible to live up to—but that at least everything that can reasonably be known should be considered during the design and development phase. Sometimes this might imply that, after deliberation, a potential risk will be accepted as is, since the (societal) costs of preventing it outweigh the (societal) costs of accepting it.11
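To make this weighing explicit, it can be written schematically as follows (the notation is mine, introduced only for illustration, and not part of the cited literature):

$$\text{accept risk } R \iff C_{\text{prevent}}(R) > C_{\text{accept}}(R)$$

where $C_{\text{prevent}}(R)$ and $C_{\text{accept}}(R)$ denote the expected societal costs of preventing and of accepting the risk $R$, respectively.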
The Three Perspectives Compared
In the overview presented above, a distinction was made between the goals that were aimed at in the different perspectives. In addition to a different aim, we could also say that the three approaches each depart from a particular moral background theory and that they each try to answer a different moral question.12 The merit-based approach fits into a deontological framework, which is primarily a theory of "right actions." The rights-based approach fits into an ethics of rights and freedoms (see, e.g., Nozick 1974; Mackie 1978). This theory shares with deontological ethics that it takes "action" as the primary object of evaluation. Where deontological ethics departs from duties, a rights-based discourse departs from people's individual rights and freedoms and uses these to determine which actions are permissible and which are not. In both cases the content of the responsibility ascription is action that ought to be abstained from (merit-based) or that ought to be done (rights-based): to breach a duty is to perform a blameworthy action (merit-based) or to be liable for compensation (rights-based).
The consequentialist approach, which (unsurprisingly) fits best into some form of consequentialism, has a different focus. Rather than on particular action, the consequentialist approach is focused on states of affairs. It does not prescribe what action ought to be done but rather what should be achieved.
A summary of the three approaches is listed in Table 1.
Table 1. Summary of the three approaches

| Perspective | Ethical theory | Aim | Criterion | Content |
|---|---|---|---|---|
| Merit-based | Deontological ethics | Retributivist | Fairness | Actions |
| Rights-based | Ethics of rights and freedoms | Remedial | Informed consent | Actions |
| Consequentialist | Consequentialism | Instrumental | Efficacy | States of affairs |
Forward-Looking Versus Backward-Looking Responsibility
Before applying the three perspectives to a real engineering case, some clarifications regarding responsibility need to be made.
One could argue that the merit-based and the consequentialist perspectives on responsibility are not comparable in the sense that they refer to different time horizons. We therefore cannot speak of two perspectives on the same concept but should rather speak of two different types of responsibility, each with a different criterion. For example, the merit-based perspective is often applied after the fact and it is therefore backward-looking or retrospective. The consequentialist perspective is often applied in a forward-looking or prospective sense (i.e., before something has happened). However, despite the difference in focus, the two perspectives are closely related. Imagine an engineer E who designs some artifact A. Unfortunately, there is a serious flaw in the design and the artifact causes the death of some innocent person P. Imagine further that E could have easily designed an artifact A* with similar (functional) characteristics but without the property leading to the death of P. In fact, E knew that the design was flawed and he intentionally did not improve the design, even though he had the freedom to do so. From this we would probably conclude that E is morally responsible for the death of P. But why is that so? As explained in the section "A Merit-Based Perspective on Responsibility," this perspective involves a moral assessment of the agent in terms of the conditions discussed above. Except for the condition of causation, which determines whether someone causally contributed to a certain outcome, the other four conditions bridge the gap between causal and moral responsibility. In the example, four conditions are obviously met: the engineer is a moral agent (condition 1), he was free (2), he knew of the consequences (3) and he causally contributed to the death (4). But what about the fifth condition: the transgression of a norm? Most people would probably say that E is blameworthy because he did not pay enough attention to the lethal consequences of the artifact. Apparently, the fifth condition in the merit-based perspective implies a forward-looking responsibility to be careful or to pay attention. Both in law and in professional ethics this forward-looking responsibility is operationalized in the duty of (reasonable) care to avoid (foreseeable) harm to others. At the minimal level, this duty of care implies that E should consider how to avert foreseeable harm to people who are affected by his artifact, but it could also be argued that he has the (broader) responsibility to look out for potentially dangerous but as yet unforeseen risks. The duty of care implies that there are certain acts or omissions that should be avoided. In this simplified case, the duty of care requires that the engineer should not develop artifact A but rather A*. So also in a merit-based perspective, people have forward-looking responsibilities.
If we depart from the consequentialist view, we also see that forward-looking and backward-looking responsibilities are closely related. It is because blame and praise can have a motivational force to take up one's forward-looking responsibility that backward-looking responsibilities are ascribed. Hence, forward-looking responsibilities translate into backward-looking responsibilities and vice versa.
Development of a New Sewage Treatment Technology: The Three Perspectives Applied
Now that we have clarified the different approaches to ascribing responsibility, we can apply these to the field of technology development. I do so on the basis of embedded ethical research that was carried out parallel to the technical development of a new sewage treatment technology (Zwart et al. 2006; Van de Poel and Zwart forthcoming). The idea behind this so-called embedded ethical research or ethical parallel research is that ethical investigations are carried out parallel to, and in close cooperation with, a specific technological R&D project. The ethicists interact with the technological researchers, allowing the ethicists to co-shape new technological developments. By applying the three responsibility perspectives (merit-based, rights-based, and consequentialist) to technology development, I explore the appropriateness of the different perspectives in engineering practice in terms of the two criteria formulated in the introduction of this paper.
Ethical Parallel Research into the Upscaling of the GSBR Technology
The ethical parallel research concerned the development of a new sewage treatment technology, the so-called granular sludge sequencing batch reactor (GSBR) (see Text box below for a description of the technology). Different parties contributed to the technological project, classified by the ethical parallel researchers according to their role in the project team. These were the roles of researcher, technology producer (including activities like design and consultancy), user of the technology, and financer of the technology. The ethical parallel research consisted of qualitative research, based on interviews, document analysis, attendance at technical meetings and the organization of an interactive session in the Group Decision Room (GDR; an electronic brainstorming facility) with the different stakeholders, where questions related to risks and responsibilities were addressed.
Text box.
One drawback of traditional biological wastewater treatment plants is their large space demand or footprint, which is caused by the use of separate settling tanks and the slow settling velocity of the sludge. The aerobic GSBR technology addresses both size-increasing factors. First, by using high-density granules, the time needed for the sludge to sink to the bottom at the end of each cycle is substantially reduced; the shorter settling time in turn increases the throughput of the installation and reduces the footprint. Second, it is hoped that different ecological zones inside the granules will be able to take care of the entire treatment process in one reactor instead of several separate tanks.
The GSBR technology has been developed at the Department of Biotechnology, Delft University of Technology, the Netherlands. After successful laboratory experiments, the Dutch Foundation for Applied Water Research (STOWA) was found willing to invest in the scaling-up of the three-liter laboratory reactor to an outdoor pilot plant of 1.5 m3. In parallel to the upscaling of the pilot plant, funds were acquired for a PhD project (funding organization: Technology Foundation STW). Finally, an international engineering and consulting firm, with water management technology as one of its main domains, showed interest in the commercial exploitation of the GSBR technology. This firm was in charge of the research at the pilot plant, operated by a local water board. The results of the pilot plant have been positive and the firm anticipates a large demand for GSBRs.
One of the crucial elements in the development of the technology was the upscaling of the three-liter laboratory reactor to an outdoor pilot plant of 1.5 m3. This upscaling was partly based on several unproven assumptions about which microbiological mechanisms are at work. The ethical parallel research, therefore, focused on the question of how this incompleteness of knowledge was dealt with in the choice of scaling-up steps. Incomplete knowledge can lead to the introduction of certain risks, which may become manifest in the research done during the development of the technology, but also later in the eventual use of this technology. The aim of the ethical parallel research was to find out how risks and uncertainties are handled and how this is open to improvement.
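To give an indication of the magnitude of this scaling-up step, a simple calculation based on the volumes mentioned above suffices:

$$\frac{1.5\ \text{m}^3}{3\ \text{L}} = \frac{1500\ \text{L}}{3\ \text{L}} = 500$$

that is, the pilot plant was roughly five hundred times larger than the laboratory reactor, which gives a sense of how far the unproven microbiological assumptions had to be extrapolated.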
During the ethical parallel research, it was observed that the risks due to so-called secondary emissions (i.e., unwanted but not yet regulated substances in the effluent) were not addressed by any of the engineers and researchers involved. The users of the technology delegated the risk of secondary emissions to the research phase, for which they were not primarily responsible, and most of the researchers allocated the risk to a phase for which they in turn bore no responsibility. Nobody therefore assumed responsibility for dealing with this risk. The argument put forward by the researchers and users was that the impact of the risks due to these secondary emissions was negligible and that problems were expected to be solvable in the next phase of the research. This was based on the presumed similarity between biological processes in traditional sewage plants and the biological processes in the GSBR technology. As a result, the issue of who was responsible for checking or preventing secondary emissions never became an object of discussion. The ethical parallel researchers state that it cannot be concluded that "such emissions are a serious cause of concern; the situation is rather one of insufficient knowledge. Thus the question arises which of the actors in the network are responsible for reducing this knowledge deficiency, and which actors are responsible for reducing potential secondary emissions in case they turn out to be a serious concern" (Van de Poel and Zwart forthcoming). As a result of the ethical parallel research, the consultancy firm together with the university applied for additional funding to carry out research into the secondary emissions.
In the remainder of this section, I try to show how the different responsibility approaches can be applied to the development of this new technology and how these affect engineering practice, focusing on the issue of secondary emissions.
A Merit-Based Perspective on Harm Caused by the GSBR Technology
The first approach I discuss is the merit-based perspective on responsibility. In the section "Forward-Looking Versus Backward-Looking Responsibility," it was shown that, although focused on blame, the merit-based perspective implies the ascription of forward-looking responsibilities as well. It was argued that these forward-looking responsibilities are primarily derived from the duty of (reasonable) care. This means that people should take measures against foreseeable harm and possibly also look out for as yet unforeseen harms. It is notoriously difficult to assess what "reasonable care" exactly amounts to in technology development, especially in the case of new and emerging technologies, where the consequences are even harder to predict. A possible starting point for the evaluation of due care is the test of independent peers. If peers think that some negative consequences were foreseeable, we could probably conclude that the engineer(s) did not exercise due care.
Let us assume that the GSBR technology is being further developed and commercially exploited. Now suppose that secondary emissions, contrary to expectations, cause some problems for farmers who have their surface water treated with the GSBR technology. Can we point to some person or institution as being morally responsible for these problems? The ethical parallel researchers asked the developers of the technology whom they would ascribe moral responsibility for the secondary emissions to (in the sense of preventing or investigating the harmful effects). They did not get a unanimous answer: some ascribed the responsibility to the researchers at laboratory scale, some to the operators of the pilot plant and some to the users of the technology. Some even argued that no-one carries moral responsibility for these harmful consequences because "introduction of new technology introduces risks and we have to learn to live with that" (ibid.). The latter answer suggests that the principle of due care was not breached at all. However, the fact that some researchers from adjacent scientific fields did express their concerns about the technology (ibid., pp. 20–21) suggests the opposite. Apparently, before the involvement of the ethical parallel researchers there was not enough incentive to take up the forward-looking responsibility to further investigate the potential risks of these secondary emissions, even though the researchers were aware of the lack of knowledge regarding these emissions. As such we could say that the duty of (reasonable) care was not fully exercised.
If we discuss moral responsibility in terms of the traditional criteria, probably no-one can be held morally responsible. Although the different actors all contributed to the development of the technology, we cannot single out one particular actor or institution that individually carried out all necessary contributions to the outcome. Whereas the conditions, if applied to the complete research group, were fulfilled, probably none of the actors or institutions within the research group fulfilled all the responsibility criteria individually. Especially the knowledge condition, requiring that one can only be held responsible if one knew or could have known the negative consequences, is problematic in this case. Since none of the actors took up the responsibility to reduce the knowledge deficiency regarding the secondary emissions, which could have shown whether the secondary emissions were indeed as harmless as the technology developers thought, no further preventive measures were taken to reduce the risks. However, it is not clear who should have taken up this responsibility. The responsibility for this knowledge deficiency probably lies with the researchers, whereas the causal responsibility lies with the technology producers and users. Hence, if we apply the five conditions of the merit-based approach, nobody can be held responsible for the negative consequences (i.e., the secondary emissions) of the technology, even though the research team as a whole breached the duty of (reasonable) care.13 In the literature this is called the problem of many hands, first defined as such by Thompson (1980).14 It refers to the difficulty of identifying, even in principle, the person responsible for some outcome if a large number of people are involved in an activity. But sometimes it is the joint action of individuals within a collective that brings about negative consequences, precisely because collectives can create potentially greater harms than individuals working independently. Acting on an individual basis, neither the water board nor the researchers could have built a treatment plant with the innovative technology, but as a collective they were able to do so.
Some people therefore propose to hold the collective as a whole morally responsible. All individuals within the collective are then held equally responsible (May and Hoffman 1991). This ascription of responsibility to the whole collective has been criticized as morally unsatisfactory: people are then being held responsible for the conduct of others, which is deemed unfair (Lewis 1991). This raises a fundamental problem for individual responsibility: either no-one can be fairly held responsible and hence the problem of many hands occurs, or moral responsibility is ascribed to the whole collective of people who in some way contributed to the outcome, leaving aside an individual assessment in terms of the responsibility conditions, which is deemed unfair. The latter holds especially if sanctions are coupled to the ascription of responsibility. After all, being part of a collective that caused some negative event does not imply that one's individual actions were immoral or illegitimate and hence that one is eligible for blame.15 We could also see this as a tension between what we owe to potential wrongdoers (not being blamed unless it is fair to do so) and what we owe to potential victims (to make someone responsible for preventing disasters). Although I think that individual responsibility should not too easily be dismissed on the grounds that individuals are powerless cogs in the machinery of their professional organization, the point remains that this traditional individualistic approach seems to put much more emphasis on what we owe to potential wrongdoers than on what we owe to potential victims.16 Consequently, the problem of many hands is a serious threat to this approach.
This more conceptual problem of individual responsibility raises an important practical problem as well. Due to the inability to ascribe moral responsibility, an important opportunity for improvement is missed. Ascribing moral responsibility may lead to learning processes, which may ultimately prevent similar disasters from happening again in the future. If no-one can be held responsible, this opportunity for learning will not be fully exploited (Nihlén Fahlquist 2006a).
Summarizing, in the merit-based perspective on responsibility it is difficult to ascribe responsibilities. In the light of engineering practice, this approach seems rather powerless. In the extreme case, no-one learns from the mistakes being made and the development of the technology continues as if nothing happened. As a consequence, there is little incentive to take up the forward-looking responsibility to prevent negative consequences.
A Rights-Based Perspective on Harm Caused by the GSBR Technology
As noted in the section "Three Perspectives for Ascribing Responsibility," the rights-based perspective focuses on the task or obligation to put a situation right. With regard to the question of liability, all people involved in the project (including the end users) unanimously agreed that water boards using the new technology are legally liable should incidents (such as problems related to the secondary emissions) occur.
If we apply the principle of strict liability, it is questionable whether institutions, such as the water board in the present example, will ever participate in innovative research projects. They will most probably be very reluctant to participate in the development of innovative and radically new technologies. Some scholars even argue that unrestricted liability would hamper any large-scale investment, including desirable ones (Perrott 1982). After all, existing problems sometimes require radical technological innovations (think of technological innovations relating to green energy). Technologies are primarily developed to "change positively the quality of life" (Berloznik and Van Langenhove 1998, p. 24), in the sense that they try to solve or reduce existing problems. In the development of new technologies trade-offs have to be made between competing values, in the GSBR case between sustainability and safety. The categorical rejection of a technology because it does not satisfy one of the demands is not a viable option, since this creates risks of its own (Sunstein 2005).
As explained in the section "A Rights-Based Perspective on Responsibility: The No Harm Principle," the procedure of "informed consent" is introduced as a possible response to this problem: in case of risks of irreversible harm, which cannot be fully compensated, the principle of strict liability requires that the consent of all people who are subjected to these risks be obtained. If this consent cannot be obtained, the risk should simply not be posed (Zandvoort 2008, p. 8). The fact that this approach takes seriously the perspective of potential victims of (high-risk) technologies is unmistakably a strength. The risks of these technologies cannot be imposed on anyone without his or her informed consent. Hence, an unfair distribution of risks by majority decision making is not allowed according to this approach. However, despite its democratic aim, this approach runs the risk of paralyzing the debate on potentially risky technologies. After all, the principle of actual consent implies that anyone has the right to veto activities that impose risks, which ultimately creates a society of stalemates where nothing can be done, as Hansson argues (Hansson 2006, 2009). Informed consent is thus problematic if applied to affected individuals collectively. Zandvoort therefore discusses procedures to increase the willingness to consent (Zandvoort 2008). These are all based on monetary compensation (either directly or indirectly, such as the building of a new city theatre if the city consents to the building of a nuclear plant in the neighborhood) or on improvement of the credibility of risk assessment. It is striking that neither approach gives any incentive to improve the technology itself. The focus is on ready-made technologies rather than on participation in the decision-making process during development (Hansson 2006, p. 150).
Summarizing, the rights-based approach emphasizes the right of people to be safeguarded from harm caused by others. However, the operationalization of this right by way of the principle of informed consent is problematic in the context of collective decision making. Moreover, the approach in itself seems problematic because of its focus on monetary compensation instead of improvement of the technology.
A Consequentialist Perspective on Potential Harm Caused by the GSBR Technology
The third approach is the consequentialist perspective, which is in fact the approach that was taken by the ethical parallel researchers. In the section "Ethical Parallel Research into the Upscaling of the GSBR Technology," I discussed how the ethical parallel research influenced the development of the GSBR technology. The ethical parallel research led to the identification of gaps in the distribution of responsibilities, in particular the responsibility for secondary emissions. As a result, funds were acquired to carry out additional research into the secondary emissions. As such, the analysis of the responsibilities by the ethicists led to an improvement of the division of labor amongst the technological researchers and engineers, which in turn led to an improved technological design. The responsibilities were not distributed on the basis of fairness criteria but on the basis of efficacy (capacity, power, resources). By making the technological research team aware of the responsibility issues, some of the technological researchers took the initiative to incorporate the secondary emissions in the research project. As such, the effect of the ethicists' involvement on the engineering practice was not one of blaming or sanctioning but rather one of co-shaping. The ethical parallel research did not so much pose limits to the technology development as guide it.
Summarizing, since responsibilities are ascribed according to the criterion of efficacy, the problem of many hands does not manifest itself (or at least, not as severely as would be the case in a strictly merit-based perspective). By taking a consequentialist stance, the ethicists encouraged the engineers and researchers to improve the technological design.17
The Three Perspectives Compared
If we compare the different approaches, all three have their merits. The merit-based perspective emphasizes the fairness of a responsibility ascription. It takes seriously the moral question: who, from a moral point of view, is responsible? This moral notion of responsibility is in line with common morality, and especially in the case of victims of irreversible harm, people will be interested to hear the answer.18 We sometimes "want to ascribe responsibility to the person who is responsible—for example, someone who intentionally and culpably brought about an unwanted event—irrespective of the impact on future events of our responsibility ascriptions" (Nihlén Fahlquist 2006a, p. 17). The merit-based perspective does make a serious attempt to answer this question of "who is responsible?" However, this classical view on responsibility is based on an individualistic assessment of responsibility, as we saw, which makes it problematic in the context of collective action. Kutz argues that as long as individuals are only assessed in terms of the actions they produce, the disparity between collective harm and individual effect results in the disappearance of individual responsibility (Kutz 2000). And with the disappearance of responsibility, so goes the incentive for individuals to improve their behavior, he argues.
The question of "who is responsible?" was found to be less problematic in the rights-based approach, since it uses only the causal condition rather than the full range of responsibility conditions. With its focus on compensation and consent, this approach put the most emphasis on the interests of potential victims. However, it was also shown that this approach gave little or no incentive to actually improve technological design. Moreover, this approach seemed to have a hampering effect on the exploitation of innovative new technologies.
The consequentialist approach, as a third approach, appeared to be most powerful in terms of the second point identified at the start of the paper: the ability to shape the direction of technology development. It should first be noted that engineers themselves are often driven by a consequentialist heuristic of "problem solving" (Davis 2009). More than discussing who is to blame, they are guided by questions of how to prevent the (re-)occurrence of harmful events. This attitude of "problem solving" is necessarily context-specific. When engineers design a new technology, they want that technology to work under real-world circumstances and not only in a laboratory. They therefore engage in extensive studies of errors and mistakes. As Davis puts it:
Whatever is true of other professionals, engineers consider it their responsibility to study any disaster that seems to arise from what they did – and to report what they find. To commit a certain mistake once, even a serious one, is something engineers tolerate as part of advancing technology (…). What engineers do not tolerate is that an engineer, any engineer, should make the same mistake. Once a mistake has been identified, the state of the art advances and what was once tolerable becomes intolerable (a kind of incompetence). (…) Engineering is unusual among professions in recognizing an obligation to ‘acknowledge their errors’. (Davis 2009)
We could say that the consequentialist perspective is most typical of engineering practice itself. The background question is always "does it solve the problem at hand?" By focusing on real issues rather than abstract duties or principles, the impact on engineering practice is also more sensitive to the context in which technology development takes place.19 If a certain responsibility ascription does not lead to the desired solution to a real problem, this responsibility should not be imposed or should be imposed differently. Compare this with the rights-based perspective, which focuses solely on the question of whether or not ready-made technologies are harmful. The rights-based perspective seems to influence not so much the direction but rather the pace of technology development.
Second, the consequentialist approach allows for more fine-grained responsibility ascriptions. Since the merit-based perspective is often applied after the fact (i.e., after something undesirable has happened), the question of responsibility becomes a matter of all-or-nothing: one is either responsible for the undesirable outcome or not (Goodin 1985; Bovens 1998; Lynch and Kline 2000). Some therefore argue that this merit-based perspective is about nonresponsibility: it defines excusing conditions that exempt people from responsibility (Ladd 1989). However, recent insights from Science and Technology Studies (STS) show that before dramatic cases occur, often incremental small decisions have to be made that ultimately lead to undesired outcomes. Instead of focusing on blame for these—sometimes catastrophic—events, engineering ethics should pay more attention to the "complexities of engineering practice that shape decisions on a daily basis", STS scholars argue (Lynch and Kline 2000), in order to modulate technology in the desired direction (Bovens 1998; Swierstra and Jelsma 2006; Van de Poel and Van Gorp 2006). The consequentialist responsibility ascription is based on the capacity of each agent to contribute to the shaping of technology. After all, within the consequentialist perspective, with its criterion of efficacy, responsibilities ought to be ascribed according to the capacity of each agent to discharge them. This is in line with the common intuition that having the capacity, power, and resources to contribute to the solution of a social problem entails a forward-looking responsibility to do so (Nihlén Fahlquist 2009). For example, in the case of risky technologies, engineers, more than any other stakeholder, have the knowledge of the risks and possible ways to reduce them. From the consequentialist perspective this entails the responsibility to address these risks. This responsibility ascription, then, is not derived from a merit-based view in which particular actions are deemed faulty, but rather from the set of obligations to see to it that a certain state of affairs is brought about (i.e., a situation in which risks are prevented or at least addressed properly). This approach to ascribing responsibility fits nicely with the insights from more sociologically oriented literature on the dynamics of engineering and technology development.
However, efficacious as it may be, the fairness of the responsibility ascription cannot be ignored altogether. This brings us to the other requirement of appropriateness: the question whether or not the responsibility perspective reflects people's intuitions of when it is justified to ascribe a certain responsibility. It is unlikely that a purely consequentialist approach is psychologically feasible. The motivational force of responsibility ascriptions that are inconsistent with basic intuitions of fairness will therefore be undermined (Kutz 2000, p. 129). This is in line with the point made in the section about the relation between forward-looking and backward-looking responsibility. The motivational force to take up one's forward-looking responsibility is partly derived from expressions of praise and blame. The researcher in the GSBR project who judges his or her own responsibility within the project to be fair will be motivated to act according to it, whereas the researcher who is assigned a responsibility unfairly will potentially be inclined not to act according to it or to discharge it less carefully.20 Moreover, from a moral point of view it is also undesirable to ascribe responsibilities in ways that contravene our basic feelings of fairness. Even if fairness is not the overriding criterion, we do not want a responsibility ascription that is morally unfair—both for the victims and for the people who are potentially blamed. Hence, even though fairness is not the ultimate criterion in the consequentialist perspective, it should still somehow be taken into account. Especially when different people are involved, there can be a tension between the requirement of efficacy and that of fairness. Whereas the fairness requirement is somewhat restrictive in ascribing responsibility, the efficacy requirement seems to have the opposite effect: it broadens rather than narrows the scope of responsibility ascriptions. If we focus only on the fairness criterion, we will probably end up with an ascription of responsibilities which is undesirable from a consequentialist perspective. If we only stress the efficacy of the responsibility ascription, we will probably end up with an unfair distribution of responsibilities. Hence, we somehow have to incorporate both perspectives when we ascribe responsibilities.
A possible way to reduce the tension between the requirements of fairness and efficacy is to focus on alternative fairness criteria (i.e., criteria that are not related to the traditional substantive fairness criteria for individual responsibility). Insights from political philosophy show that fairness can also be achieved in a more procedural way. According to a procedural approach to fairness, a responsibility distribution can be considered fair if it is established in a fair way (i.e., if it is the result of a fair procedure). Further research is needed to explore this procedural approach to fairness. A possible starting point may be the Rawlsian approach of Wide Reflective Equilibrium (WRE), according to which a procedure can be justified as fair if it fits within the individual set of background theories and moral principles of each relevant actor involved. The establishment of this procedural fairness could be part of embedded ethical research (Doorn forthcoming b; Van de Poel and Zwart forthcoming). Questions as to which actors are relevant to include and how to assess such a WRE need to be further explored (Doorn forthcoming a).
The discussion above indicates an important role for the ethicist in the process of distributing responsibilities and identifying potential (negative) side-effects and consequences. The obvious question then is how this approach would work in case the technical work is not paralleled by an ethicist. I think we have to distinguish between two situations. The first is one where a group of researchers currently has no embedded ethicist in its project but has some experience with ethical parallel research from previous projects. In this case the researchers have experienced how ethical research can be carried out. The challenge is to sustain this "ethical attitude" in future projects, a challenge that should already be considered during the ethical parallel research itself. The future will tell to what extent the impact of past ethical parallel research will indeed lead to more permanent ethical reflection by the engineers themselves during their work. It goes without saying that the ethicists hope that their involvement is not just a passing phase and that it has an enduring impact on engineering practice. Further research into the different methods for doing ethical parallel research and possible ways to sustain its impact is therefore required.
The most common situation, however, is one where the research team has never been paralleled by a team of ethicists. How can we make sure that ethical reflection is also incorporated in the work of these teams? Let me start by saying that there is a positive trend in the requirements of funding organizations. It is nowadays often required to have a paragraph on ethical, legal, and social aspects (ELSA) in funding proposals. Although this attention to ELSA still runs the risk of being nothing more than "checkbox ethics," it points in the direction of greater awareness of the social implications of technology. In addition to this requirement from funding organizations, (prospective) engineers should be trained in recognizing moral issues during their professional work. Engineering ethics should therefore be part of every engineering curriculum. Whether this will make the role of the ethicist completely replaceable is doubtful, but it will probably make engineers more inclined to invite ethicists into their projects when they need advice.
Conclusions
In this paper I discussed three "responsibility perspectives" in the light of the development of a new technology. It was found that the merit-based perspective was rather powerless in engineering practice because of the problem of many hands. As a result, opportunities for learning and improvement were not optimally used. The rights-based perspective appeared to be most pessimistic about technology development. Due to its focus on monetary compensation, the effect of this approach on technology development was rather restrictive: funding organizations and commercial partners would probably become reluctant to sponsor innovative research. Moreover, it did not provide a strong incentive to improve the technology itself. The effect of the consequentialist perspective on engineering practice was most profound. This approach allowed for more fine-grained responsibility ascriptions and was found to fit nicely with insights from the STS literature.
Although the consequentialist approach was found most powerful in co-shaping the direction of technology development, it was argued that the fairness requirement could not be ignored altogether. It was shown that, for both moral and consequentialist reasons, responsibility ascriptions should reflect our basic intuitions of when a particular responsibility ascription is justified. Since there is a potential tension between the traditional fairness criteria and the criterion of efficacy, it was proposed to conceive of fairness in a more procedural rather than substantive way, in order to reconcile the two demands of responsibility ascriptions.
Acknowledgements
This research is part of the research program Moral Responsibility in R&D Networks, which is supported by the Netherlands Organisation for Scientific Research (NWO) under grant number 360-20-160. I would like to thank my colleagues at the philosophy department, and Ibo van de Poel and Jessica Nihlén Fahlquist in particular, for the valuable comments they provided on an earlier draft of this article. The article has also profited from the useful comments of the reviewers.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Footnotes
In the present paper I will only discuss the ethical aspects of moral responsibility. The metaphysics of moral responsibility, which is closely related to the free will debate, is outside the scope of the present paper. The reader is referred to the vast array of literature on this topic (e.g., Berofsky 1966; Frankfurt 1971; Wolf 1981; Watson 1982; Dennett 1984; O’Connor 1995; Kane 2002; Widerker and McKenna 2002; Pink 2004).
A good example is the discussion of the case study “The West Gate Bridge: Who was Responsible?” in the engineering section of the recent anthology on professional ethics (Allhoff and Vaidya 2009). Also Swierstra and Jelsma (2006) ask the question “to what extent engineers can be held responsible in normal practice” (p. 309).
One could also distinguish a virtue ethics approach to responsibility, which is forward-looking as well. In the remainder of the text I focus on the consequentialist perspective in general, since the virtue ethics approach is primarily aimed at relations between people. This does not imply that there are no leads to apply this approach to the field of technology development and engineering, but until now this has hardly been done. The elaboration of this relatively novel approach falls outside the scope of the present paper.
Michael Walzer argues that only a passionately committed “connected critic” can effectively challenge a prevailing culture. Such a critic can only be effective because he is committed and involved (Walzer 1987, 2002). Alasdair MacIntyre questions the distinction between theory and practice. These are thoroughly intertwined, and as such, the search for the good life always develops on the basis of the embeddedness in a particular practice. By participating in a practice, people form their opinions of what the good life amounts to. Moral deliberation should therefore not be separated from the practice itself (MacIntyre 1984).
5. Cf. the contributions in the special issue on Ethics and Engineering Design in Science, Technology and Human Values (May 2006), edited by Van de Poel and Verbeek (2006).
6. The way the three approaches are presented here might suggest that, when talking about responsibility, people apply just one of the three. In reality, hybrid approaches exist as well. Analytically, however, it is useful to keep the three approaches separate, since each serves a different purpose and is, as such, distinct.
7. A few exceptions are Goodin (1995), Van den Hoven (1998), Young (2006), and Nihlén Fahlquist (2006b, 2009).
8. The literal translation of this principle reads “no crime, no punishment without a previous penal law.”
9. This does not necessarily hold for all versions of liability. The principle of fault liability holds that an offender can only be held liable in case of culpably careless or faulty behavior (Zweigert and Kötz 1998[1977]).
10. Note that this consequentialist perspective does not imply that one necessarily promotes some (material) utility function. Kutz, for example, defends an instrumental (or functionalist, as he calls it) conception of responsibility without claiming that practices of accountability aim at optimizing aggregate states of social welfare. Accountability, in Kutz’s view, serves to sustain relationships among discrete individuals (Kutz 2000, p. 54).
11. For the moment I leave open how to determine “what can reasonably be known”; a precise definition is not required for the remainder of the argument. A good starting point might be the state of knowledge of peers in one's field.
12. Similar to what was said in footnote 6, this classification is meant as an analytical clarification, and as such it paints a somewhat simplified picture of the “ethical landscape.” The merit-based perspective is not applied exclusively by deontologists, nor is it impossible to think of consequences in a deontological or rights-based discourse. The classification merely indicates the moral theory most akin to each responsibility perspective.
13. It should be emphasized that in reality the research team did further investigate the secondary emissions, and so the duty of care was adequately exercised.
14. Although the problem of many hands is mostly discussed in retrospective terms, it is, strictly speaking, not limited to backward-looking responsibilities. One could also think of a situation where people need to distribute a number of (sub)tasks in order to bring about a certain goal. If this distribution of responsibilities is incomplete, for example because certain necessary (sub)tasks are overlooked, the problem of many hands manifests itself as well.
15. Some philosophers have therefore introduced notions to distinguish between individuals who are responsible for the conduct of the organization and individuals who are not. Kutz (2000) gives a minimalist criterion for individuals to be responsible for the group’s outcome: if individuals act on overlapping participatory intentions, they can be said to be promoting a collective act and to be responsible for the outcome. Similarly, May (1992) argues that individuals are responsible for the organization’s actions when they voluntarily joined the group.
16. Note that alternative ethical outlooks, notably a virtue ethics approach, may put more emphasis on what we owe to potential victims, since these approaches start from questions about the good life and virtuous behavior rather than from a rights-based or duty-based discourse (Ladd 1982).
17. The encouragement to take up the forward-looking responsibility to improve technological design seems in line with the virtue ethics aim of “responsible engineering.”
18. Although “common morality” is a slippery term, most people agree that there are certain values that most “thoughtful people implicitly use in arriving at moral judgments” (Gert 2004). I think that the fairness of responsibility ascriptions is part of this shared system of morality, which is also reflected in penal law.
19. It should be noted that this does not hold for consequentialism in general. A common critique of consequentialism as an ethical theory is that it is too narrowly focused on the promotion of a single value. In the distribution of responsibilities within a particular practice, however, the consequentialist perspective resonates with the engineer’s heuristic of ‘problem solving’.
20. One could think of the simple task of taking the minutes of a meeting. If it is decided by majority rule (but not by consensus) that the same person should always take the minutes, this distribution of responsibilities is efficacious in the sense that for every meeting someone is assigned the responsibility of writing the minutes. After some time, however, this person might become less motivated to write the minutes accurately, because he does not consider it fair that it is always he who has to do the writing. If, on the other hand, he considers it fair that he is given this task and knows that he will be blamed for sloppy minutes, he will most probably be motivated to produce accurate ones. Closer to technology development, one could think of the responsibility for the social impact or acceptance of a technology: if the researchers do not recognize this as fairly being part of their work, it is questionable whether it will be addressed adequately, even if someone is explicitly given the task to look after the social impact.
References
- Allhoff F, Vaidya AJ. Professions in ethical focus. An anthology. Peterborough: Broadview Press; 2009.
- Berloznik R, Van Langenhove L. Integration of technology assessment in R&D management practices. Technological Forecasting and Social Change. 1998;58(1–2):23–33. doi: 10.1016/S0040-1625(97)00084-X.
- Berofsky B, editor. Free will and determinism. New York: Harper and Row; 1966.
- Berry C. Corporate manslaughter. Medicine, Science and the Law. 2006;46(1):2–6. doi: 10.1258/rsmmsl.46.1.2.
- Bisarya RK, Puri S. The Bhopal gas tragedy—a perspective. Journal of Loss Prevention in the Process Industries. 2005;18(4–6):209–212. doi: 10.1016/j.jlp.2005.07.006.
- Bovens M. The quest for responsibility. Accountability and citizenship in complex organisations. Cambridge: Cambridge University Press; 1998.
- Castleman BI, Purkavastha P. The Bhopal disaster as a case study in double standards. In: Ives JH, editor. The export of hazard. Boston: Routledge & Kegan Paul; 1985.
- Corlett JA. Responsibility and punishment. 3rd ed. Dordrecht: Springer; 2006.
- Davis M. Thinking like an engineer. Oxford: Oxford University Press; 1998.
- Davis M. “No one here but us chickens”. Some thoughts on the professional responsibility of engineers. In: Doorn N, Vincent NA, Nihlén Fahlquist J, editors. Moral responsibility, neuroscience, organization, and engineering. Book of abstracts. Delft, The Netherlands: Delft University of Technology; 2009.
- Dennett D, editor. Elbow room: The varieties of free will worth having. Cambridge, MA: MIT Press; 1984.
- Doorn N. Applying Rawlsian approaches to resolve ethical issues: Inventory and setting of a research agenda. Journal of Business Ethics (forthcoming a). doi: 10.1007/s10551-009-0073-5.
- Doorn N. A Rawlsian approach to distribute responsibilities in networks. Science and Engineering Ethics (forthcoming b). doi: 10.1007/s11948-009-9155-0.
- Durbin PT. Engineering ethics and social responsibility: Reflections on recent developments in the USA. Bulletin of Science, Technology & Society. 1997;17(2–3):77–83.
- Durbin PT. Engineering professional ethics in a broader dimension. Interdisciplinary Science Reviews. 2008;33(3):226–233. doi: 10.1179/174327908X366914.
- Eshleman A. Moral responsibility. In: Zalta EN, editor. The Stanford encyclopedia of philosophy (Fall 2008 edition); 2008. http://plato.stanford.edu/archives/fall2008/entries/moral-responsibility/.
- Feinberg J. Doing and deserving. Essays in the theory of responsibility. Princeton: Princeton University Press; 1970.
- Fischer JM, Ravizza M. Introduction. In: Fischer JM, Ravizza M, editors. Perspectives on moral responsibility. Ithaca: Cornell University Press; 1993. pp. 1–41.
- Fischer JM, Ravizza M. Responsibility and control. A theory of moral responsibility. Cambridge: Cambridge University Press; 1998.
- Frankfurt H. Freedom of the will and the concept of a person. Journal of Philosophy. 1971;68:5–20. doi: 10.2307/2024717.
- Gert B. Common morality: Deciding what to do. Oxford: Oxford University Press; 2004.
- Goodin RE. Protecting the vulnerable. A reanalysis of our social responsibilities. Chicago: University of Chicago Press; 1985.
- Goodin RE. Utilitarianism as a public philosophy. Cambridge: Cambridge University Press; 1995.
- Hansson SO. Informed consent out of context. Journal of Business Ethics. 2006;63(2):149–154. doi: 10.1007/s10551-005-2584-z.
- Hansson SO. Risk and safety in technology. In: Meijers AWM, editor. Handbook philosophy of technology and engineering sciences. Amsterdam: Elsevier; 2009.
- Harris CE, Pritchard MS, Rabins MJ. Engineering ethics: Concepts and cases. Belmont, CA: Wadsworth; 2005.
- Hart HLA, Honoré T. Causation in the law. Oxford: Clarendon Press; 1985.
- Honoré T. Responsibility and fault. Oxford: Hart; 1999.
- Kane R, editor. The Oxford handbook of free will. New York: Oxford University Press; 2002.
- Kutz C. Complicity: Ethics and law for a collective age. Cambridge: Cambridge University Press; 2000.
- Ladd J. Collective and individual moral responsibility in engineering: Some questions. IEEE Technology and Society Magazine. 1982;1(2):3–10. doi: 10.1109/MTAS.1982.5009685.
- Ladd J. Computers and moral responsibility: A framework for an ethical analysis. In: Gould CC, editor. The information web. Boulder, CO: Westview Press; 1989. pp. 207–229.
- Lewis HD. Collective responsibility. In: May L, Hoffman S, editors. Collective responsibility: Five decades of debate in theoretical and applied ethics. Savage, MD: Rowman & Littlefield Publishers; 1991.
- Lynch WT, Kline R. Engineering practice and engineering ethics. Science, Technology & Human Values. 2000;25(2):195–225. doi: 10.1177/016224390002500203.
- MacIntyre A. After virtue: A study in moral theory. Notre Dame: University of Notre Dame Press; 1984[1981].
- Mackie JL. Ethics: Inventing right and wrong. Harmondsworth: Penguin Books; 1978.
- Magill K. Blaming, understanding, and justification. In: van den Beld T, editor. Moral responsibility and ontology. Dordrecht: Kluwer Academic Publishers; 2000.
- May L. Sharing responsibility. Chicago: University of Chicago Press; 1992.
- May L, Hoffman S. Introduction. In: May L, Hoffman S, editors. Collective responsibility: Five decades of debate in theoretical and applied ethics. Savage, MD: Rowman & Littlefield Publishers; 1991.
- Miller D. Distributing responsibilities. The Journal of Political Philosophy. 2001;9(4):453–471. doi: 10.1111/1467-9760.00136.
- Miller D. Holding nations responsible. Ethics. 2004;114:240–268. doi: 10.1086/379353.
- Nihlén Fahlquist J. Responsibility ascriptions and public health problems. Who is responsible for obesity and lung cancer? Journal of Public Health. 2006;14(1):15–19. doi: 10.1007/s10389-005-0004-6.
- Nihlén Fahlquist J. Responsibility ascriptions and Vision Zero. Accident Analysis and Prevention. 2006;38:1113–1118. doi: 10.1016/j.aap.2006.04.020.
- Nihlén Fahlquist J. Moral responsibility for environmental problems—individual or institutional? Journal of Agricultural and Environmental Ethics. 2009;22(2):109–124. doi: 10.1007/s10806-008-9134-5.
- Nozick R. Anarchy, state, and utopia. New York: Basic Books; 1974.
- O’Connor T, editor. Agents, causes, and events: Essays on indeterminism and free will. New York: Oxford University Press; 1995.
- Perrott DL. Changes in attitude to limited liability—the European experience. In: Orhnial T, editor. Limited liability and the corporation. London: Croom Helm; 1982. pp. 81–121.
- Pink T. Free will: A very short introduction. Oxford: Oxford University Press; 2004.
- Pritchard MS. Responsible engineering: The importance of character and imagination. Science and Engineering Ethics. 2001;7(3):391–402. doi: 10.1007/s11948-001-0061-3.
- Richardson B, Curwen P. Do free-market governments create crisis-ridden societies? Journal of Business Ethics. 1995;14(7):551–560. doi: 10.1007/BF00871983.
- Strawson PF. Freedom and resentment. In: Strawson PF, editor. Freedom and resentment and other essays. London: Methuen; 1974. pp. 1–25.
- Sunstein CR. Laws of fear. Cambridge: Cambridge University Press; 2005.
- Swierstra T, Jelsma J. Responsibility without moralism in techno-scientific design practice. Science, Technology & Human Values. 2006;31(3):309–332. doi: 10.1177/0162243905285844.
- Swierstra T, Rip A. Nano-ethics as NEST-ethics: Patterns of moral argumentation about new and emerging science and technology. NanoEthics. 2007;1(1):3–20. doi: 10.1007/s11569-007-0005-8.
- Thompson DF. Moral responsibility and public officials. American Political Science Review. 1980;74:905–916. doi: 10.2307/1954312.
- Van de Poel I. How should we do nanoethics? A network approach for discerning ethical issues in nanotechnology. NanoEthics. 2008;2(1):25–38. doi: 10.1007/s11569-008-0026-y.
- Van de Poel I, Van Gorp AC. The need for ethical reflection in engineering design—the relevance of type of design and design hierarchy. Science, Technology & Human Values. 2006;31(3):333–360. doi: 10.1177/0162243905285846.
- Van de Poel I, Verbeek PP. Ethics and engineering design. Science, Technology & Human Values. 2006;31(3):223–236. doi: 10.1177/0162243905285838.
- Van de Poel I, Zwart SD. Reflective equilibrium in R&D networks. Science, Technology & Human Values (forthcoming). doi: 10.1177/0162243909340272.
- Van den Hoven MJ. Moral responsibility, public office and information technology. In: Snellen ITM, Van de Donk WBHJ, editors. Public administration in an information age. A handbook. Amsterdam: IOS Press; 1998. pp. 97–111.
- Van Velsen JFC. Relativity, universality, and peaceful coexistence. Archiv für Rechts- und Sozialphilosophie. 2000;86(1):88–108.
- Vaughan D. The Challenger launch decision: Risky technology, culture, and deviance at NASA. Chicago: University of Chicago Press; 1996.
- Vedder A. Accountability of Internet access and service providers—strict liability entering ethics? Ethics and Information Technology. 2001;3(1):67–74. doi: 10.1023/A:1011492109277.
- Wallace RJ. Responsibility and the moral sentiments. Cambridge, MA: Harvard University Press; 1994.
- Walzer M. Interpretation and social criticism. Cambridge, MA: Harvard University Press; 1987.
- Walzer M. The company of critics: Social criticism and political commitment in the twentieth century. New York: Basic Books; 2002.
- Watson G, editor. Free will. Oxford: Oxford University Press; 1982.
- Watson G. Two faces of responsibility. Philosophical Topics. 1996;24:227–248.
- Widerker D, McKenna M, editors. Moral responsibility and alternative possibilities. Aldershot: Ashgate Publishing; 2002.
- Wolf S. The importance of free will. Mind. 1981;90:386–405. doi: 10.1093/mind/XC.359.386.
- Young IM. Responsibility and global justice: A social connection model. Social Philosophy and Policy. 2006;23(1):102–130. doi: 10.1017/S0265052506060043.
- Zandvoort H. Knowledge, risk, and liability. Analysis of a discussion continuing within science and technology. Cognitive Structures in Scientific Inquiry: Essays in Debate with Theo Kuipers. 2005;2(84):469–501.
- Zandvoort H. Globalisation, environmental harm and progress: The role of consensus and liability. Water Science and Technology. 2005;52(6):43–50.
- Zandvoort H. Risk zoning and risk decision making. International Journal of Risk Assessment and Management. 2008;8(1–2):3–18. doi: 10.1504/IJRAM.2008.016137.
- Zimmerman M. An essay on moral responsibility. Totowa, NJ: Rowman and Littlefield; 1988.
- Zweigert K, Kötz H. Introduction to comparative law. Oxford: Clarendon Press; 1998[1977].