Science and Engineering Ethics. 2010 Sep 24;18(1):35–48. doi: 10.1007/s11948-010-9233-3

Moral Responsibility, Technology, and Experiences of the Tragic: From Kierkegaard to Offshore Engineering

Mark Coeckelbergh
PMCID: PMC3275727  PMID: 20862561

Abstract

The standard response to engineering disasters like the Deepwater Horizon case is to ascribe full moral responsibility to individuals and to collectives treated as individuals. However, this approach is inappropriate since concrete action and experience in engineering contexts seldom meet the criteria of our traditional moral theories. Technological action is often distributed rather than individual or collective, we lack full control of the technology and its consequences, and we lack knowledge and are uncertain about these consequences. In this paper, I analyse these problems by employing Kierkegaardian notions of tragedy and moral responsibility in order to account for experiences of the tragic in technological action. I argue that ascription of responsibility in engineering contexts should be sensitive to personal experiences of lack of control, uncertainty, role conflicts, social dependence, and tragic choice. I conclude that this does not justify evading individual and corporate responsibility, but inspires practices of responsibility ascription that are less ‘harsh’ on those directly involved in technological action, that listen to their personal experiences, and that encourage them to gain more knowledge about what they are doing.

Keywords: Moral responsibility, Engineering, Technology, Tragedy, Kierkegaard, Deepwater Horizon

Introduction: The Problem of Responsibility Ascription in Engineering Contexts

On April 20, 2010 the drilling rig Deepwater Horizon exploded as a result of a wellhead blowout, killing 11 platform workers and causing an oil spill in the Gulf of Mexico. Causes of the accident suggested by the media include a failing blowout preventer, which was meant to stop the flow after a blowout, improper cementing of the well, and a lack of regulatory oversight. After the accident, several attempts to stop the leak failed. Almost 3 months later, on July 15, a cap was put on the well, and in August BP announced that a so-called ‘static kill’ procedure (pumping mud into the well) had been successful and stopped the flow of oil. The oil spill has damaged marine and wildlife habitats as well as fishing and other economic activities. The case evokes images of other offshore accidents such as Piper Alpha in 1988 (the explosion of a platform in the North Sea) and the Exxon Valdez oil spill in 1989 (a tanker that ran aground off the coast of Alaska).

In the wake of this disaster, moral responsibility has been mainly ascribed to individuals and collectives treated as individuals: BP (the oil company), Tony Hayward (BP), Barack Obama, the US Government (the regulator), and several (other) companies. This way of thinking about moral responsibility is understandable and at first sight seems entirely justified. However, this approach is also exemplary of some significant limitations of our traditional theories of moral responsibility.

The conditions for attributing moral responsibility prescribed by traditional theories make demands on agency, control, and knowledge that are seldom met in engineering and—more generally speaking—technological action. It is usually assumed that responsibility is individual, and, in line with Aristotle’s (1925) discussion in the Nicomachean Ethics (Book III, 1109b30–1111b5), a distinction is made between two negative conditions for ascribing moral responsibility: one should not be forced to do something (control condition) and one must not be ignorant of what one is doing (knowledge condition). These conditions are often problematic. For example, in the literature there have been discussions about how tenable the control condition is given the influence of character, circumstances, and consequences (Nagel 1979) and given that persons sometimes lack attention to crucial elements of the situation, exercise poor judgment, or lack moral insight and imagination (Sher 2006). However, in the case of technological action it appears even more difficult to meet the conditions. Let me give some reasons why the reality of technological action and experience, for example in engineering, is far removed from what the traditional approach and criteria assume and require.

First, both in philosophical analysis and in practice it is often assumed that responsibility is mainly or exclusively individual. In engineering ethics, philosophers focus on the application of moral principles to individual actions, for example by means of a code of ethics (Davis 1991), or on the virtues of individual engineers (Harris 2008). And in our legal systems individuals (and collectives like companies treated as individuals) are held legally responsible. But technological action is often distributed and collective rather than individual (Lenk and Maring 2001, p. 100). As Ger Wackers and I have shown for the case of the Snorre A gas blowout (a near disaster with an oil and gas production platform in the North Sea), responsibility should be understood as distributed between various actors at various levels and times (Coeckelbergh and Wackers 2007). Recently I suggested in a contribution to the Guardian that this is also true in the Deepwater Horizon case: responsible actors include the many companies involved, the financial actors, the regulators and politicians, and—last but not least—citizens and consumers who depend on oil and support the current regulatory frameworks.1 However, our traditional practices of responsibility ascription are ill equipped to deal with such a broad distribution of responsibility. It is easier to blame or prosecute only individuals directly involved and clearly defined and visible organisations and institutions like BP and the US Government.

Second, regardless of our (individual) intentions and (individual) capacity of self-control, we usually lack full control of the technology and its consequences. We may enjoy external, negative freedom in the sense that there is no-one who tells us what to do (we sometimes wish there were, since the freedom is hard to bear) and we may also have internal freedom (control over our desires). But even if we have no master and if we master ourselves, the major problem is that we cannot control the consequences of technological action; it escapes the boundaries of what we and others intend and can control. In cases where possibilities to control are very limited, we might decide not to develop or use the technology for that reason. However, if and when we use it (for instance because we already depend on it for our way of living), we want to be able to ascribe responsibility for technological action. For example, it is likely that in the Deepwater Horizon case most people who contributed to the disaster were not ‘forced’ to do what they did; yet the cumulative outcome of their actions (or the outcomes of failures to act in the right way) combined with circumstances they did not control resulted in the disaster. Moreover, it appears that as an oil consumer I have little control over the consequences of my consumption. As with food and (other) mass produced consumer goods, we often have no idea where the products come from, how they are produced, which risks and costs that way of production incurs, etc. Furthermore, once the blow-out accident happened, there was a general failure to control its consequences. Now failure to control is an instance of wrongdoing if one has the possibility to control. But what if this condition is only partly fulfilled, what if there is very little space for action? Does that imply that no-one is responsible? How meaningful is the control condition anyway with regard to technological action?

Third, as the examples show, the control condition depends on knowledge: we lack knowledge and are uncertain about the consequences of the technology. This uncertainty is not only due to unpredictability of the future as such, but also to the scale and complexity of the technological-social world in which we act and which we shape by our actions. In previous work I argued that in engineering contexts moral responsibility is ascribed under epistemic conditions of opacity: between the actions of an engineer and the eventual consequences of her actions lies a world of relationships, people, things, time, and space. This lack of epistemic transparency makes it difficult to define the nature and scope of technological action. For instance, we can foresee some potential consequences of technological action but not all potential consequences. In the Deepwater Horizon case, it is unclear whether people could have foreseen (1) that the blowout preventer would not function under these circumstances, (2) that initial attempts to stop the blowout would fail, and (3) all consequences of that failure and those actions. Moreover, in technological action it is hard to sharply distinguish between our own contributions and those of others, and between our actions and (bad) luck (Coeckelbergh 2010a). What happens (e.g. an accident) is the outcome of many actions and events—some of which cannot be controlled. This amounts to saying that in a very real sense ‘we don’t know what we’re doing’ when it comes to technological action and engineering practice. Of course as individual users, designers, etc. we know what we do in the sense that we know our tasks, roles, and direct actions. But how these contribute to the larger technological action and engineering practice is not entirely clear to any single individual. Again, it seems that if we use the traditional criteria, it is difficult to ascribe moral responsibility. In the Gulf oil spill case, for example, most citizens who voted for politicians who maintained a deficient regulatory framework seemed unaware of the risks they created for the environment.

In the remainder of this paper, I analyse these problems concerning responsibility ascription by using the concept of tragedy. Responding to the philosophical tradition, I will first distinguish between different meanings of tragedy and its relation to technology. Then I will construct a Kierkegaardian notion of moral responsibility that accounts for experiences of the tragic in technological culture and engineering contexts. Thus, I do not only introduce the idea of tragedy in thinking about engineering, but I also give it a new interpretation. In this way I hope to contribute to exploring new ways of ascribing responsibility in engineering contexts and hence to avoiding a fatalist or defeatist response to the problems with meeting the conditions of responsibility. In the course of my argument I provide examples relating to offshore engineering, in particular the Deepwater Horizon case.

Inspired by Kierkegaard I first construct a notion of tragic action and responsibility that does not promote fatalism or passivity. Instead of resolving the tension between freedom and fate, it identifies this tension as the heart of tragic experience.

Then I apply this concept to the problem of responsibility ascription in technological, engineering contexts. I argue that such ascription should be sensitive to personal experiences of helplessness when lacking full control, being overwhelmed by unexpected events, uncertainty about the future, inability to resolve conflicts between responsibilities related to different roles and social relations, feeling highly dependent on what others do, being part of a story one can neither control nor predict, and having to choose when no option appears ‘right’.

Although my discussion and examples will mainly concern backward-looking responsibility (responsibility ascription after something bad has happened, retrospective responsibility), I will also indicate how my reflections can be useful for forward-looking responsibility (responsibility ascription in order to prevent bad things from happening, prospective responsibility).2

I end with conclusions about how to ascribe responsibility in technology and engineering contexts given the discussed relations between moral responsibility, knowledge, and tragedy.

A Kierkegaardian View of Tragic Action and Responsibility

Before I develop a Kierkegaardian view of tragedy and responsibility, let me first say more about tragedy and its relation to technology. This is important for my argument since I will construct a specific view of tragedy that is distinct from both common usage and usage in an influential body of (philosophical) literature.

Tragedy and Technology

In daily speech ‘tragedy’ usually means ‘terrible’, ‘awful’, or ‘catastrophic’. For instance, the accident with Deepwater Horizon can be called ‘tragic’ in the sense that people died and were injured and that the environment was damaged. In this paper, however, I use the term ‘tragic’ in a sense that refers back to ancient Greek tragedy and its reception in the history of philosophy.

There is a tradition in philosophy which understands modern culture as essentially untragic. It is claimed that in our obsession with rationality and control we have lost our sense of fate. Steiner thought that in modern times we succeeded in destroying our sense of tragedy (Steiner 1961). Technology, it appears, is the very opposite of a tragic sense: it is a means to tame fate, as de Mul phrased it (de Mul 2006). Steiner stands in a tradition of thought that turned to ancient Greek tragedy as a remedy for modern non-tragic culture and technology. Nietzsche and Heidegger also argued that we need to recover our sense of tragedy as a cure for our obsession with control and mastery of nature—our obsession with technology.3

If this were true, then by definition oil production, as a technology, could easily be interpreted as part of a ‘sick’ technological culture that exploits nature and we should ‘return to nature’ and to the tragic understanding of life. In this view, accidents such as Deepwater Horizon could be interpreted as a kind of divine punishment for what the ancient Greeks called hybris: technology displays arrogance and lack of humility. However, I do not adopt this conception of tragedy and its relation to technology for at least the following reasons.

First, these views are too Romantic in assuming that we can make a strict distinction between, on the one hand, ‘nature’ that functions ‘on its own terms’4 and, on the other hand, human culture and human experience. Instead, we have always transformed nature, and in this sense Greek culture was as much ‘technological’ as it was tragic. Modern oil production is different from ancient means of energy production, of course, but in a sense nature has always been used as an energy resource. We can (and should) discuss what kind of technology and energy we want, of course, but we cannot ‘switch off’ technology. Technology is an important aspect of what we do and what we are: we have always been technological beings and to stop being technological would be to stop being human altogether.

Second, it is not clear that today we have lost our sense of tragedy. Perhaps it has not been promoted in modern culture, but as de Mul has argued, technology can give us a sense of the tragic as well (de Mul 2006).5 In his work de Mul uses the stories of Prometheus (the tragedy Prometheus Unbound) and Frankenstein’s monster. Technology appears to us as something we created but which then escapes our (full) control. For example, the Deepwater Horizon case is not only tragic in the common sense noted above, but is also tragic in a deeper way since the disaster and the failing human efforts to cope with it reveal to us how little control we have over the consequences of our technological actions. Disasters such as this show how vulnerable we and our technological systems are, and how dependent we are on our technologies and our natural environment.

Third, and most important for my following argument, my conception of tragedy rejects Nietzsche’s, Heidegger’s, and de Mul’s fatalistic interpretation of tragedy. To (re)discover the tragic in technology does not imply that we have to set up technology as an autonomous force, a new god or demon which keeps us in the chains of fate and which we have to accept. Technological practice already includes the struggle and attempts to escape fate at two ‘moments’ or levels. As de Mul has argued, technology is itself a means to tame fate (we try to gain mastery of nature). But in addition, our attempts to perceive, assess, and reduce the risks associated with that technology are secondary attempts at mastery: we try to gain mastery of the technology (not only of nature).6 Thus, when I call experience in technological culture and engineering ‘tragic’ I do not refer to (the acceptance of) fate but to the dynamics between, on the one hand, the experience of fate, luck, and contingency and, on the other hand, how we respond to these events and experiences as beings that are free and in control to some extent. For example, the responses to the Deepwater Horizon case (efforts to gain control over the well, efforts to contain the oil spill, political actions, etc.) and the struggles and failures related to them are as much part of the ‘tragedy’ as the more ‘fatalistic’ (experiences of the) initial events and their direct consequences. Below I will construct a view of the tragic that does not resolve the tension between (experiences of) freedom and (experiences of) fate but instead identifies this tension as the core of tragedy and tragic experience.

Finally, in contrast to treating technology as one thing (as Heidegger did: technology as an attitude or way of seeing the world), we should break up the terms ‘technology’ and ‘technological culture’. We should not only say something about the tragic character of technological culture as a whole, but also explore tragic experiences in concrete technological and engineering practices such as oil production. In the following pages I sketch a framework that can guide this exploration and draw conclusions for thinking about moral responsibility in engineering and other technological practices.

A Kierkegaardian View of Tragedy and Moral Responsibility

Kierkegaard suggests an interesting interpretation of tragedy and moral responsibility which avoids the one-sided fatalistic interpretation and allows us to attend to experiences of the tragic in engineering.

In ‘The Ancient Tragical Motif as Reflected in the Modern’ (1843) Kierkegaard contrasts modern tragedy with ancient tragedy. He argues that the ‘action in Greek tragedy is intermediate between activity and passivity (action and suffering)’ (Kierkegaard 1843, p. 117). Although the characters in ancient tragedy rested ‘in the substantial categories, of state, family, and destiny’—this is what Kierkegaard calls the ‘fatalistic’ element in Greek tragedy (Kierkegaard 1843, p. 116)—there was also activity. The heroes were not passive. Kierkegaard means that we do not only suffer from fate, but we also contribute to what happens (to us). We always have some control. If, on the other hand, we had full control over everything, our lives would also lose their tragic character. There would be no struggle. Thus, it appears that human, tragic action occurs ‘in-between’ these two extremes, as a mixture of suffering and activity. If this is true, to recognise the tragic is not to accept fate but to recognise that we have to live within the tension between freedom and fate, activity and passivity, control and lack of control.

Is this deplorable? My point here is not that tragedy is ‘bad’ (deplorable) or ‘good’ (valuable) in the way an event or accident is bad or good. It is rather a basic feature of the human condition which structures our possibilities for action: we can only act in the space between freedom and fate. Understood in this specific way, tragedy is not the word we normally use to express concrete experiences of limitations to human control (e.g. when an accident happens in which we had a hand), but rather the condition of possibility for such experiences. It is because human action is often deeply tragic (in the specific sense I elaborated above) that we can be helpless, sad, and so on after particular events for which we were only partly responsible—or indeed experience joy and happiness when, partly beyond our control, something good befalls us. Of course one may also deplore tragedy as a feature of human action and of the human condition (being in-between, on the one hand, all-powerful gods and, on the other hand, things that are only acted upon). But then we should remind ourselves that good and happiness also spring from this condition.7

This notion of tragic action seems applicable to technological action in engineering. On the one hand, engineers, managers, and others involved in technological practices like oil production have control over their actions and the consequences of their actions. However, if Kierkegaard is right then it is pointless to strive for full, absolute control, since it is in the nature of tragic human action that we can never have that; there will always be events and things that we cannot control and there will always be struggle and suffering as a result of that lack of control.

With regard to the control condition, then, my position is not that the control condition is not fulfilled (and that therefore some people might be ‘excused’), but rather that there is something wrong with the criterion itself if it assumes the possibility of full control, since such a condition cannot be fulfilled with regard to human technological action given the tragic nature of that action.

Does this mean that engineers, managers, consumers, and so on, are not responsible at all? Once again it seems that responsibility ascription becomes highly problematic since full control is lacking. But there is a way to conceive of responsibility that accounts for the ‘mixed’ nature of tragic action. Kierkegaard draws the following consequences for the question of responsibility (he uses the term ‘guilt’):

just as the action in Greek tragedy is intermediate between activity and passivity (action and suffering), so is also the hero’s guilt, and therein lies the tragic collision. […] The tragedy lies between these two extremes. If the individual is entirely without guilt, then the tragic interest is nullified […]; if, on the other hand, he is absolutely guilty, he can no longer interest us tragically. (Kierkegaard 1843, p. 117)

For Kierkegaard, modern tragedy misunderstands the tragic by making the hero absolutely responsible, making him ‘accountable for everything’ (Kierkegaard 1843, p. 117). Instead, he says that the tragic has in it ‘an infinite gentleness’ as opposed to the ethical, which is ‘strict and harsh’ (Kierkegaard 1843, p. 118). The hero is not responsible for everything since not everything is in his power.

We can learn from this interpretation of the tragic for revising our ways of ascribing responsibility in engineering contexts. In so far as technological, engineering action is ‘tragic’ action (intermediate between activity and passivity), it is appropriate to ascribe responsibility in a more ‘gentle’ way since responsibility is not absolute but gradual. This is an answer to the problem with the control condition introduced at the beginning of this article: technological action is not a matter of having either full control or no control. We usually have some control. Similarly, knowledge is a matter of degree. Therefore, where technological action is concerned the question is not whether or not a person is responsible, but to what extent a person is responsible. For example, in cases such as Deepwater Horizon, the main actors (engineers, managers, politicians, consumers) are not independent gods who enjoy full control over their creation but engage in tragic, all too human action in which there is also passivity vis-à-vis the many natural, social, and other forces and influences that co-shape and pre-shape what they do. They work within a natural-social order that is already there and which functions in the same way as the ‘substantial’ categories Kierkegaard identified in ancient Greece. They have already been assigned roles and find themselves in a network of relations—with humans, with systems, with nature. At the same time, to recognise this does not imply that they are condemned to passivity and that they have no responsibility whatsoever for their actions. For example, consumers of oil are, of course, neither absolutely nor directly responsible for oil production disasters, but they might carry a small degree of moral responsibility for disasters like Deepwater Horizon to the extent that they benefit from the oil (production continues if demand continues) and since nowadays they have—in principle—the possibility to inform themselves of the consequences of their actions.

When an accident happens, then, responsible technological action has the two-fold aspect of fatalism and activity. On the one hand, the actors involved should cope with technological risk by means of activity: when something goes wrong, try to do something about it. For example, actors try to cope with a crisis on an oil production platform. This corresponds to the engineering attitude to problems: try to ‘fix’ the problem. They should also try to inform themselves of the known risks related to their actions and take action to avoid disasters. In the Deepwater Horizon case, for example, many risks were known but it appears that insufficient action was taken to avoid the problems that led to the disaster. On the other hand, the actors also have to accept that they cannot fully control or foresee the consequences of their actions and therefore act prudentially in the light of that knowledge (the knowledge ‘that we know nothing’—or at least much less than we would want to). They should also accept and take into account dependence on others and on the natural environment—accept this as their problem, as individuals and as teams or organisations.

Given this ‘tragic’ character of technological action those who respond to engineering disasters should not apply an ethics that is ‘strict and harsh’: they should not ascribe full individual responsibility, but consider the distributed and shared character of technological action and take into account the conditions of relative rather than absolute control and knowledge.

However, we should not only consider the question of responsibility after an accident has already happened (backward-looking responsibility); we should also take measures to create more responsible technological action in the future (forward-looking responsibility). This requires the construction of more epistemic transparency: more knowledge, on the part of the actors involved, of what they are doing (in terms of the scope of action). Engineering and technology management (including risk management) are not individual but collective enterprises that stand in need of collective, pooled intelligence in order to contribute to responsible technological action. But recognising the tragic character of engineering also requires acknowledging that even then accidents might happen; there is no such thing as full control in the case of technological action.

Let me now further clarify and develop my argument by considering the personal, experiential dimension of engineering and technological action.

Experiences of the Tragic In Engineering and their Implications for Ascribing Moral Responsibility

Most contemporary accounts of technological and engineering action come in the form of ‘objective’ reports. Think about accident investigation reports, risk assessment studies, and other narratives that aim at an ‘objective’ rendering of ‘the facts’. For example, in response to the Deepwater Horizon disaster several investigation reports are being written. This is very helpful, but the kind of knowledge constructed by these reports lacks attention to the tragic aspect as outlined above. Therefore, I propose to complement these narratives with narratives that shift the focus from the object to the subject. Let me provide some examples of experiences of the tragic in contemporary technological practices and discuss the implications for responsibility. The subject of the experience can be the user, the designer, or the policy maker. What matters here is not ‘the facts’ but the experiences of the individuals—in particular experiences of and coping with the tragic.

Helplessness

Sometimes we cannot do much when things go wrong. Our computer crashes and we feel helpless. We try to implement a policy but the technology appears to become an autonomous force and seeks its own ways out of the framework we designed. We design something but it is used for something entirely different from what we intended. In these cases, to ascribe full responsibility on the basis of the control condition would be inadequate, since our possibilities to act are severely limited. At most, we can hold someone responsible for what he or she has done in the past in order to try to prevent the bad thing from happening. But even then it is not always clear quite how much one should have ‘tried’ in order to have acted responsibly, given that one had incomplete knowledge. In the Deepwater Horizon case, for example, BP has been accused of not having taken enough measures to prevent a blowout. Whether or not this is true, such explanations and accusations give us little insight into the experiences of helplessness and personal struggles of the people involved.

The Unexpected and Uncertainty

One form in which the problem of incomplete knowledge appears is in the context of prediction. Our technological-bureaucratic systems try to predict the future. We try to analyse risks and try to predict technological developments. But our epistemological basis for making such predictions is always shaky. Technological-social systems are extremely complex; we cannot control for everything that can happen in human-technology interaction at individual and system levels. Uncertainty is a fundamental feature of such systems. There will always be blind spots. (For example, offshore oil production involves a complex socio-technological system that is highly vulnerable.) Therefore, to hold someone absolutely responsible for something he or she could not have known is not fair. Again, at most one should hold someone responsible for what he or she has done to try to prevent the unexpected (bad thing) from happening on the basis of the knowledge that individual had or could have acquired, indeed should have acquired. This ‘should have acquired’ criterion8 is a difficult one and needs interpretation in particular cases. For example, someone might argue that it was not her task to gather that information. This leads me to the next problem.

Conflicting Roles

Sometimes our roles conflict when we have to act and take decisions. As a manager, we might prioritize profit, market share and organisational expansion in a particular case, whereas as an engineer we might prioritize safety in that same case (although this is not necessarily the case). As an internet user, we might want to download music for free, whereas as a musician we might want some financial reward if others download our music. Who is responsible in these cases? Me-as-an-engineer or me-as-a-manager? Me-as-a-parent or me-as-a-politician? Conflicting roles are not so much due to conflicting intentions but to conflicting expectations. Often it is impossible to meet all demands, as Antigone already experienced in Sophocles’s play: she had to choose between obeying the gods (religious demands) and obeying Creon (demands of the law, the state). How can I be held absolutely responsible for not doing A given that there was a strong pressure on me to do B? At most, I am partly responsible. For example, if engineers feel under pressure not to install an expensive safety measure (e.g. an expensive blowout preventer) in order to save costs, they are partly responsible but responsibility is also shared by those who create, execute, and benefit from business models that prioritize cost reduction and profit maximisation at the expense of safety.

Dependency and Collective Action

Technological action is highly dependent on others. It is relational: what is done depends on what happens at other nodes in the socio-technological network. Often that collective and relational aspect is made explicit, but even if this is not the case, what we do always depends on what others do. Technological action is also distributed and often collective. Therefore, to hold one individual responsible for a technological action that goes wrong is unfair. Responsibility should also be understood as relational, distributed, and collective. Once more this leaves no room for a concept of absolute and individual responsibility.

The relational aspect means not only that responsibility is shared and distributed (which answers the question who is responsible?) but also that those who are responsible are answerable to particular audiences and communities. For example, in the Deepwater Horizon case BP should be answerable to the families of those who died or were injured, and to the fishermen along the coast whose jobs are endangered (it has been reported that one of them committed suicide). The US government has to be answerable to citizens who question the lack of regulation. Such a relational understanding of responsibility9 renders it much less abstract than it is sometimes made in the philosophical literature.

Lack of Full Control

Philosophical models of responsibility are usually of the arm-lifting type: I decide whether or not I lift my arm, therefore I am responsible for lifting it. Conditions for responsibility are of the negative freedom type: no one interferes with me when I lift my arm (‘external’ negative freedom), there is no demon telling me what to do and no mad scientist manipulating my brain (‘internal’ negative freedom), and there is a lack of conflict between desires (volitional harmony). However, the problem with technological action is that all these conditions can be fulfilled but that still I lack control over the action since it is not exclusively ‘my’ action, and I cannot predict the consequences of the action since it depends on others and on what goes on elsewhere. In this sense, technological action has no clearly limited scope: there is no fixed limit to the agency involved and there is no fixed limit to the action since the scope of its consequences cannot be controlled. For example, offshore energy production is ‘done’ by many people, related to many sociotechnological systems10 like oil distribution networks and financial structures, and—as the Deepwater Horizon case shows—has consequences that reach beyond the oil sector. The result is that responsibility can no longer be kept within the boundaries of the (bodily) movements of the individual agent or organisation, but spills over into other ‘compartments’. If engineers, managers, and others involved in a particular technological action fail to acknowledge the scope of that action, for example by saying and thinking that ‘this is not my problem’, then they act irresponsibly. Unless, of course, they did not know what they were doing. As noted previously, it becomes increasingly difficult to know what we are doing, although that does not relieve us from the responsibility to acquire knowledge.

Choice When No Option is ‘Right’

Even if one knew the consequences of one’s action, if technological action were an individual matter, and if one had enough control to allow one to make a decision, one could find oneself in the situation of having to choose between two alternatives which are both ‘wrong’. This is a ‘tragic’ situation in the sense of having to make a choice between two or more ‘bads’. Consider again Antigone’s choice situation. Technology often creates such choice situations. For example, today medical technology often creates situations in which one has to decide whether to prolong life or to let someone die (end of life decisions) or situations in which one has to decide whether or not to give life to someone (beginning of life decisions). Similar life-and-death decisions may occur in technological disasters. Consider again the example of oil platforms. In case of emergency a platform manager may have to make a decision about whether or not to evacuate the platform. However, such situations are also tragic in the specific Kierkegaardian sense proposed in this paper, since they reveal technological action as situated between activity and passivity. The situation creates a kind of ‘destiny’ in the sense that one’s options are pre-configured and in the sense that one has to choose. Yet at the same time there is room for freedom since one still has a choice between options (the consequences of which one can try to imagine) and since one can expand the range of options by using one’s moral imagination. If there were no choice at all and no possibility to expand the range of options, the action would cease to be tragic since the tension between activity and passivity would be lost. However, when questions regarding responsibility are asked, such no-choice situations are almost excluded by definition. The question of moral responsibility can only arise if the situation is sufficiently tragic in the sense explained above.

Note that whether or not one is in full control is not (just) a matter of ‘objective’ facts or state of affairs, but also depends on one’s personal experiences. This makes the problem of responsibility in a technological culture even more complex and requires us to develop new and innovative moral epistemologies.

Conclusion

Recognizing the tragic dimension in these technological actions and experiences does not justify evading moral responsibility; it does not promote fatalism or passivity. I defined the tragic as being about the tension between fate (and passivity) and trying to do something about it (activity). I conclude that this way of understanding technological experience and action inspires a concept of moral responsibility that is less ‘harsh’ (Kierkegaard) and works in the real world. Given the nature of technological experience and action in a complex sociotechnological world, the traditional conception of responsibility modelled on the legal-religious culture of the past (guilty or not guilty)11 is no longer adequate. The control condition, which is in line with this culture, is deeply problematic. Usually it is inappropriate to say that someone is either absolutely responsible or absolutely not responsible. Responsibility with regard to technological experience and action is distributed and is a matter of degree. Moreover, understanding and ascribing that kind of responsibility depends on understanding the tragic and personal character of the experiences and actions concerned. Exercising that responsibility also depends on understanding what one is doing, understanding one’s world. This sets the task of (further) developing better moral epistemologies than the ones we inherited from the past, which may lead to more adequate backward-looking and forward-looking responsibility ascriptions. Finally, this discussion yields the insight that the practice of moral responsibility ascription is not threatened by acknowledging the tragic character of human action but instead presupposes it.

Acknowledgment

I would like to thank the editors and anonymous reviewers for their comments on previous drafts of this paper.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Footnotes

2. For a discussion of the distinction in relation to engineering see for instance Nihlén Fahlquist (2006).

3. Nietzsche’s antinomy between the Apollonian and the Dionysian in The Birth of Tragedy (1872) and Heidegger’s reading of Hölderlin, his notion of Gelassenheit, and his thoughts in ‘The Question Concerning Technology’ (1953) can be interpreted as such recovery operations. They were based on the assumption that technology and modernity are radically untragic. Both thinkers promote (their interpretation of) ancient tragedy as a remedy for a modern culture obsessed with control and mastery of nature.

4. Heidegger argued that we use nature as a standing-reserve, something to be used for our purposes. Pattison has summarized Heidegger’s view of technology as follows: ‘In the interaction with nature that occurs in technology, nature is no longer allowed to function or to become manifest on its own terms, but is transformed into a quantifiable resource, into energy that can be abstracted from and stored and disposed of independently of its originating context.’ (Pattison 2000, p. 54).

5. However, de Mul’s thinking has a recovery dimension to it: he writes about ‘the rebirth of tragedy from the spirit of technology’. This assumes that tragedy has already died and that contemporary technology revives it. I insist that the tragic experience has never died in the first place. Not only contemporary ‘post-modern’ technological culture is tragic; modern technological culture too has given rise to tragic experiences. (A Heideggerian could of course reply that the reason why we fail to perceive the tragic in technology is that we are in the ban of technology as a way of seeing the world. But I believe this is not true: we do experience the tragic in technology but at most we do not always have access to adequate concepts to express our experience).

6. These two efforts cannot be completely separated; the consequences of technology and the consequences of our attempts to regain mastery of it are intertwined—as are ‘nature’ and ‘culture’.

7. See also Nussbaum’s argument in The Fragility of Goodness (1986): vulnerability is not only a source of peril but also of good. Note also that in many European languages the word for happiness is related to what happens to us, to luck. For example, the English word ‘happiness’ stems from ‘hap’ (chance, fortune) and the Dutch word ‘geluk’ means happiness but also chance, being lucky.

8. Note that this criterion shows that the line between description and normativity fades when it comes to the epistemic condition for responsibility.

9. See for example the work of Duff and others on legal responsibility (Duff 2007).

10. The term is used in social studies of science and technology and refers to the strong connection between the human, social aspect and the technical aspect of technological action.

11. For a critique of legal responsibility see Coeckelbergh (2010b).

References

  1. Aristotle. (1925). Nicomachean Ethics (W. D. Ross, Trans.). Oxford: Oxford University Press.
  2. Coeckelbergh, M. (2010a). Imagining worlds: Responsible engineering under conditions of epistemic opacity. In I. Poel & D. Goldberg (Eds.), Philosophy and engineering: An emerging agenda. Dordrecht: Springer.
  3. Coeckelbergh, M. (2010b). Criminals or patients? Towards a tragic conception of moral and legal responsibility. Criminal Law and Philosophy, 4(2), 233–244. doi:10.1007/s11572-010-9093-6.
  4. Coeckelbergh, M., & Wackers, G. (2007). Imagination, distributed responsibility, and vulnerability: The case of Snorre A. Science and Engineering Ethics, 13(2), 235–248. doi:10.1007/s11948-007-9008-7.
  5. Davis, M. (1991). Thinking like an engineer: The place of a code of ethics in the practice of a profession. Philosophy & Public Affairs, 20(2), 150–167.
  6. de Mul, J. (2006). De domesticatie van het noodlot: De wedergeboorte van de tragedie uit de geest van de technologie [The domestication of fate: The rebirth of tragedy from the spirit of technology]. Kampen: Klement.
  7. Duff, A. (2007). Answering for crime: Responsibility and liability in the criminal law. Oxford: Hart Publishing.
  8. Harris, C. E. (2008). The good engineer: Giving virtue its due in engineering ethics. Science and Engineering Ethics, 14(2), 153–164. doi:10.1007/s11948-008-9068-3.
  9. Heidegger, M. (1953). Die Frage nach der Technik [‘The question concerning technology’ in The question concerning technology and other essays] (W. Lovitt, Trans.). New York: Harper and Row, 1977.
  10. Kierkegaard, S. (1843). The ancient tragical motif as reflected in the modern. In Either/or: A fragment of life (Vol. 1) (D. F. Swenson & L. M. Swenson, Trans.). Princeton: Princeton University Press, 1944.
  11. Lenk, H., & Maring, M. (2001). Responsibility and technology. In A. E. Auhagen & H.-W. Bierhoff (Eds.), Responsibility: The many faces of a social phenomenon. London: Routledge.
  12. Nagel, T. (1979). Moral luck. In Mortal questions. New York: Cambridge University Press.
  13. Nietzsche, F. (1872). Geburt der Tragödie aus dem Geiste der Musik [The birth of tragedy from the spirit of music] (S. Whiteside, Trans.). London: Penguin Books, 1993.
  14. Nihlén Fahlquist, J. (2006). Responsibility ascriptions and vision zero. Accident Analysis and Prevention, 38(6), 1113–1118. doi:10.1016/j.aap.2006.04.020.
  15. Nussbaum, M. C. (1986). The fragility of goodness: Luck and ethics in Greek tragedy and philosophy. New York: Cambridge University Press.
  16. Pattison, G. (2000). The later Heidegger. London: Routledge.
  17. Sher, G. (2006). Out of control. Ethics, 116(2), 285–301. doi:10.1086/498464.
  18. Steiner, G. (1961). The death of tragedy. London: Faber and Faber.
