Abstract
Civil liability is traditionally understood as indirect market regulation, since the risk of incurring liability for damages gives incentives to invest in safety. Such an approach, however, is inappropriate in the markets for artificial intelligence devices. In fact, according to the current paradigm of civil liability, compensation is allowed only to the extent that “someone” is identified as a debtor. However, in many cases it would not be useful to impose the obligation to pay such compensation on producers and programmers: algorithms, in fact, can “behave” quite independently of the instructions initially provided by programmers, so that they may err despite no flaw in design or implementation. Therefore, the application of “traditional” civil liability to AI may represent a disincentive to new technologies based on artificial intelligence. This is why I think artificial intelligence requires that the law evolve, on this matter, from an issue of civil liability into one of financial management of losses. No-fault redress schemes could be an interesting and worthy regulatory strategy to enable this evolution. Of course, such schemes should apply only in cases where there is no evidence that producers and programmers have acted under conditions of negligence, imprudence or unskillfulness and their activity is adequately compliant with scientifically validated standards.
Keywords: Civil liability, Tort law, No-fault, Self-driving cars, Artificial intelligence, Robots
Introductory remarks
Civil liability, in its traditional paradigm based on “deterrence”, can be understood as indirect market regulation, since the risk of incurring liability for damages provides an incentive to invest in safety.1 The claim I raise in this article is that such a paradigm may prove inappropriate in the markets for artificial intelligence devices, which are likely to play a very relevant role in several industries, for example with regard to robots in all their uses (from health care to hospitality etc.), self-driving cars, artificial intelligence (hereinafter: AI) services etc.
Indeed, according to the current paradigm of civil liability based on deterrence, compensation is allowed only to the extent that “someone” is identified as a debtor (either through fault or under a strict liability rule). However, it would not be useful to impose the obligation to pay such compensation on producers and programmers: robots and AI algorithms, in fact, could “behave” very independently of the instructions initially provided.
As the way AI operates can be unpredictable, producing negative consequences despite no flaw in design or implementation, the use of civil liability as a deterrent mechanism can be a disincentive to new technologies based on artificial intelligence, to the extent that it can impose charges on producers and/or programmers even if the damage derives from a perfectly “correct” functioning of the algorithms. There would be no “deterrence”, therefore, because the damage would result from a situation in which there is no “fault” to blame or prevent.
Therefore, I think AI requires that the law on this matter evolve from an issue of civil liability into one of financial management of losses. My claim is not made with reference to a specific legal system but as a point of general theory of civil liability, even if, for specific purposes, legislation and case-law belonging to different legal systems are referred to in this research. This reform appears very relevant, since one can imagine a sharp evolution, in the coming years, toward a much higher use of artificial intelligence and robotisation, which makes it important and urgent that civil liability regimes adapt to favour this evolution rather than hinder or prevent it. Some proposals in this regard are provided in the final part of this article.
A final introductory remark: as indicated below, in § 3, AI is used in many different sectors (health care, aviation, finance etc.) and carries out very diverse activities (monitoring, data mining, forecasting, market analysis and trading, image recognition, designing treatment plans, even performing physical activities etc.). Depending on the context, AI algorithms could damage one’s revenues, assets, reputation or even physical integrity (through their use, for example, in surgery or in self-driving cars). Of course, different uses in diverse contexts may require different rules on compensation. My proposal in favour of a “no-fault” system is to be understood as generally applicable to all cases in which AI carries out activities with a certain degree of autonomy and, therefore, this article focuses on outlining the general scope of the “no-fault” paradigm. However, in order to provide a clearer reference to the factual context referred to, one can understand my proposal as applicable, in particular, to AI algorithms characterized by a high degree of autonomy, execution of physical tasks and impact on human physical integrity, as is the case with self-driving cars.2
The “traditional” paradigm of civil liability based on deterrence
The current paradigm of civil liability laws is primarily based on the assumption that civil liability plays, and should play, an important role in deterrence. It is believed that any increase in the liability of producers and suppliers of goods and services will increase investments in safety to avoid incurring liability. Therefore, it is commonly believed that the stricter the civil liability rules on producers and other professionals, the higher the overall level of safety within the system (Calabresi 1970; Cooter and Ulen 2008; Viscusi and Hersh 2013).
The idea that civil liability must have a deterrent function presupposes that the obligation to pay damages is attributed to the person whom the legal system identifies as the addressee of such deterrence: the person, in other words, whose investment in safety is to be fostered. This paradigm has remained substantially constant over time and has developed along two main strategies for allocating the obligation to pay damages: liability for fault and strict liability.
The first and most important criterion for attributing the obligation to pay compensation for damages is that of fault. The idea that damages require someone’s “fault” has been deeply rooted in legal thought since ancient times: it emerged in Justinian law, one thousand five hundred years ago, and was further consolidated in the jus commune and canon law (Mazeaud and Tunc 1957).
This idea, which until recently inspired the entire system of civil liability, was eloquently called, in German literature, the “dogma of fault” (Verschuldensdogma).3 Roughly speaking, one may say that all modern legal systems base their civil liability regimes mainly on fault (Bussani and Sebok 2015).
The aforementioned paradigmatic centrality of “deterrence” has evolved, but has remained in place, even as major social, political and economic changes directed legal thought toward a growing quest for solidarity in all western legal systems. This happened regardless of their civil-law or common-law basic structure,4 so that some Authors understand such a change as an example of the case where the common law and civil law of torts “reach similar results because they must address and resolve the same basic fact patterns”.5
The quest for solidarity, strongly driven by the concrete consequences and upheavals deriving from the industrial revolution, has led legislators to consider it unfair that the damage following certain (intrinsically risky) activities should be borne by consumers and other end-users of goods and services unless a “fault” of producers or other professionals could be proven in court.
It was, therefore, considered that professional producers of goods and services should bear the risk of their activities regardless of their “fault”. This liability reallocation strategy, which evolved throughout the twentieth century, was deemed efficient and ethically grounded to the extent that such professional producers were (and are) in a better position to assess the risk of their businesses, to spread the cost of accidents and to set up adequate prevention policies.6
This evolution has led, among other things, to a significant variation in civil liability legislation (within the same paradigm based on deterrence, I believe), which led to the adoption of loss-spreading strategies in civil liability laws (Comporti 1965; from an economic point of view, Cooter 1991). This new allocation strategy ignored the concept of “fault” and regarded the exercise of risky activities as an autonomous criterion for imposing liability for damages.
From a legal point of view, this evolution has expanded the liability imposed on professional producers to include cases in which the latter could not prove that the damage was not attributable to them, cases where there was scientific uncertainty as to the cause of the harmful effects, or even cases where such cause was unknown (Montinaro 2012; in an economic analysis of law perspective see Faure et al. 2016). This development has been pursued through similar techniques in all Western legal systems, mainly the reversal of the burden of proof, the imposition of strict liability on producers and other professionals, the development of the precautionary principle in many fields of application etc.7
Legal systems moved even further in the direction of reallocating liability for damages through the adoption of different loss-spreading techniques and strategies; this was the case, for example, of mandatory insurance, which was imposed on producers and professionals of specific goods and services in different jurisdictions.8
The emergence of strict liability represented a mere incremental advancement of the same traditional paradigm of civil liability, based on “deterrence”. In fact, the developments just summarised have essentially been limited to reallocating the “cost of accidents” from customers and end-users to producers and professionals within the same conceptual and legal framework already in place, providing, in some cases, for the shift of the financial burden of compensation onto insurance companies.
The concept of “fault” has been conceptually replaced, in some cases, by that of strict liability, simply to increase deterrence even in cases where fault could not be positively assessed in court, with the aim of inducing producers and other professionals to increase investments in safety correspondingly (Savatier 1945; Comporti 1965). Legislation, however, appeared to keep considering civil liability also for its deterrent potential.
Such an approach to the issue at stake is shown, for example, in the “Principles of European Tort Law” (PETL) developed by the European Group on Tort Law,9 especially as regards the connection of compensation to liability to compensate damages [art. 1:101(1)], which invariably depends solely on fault or “strict liability” [Title III]. The same approach seems to be supported by scholars, and even sophisticated studies at the supranational level have considered, and continue to consider, civil liability as carrying out the central function of deterrence together with that of compensation (OECD 2006).
Artificial intelligence, its applications and its peculiar characteristics
It should be noted that the paradigm of civil liability based on deterrence has proven to be reliable and appropriate in several cases. In many cases, increased liability resulted in an incentive for producers and other professionals to invest in safer products and services. This happened, for example, with reference to general consumer legislation enacted, among many others, through Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.10
This paradigm, however, has proved inappropriate in other cases,11 as appears to be so with regard to the so-called artificial intelligence revolution. Some preliminary considerations on AI and its applications are appropriate before moving on to the legal analysis, in order to take note that artificial intelligence has countless applications in society today. Many of them, in fact, are very common (Kaplan and Haenlein 2019; Solaiman 2017) and are present in every sector (Kurzweil 2005).
For example, in agriculture, AI algorithms increase the efficiency of farming by monitoring crops and soil and using the information collected in order to predict, among other things, the time required for a crop to be ready for harvest (Faggella 2020a, b). In finance, AI allows for huge data extraction and market analysis far beyond any human capacity (Costantino and Coletti 2008) and makes millions of daily trades possible without any human intervention (the so-called High-frequency Trading) together with calculation of asset allocation (portfolio management) (Faggella 2020a, b). It also allows the assignment of credit scores aimed at assessing the risk of consumer default (Asatryan 2017).
If AI is used for programming robots, it can perform physical activities. AI-programmed robots are quite common in many industries and are used to perform jobs that can be dangerous for humans. If sensors are used, robots can even collect information and perform monitoring functions. Self-driving cars are currently being tested (Badue et al. 2020), also for military applications (Congressional Research Service 2019).
Current AI algorithms are not limited to executing tasks based on predefined and permanent rules. They are able to collect data (so-called data mining: Friedman 1998) and to self-learn. In particular, algorithms can automatically improve through experience and become capable of making predictions and decisions they were not explicitly programmed for (Mitchell 1997; Koza et al. 1996). Applications, especially those falling within so-called deep learning, can be supervised, semi-supervised or even unsupervised by humans (Bengio et al. 2013; Schmidhuber 2015; Bengio et al. 2015). Deep learning-based image recognition is currently able to achieve more accurate results than human-based recognition (Cireşan et al. 2012). In medical diagnosis (more generally on this issue see: Amisha et al. 2019; Davenport and Kalakota 2019) AI allows the detection of tumors through computerised interpretation of medical images (Litjens et al. 2017; Forslid et al. 2017), the design of treatment plans, also through the extraction of medical records, and the creation of drugs. The recent Covid-19 pandemic has confirmed how AI can be used for the control and detection of pandemic cases, diagnoses (Castiglioni et al. 2020) and vaccine and drug development, after AI predicted the RNA structure of SARS-CoV-2 (Baidu 2020).
AI has even shown itself able to perform tasks such as generating news and financial reports, writing texts (Metz 2019), increasing traffic on social media platforms by detecting users’ preferences (Williams 2016) and even transforming structured data into reports and recommendations. Research is also being conducted to apply deep learning to military robots in order to enable them to perform new tasks through observation (U.S. Army Research Laboratory 2018).
AI and civil liability: the problem(s)
Artificial intelligence is prone to several problems arising from its technical and operational characteristics. Among these one may recall the risk deriving from the poor quality of the data to which the system has access (so that, for example, the AI may reproduce racial bias where the available data are themselves biased12). The risk arising from conflicts between different objectives pursued by different elements of the same AI device should also be mentioned (Meyer 2007). Of course, all internet-connected software and devices are subject to hacking and unauthorised access (Sheehan et al. 2018).
With reference to the purposes of this article, the peculiar problem arising from artificial intelligence is that AI algorithms can have a certain degree of autonomy in their operation. Therefore, their “behaviour” evolves over time (and will do so much more in the near future), based on the information and feedback collected and processed from thousands of different shared sources (so-called “machine learning” and “deep learning”). In fact, it can be said that algorithms do not only perform tasks, but also learn how to perform them over time.13
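To make this point concrete, the following minimal sketch, written for illustration only and not drawn from any specific system discussed in this article, shows that a machine-learning model’s decision rule is induced from the data it is exposed to rather than written by the programmer; the library used (scikit-learn), the toy data and all variable names are assumptions chosen purely for brevity.

```python
# Minimal illustrative sketch: the programmer supplies only a learning
# procedure; the decision rule itself is induced from training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data standing in for information "collected from the environment".
X_train = np.array([[0.10], [0.35], [0.40], [0.65], [0.80], [0.90]])
y_train = np.array([0, 0, 0, 1, 1, 1])  # observed outcomes

model = LogisticRegression()
model.fit(X_train, y_train)  # the rule is learned here, not hand-coded

# The same code, fed different (or biased) data, yields a different rule
# and therefore different "behaviour", with no change to the program itself.
print(model.predict(np.array([[0.5]])))
```

Precisely because the induced rule depends on data that keep changing, the programmer cannot fully anticipate every output of the system in advance.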
In this field, therefore, the relationship of cause and effect, as regards the causation of the damage, may not be as linear as we are used to believing (Karnov 2016; Scherer 2016; even if not everyone agrees on this point: Vladeck 2014; Hubbard 2014), since the way causality works is no longer “Aristotelian”.14 As stated by the EU Expert Group on Liability and New Technologies, AI calls into question the adequacy of existing liability rules based on an “anthropocentric and monocausal model of inflicting harm” (European Commission 2019). On the contrary, it can be considered quite frequent (and it will become even more frequent in the future, due to technological evolution) that algorithms “behave” very independently of the instructions initially provided by programmers.
The results of the AI activity, therefore, could be unpredictable despite the absence of flaws in the design or implementation. This implies that algorithms may err in their “decision making”.15 Such an expansion of the area of the “unknown”, which is not capable of being predicted according to our current scientific methods (U. Beck 1996), requires careful consideration of which civil liability regime should apply to damage caused by AI operation.
Many proposals have been made16 in this regard. Almost all of them are based on what I have called the “traditional paradigm” of civil liability, rooted in deterrence: either they suggest applying fault rules (Abbott 2018) or strict liability regimes (Buonanno 2019), sometimes pleading for the extension of the rules on defective products (Borghetti 2004) or on animals under the care of humans (Schaerer et al. 2009).
Application of the traditional paradigm of civil liability to AI, however, might not foster significant improvements in safety and could instead generate negative externalities.17 This statement can be understood after considering that compensation to damaged consumers and other end-users of AI devices requires, under the said traditional paradigm, that the obligation to pay compensation be imposed on the producers and programmers thereof (the only “someone” available on whom liability can be imposed: Hao 2019).18 However, producers and programmers could not do much to forecast the unforeseeable “behaviour” of AI algorithms, influenced as it is by innumerable variables provided by databases, big data gathering and the end-users themselves, all of which are completely out of the reach and control of anyone.
This is why, in my view, civil liability would (and could) not induce virtuous investments in safety within the AI industry: in fact, no further investment, fostered by deterrence, could prevent risks of this kind. On the other hand, the application of the traditional paradigm of civil liability, especially when conceived as a strict liability regime, would expose producers and programmers to unpredictable and potentially unlimited claims for civil liability, with no possibility of reducing the risks by increasing investments in safety (with regard to damage following “unforeseeable” behaviour of AI algorithms). Therefore, it is likely that such an application19 would prevent them from entering the market or developing it, thereby hampering technological progress (what is sometimes called the risk of “technology chilling”: Montagnani and Cavallo 2020; Viscusi and Moore 1991; Huber and Litan 1991; Parchomovsky and Stein 2008; Morgan 2017; Magrani 2019; Policy Department for Citizens’ Rights and Constitutional Affairs 2020; EU Independent High-Level Expert Group On Artificial Intelligence 2019a, b; Bertolini 2015; Pellegatta 2019; Palmerini and Bertolini 2016).
This would be a significant negative externality, since new technologies bring about an important increase in safety and reduce the overall number and relevance of accidents (as available data already show with respect to the current situation20).
It can be noted, of course, that the risk of “technology chilling” is not detectable at present. Economic and business literature documents significant investments in AI (OECD 2019) and international market races to deploy AI technology (see, e.g., CBI 2018; Welsch and Behrmann 2018). Furthermore, AI has been used in finance for more than ten years and the application of the current civil liability regulation has not chilled that use so far. This is true. However, recent AI applications (driverless cars, medical applications and the like) show a wider and deeper exposure to risk than ever before. Moreover, other markets have shown that ex ante uncertainty on the allocation of the costs of accidents (coupled with the consequent fear of excessive litigation) “may drive otherwise healthy companies outside the market”.21
As a matter of fact, the purpose of this article is precisely to highlight that, as history shows with reference to other sectors, the application of inadequate civil liability rules to evolving markets can raise serious concerns about negative externalities (see, e.g., OECD 2006; Mello et al. 2010; Di Gregorio et al. 2015). Of course, one can hope that these problems do not arise. A wiser solution, however, would be to adapt the legislation in order to prevent such negative externalities from manifesting themselves in the first place, which appears to be the strategy behind the EU proposal to give legal personality to robots, recalled below, under § 5.
A proposal: the need to relieve producers and programmers from civil liability when robots correctly comply with scientifically validated standardised rules
Law scholars have observed that current civil liability legislation can be an obstacle to the development of artificial intelligence and the exploitation of the ensuing benefits (Montagnani and Cavallo 2020; Viscusi and Moore 1991; Huber and Litan 1991; Parchomovsky and Stein 2008; Morgan 2017; Magrani 2019; Policy Department for Citizens’ Rights and Constitutional Affairs 2020; EU Independent High-Level Expert Group On Artificial Intelligence 2019a, b; Bertolini 2015; Pellegatta 2019; Palmerini and Bertolini 2016). A similar obstacle was observed, in the past, with regard to medical civil liability.22 It should be noted that the reference to civil medical liability, when it comes to tort law reform in the wake of artificial intelligence, appears appropriate, as the two systems show similarities in both incentives and (negative) externalities (Gaine 2003).
In fact, as noted above, there is a rather high possibility (which will increase in the future, due to technological evolution) that AI algorithms “behave” increasingly independently of the instructions initially provided by programmers. This possibility led the European Parliament to propose “creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations” (European Parliament 2016; Solaiman 2017; Bryson et al. 2017; Amidei 2017; Guerra 2018). The main reason for this proposal is to use legal personality as a technique to impute liability to the robot alone and, therefore, isolate its obligations (including damages) from those of its producer and programmer. Consideration of robots as Haftungssubjekte (liability subjects)23 represents, in short, a proposal to solve the problem of a “fair and efficient allocation of loss”, highlighted by the EU Expert Group on Liability and New Technologies (European Commission 2019).
I believe that such a proposal is not desirable, since robots cannot and should not be considered as “persons” under current civil legislation (European Commission 2019; European Parliament 2016; European Parliament 2017; Solaiman 2017; Bryson et al. 2017; Floridi and Taddeo 2018; IEEE Standards Association 2017; Wagner 2018, 2019a, b; Eidenmüller 2017; Chopra and White 2011; Koops et al. 2010). However, the proposal is highly relevant to the present discussion, because it clearly shows the need to shift “obligations” away from producers and programmers when robots are capable of acting rather autonomously from their original design (Scherer 2016).
How could such a problem be addressed? The most relevant debate, on this point, is whether modern technology requires new specific legislation or existing legislation and concepts can be adapted to it: this is the so-called “law of the horse” controversy (Easterbrook 1996; Lessig 1999; Calo 2015; Stradella 2013).
Since AI algorithms are able to “behave” in a very different way from what was initially foreseen in their programming, I believe that the problems highlighted above, especially in § 4, do not concern what the algorithm actually does but, instead, how the algorithm is designed from the very beginning. From this point of view, I believe that civil liability rooted in deterrence (which will probably be conceived as strict liability: European Commission 2019; EU Independent High-Level Expert Group On Artificial Intelligence 2019a, b) should correspond, in these sectors, mainly to lack of conformity with predetermined standards (depending, of course, on available knowledge) (Guerra 2018; Virk 2013). Such compliance constitutes, in the AI environment, a sort of “adapted range of duties of care” (European Commission 2019) and represents a more effective form of regulation for mass products (Viscusi 1989).
Conversely, strict liability should not apply if an algorithm programmed in accordance with standards occasionally errs and produces negative consequences despite no design or implementation flaws; this is the case from which the negative externalities highlighted above arise. I believe that, in these cases, producers and programmers of AI algorithms and devices should be released from civil liability for damages. In other words, in all cases where there is no evidence of negligence, imprudence or unskillfulness and the robot (both in its physical components and in its artificial intelligence aspects) complied with scientifically validated production and programming standards, programmers and producers of AI algorithms and devices should not be held liable for damages.24
It should be noted that this proposal is in stark contrast with the current paradigm of allocation of the “costs of accidents”, since, as briefly recalled above in § 2, the current regulatory paradigm shows a tendency to impose strict liability on firms that carry out intrinsically risky activities and, therefore, to impose on them the costs of all damage for which there is no positive evidence of diligence, prudence and skill (i.e. all cases in which firms cannot prove that the damage is not attributable to them, in which there is scientific uncertainty as to the cause of the harmful effects or even cases where this cause is unknown).
It is not ignored that mere respect of standards could lead to unwanted damage in some cases (such damage would also be compensated under my proposed “no-fault” system, as noted below). However, my claim is made on the basis of the idea, confirmed by available empirical evidence,25 that the adoption of artificial intelligence in carrying out specific activities such as driving (which is destined to increase drastically in the near future) determines a significant increase in safety and reduces the overall number and relevance of damage and deaths compared to human action.
This means that providing incentives for technological innovation, provided that it respects scientifically validated standards, appears to be a safer strategy than any other.
A new paradigm of civil compensation for damages related to AI: towards the evolution of compensation from an issue of civil liability to one of financial management of losses
It is necessary, at this stage, to translate the above observations into rules. The law, in fact, binds economic and social activities in order to contribute to the pursuit of welfare; on the other hand, however, the law cannot arbitrarily define its objectives and (especially) its means. The actual functioning of the economic and social contexts concerned must be taken into the utmost consideration, in order to develop well-founded, affordable, reliable and effective rules (de Jong et al. 2018).
The failure of the current paradigm of civil liability based on deterrence, when applied to artificial intelligence, observed and (I believe) established above, requires its radical modification. Such a modification appears particularly relevant today, since the application of the “traditional” paradigm of civil liability can hinder the development of markets towards the intensive use of artificial intelligence and robotisation in the future (the already mentioned “technology chilling”). Furthermore, civil liability rules rooted in deterrence are likely to place jurisdictions adopting this paradigm at a competitive disadvantage in favour of jurisdictions that are more responsive to the needs and demands of the markets referred to.
What is surprising is that, in areas of research other than law, quite similar problems have been studied thoroughly, and scholars have come to the conclusion that intrinsically risky activities incorporate a certain percentage of risk that does not depend on the person performing them but on the activities themselves (Althaus 2005; Aldred 2013; Aven 2012, 2016; Beck 1996; Lindley 2006). Errors occur and will occur regardless of the severity of the civil liability rules in force.
This theme recalls the concept of “manufactured uncertainties” developed by Beck, which is based on the idea that in modern times the area of “unknown” is widened and risks escape from what is capable of being predicted pursuant to our current scientific methods.26 We need to adapt the legislation to the “risk society”, that is: “a systematic way of dealing with hazards and insecurities induced and introduced by modernization itself” (Beck 1992).
Such a conclusion should lead to discarding the “blame culture”, which inspires and supports the current law on civil liability, and replacing it, at least in some cases (as briefly discussed here), with a “no-blame culture”, rooted in risk management27 and scientifically validated standardisation. While the literature on risk management is fairly consistent on this point, lawyers and lawmakers seem rather conservative in this respect.
In this regard, it was noted, above, that the negative externalities imposed on the AI markets by the traditional civil liability paradigm could be reduced if producers and programmers of artificial intelligence devices could be released from civil liability under certain conditions; in particular, when there is no evidence of their negligence, imprudence or unskillfulness and their activity complied with scientifically validated standards.28
Such a release, however, may not (and should not) prevent damaged customers and end-users from obtaining compensation. In fact, any abrogation of their right to compensation would be inconsistent with the “solidarity” approach that now pervades legal systems, mentioned above. In addition, it would contradict the principle of “functional equivalence”, according to which compensation should not be denied in a situation involving emerging digital technologies “when there would be compensation in a functionally equivalent situation involving human conduct and conventional technology”.29
This is why I believe that a new regulation of the matter should be developed, inspired by a new paradigm, aimed at maintaining compensation for damages on the injured party’s side, while shifting away from producers and programmers of AI devices (when there is no evidence of negligence, imprudence or unskillfulness and scientifically validated standards of production and programming are complied with) the obligation to pay for such compensation.30
In other words, I see room for the relevant legislation to evolve from an issue of civil liability into one of financial management of losses. This would take better account of the “systemic” need for the proper functioning of the market as a whole. In fact, what could seem in the short term to favour the individual customer (e.g., ordering a producer to pay compensation for a specific damage suffered by an end-user of AI devices or robots, despite compliance with validated standards and no negligence, imprudence or unskillfulness being ascertained in court) can possibly damage systemic safety (determined, hypothetically, by the development of AI) if it prevents the market from developing into a more technological and safer system (due to the disincentives created by the judgment itself; in the example above, producers could abandon research and development of AI devices and robots operating in risky environments).
Legal systems should bear the risk that the application of scientifically validated standards may determine harmful consequences in individual cases, to the extent that, from a systemic point of view, this application allows a significant reduction of the overall risks and damage (Kizer and Blum 2005; Hernandez 2014; US Department of Transportation 2017).
This new paradigm could be built on the basis of the “no-fault” systems available in different jurisdictions.31 In this regard, one can cite the no-fault rules issued in the field of medical damage, further described in § 7 (see, in general: OECD 2006; Marchisio 2020); adverse effects attributed to vaccination (World Health Organisation 2009; Looker and Kelly 2011); damage caused by unknown drivers,32 etc.
Adopting a “no-fault” scheme would isolate compensation in favour of damaged end-users from liability imposed on producers and programmers of AI devices. It would also help resolve other weaknesses inherent in the traditional paradigm of civil liability. One can mention, here, the risk of civil liability turning into a “damages lottery”, due to the fact that, in some cases, damages cannot be awarded because no one is at fault in the specific event. One can also mention the case in which damages cannot be collected because the debtor is (in many instances deliberately) unable to pay (Atiyah 1997; Cane and Goudkamp 2013).
For the sake of completeness, one might wonder whether the proposed no-fault schemes might actually create a preference for AI-driven activities over the use of human labour. This would confirm, hypothetically, what appears to be a bias against humans that already exists, for example, in the immigration and tax laws of many jurisdictions, to the extent that robots can generally be freely imported without work visas and the income they generate from their work is usually not taxed on the robot as it would be for a human. The issue is very complex and cannot be addressed here. In summary, it should be noted that, whatever measures are introduced to compensate for the loss of human work caused by the use of artificial intelligence, such measures should compensate those who have lost their jobs in the short and medium term, contribute to the retraining of unemployed workers and foster study and training in technological subjects, but they should not prevent the success of artificial intelligence.33 The proposal that I have developed in this research is aimed precisely at preventing technological innovation from being hindered, since artificial intelligence represents, in many sectors, a safer strategy than any other based on human action. In these areas, removing the incentives for AI would mean reducing overall efficiency, safety and security.
Some references and observations on some existing “no-fault” laws
It is clear that all the existing pieces of “no-fault” legislation, briefly mentioned above, are targeted at specific sectors and that, when implemented with reference to AI, they should be properly adjusted. Even if they provide good examples of financial management of losses and valuable ideas for future legislation on artificial intelligence, in fact, their contribution to the development of an adequate scheme for AI should be studied further and carefully. A detailed examination of existing “no-fault” models, and any attempt to provide even a concise description of how a no-fault scheme might be designed in order to regulate the issue at stake, would fall far beyond the scope of this article, which is intended to outline the need to change a regulatory paradigm of the law of compensation and not to determine its specific content.
However, some remarks may be appropriate, here, to define in what terms existing legislation can represent a model for AI markets and what adjustments are needed to adapt them to the latter.
First, it can be noted that “no-fault” schemes seem to differ, in a very broad view, with respect to six main variables (Dickson et al. 2016): the eligibility criteria for compensation;34 whether compensation is paid automatically upon occurrence of the event35 or an avoidability standard is adopted;36 whether or not the system prevents continued access to the courts; how the program is funded;37 whether or not a financial cap is imposed on compensation; and the definition of the financial entitlement.38
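Purely by way of illustration, the six variables just listed can be thought of as the parameters of a scheme’s design. The following sketch schematises them in code; it is an assumption of mine for expository purposes (all names, types and options are hypothetical) and not a description of any existing or proposed scheme.

```python
# Hypothetical schematisation of the six design variables of a "no-fault"
# scheme listed above; every field name and option is illustrative only.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CompensationTrigger(Enum):
    AUTOMATIC_ON_EVENT = "compensation paid automatically upon occurrence of the event"
    AVOIDABILITY_STANDARD = "compensation paid only if the harm met an avoidability standard"

class Funding(Enum):
    PUBLIC = "publicly funded"
    PRIVATE = "privately funded (e.g. industry levy or insurance pool)"

@dataclass
class NoFaultScheme:
    eligibility_criteria: str        # who is eligible and for which injuries
    trigger: CompensationTrigger     # automatic payment vs avoidability test
    courts_access_preserved: bool    # whether continued access to the courts remains possible
    funding: Funding                 # how the program is funded
    financial_cap: Optional[float]   # cap on compensation, if any
    entitlement_definition: str      # how the financial entitlement is defined
```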
It is clear that the drafting of a “no-fault” scheme for the damage produced by AI algorithms would require a careful definition of the eligibility criteria, especially as regards the definition of the “scientifically validated” standards (and of the procedures for their modification) to be complied with in order for the scheme to apply. It would also be necessary to designate a third, independent entity in charge of paying compensation to damaged end-users under the “no-fault” scheme, and to define its operation and financing. Similarly, a standardised amount of compensation under the “no-fault” scheme should also be defined. These issues cannot be discussed here, as this article aims to present the general scope and principles of my proposal, while the topics briefly listed here are rather detailed aspects of it.
Furthermore, the way in which a “no-fault” scheme is conceived depends to a significant extent on the legal and institutional context in which the scheme operates, particularly with respect to the way in which the social security net is designed in each different country (Dickson et al. 2016). It is clear, for example, that in the USA any such scheme would likely be funded privately while in European countries such as Sweden, Norway and Finland it is more likely to be publicly funded (OECD 2006; Mello et al. 2011; Dickson et al. 2016; Vandersteegen et al. 2015). Acceptance of a standardised compensation scheme can also depend heavily on how the social security net is designed in each different country.39
Secondly, the aforementioned pieces of legislation have a much narrower scope than the issues dealt with here (e.g., within health law they are mainly aimed at avoiding litigation; in case of damages coming from unknown drivers they seek compensation in case no liable person is identified etc.).
To my knowledge, the only “no-fault” scheme that shares a common approach with the one proposed here is that provided for injuries resulting from vaccination. This scheme, in fact, embodies the idea that compensation for statistically “inevitable” injuries should not, in principle, be imposed on the persons who carry out the relevant activities or who supply products on the market, to the extent that negligence, imprudence or unskillfulness is not proven and scientifically validated standards are complied with.
This approach, which functions well with reference to vaccination (adverse effects are very rare compared to the more than 2.5 million deaths prevented by vaccination in 2008 alone: World Health Organisation 2009; Looker and Kelly 2011), could represent a model for the regulation of liability for AI algorithms, as their use could determine harmful consequences in individual cases but, from a systemic point of view, would allow a significant reduction of the overall risks and damage. The approach proposed here resembles that of mandatory seat belts in motor vehicles: in that case too, “seat belts can cause injuries but it is vastly more likely that they will protect you. It is all about probabilities and the chances are on the side of wearing seat belts” (Giubilini and Savulescu 2019).
Third, “no-fault” legislation is currently showing shortcomings in terms of safety incentives, in the absence of the deterrent brought about by “traditional” civil liability.40 Pure “no-fault” models, in fact, raise concerns about their ability to limit the risk of moral hazard, exactly as happens in New Zealand with respect to medical law, since “the principal weakness of no-fault schemes is the difficulty of ensuring that the socially optimal amount of care is taken by potential loss-causers, as the links between their potential to cause loss and the costs of their actions are severed” (Howell et al. 2002).
This is why the proposed “no-fault” system should not apply outside the scope defined above, namely relief from liability in the absence of negligence, imprudence or unskillfulness41 and in compliance with scientifically validated standards. Outside this scope, “no-fault” rules would unreasonably remove the deterrent effect that civil liability can still produce. I argue that “no-fault” rules should be combined with “fault” rules in order to take advantage of the benefits each of them brings, narrowing their flaws through their reciprocal interaction.
Furthermore, in all cases where “no-fault” schemes apply, they should be combined with a discipline capable of providing incentives for safety.42 I believe that, in those cases in which no one can be blamed for ignoring the standards set, such an approach should be decoupled from deterrence on individuals (e.g., the deterrence induced by civil liability should not be replaced by deterrence induced by disciplinary sanctions on employees). Instead, it should be inspired by organizational and procedural criteria, thus shifting the paradigmatic centrality from individuals to risk management.
Concluding remarks: towards a general “law of the horse” for artificial intelligence technologies
As noted above, the intensive use of artificial intelligence in several sectors is very likely to reduce overall risks and harm compared to human action. However, it can give rise to particular risks and harm in specific cases. In this article I have examined, in particular, the risks associated with the machine-learning and deep-learning capacity of artificial intelligence devices, consisting in the ability of AI algorithms to act rather autonomously from their original design.
From a systemic point of view, the overall benefits of artificial intelligence outweigh the resulting costs. Therefore, technological evolution should be encouraged or, at least, not hindered.
It is recognised that “traditional” civil liability rules can provide a negative incentive towards such evolution, as they can impose the obligation to pay compensation on producers and programmers of AI devices despite no design or implementation flaws.43 In these cases, civil liability would provide no virtuous deterrence towards the utmost care, but would simply discourage technological progress. Therefore, AI creates new challenges with regard to civil liability, which must balance adequate compensation for victims with the need not to hinder technological innovation (EU Commission 2020).
No-fault compensation schemes could be an interesting and worthy regulatory strategy for that purpose, allowing the matter to evolve from an issue of civil liability into one of financial management of losses. Of course, such schemes should apply only in cases where there is no evidence that producers and programmers have acted under conditions of negligence, imprudence or unskillfulness and their activity has been adequately compliant with scientifically validated standards. In other cases, traditional civil liability rules would retain a valid deterrent function.
Therefore, with reference to the AI markets, the evolution toward a “no-fault” system should not abrogate the traditional civil liability paradigm rooted in deterrence. Instead, the two should coexist as independent and alternative techniques of compensation (a sort of “double track” legislation on damages), in order to exploit the advantages that each of them offers, restricting their defects through their reciprocal interaction.
Funding
This research did not receive any funding or other support.
Compliance with ethical standards
Conflict of interest
The author has no relevant financial or non-financial interests to disclose.
Footnotes
This is true in all legal systems, even if the extension and intensity of such deterrence may vary significantly. E.g., in civil law systems deterrence is traditionally limited to compensation for damages actually suffered by the injured party, while in US law punitive damages may be awarded by courts. On this issue see, among others: Gotanda (2003), Vanleenhove (2012).
When dealing with AI, the need to circumscribe the scope of analysis is frequently felt even in articles dealing with rather general issues. See, e.g., Solaiman (2017), where the Author limits the scope of the article mainly to industrial robots “that exercise some degree of self-control as programmed”.
This approach is represented by the well-known expression “Nicht der Schaden verpflichtet zum Schadensersatz, sondern die Schuld”, formulated by von Jhering (1867).
See, e.g., in Italy De Cupis (1979), in France Josserand (1910), in Germany Sperl (1902), in England see the comments made in Lunney and Oliphant (2000). More in general and in comparative perspective see: Taylor (2015).
Engle (2009). Even if this could not be understood as a trend leading to a “remarkably uniform globalized system of tort law”, as Engle believes, it nevertheless shows a certain convergence on the way tort law evolved on this point.
Calabresi (1970). As noted by Engle (2009), tort law may be understood, under this point of view, as “the doctrinal (superstructural) expression of material facts, notably the relationships of productive forces—economic actors and actions”. With reference to the technological issues at stake, see: Martín-Casals (2010).
In medical civil liability such a path included sector-specific evolutions, such as the imposition of an obligation of results with respect to many treatments and especially routine ones [in English law through the “res ipsa loquitur” doctrine, as stated in Donoghue v Stevenson (1932) AC 562; in Germany through the Anscheinsbeweis or prima facie Beweis doctrine, on which see: Stauch (2008)]. Some jurisdictions even turned extra-contractual medical liability into a contractual one (which favours patients, inter alia, as regards the burden of proof) following the German doctrine of faktische Vertragsverhältnisse: Haupt (1943). The same evolution may be observed in Italy, with reference to the theory of “contatto sociale”: Cass. 22 gennaio 1999, n. 589; see, on this issue: Castronovo (1990).
What is relevant to note, here, is that mandatory insurance is thought to protect damaged consumers and other end-users of goods and services from the risk that producers or other professionals have an insufficient patrimony to pay redress and not to relieve the latter from deterrence. Mandatory insurance, therefore, determines a mere reallocation of the obligation to pay compensation but does not modify the traditional paradigm of civil liability, insofar as producers or other professionals remain personally liable, may be called to pay redress in case insurance coverage is not applicable and are subject to deterrence indirectly – since insurers would shift onto producers and other professionals (by applying higher insurance premiums) the cost of any redress paid on their behalf.
The relationship between insurance coverage and deterrence is discussed by Wagner (2006), Luntz (2010), Shavell (2000).
Which may be read at http://civil.udg.edu/php/biblioteca/items/283/PETL.pdf.
In fact, it seems that such “liability frameworks in the Union have functioned well”, as noted in EU Commission (2020). On this issue it is possible to read, among others, the five reports on the application of Directive 85/374/EEC concerning liability for defective products (1995, 2000, 2006, 2011 and 2018), which may be found at https://ec.europa.eu/growth/single-market/goods/free-movement-sectors/liability-defective-products_en.
It ought to be noted that, also in this case, the nature of “mass product liability” could suggest reducing relevance of civil liability and emphasize that of regulation and social insurance; in this sense see: Viscusi (1989).
Another relevant area where the said paradigm proved inappropriate is health care. A rich and valuable literature shows, in fact, that increasing the asymmetric protection of patients through increases in medical civil liability beyond a certain limit does not produce further increments in safety but, instead, determines the adoption of “defensive” strategies and imposes very relevant negative externalities: OECD (2006).
This phenomenon is referred to as “defensive medicine”, which “occurs when doctors order tests, procedures, or visits, or avoid certain high-risk patients or procedures, primarily (but not necessarily solely) because of concern about malpractice liability”: U.S. Congress, Office of Technology Assessment (1994). In fact, such attitude determines much relevant increases of costs which does not benefit patients: see e.g., for the USA, Mello et al. (2010). National health-care systems as a whole do not benefit from massive increase of defensive strategies, which lead to inefficiencies and loss of quality.
As an example, AI devices programmed for recognizing human faces and bodies but instructed only with reference to white faces and bodies happened not to recognize black ones. This happened with soap dispensers, releasing soap only onto white hands (https://metro.co.uk/2020/04/01/race-problem-artificial-intelligence-machines-learning-racist-12478025/). It seems that driverless cars “are more likely to drive into black pedestrians, again because their technology has been designed to detect white skin, so they are less likely to stop for black people crossing the road” (Ibidem).
It even happened that a researcher, Joy Buolamwini (a Ghanaian-American computer scientist at the MIT Media Lab), could not be recognised by the AI system because of her skin so that she could access the laboratory, throughout her bachelor’s and master’s degrees, only wearing a white mask (https://uxdesign.cc/is-ai-doomed-to-be-racist-and-sexist-97ee4024e39d).
This is called “autonomy” of AI: new technologies “are themselves capable of altering the initial algorithms due to self-learning capabilities that process external data collected in the course of the operation. The choice of such data and the degree of impact it has on the outcome is constantly adjusted by the evolving algorithms themselves”; see: European Commission (2019). On this issue see also, among others: Surden (2014), Russel and Norvig (2010).
Even if tort law appears to be built on Aristotelian concepts of causation. On this point see, e.g., Engle (2009).
See Schönberger (2019), where one may find an overview and references to both moral implications of the issue and areas where and how AI may err. It ought to be noted that this Author believes that the present legal framework is largely fit to deal with the challenges AI technologies are posing.
A comparative analysis with reference to the different approaches in the USA, Europe and China may be found in Infantino, Wang (2018–2019). An attempt of reconstruction of the different strategies available under Italian law was made by Ruffolo (2017).
The risk that business decisions may be taken depending on the liability regime in force at any given time within the relevant legal system is highlighted also, among others, by Scherer (2016).
If the obligation to pay compensation only followed their fault, the deterrence mechanism would work appropriately. Liability based on fault, however, would not be considered sufficient in this field (although this solution has sometimes been suggested in the law literature: Casey 2019), since the risks brought about by the use of artificial intelligence, coupled with the current solidarity approach already mentioned above, would not allow injured consumers and end-users of AI devices to be left without compensation whenever a fault cannot be proven in court.
There is a tendency, therefore, to interpret the liability of producers and programmers as a case of “strict” liability. In favour of a strict liability regime see, e.g., Buonanno (2019). For such a tendency see Calabresi (1970), Engle (2009), Martín-Casals (ed.) (2010), Comporti (1965), Cooter (1991), European Commission (2019). A proposed mix of strict or tort liability, to be complemented with mandatory insurance provisions, is made in EU Independent High-Level Expert Group On Artificial Intelligence (2019a, b).
Of course, several alternatives may be (and are) proposed in the law literature. In favour of a strict liability regime see Buonanno (2019); European Commission (2019); in support of the application of the “traditional” paradigm of civil liability based on fault tout court see Casey (2019). A comparative analysis with reference to the different approaches upheld in the USA, Europe and China may be found in Infantino and Wang (2018–2019).
US Department of Transportation, NHTSA (2017), US Department of Transportation (2018). In fact, according to the Department of Transportation and the National Highway Traffic Safety Administration, almost 94% of accidents on US roads occur due to human error, so self-driving vehicles could drastically reduce the number of crashes and fatalities that occur on the roads today: US Department of Transportation (2017). Similar arguments may be upheld also in other sectors. See also, e.g., on health care: Kizer and Blum (2005).
This was noted, with respect to US commercial aviation industry, by Leenes et al. (2017), in particular at footnote n. 58, where they noted that such industry “was almost erased by the high levels of litigation it attracted”. They also note that such a situation changed after the adoption of the General Aviation Revitalization Act 1994 [Act Aug 17, 1994, PL 103–298, § 1–4, 108 Stat 1552; Nov 20, 1997, PL 105–102, § 3(e), 111 Stat 2215], insofar as “the investment in safety by producer did not appear to decline, since the number of registered accidents actually diminished because of the higher investment in safety by the users”. On this issue see also Helland and Tabarrok (2012).
With respect to health care see OECD (2006), Mello et al. (2010).
As it is shown by the emergence and diffusion of “defensive medicine” strategies, referred to above.
Since robots cannot be considered, in legal language, as Personen. See, from the very title: Wagner (2019a, b).
A similar proposal is made by Scherer (2016), when he proposes that “manufacturers and operators of certified AI systems would enjoy limited tort liability, while those of uncertified AI systems would face strict liability” (at 5 and 393 ff.).
My proposal requires, of course, solution of many variables, such as the definition of what “scientifically validated” standards under the proposed “no-fault” systems should be, who should be in charge of defining them (public body, certified experts etc.), at what level (national, international, global). Such issues cannot be discussed here, since this article is aimed at presenting general scopes and principles of my proposal, while the themes briefly listed here constitute rather detailed aspects thereof.
In general, the issue of technical safety standards with specific respect to robot is dealt with, among others, in Guerra (2018); Virk (2013).
On such empirical evidence, with specific reference to health care, see, e.g., Kizer and Blum (2005), Sigmoidal (2017), Hernandez (2014), PwC (2017).
With reference to car traffic, according to the Department of Transportation and the National Highway Traffic Safety Administration, almost 94% of accidents on US roads occur due to human error, so self-driving vehicles could drastically reduce the number of crashes and fatalities that occur on the roads today: US Department of Transportation (2017).
Their peculiar feature is that they do not come from the “outside” of human society (e.g., natural disasters) nor do they consist in “specific calculable uncertainties—“risks”—which are determinable with actuarial precision”. Instead, they are created by and within society itself, collectively imposed and individually unavoidable: Beck (1996, 2009).
See, on this point, the “Swiss cheese model” developed in Reason (1990).
Such a principle is stated especially in health care with respect to "defensive medicine" issues: OECD (2006), which however does not explore the issue in depth. On ex ante regulation with respect to mass products see also Viscusi (1989).
European Commission (2019). It ought to be noted that the current paradigm of civil liability, based on a micro-systemic approach and focused on the relationship between the damaged person (creditor) and the offender (debtor), does not allow the two apparently conflicting goals noted above to be balanced, i.e. granting redress to damaged users without imposing it onto producers and programmers. Such a possibility would not be available even by reallocating the obligation to pay compensation onto insurance firms, through the imposition of a mandatory insurance regime. As noted in § 2, in fact, such a solution would only shift liability for payment of redress but would not relieve producers and programmers of all the inefficiencies arising from a deterrence-based system of civil liability. In fact, producers and programmers would remain personally liable, could be called upon to pay redress where insurance coverage does not apply, and would in any event remain subject to deterrence indirectly, since insurers would shift onto them (by applying higher insurance premiums) the cost of any redress paid on their behalf.
A proposal to reduce the relevance of civil liability in favour of ex ante regulation and "social insurance", although developed for different purposes, is laid down in Viscusi (1989).
It ought to be noted that the concept of "no-fault" is used here with reference to a system where redress is provided by a dedicated fund regardless of whether any fault of the agent is established. Therefore, it does not refer to strict liability schemes, which likewise disregard "fault" as a condition for imposing liability but operate in the opposite direction, imposing the obligation to provide redress on agents regardless of their culpability.
See, e.g., the whole Chapt. 4 (art. 10–11) in Directive 2009/103/EC of the European Parliament and of the Council of 16 September 2009, “relating to insurance against civil liability in respect of the use of motor vehicles, and the enforcement of the obligation to insure against such liability”.
It should be noted, in the first place, that the proposal I have developed in this article is not aimed at artificially boosting the growth of artificial intelligence but, on the contrary, at preventing it from being discouraged by outdated liability rules. The introduction of a no-fault system, therefore, would not create a particular incentive to adopt artificial intelligence at the expense of human work. On the contrary, in all cases in which artificial intelligence is more reliable than human work, it would prevent the use of new technologies from being hindered by compensation rules based on an unfair allocation of costs.
The risk of human jobs being replaced by robots should certainly be addressed. However, I believe this should be done, in the short/medium term, by rethinking the way the social safety net is designed, for example by supporting unemployed workers with some (in principle temporary) forms of basic income. In the medium/long term, support should be provided through the retraining of unemployed workers and by encouraging study and training in technological subjects. The need to introduce a specific taxation for robots cannot be excluded, in principle. However, I believe that this should serve to redistribute the wealth produced by new technologies through public welfare and not to eliminate the incentive to use artificial intelligence in favour of human work. If we had penalised the development of tractors to make their use as expensive as the use of hand and horse ploughs, ploughing today would be much slower, less efficient and much more strenuous.
Furthermore, artificial intelligence cannot, and will not, substitute humans in all areas. I think it is desirable that artificial intelligence takes hold in all sectors where it is safer and more efficient than human action. Humans will adapt their practices and shift toward activities in which they are not replaceable. Again, an example from the past is worth a thousand words: if we had prevented the development of industry in order to keep jobs in agriculture, we would probably still be living in a pre-industrial society today. Society evolves. Technology evolves. An opposition of principle to this would risk re-proposing, today, the old Luddite objections.
Compensation may be limited to specified damages, as happens in Virginia and Florida with respect to birth-related neurological injury. As regards Virginia see Va. Code Ann. §§ 38.2-5000 ff., known as the Virginia Birth-Related Neurological Injury Compensation Act. Further information may be found at the Program web site: https://www.vabirthinjury.com/. With respect to Florida see Florida Statutes §§ 766.301 ff. Further information may be found at the Program web site: https://www.nica.com/.
On the other hand, compensation may apply to all "treatment injuries", as in New Zealand after the 2005 reform, which removed the final "fault" element still present in the system and redesigned it as a true "no-fault" scheme: Bismark and Paterson (2006).
As it happens, e.g., in Florida and Virginia if proof is given that the neurological birth injury occurred as a result of the birth process. Reference to relevant legislation is provided in footnote n. 40.
As in Nordic countries such as Sweden, Norway, Finland and Iceland, where it is verified whether injuries could have been avoided had the care provided been of optimal quality. On this issue see Dickson et al. (2016).
There are three funding alternatives: private funding, public funding or a mixed scheme. Several financing models are available, ranging from systems financed through contributions made by health care providers to systems funded via tax revenues. Comparative analyses on this issue may be found in OECD (2006), Mello et al. (2011), Dickson et al. (2016), Vandersteegen et al. (2015).
This variable relates to whether only economic damages may be compensated or whether non-economic damages also fall within the compensation scheme.
It is acknowledged that "no-fault" schemes are likely to lead to lower compensation when compared to judicial claims. With reference to health care, there is evidence to suggest that "no-fault" schemes providing standardised compensation are more easily accepted in places, such as New Zealand and Scandinavia, where health care is understood as an important provision by central government and other forms of social insurance exist. On the other hand, countries with less of a social security safety net to support individuals with ongoing ill health and disability, such as the USA, are understandably more reluctant to deny claimants the possibility of attaining damages through the court process: Dickson et al. (2016).
In general, on this point, see also Dickson et al. (2016), Wallis (2013). Of course, such a point is raised with particular emphasis by those who believe that deterrence should be considered as an indispensable effect of legislation on redress; see, e.g., Popper (2011).
Therefore, such "no-fault" legislation should limit its relevance to "doubtful cases" only, i.e. cases where negligence, imprudence or unskillfulness of producers or programmers cannot be proven and liability could only follow from a strict liability rule, even where producers and programmers cannot show that the damage was not attributable to them, there is scientific uncertainty as to the cause of the harmful effects, or such cause is even unknown.
Since “the principal weakness of no-fault schemes is the difficulty of ensuring that the socially optimal amount of care is taken by potential loss-causers, as the links between their potential to cause loss and the costs of their actions are severed”: Howell et al. (2002).
In these cases, risks depend upon the intrinsic complexity of products, of markets, of the technological development itself: Reason (1990).
References
- Abbott R (2018) The reasonable computer: disrupting the paradigm of tort liability. G Wash Law Rev 86:1–45
- Aldred J. Justifying precautionary policies: incommensurability and uncertainty. Ecol Econ. 2013;96:132–140. [Google Scholar]
- Althaus CE. A disciplinary perspective on the epistemological status of risk. Risk Anal. 2005;25(3):567–588. doi: 10.1111/j.1539-6924.2005.00625.x. [DOI] [PubMed] [Google Scholar]
- Amidei A. Robotica intelligente e responsabilità: profili e prospettive evolutive del quadro normativo europeo. In: Ruffolo U, editor. Intelligenza artificiale e responsabilità. Giuffrè: Milano; 2017. pp. 63–106. [Google Scholar]
- Amisha PM, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Fam Med Prim Care. 2019;8(7):2328–2331. doi: 10.4103/jfmpc.jfmpc_440_19. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Asatryan D (2017) Machine learning is the future of underwriting, but startups won't be driving it. https://bankinnovation.net/allposts/innovation/startups/machine-learning-is-the-future-of-underwriting-but-startups-wont-be-driving-it/. Accessed 10 Nov 2020
- Atiyah PS. The damages lottery. Oxford: Hart Publishing; 1997. [Google Scholar]
- Aven T. The risk concept—historical and recent development trends. Reliab Eng Syst Saf. 2012;99:33–44. [Google Scholar]
- Aven T. Risk assessment and risk management: review of recent advances on their foundation. Eur J Oper Res. 2016;253:1–13. [Google Scholar]
- Badue C, Guidolini R, Carneiro R, Azevedo P, Cardoso V, Forechi A, Ferreira Reis de Jesus L, Berriel R, Paixão T, Mutz F, Veronese L, Oliveira-Santos T, De Souza A (2020) Self-driving cars: a survey. Expert Systems with Applications, 165. https://www.sciencedirect.com/science/article/abs/pii/S095741742030628X. Accessed 10 Nov 2020
- Baidu (2020) How Baidu is bringing AI to the fight against coronavirus. 11.3.2020. https://www.technologyreview.com/2020/03/11/905366/how‐baidu‐is‐bringing‐ai‐to‐the‐fight‐against‐coronavirus/. Accessed 10 Nov 2020
- Beck U. Risk society: towards a new modernity. London: Sage Publications; 1992. [Google Scholar]
- Beck U. World risk society as cosmopolitan society? Ecological questions in a framework of manufactured uncertainties. Theory Cult Soc. 1996;13(4):1–32. [Google Scholar]
- Beck U. World risk society and manufactured uncertainties, Iris. Eur J Philos Public Debate. 2009;1(2):291–299. [Google Scholar]
- Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35(8):1798–1828. doi: 10.1109/TPAMI.2013.50. [DOI] [PubMed] [Google Scholar]
- Bengio Y, LeCun Y, Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539. [DOI] [PubMed] [Google Scholar]
- Bertolini A. Robotic prostheses as products enhancing the rights of people with disabilities. Reconsidering the structure of liability rules. Int Rev Law Comput Technol. 2015;29(2–3):116–136. [Google Scholar]
- Bismark M, Paterson R. No-fault compensation in New Zealand: harmonizing injury compensation, provider accountability, and patient safety. Health Aff. 2006;25(1):278. doi: 10.1377/hlthaff.25.1.278. [DOI] [PubMed] [Google Scholar]
- Borghetti JS. La responsabilité du fait des produits: étude de droit comparé. Paris; 2004. [Google Scholar]
- Bryson JJ, Diamantis ME, Grant TD. Of, for, and by the people: the legal lacuna of synthetic persons. Artif Intell Law. 2017;25(3):273–291. [Google Scholar]
- Buonanno L (2019) Civil liability in the era of new technology: the influence of blockchain. https://www.europeanlawinstitute.eu/fileadmin/user_upload/p_eli/YLA_Award/Submission_ELI_Young_Lawyers_Award_Luigi_Buonanno_ELI_2019.pdf. Accessed 10 Nov 2020
- Bussani M, Sebok A, editors. Comparative Tort Law: global perspectives. Cheltenham: Edward Elgar; 2015. [Google Scholar]
- Calabresi G. The cost of accidents: a legal and economic analysis. New Haven: Yale University Press; 1970. [Google Scholar]
- Calo R. Robotics and the lessons of cyberlaw. California L Rev. 2015;103:514. [Google Scholar]
- Cane P, Goudkamp J. Atiyah’s Accidents, compensation, and the law. IX. Cambridge: Cambridge University Press; 2013. [Google Scholar]
- Casey B (2019) Robot ipsa loquitur, Georgetown Law Journal. https://ssrn.com/abstract=3327673. Accessed 10 Nov 2020
- Castiglioni I, Ippolito D, Interlenghi M, Monti CB, Salvatore C, Schiaffino S, Polidori A, Gandola D, Messa C, Sardanelli F. Artificial intelligence applied on chest X-ray can aid in the diagnosis of COVID-19 infection: a first experience from Lombardy, Italy. medRxiv. 2020 doi: 10.1101/2020.04.08.20040907v1. [DOI] [Google Scholar]
- Castronovo C. Obblighi di protezione. In: Enciclopedia Giuridica, Treccani, Roma, ad vocem; 1990. [Google Scholar]
- CBI (2018) The race for AI: Google, Intel, Apple in a rush to grab artificial intelligence startups, CBI Insights, 27 February. https://www.cbinsights.com/research/top-acquirers-aistartups-ma-timeline/. Accessed 10 Nov 2020
- Chopra S, White LF. A legal theory for autonomous artificial agents. Michigan: University of Michigan Press; 2011. [Google Scholar]
- Cireşan D, Meier U, Masci J, Schmidhuber J. Multi-column deep neural network for traffic sign classification. Neural Netw (Selected Papers from IJCNN 2011). 2012;32:333–338. doi: 10.1016/j.neunet.2012.02.023. [DOI] [PubMed] [Google Scholar]
- Comporti M. Esposizione al pericolo e responsabilità civile. Napoli: Morano; 1965. [Google Scholar]
- Congressional Research Service . Artificial intelligence and national security. Washington: Congressional Research Service; 2019. [Google Scholar]
- Cooter RD. Economic theories of legal liability. J Econ Persp. 1991;5(3):11–30. [Google Scholar]
- Cooter R, Ulen T. Law & economics. V. Boston: Pearson/Addison Wesley; 2008. [Google Scholar]
- Costantino M, Coletti P. Information extraction in finance. Southampton: Wit Press; 2008. [Google Scholar]
- Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–98. doi: 10.7861/futurehosp.6-2-94. [DOI] [PMC free article] [PubMed] [Google Scholar]
- De Cupis A. Il danno: teoria generale della responsabilità civile. Giuffrè: Milano; 1979. [Google Scholar]
- de Jong E, Faure MG, Giesen I, Mascini P. Judge-made risk regulation and tort law: an introduction. Eur J Risk Res. 2018;9(1):6–13. [Google Scholar]
- Dickson K, Hinds K, Burchett H, Brunton G, Stansfield C, Thomas J (2016) No-fault compensation schemes: a rapid realist review, London, EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London
- Di Gregorio V, Ferriero AM, Specchia ML, Capizzi S, Damiani G, Ricciardi W. Defensive medicine in Europe: which solutions? Eur J Public Health. 2015;25:145. [Google Scholar]
- Easterbrook F (1996) Cyberspace and the Law of the Horse. U. Chi. Legal F. https://chicagounbound.uchicago.edu/uclf/vol1996/iss1/7/. Accessed 10 Nov 2020
- Eidenmüller H. The rise of robots and the law of humans. Zeitschrift für Europäisches Privatrecht. 2017;25:765–777. [Google Scholar]
- Engle E. Aristotelian theory and causation: the globalization of Tort Law. GNLU Law Rev. 2009;2:1–18. [Google Scholar]
- EU Commission (2019) Liability for Artificial Intelligence and other emerging digital technologies, Report from the Expert Group on Liability and New Technologies—New Technologies Formation, EU. https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608. Accessed 10 Nov 2020
- EU Commission (2020) Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee, Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020DC0064. Accessed 10 Nov 2020
- EU Independent High-Level Expert Group on Artificial Intelligence (2019a) Policy and Investment Recommendations for Trustworthy AI, 39. Available on-line: https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence. Accessed 10 Nov 2020
- EU Independent High-Level Expert Group on Artificial Intelligence (2019b) New technologies formation, liability for artificial intelligence and other emerging digital technologies. https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608. Accessed 10 Nov 2020
- EU Parliament (2017) Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics. https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html. Accessed 10 Nov 2020
- EU Parliament Committee on Legal Affairs, Draft Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 31 may 2016. https://www.europarl.europa.eu/doceo/document/JURI-PR-582443_EN.pdf. Accessed 10 Nov 2020
- EU Parliament Directorate-General for Internal Policies (Policy Department C) (2016) European Civil Law Rules in Robotics. https://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf. Accessed 10 Nov 2020
- Faggella D (2020a) AI in agriculture—present applications and impact. https://emerj.com/ai-sector-overviews/ai-agriculture-present-applications-impact/. Accessed 10 Nov 2020
- Faggella D (2020b) Machine learning in finance applications. https://emerj.com/ai-sector-overviews/machine-learning-in-finance . Accessed 10 Nov 2020
- Faure MG, Visscher LT, Weber F. Liability for unknown risk—a law and economics perspective. J Eur Tort Law. 2016;7(2):198–228. [Google Scholar]
- Floridi L, Taddeo M. Romans would have denied robots legal personhood. Nature. 2018;557:309–309. doi: 10.1038/d41586-018-05154-5. [DOI] [PubMed] [Google Scholar]
- Forslid G, Wieslander H, Bengtsson E, Wahlby C, Hirsch J-M, Stark CR, Sadanandan SK (2017) Deep convolutional neural networks for detecting cellular changes due to malignancy. In: IEEE International Conference on Computer Vision Workshops (ICCVW), pp 82–89
- Friedman JH. Data mining and statistics: what's the connection? Comput Sci Stat. 1998;29(1):3–9. [Google Scholar]
- Gaine WJ. No-fault compensation systems. BMJ. 2003;326(7397):997–998. doi: 10.1136/bmj.326.7397.997. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Giubilini A, Savulescu J. Vaccination, risks, and freedom: the seat belt analogy. Public Health Ethics. 2019;12(3):237–249. doi: 10.1093/phe/phz014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gotanda JY. Punitive damages: a comparative analysis. Columbia J Transnational Law. 2003;42:391. [Google Scholar]
- Guerra G (2018) La sicurezza degli artefatti robotici in prospettiva comparatistica, Bologna, il Mulino
- Hao K (2019) When algorithms mess up, the nearest human gets the blame. https://www.technologyreview.com/2019/05/28/65748/ai-algorithms-liability-human-blame/. Accessed 10 Nov 2020
- Haupt G (1943) Über faktische Vertragsverhältnisse, vol 124, in Leipziger Rechtswissenschaftliche Studien, Leipzig
- Helland EA, Tabarrok A. Product liability and moral hazard: evidence from general aviation. J Law Econ. 2012;55:593–630. [Google Scholar]
- Hernandez D (2014) Artificial intelligence is now telling doctors how to treat you. WIRED. https://www.wired.com/2014/06/ai-healthcare/. Accessed 10 Nov 2020
- Howell B, Kavanagh J, Marriott L. No-fault public liability insurance: evidence from New Zealand. Agenda. 2002;9(2):135–149. [Google Scholar]
- Hubbard FP. Sophisticated robots: balancing liability, regulation and innovation. Fla Law Rev. 2014;66:1803–1872. [Google Scholar]
- Huber PW, Litan RE, editors. The liability maze: the impact of liability law on safety and innovation. Washington: Brookings Institution Press; 1991. [Google Scholar]
- IEEE Standards Association (2017) Ethically aligned design, version 2. https://standards.ieee.org/news/2017/ead_v2.html. Accessed 10 Nov 2020
- Josserand L (1910) Les transports, In: Thaller E (ed) Traité général théorique et pratique de droit commercial, vol. XVIII, Paris
- Kaplan A, Haenlein M. Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus Horiz. 2019;62:15–25. [Google Scholar]
- Karnow CEA (2016) The application of traditional tort theory to embodied machine intelligence. In: Calo R, Froomkin AM, Kerr I (eds) Robot law, Cheltenham, Edward Elgar, pp 51–77
- Kizer KW, Blum LN (2005) Safe practices for better health care. In: Henriksen K, Battles JB, Marks ES (eds) Agency for Healthcare Research and Quality (US), Rockville (MD), Advances in patient safety: from research to implementation, vol IV, programs, tools, and products. https://www.ncbi.nlm.nih.gov/books/NBK20613/. Accessed 10 Nov 2020
- Koops B-J, Hildebrandt M, Jaquet-Chiffelle D-O. Bridging the accountability gap: rights for new entities in the information society? Minn J L Sci Tech. 2010;11(2):497–561. [Google Scholar]
- Koza JR, Bennett FH, Andre D, Keane MA. Automated design of both the topology and sizing of analog electrical circuits using genetic programming. Artif Intell Des. 1996;96:151–170. [Google Scholar]
- Kurzweil R. The singularity is near. New York: Viking Penguin; 2005. [Google Scholar]
- Leenes R, Palmerini E, Koops B-J, Bertolini A, Salvini P, Lucivero F. Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Law Innov Technol. 2017;9(1):1–44. [Google Scholar]
- Lessig L. The law of the horse: what cyberlaw might teach. Harv Law Rev. 1999;113:501–549. [Google Scholar]
- Lindley DV. Understanding uncertainty. Hoboken: Wiley; 2006. [Google Scholar]
- Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88. doi: 10.1016/j.media.2017.07.005. [DOI] [PubMed] [Google Scholar]
- Looker C, Kelly H. No-fault compensation following adverse events attributed to vaccination: a review of international programmes. Bull WHO. 2011;89:371–378. doi: 10.2471/BLT.10.081901. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lunney M, Oliphant K. Tort Law Text and Materials. Oxford: Oxford University Press; 2000. [Google Scholar]
- Luntz H (2010) Torts and insurance: the effect on deterrence (conference paper). https://perma.cc/GH6A-2JG9. Accessed 10 Nov 2020
- Magrani E. New perspectives on ethics and the laws of artificial intelligence. Internet Policy Rev. 2019;8(3):1–19. [Google Scholar]
- Marchisio E. Medical civil liability without deterrence: preliminary remarks for future research. J Civil Law Stud. 2020;13(1):87–118. [Google Scholar]
- Martín-Casals M, editor. The development of liability in relation to technological change. Cambridge: Cambridge University Press; 2010. [Google Scholar]
- Mazeaud H, Tunc L (1957) Traité théorique et pratique de la responsabilité civile délictuelle et contractuelle, Paris, Editions Montchrestien, V ed.
- Mello MM, Chandra A, Gawande AA, Studdert DM. National costs of the medical liability system. Health Aff. 2010;29(9):1569–1577. doi: 10.1377/hlthaff.2009.0807. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mello MM, Kachalia A, Studdert DM (2011) Administrative compensation for medical injuries: lessons from three foreign systems, New York, Commonwealth Fund. https://www.commonwealthfund.org/publications/issue-briefs/2011/jul/administrative-compensation-medical-injuries-lessons-three. Accessed 10 Nov 2020. [PubMed]
- Metz R (2019) This AI is so good at writing that its creators won't let you use it. CNN. https://edition.cnn.com/2019/02/18/tech/dangerous-ai-text-generator/index.html. Accessed 10 Nov 2020
- Meyer MD (2007) Artificial intelligence in transportation information for application. Transportation Research Circular. http://onlinepubs.trb.org/onlinepubs/circulars/ec113.pdf. Accessed 10 Nov 2020
- Mitchell T. Machine learning. New York: McGraw Hill; 1997. [Google Scholar]
- Montagnani ML, Cavallo M (2020) Liability and emerging digital technologies: an EU perspective. https://www.academia.edu/43696325/Liability_and_emerging_digital_technologies_an_EU_perspective. Accessed 10 Nov 2020
- Montinaro R. Dubbio scientifico e responsabilità civile. Giuffrè: Milano; 2012. [Google Scholar]
- Morgan J. Torts and technology. In: Brownsword R, Scotford E, Yeung K, editors. The Oxford handbook of law, regulation and technology. Oxford: Oxford University Press; 2017. pp. 522–545. [Google Scholar]
- OECD (2006) Medical malpractice. Prevention, insurance and coverage options, Policy Issues in Insurance n. 11
- OECD (2019) Artificial intelligence in society, 121. 10.1787/eedfee77-en. Accessed 10 Nov 2020
- Palmerini E, Bertolini A. Liability and risk management in robotics. In: Schulze R, Staudenmayer D, editors. Digital revolution: challenges for contract law in practice. Nomos: Baden-Baden; 2016. pp. 225–260. [Google Scholar]
- Parchomovsky G, Stein A. Torts and innovation. Mich Law Rev. 2008;107:285–315. [PubMed] [Google Scholar]
- Pellegatta S (2019) Autonomous driving and civil liability: the Italian perspective. Rivista di Diritto dell'Economia, dei Trasporti e dell'Ambiente 135–161
- Policy Department for Citizens’ Rights and Constitutional Affairs (2020) Directorate-general for internal policies, artificial intelligence and civil liability. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf. Accessed 10 Nov 2020
- Popper A (2011) In defense of deterrence, articles in law reviews & other academic journals, Paper 294. http://digitalcommons.wcl.american.edu/facsch_lawrev/294. Accessed 10 Nov 2020
- PwC (2017) What doctor? Why AI and robotics will define New Health. https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/ai-robotics-new-health.pdf. Accessed 10 Nov 2020
- Reason J. The contribution of latent human failures to the breakdown of complex systems. Philos Trans R Soc Lond Ser B. 1990;327:475–484. doi: 10.1098/rstb.1990.0090. [DOI] [PubMed] [Google Scholar]
- Ruffolo U. Per i fondamenti di un diritto della robotica self-learning; dalla machinery produttiva all'auto driverless: verso una "responsabilità da algoritmo"? In: Ruffolo U, editor. Intelligenza artificiale e responsabilità. Giuffrè: Milano; 2017. pp. 1–30. [Google Scholar]
- Russell S, Norvig P. Artificial intelligence: a modern approach. Harlow: Pearson College Div; 2010. [Google Scholar]
- Savatier R (1945) Traité de la responsabilité civile en droit française civil, administratif, professionnel, procédural, Paris, Librairie générale de Droit et de Jurisprudence, II ed., vol I
- Schaerer E, Kelley R, Nicolescu M (2009) Robots as animals: a framework for liability and responsibility in human-robot interactions. In: Paper presented at the XVIII IEEE international symposium on robot and human interactive communication, Toyama, Japan, 27 September–2 October 2009. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2271466. Accessed 10 Nov 2020
- Scherer MU. Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harv J Law Tech. 2016;29:353–400. [Google Scholar]
- Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85–117. doi: 10.1016/j.neunet.2014.09.003. [DOI] [PubMed] [Google Scholar]
- Schönberger D. Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications. Int J Law Inf Technol. 2019;27(2):171–203. [Google Scholar]
- Shavell S (2000) On the Social Function and the Regulation of Liability Insurance, 25 Geneva Papers on Risk and Ins.—Issues and Practice 166. Available on-line: https://perma.cc/RN6A-TE7Z. Accessed 10 Nov 2020
- Sheehan B, Murphy F, Mullins M, Ryan C (2018) Connected and autonomous vehicles: a cyber-risk classification framework. Transportation Research Part A: Policy and Practice. https://www.researchgate.net/publication/328815393_Connected_and_autonomous_vehicles_A_cyber-risk_classification_framework. Accessed 10 Nov 2020
- Sigmoidal (2017) Artificial intelligence and machine learning for healthcare. https://sigmoidal.io/artificial-intelligence-and-machine-learning-for-healthcare/. Accessed 10 Nov 2020
- Solaiman SM. Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artif Intell Law. 2017;25(2):155–179. [Google Scholar]
- Sperl H. Über das Schadenersatzrecht nach dem deutschen bürgerlichen Gesetzbuche. Wien: Manz Verlag; 1902. [Google Scholar]
- Stauch M (2008) The law of medical negligence in England and Germany: a comparative analysis. Oxford and Portland (Oregon), Hart Publishing
- Stradella E. Approaches for regulating roboting technologies: lessons learned and concluding remarks. In: Palmerini E, Stradella E, editors. Law and technology. The challenge of regulating technological development. Pisa: Pisa University Press; 2013. pp. 335–357. [Google Scholar]
- Surden H. Machine learning and law. Wash law rev. 2014;89:87–115. [Google Scholar]
- Taylor S (2015) Differing cultures of civil liability. In: Medical Accident Liability and Redress in English and French Law. Cambridge University Press, Cambridge
- U.S. Army Research Laboratory (2018) Army researchers develop new algorithms to train robots. EurekAlert! https://www.eurekalert.org/pub_releases/2018-02/uarl-ard020218.php. Accessed 10 Nov 2020
- U.S. Congress, Office of Technology Assessment (1994) Defensive Medicine and Medical Malpractice, OTA-H-602, Washington, DC, U.S. Government Printing Office, 1
- US Department of Transportation (2017) 2016 fatal motor vehicle crashes: overview. in traffic safety facts research note. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812456. Accessed 10 Nov 2020
- US Department of Transportation, NHTSA (2017) Automated driving systems: a vision for safety 2.0. https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf. Accessed 10 Nov 2020
- US Department of Transportation (2018) Preparing for the future of transportation: automated vehicles 3.0 (AV 3.0). https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf. Accessed 10 Nov 2020
- Vandersteegen T, Marneffe W, Cleemput I, Vereeck L. The impact of no-fault compensation on health care expenditures: an empirical study of OECD countries. Health Policy. 2015;119:367–374. doi: 10.1016/j.healthpol.2014.09.010. [DOI] [PubMed] [Google Scholar]
- Vanleenhove C. Punitive damages and European Law: Quo Vademus? In: Meurkens L, Nordin E, editors. The power of punitive damages—is Europe missing out? Intersentia: Cambridge-Antwerp-Portland; 2012. pp. 337–353. [Google Scholar]
- Virk GS. The role of standardisation in the regulation of robotic technologies. In: Palmerini E, Stradella E, editors. Law and Technology. The challenge of regulating technological development. Pisa: Pisa University Press; 2013. pp. 311–334. [Google Scholar]
- Viscusi WK (1989) Toward a diminished role for tort liability: social insurance, Government regulation, and contemporary risks to health and safety. Yale J. on Reg. 6. https://digitalcommons.law.yale.edu/yjreg/vol6/iss1/3. Accessed 10 Nov 2020
- Viscusi WK, Hersh J (2013) Assessing the insurance role of tort liability after Calabresi. Vanderbilt Law and Economics Research Paper n. 12–35. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2189090. Accessed 10 Nov 2020
- Viscusi WK, Moore MJ. Rationalizing the relationship between product liability and innovation. In: Schuck PH, editor. Tort law and the public interest. Competition, innovation and consumer welfare. New York: Norton; 1991. pp. 105–150. [Google Scholar]
- Vladeck DC. Machines without principals: liability rules and artificial intelligence. Washington Law Rev. 2014;89:117–150. [Google Scholar]
- von Jhering R. Das Schuldmoment im römischen Privatrecht. Giessen: Brühl; 1867. [Google Scholar]
- Wagner G (2006) Tort Law and Liability Insurance, 31 Geneva Papers on Risk and Ins.—Issues and Practice 277. https://perma.cc/U4FK-LCE9. Accessed 10 Nov 2020
- Wagner G. Roboter als Haftungssubjekte? Konturen eines Haftungsrechts für autonome Systeme. In: Faust F, Schäfer H-B, editors. Zivilrechtliche und rechtsökonomische Probleme des Internets und der künstlichen Intelligenz. Mohr Siebeck: Tübingen; 2019. pp. 1–40. [Google Scholar]
- Wagner G. Robot inc: personhood for autonomous systems. Fordham Law Rev. 2019;88:591–612. [Google Scholar]
- Wagner G (2018) Robot liability. Münster Colloquium on EU Law and Digital Economy, Liability for Robotics and the Internet of Things, 12.3.2018. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3198764. Accessed 10 Nov 2020
- Wallis K. New Zealand’s 2005 ‘no-fault’ compensation reforms and medical professional accountability for harm. N Z Med J. 2013;126(1371):33–44. [PubMed] [Google Scholar]
- Welsch D, Behrmann E (2018) Who’s winning the self-driving car race? Bloomberg, 7 May. https://www.bloomberg.com/news/features/2018-05-07/who-s-winning-the-selfdriving-car-race. Accessed 10 Nov 2020
- Williams H (2016) AI online publishing service Echobox closes $3.4m in funding. https://startups.co.uk/ai-publishing-service-echobox-closes-3-4m-in-funding/. Accessed 10 Nov 2020
- World Health Organisation (2009) State of the world’s vaccines and immunization, III ed., Geneva, WHO. http://whqlibdoc.who.int/publications/2009/9789241563864_eng.pdf. Accessed 10 Nov 2020