This is the first of two blog posts from staff members of ALTI at VU-Amsterdam. Both posts are written by Dr. Silvia de Conca, Ioana Bratu, Dr. Mark Leiser, and Zac Cooper. The second post on the proposed liability regime for AI can be read here.
On September 28, 2022, the EU Commission published the first draft of its package to harmonise and reform liability rules in relation to damages caused by AI. The package also deals with liability issues in relation to the circular economy and pharmaceutical products, but for the purposes of this overview, we will primarily focus on those provisions regulating damages deriving from AI systems, smart and digital products, and software in general.
The package is the result of roughly four years of work by the EU Parliament and the Commission, drawing on reports from internal and external experts (such as the Expert Group on Liability and New Technologies from DG Justice) and on consultations with stakeholders.
The package is composed of two drafts: the Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), and the Proposal for a Directive of the European Parliament and of the Council on liability for defective products, the latter repealing Directive 85/374/EEC (Product Liability Directive or PLD). The AI Liability Directive and the new PLD should be seen as complementary to the proposal for the AI Act, which regulates the design, implementation, and use of AI systems.
You might wonder: what is so special about damages caused by AI that reform is needed at the EU level? After all, all Member States have their own civil code provisions dealing with contractual and extra-contractual liability.
The short answer is that an intervention was necessary to avoid fragmentation of the protection granted to damaged individuals, due to something called liability gaps. Before delving into the content of the two proposals, let us briefly outline what liability gaps are, and why AI systems are particularly prone to generating them. If you are curious about liability gaps and want a more detailed account, you can find it in this dedicated chapter written by Silvia in 2021.
Liability gaps and the features of AI that cause them
Legal liability regulates when, how, and on what basis an injured or damaged party can obtain redress, and who has the obligation to provide such redress. In private law, this obligation usually derives from a breach of a contract (contractual liability) or from a behaviour that breaches a certain duty of care (this behaviour is indicated with ‘fault’ in extra-contractual or tort liability). There is also a type of extra-contractual liability, called strict or no-fault liability, that is ascribed to an actor regardless of their actions or omissions, simply because the actor engaged in a certain (economic) activity and the output of such activity caused damage. Strict liability is established by the law, as in the case of, for instance, driving a car. While contractual and extra-contractual liability are usually regulated at the national level, the EU intervened in 1985, establishing a form of strict liability for manufacturers and distributors for damages caused by defective products (so-called product liability, regulated by the PLD).
There are liability gaps if, in the presence of damages, the allocation of legal liability to the parties involved is disrupted by certain circumstances. Due to the disruption, the existing contractual or extra-contractual liability regimes do not provide satisfactory tools to remedy the harms suffered, and the damaged parties might be unable to obtain redress. In extreme cases, it might simply be impossible to apply the existing liability rules.
Typically, the allocation of either contractual or extra-contractual liability depends on some or all of the following elements:
- the existence of the damage.
- any relationship existing among the parties involved (i.e., whether there is a contract for, you guessed it, contractual liability), or the conduct that the parties were supposed to adhere to according to the law (i.e., a general duty of care for extra-contractual liability, or professional care).
- the actual conduct of the parties.
- the causal link between the conduct and the damage.
- the existence of defects if the damage derives from a product.
- the nature of the damage (for instance, if it is susceptible to being calculated in monetary value, if it is psychological, such as with stress, or if it is immaterial, as with discrimination).
Certain characteristics of AI challenge the requirements traditionally established for both contractual and extra-contractual liability, or make their application a diabolical task, increasing the risk of liability gaps. The main challenging factors are:
- AI systems are complex. AI systems are composed of several hardware and software parts interacting with each other. This means that these hardware and software elements sometimes interact in unpredictable ways, and that many actors are involved (the manufacturers of different components, intermediaries, and developers of connected applications or products). These factors make it difficult to establish who is obliged to provide redress: is it the manufacturer of the final product? And what if the cause of the damage lies in a component, or in an unexpected interaction of components? What is the relationship between the parties? The damaged individual might have a contract in place with one manufacturer, but have no relationship with the providers of components, and so on.
- AI systems are dynamic. Algorithms learn from data sets, changing over time. Even simple software can be modified remotely by developers through updates, radically changing the final product and sometimes triggering new or unexpected vulnerabilities or problems. Traditional liability regimes were created for static products that might degrade over time, but do not update or learn after having been purchased by the final users. What kind of liability remains for manufacturers if the AI system learns something or changes after it has been sold to the final user?
- AI systems are opaque. This refers to the so-called black box: it is sometimes impossible to understand what reasoning led an AI system to a certain outcome, and reconstructing it might require hiring experts and accessing proprietary information. This affects the possibility for a damaged party to prove the fault of the manufacturer or of a party using an AI system. Yet in traditional extra-contractual regimes, the damaged party bears the burden of proving the damage, the conduct of the party, and the causal link between the two.
In the absence of a specific provision establishing it, strict liability also cannot be applied to manufacturers of AI software or of products featuring AI. Furthermore, up until this reform, it was not clear whether software and other intangible digital products could be considered ‘products’ at all under the PLD. All these factors contributed to the emergence of liability gaps, specifically regarding:
- Which manufacturers are liable in case of damages caused by complex devices or multiple products interacting.
- Where extra-contractual liability applies, whether the damaged party can meet the burden of proof without access to the proprietary technology and/or to very expensive experts.
- Whether software is a product regulated by the PLD, for which strict liability would apply.
- To what extent the manufacturer is liable for damage caused by self-learning AI software, and for products causing damage after updates.
- What kinds of damages can be redressed, in particular regarding immaterial damages. The risk of discrimination connected to AI has already materialized in several instances, for example with women being denied credit cards or given lower credit caps despite credit scores equal to or better than those of male customers.
These liability gaps caught the attention of several EU Member States and national courts, faced with the task of finding a solution within their national laws. This contributed to creating legal uncertainty for both damaged parties and manufacturers/users of AI systems.
Let’s see what solutions the EU Commission is proposing, starting with the new regime for product liability.
To product or not to product, that is the question! The reform of the Product Liability Directive
The Commission proposes a new Product Liability Directive that will repeal and entirely replace the old Directive 85/374/EEC. As explained in the accompanying memorandum, Directive 85/374/EEC had become obsolete in a number of respects, to the point that simply amending it would have required too many changes. Repealing it and proposing a new PLD seemed, therefore, the best option.
The purposes of Directive 85/374/EEC were to harmonize strict liability for damages caused by defective products, to prevent market distortions, to favour the free movement of goods, and to ensure a higher degree of protection for damaged individuals. Between 1985 and today, however, much has changed. The proliferation of software, digital products and, more recently, smart devices has created uncertainty concerning the nature of software (e.g., is it a product?) and its role when combined with hardware. The growth of globalized trade has made it possible to purchase goods from manufacturers based and operating completely outside the EU, leaving individuals unable to bring an action in the jurisdiction where the defective goods are manufactured.
The proposal embraces the purposes of the old Directive 85/374/EEC, seeking to:
- ensure that individuals damaged by digital or smart goods obtain the same level of protection as individuals damaged by more traditional, tangible goods.
- ease the burden of proof for damaged individuals.
- set out rules to identify which businesses are liable for the damages if the original manufacturer is neither based nor operating within the EU.
- update product liability based on the case law and on the other Directives and Regulations developed in the past decades.
The proposal for a new PLD can be divided into four main parts: Articles 1 to 4 set out the scope, subject matter, purposes, and definitions; Articles 5 to 7 and 11 to 13 contain the substantive provisions concerning the rights of the damaged party and the obligations of the economic operators; Articles 8 to 10, and 14, establish procedural rules for the national courts. The final part of the proposal deals with its application and transposition by Member States, its periodic review, and other transitory provisions. Below are the most significant novelties in the proposal regarding AI and digital technologies.
Substantive rules: definitions. In the proposal for a new PLD, products include all movables – whether self-standing or incorporated into other movables or immovables – as was the case under the pre-existing Directive 85/374/EEC. To these, the new proposal adds ‘digital manufacturing files’ (i.e., the templates used for 3D-printing objects) and software (Article 4.1(1)-(2)). By software, the Commission refers not only to computer programs, but also to apps, AI systems (as defined in the AI Act), operating systems, and firmware (Recitals 12 to 16). By including software, the proposal ends a decades-long debate about the nature of software. This is a radical change in the product liability discipline, and a very welcome one. The PLD proposal excludes stand-alone digital services from its scope; however, if a digital service is embedded in or inter-connected with a product in such a manner that, without the digital service, the product would not function (completely or in part), then the PLD applies.
These provisions are in line with the suggestions of civil society, consumer protection associations, and many experts. They are based on a simple yet powerful rationale: to reflect the actual risks associated with digital and intangible products and their diffusion and permeation into daily life, and to ensure the equal treatment of damaged parties regardless of whether the damage derives from a ‘dumb’ blender or a smart one.
Two more definitions also catch the eye. First, the damages within the scope of the new PLD now include, beyond death, physical injury, and damage to property, ‘medically recognized harm to psychological health’ (without prejudice to national rules regulating the redress of immaterial damages, Recitals 17 and 18), as well as the loss or corruption of data (as defined in the Data Governance Act) (Art. 4.1(6)). With the latter, the Commission acknowledges the economic importance of data – as well as of other intangible assets – and entitles individuals to indemnification for its loss or corruption, providing another very welcome novelty.
Second, the range of economic operators consumers can hold liable is expanded. Besides manufacturers, authorized representatives, and importers, the new PLD also covers ‘fulfilment service provider[s]’, that is, persons “offering, in the course of commercial activity, at least two of the following services: warehousing, packaging, addressing and dispatching of a product, without having ownership of the product”, excluding couriers and postal services, as well as ‘online platforms’ (with the exemption for mere conduits/intermediaries provided in the DSA). The liability of these operators is, however, subsidiary: fulfilment service providers are liable only if neither the manufacturer of the product or of one of its components, nor the authorized representative, is established in the EU. Online platforms are liable only if it is impossible to identify the manufacturer, the authorized representative, or the fulfilment service provider (Articles 11-13, Recitals 26-28). Where multiple economic operators are liable (for example, the manufacturer of the final product and the manufacturer of one of its components), joint and several liability applies (Art. 7).
Substantive rules: defectiveness. A product is defective under the proposal for a new PLD when it ‘does not provide the safety which the public at large is entitled to expect’ (Art. 6). The assessment must be made considering circumstances such as the presentation of the product, instructions, and reasonably foreseeable use. Art. 6 adds circumstances specifically applicable to software, AI systems, and digital products:
(c) the effect on the product of any ability to continue to learn after deployment.
(d) the effect on the product of other products that can reasonably be expected to be used together with the product.
(f) product safety requirements, including safety-relevant cybersecurity requirements.
These new criteria to determine whether a product is defective acknowledge the features causing liability gaps: complexity, opacity, and dynamic learning.
This novelty also settles a long-standing debate among experts concerning whether machine-learning software that learns something and, because of said learning, causes damage can be considered defective. The question revolved around the idea that software learning something undesirable is not technically malfunctioning: it is functioning correctly (learning), but the output still causes damage. The solution proposed by the new PLD takes a pragmatic approach, cutting to the chase: it does not matter whether the learning process functioned as intended if the public expects the software to perform in a certain safe way.
At the same time, this provision can be a double-edged sword. The possibility of AI systems accidentally producing undesired outputs is well known and documented, with several cases of accidents or discrimination. Does this mean that, in interpreting Art. 6, national courts might consider certain damages to be reasonably expected? This is, after all, what happened in the Italian case Stubing v. Telefunken (of 1987, at the dawn of the European product liability regime). In that case, the damage claim of the owner of a television that short-circuited and set the apartment on fire was rejected, because according to the Court it was common knowledge that electric appliances could short-circuit, and the owner had not turned the television completely off before leaving the house. Today, a similar decision would go against the sense of what is just and right, but in the late 1970s/early 1980s certain domestic appliances were indeed less safe and still at a very developmental stage, much like AI today. How Art. 6 will play out concretely therefore remains to be seen, and might depend on the technological understanding and the level of expertise (or access to experts) of national courts.
Procedural rules: disclosure of evidence. The new PLD proposal empowers national courts to demand evidence disclosure by economic operators, and to ease the burden of proof on the damaged party. These interventions are very welcome and promise to concretely improve the access of damaged parties to necessary documentation and evidence. They represent a significant step forward in closing or at least in mitigating liability gaps. Let us examine how in more detail.
Under Art. 8, national courts can order economic operators (defendants) to produce relevant evidence upon request of the injured individual (claimant), where the latter has presented sufficient facts and evidence in support of the request. The disclosure should, naturally, ensure the protection of trade secrets and other proprietary information or IPR. This provision aims at balancing the asymmetry of information between economic operators and the damaged party concerning the design and functioning of the product. It helps claimants with the burden of proof (‘the injured person claiming compensation for damage caused by a defective product should bear the burden of proving the damage, the defectiveness of a product and the causal link between the two’, Recital 30) without incurring unreasonable expenses or other obstacles (such as IPR protection).
Art. 9 further eases the burden of proof for the damaged party, by creating a presumption of defectiveness if:
(a) the defendant did not comply with the obligation to disclose relevant evidence under the abovementioned Art. 8;
(b) the claimant establishes that the product did not comply with mandatory safety requirements laid down by law and intended to protect against the risk of the damage that occurred; or
(c) the claimant establishes that the damage was caused by an obvious malfunction of the product during normal use or under ordinary circumstances.
The presumption also extends to the causal link where the damage is known to be a typical consequence of a certain defect.
Art. 9 also empowers national courts to presume the defectiveness or the causal link where these are excessively difficult for the claimant to prove due to technical or scientific complexity. In such an instance, the claimant still needs to prove that the product contributed to the damage, and the likelihood of the defect or of the causal link (or both). The technical or scientific complexity and the excessive difficulty are to be assessed by national courts on a case-by-case basis (Recitals 34-35). The defendant can contest the existence of excessive difficulty, and the presumptions are not absolute and can be rebutted.
To further balance the interests of defendants and claimants, Art. 10 establishes the conditions that mitigate or exclude the liability of the manufacturer or other economic operators. Particularly interesting for this analysis is that the manufacturer is not exempted from liability if the defectiveness of the product is due to:
(a) a digital service integrated into the product, without which the product does not function properly.
(b) a software component, including software updates or upgrades; or
(c) the lack of software updates or upgrades necessary to maintain safety (unless the damaged party failed to install the update or upgrade, contrary to the manufacturer’s instructions).
With this additional provision, the Commission acknowledges the problems deriving from the dynamic nature of AI and other software/digital products. It embraces the fact that products with software or digital components can be modified by the manufacturer after they have been purchased by the users (unlike traditional products), and that an update can change the product or introduce a defect.
Part 2 of this analysis of the proposed AI liability regime can be read here.
Photo by Gertrūda Valasevičiūtė