

Towards a new AI liability regime: presumptions and right of access to evidence in favour of injured parties

22 December 2022

On 28 September 2022, the EU Commission adopted a proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (the “AI Liability Directive” or the “Proposal”).

"The Commission started from the assumption that the issue of liability is central to the development and deployment of AI systems."

The Commission starts from the assumption that the issue of liability is central to the development and deployment of AI systems, and notes in its Explanatory Memorandum to the Proposal that existing national rules are not suitable for handling liability claims for damage caused by AI-based products and services, particularly in the area of fault-based liability.

“Under such rules, victims need to prove a wrongful action or omission by a person who caused the damage. The characteristics of AI, including complexity, autonomy and opacity (the so-called “black box” effect), may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim. In particular, when claiming compensation, victims could incur very high up-front costs and face significantly longer legal proceedings, compared to cases not involving AI. Victims may therefore be deterred from claiming compensation altogether”.

To address this problem and complete the legal framework outlined in the so-called Artificial Intelligence Act (“AI Act”), the Proposal introduces mechanisms that ease the evidentiary burden on claimants in liability cases involving AI. The key provisions are Articles 3 and 4, which introduce:

(i) the power for a court to order the disclosure of relevant evidence about specific high-risk AI systems suspected of having caused damage. Requests for evidence are to be addressed to the provider or the user of an AI system, as those terms are defined in the AI Act, and the court may order disclosure only to the extent necessary to sustain the claim. In assessing the proportionality of a request, the court must balance the various legitimate interests involved, including the protection of trade secrets. Member states must introduce appropriate procedural remedies for breach of disclosure orders. Where a defendant fails to comply with a disclosure order made by a national court and the evidence requested was intended to establish whether the defendant had complied with a duty of care, the court shall presume the defendant’s non-compliance with that duty of care, although the defendant may rebut that presumption (Article 3); and

(ii) a further, rebuttable presumption of a causal link between the fault of the defendant and the output produced by the AI system, or the failure of the AI system to produce an output (Article 4).


"The EU legislator is moving in the direction of risk prevention and management, especially with reference to high-risk AI systems, accompanied by a favourable attitude towards potential claimants in the event of damage caused by such systems."

The presumption under Article 4 operates when the following three conditions are met:

(a) the claimant has demonstrated, or the court has presumed pursuant to point (i) above, the fault of the defendant or of a person for whose behaviour the defendant is responsible, based on non-compliance with a duty of care laid down in EU or national law that is directly intended to protect against the damage that occurred;

(b) it is reasonably likely, in the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output; and

(c) the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.

National courts shall not apply the presumption where the defendant demonstrates that the claimant should reasonably be able to access sufficient evidence and expertise to prove the causal link, rather than just relying on the presumption.

The mandatory requirements of the AI Act only apply to high-risk AI systems. However, the Proposal also provides for claims for damages concerning other, non-high-risk AI systems. In such cases the presumption above shall only apply where the national court considers it excessively difficult for the claimant to prove the causal link.

The Proposal is not in line with the European Parliament Resolution of 20 October 2020 containing recommendations to the Commission on a civil liability regime for AI. That resolution called first and foremost for the adoption of a regulation rather than a directive, but also for the introduction of a strict liability regime for operators of high-risk systems. The latter suggestion received much criticism from scholars and operators.

The framework outlined by the Proposal, which focusses essentially on evidentiary difficulties with the intention of establishing a unified framework in the EU, leaves several critical issues unresolved. These include: (i) the difficulty of identifying the liable party in the chain of production and use of an AI system; (ii) the reliance on national notions, such as that of fault; (iii) the role of national courts, which could create inconsistencies in the single market; and (iv) the differences that may exist between legal systems in relation to the notion of compensable damage.

The regulatory framework for AI systems is still uncertain at present, just as the consequences of any damage connected to the production, marketing and use of AI systems are unclear. The EU legislator is moving in the direction of risk prevention and management, especially with reference to high-risk AI systems, accompanied by a favourable attitude towards potential claimants in the event of damage caused by such systems.

In this scenario, operators in the sector will need to monitor the progress of the proposed rules constantly, while making the necessary additions or corrections to their activities so as to adapt to the future regulatory framework and its consequences.
