The EU AI Liability Directive (AILD): Bridging Information Gaps

Authors

  • Marta Ziosi, Oxford Internet Institute
  • Jakob Mökander, Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK
  • Claudio Novelli, Department of Legal Studies, University of Bologna, Via Zamboni, 27/29, 40126, Bologna, IT
  • Federico Casolari, Department of Legal Studies, University of Bologna, Via Zamboni, 27/29, 40126, Bologna, IT
  • Mariarosaria Taddeo, Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK
  • Luciano Floridi, Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS, UK

Abstract

The proposed European AI Liability Directive (AILD) is an important step towards closing the ‘liability gap’, i.e., the difficulty in assigning responsibility for harms caused by AI systems. However, if victims are to bring liability claims, they must first have ways of knowing that they have been subject to algorithmic discrimination or other harms caused by AI systems. This ‘information gap’ must be addressed if the AILD is to meet its regulatory objectives. In this article, we argue that the current version of the AILD reduces legal fragmentation but not legal uncertainty; privileges transparency and the disclosure of evidence about high-risk systems over knowledge of harm and discrimination; and shifts the claimant’s burden from proving fault to accessing and understanding the evidence provided by the defendant. We conclude by providing four recommendations on how to improve the AILD so that it addresses both the ‘liability gap’ and the ‘information gap’.

Published

31.12.2023

Section

Commentaries