Risk, Harm and Damage as Preset Rational Categories in AI Literature: Do We See or Think the Problem?

Authors

  • Cristina Cocito Vrije Universiteit Brussel
  • Thomas Marquenie KU Leuven
  • Paul De Hert Vrije Universiteit Brussel

Abstract

This article reflects on dominant concepts used in contemporary legal discourse, particularly the GDPR and the European Artificial Intelligence Act, to understand, identify and address the problems raised by AI systems: concepts such as risk, harm and damage. Our study questions whether these dominant concepts sufficiently capture the problems AI presents, and whether they guarantee a comprehensive approach to identifying those problems. Building on pragmatist methodologies of problem inquiry (Dewey and Bergson), we argue that while some existing conceptual paradigms may be more suitable than others, all of them are located too far ahead in the problem-inquiry process as defined by pragmatists. Existing paradigms for problem identification, anchored to preset categories of problems, risk marginalising other elements, such as feelings, concerns or other problematic issues. This study ultimately calls for further research that explores more critically how concepts such as risk, harm and damage are used in the literature to map the problems of AI systems. This gives rise to a broader call for research to identify methodologies that can pragmatically frame the challenges of AI systems in order to address the problems they raise today more comprehensively.

Published

30.12.2024

Section

Refereed Articles