Vicarious liability: a solution to a problem of AI responsibility?

Daniela Glavaničová, Matteo Pascucci

Research output: Contribution to journal › Article › peer-review

Abstract

Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming there is a unique responsibility gap, to positing several distinct responsibility gaps, to denying any gap at all. In a nutshell, the problem is as follows: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of responsibility for this wrong. In this article, we focus on a particular (aspect of the) AI responsibility gap: it seems fitting that someone should bear the legal consequences in scenarios involving AI machines with design defects; however, there seems to be no such fitting bearer. We approach this problem from the legal perspective and suggest vicarious liability of AI manufacturers as a solution. Our proposal comes in two variants: the first has a narrower range of application but can be easily integrated into current legal frameworks; the second requires a revision of current legal frameworks but has a wider range of application. The latter variant employs a broadened account of vicarious liability. We emphasise the strengths of the two variants and finally highlight how vicarious liability offers important insights for addressing a moral AI responsibility gap.

Original language: English
Article number: 28
Journal: Ethics and Information Technology
Volume: 24
Issue number: 3
DOIs
State: Published - Sep 2022
Externally published: Yes

Keywords

  • AI responsibility
  • Regulation
  • Responsibility gap
  • Vicarious liability
