Liability for AI. Public policy considerations

Date
2021
Editors
Authors
Zech, Herbert
Journal title
Journal ISSN
Volume title
Publisher
Abstract

Liability for AI is the subject of a lively debate. Whether new liability rules should be introduced, and how they should be designed, hinges on the function of liability rules. Mainly, they create incentives for risk control, varying with their requirements, especially negligence versus strict liability. In order to do so, they have to take into account who is actually able to exercise control. In scenarios where a clear allocation of risk control is no longer possible, social insurance might step in. This article discusses public policy considerations concerning liability for artificial intelligence (AI). It first outlines the major risks associated with current developments in information technology (IT) (1.). Second, the implications for liability law are discussed. Liability rules are conceptualized as an instrument for risk control (2.). Negligence liability and strict liability serve different purposes, making strict liability the rule of choice for novel risks (3.). The key question is, however, who should be held liable (4.). Liability should follow risk control. In future scenarios where individual risk attribution is no longer feasible, social insurance might be an alternative (5.). Finally, the innovation function of liability rules is stressed, affirming that appropriate liability rules serve as a stimulus for innovation, not as an impediment (6.).

Description
Keywords
Public International Law \ European Integration \ Political Science
Related resource
Citation
Zech, H. (2021b). Liability for AI: Public policy considerations. ERA Forum, 22(1), 147–158. https://doi.org/10.1007/s12027-020-00648-0
Collections