Liability for AI. Public policy considerations
Liability for AI is the subject of a lively debate. Whether new liability rules should be introduced, and how such rules should be designed, hinges on the function of liability rules. Primarily, they create incentives for risk control, varying with their requirements – especially negligence versus strict liability. To do so, they must take into account who is actually able to exercise control. In scenarios where a clear allocation of risk control is no longer possible, social insurance might step in. This article discusses public policy considerations concerning liability for artificial intelligence (AI). It first outlines the major risks associated with current developments in information technology (IT) (1.). Second, the implications for liability law are discussed: liability rules are conceptualized as an instrument for risk control (2.). Negligence liability and strict liability serve different purposes, making strict liability the rule of choice for novel risks (3.). The key question, however, is who should be held liable (4.). Liability should follow risk control. In future scenarios where individual risk attribution is no longer feasible, social insurance might be an alternative (5.). Finally, the innovation function of liability rules is stressed, affirming that appropriate liability rules serve as a stimulus for innovation, not as an impediment (6.).