Abstract
Deontic explanations answer why-questions concerning agents’ obligations and permissions. Normative systems are notoriously conflict-sensitive, making contrastive explanations pressing: “Why am I obliged to do 𝜙, despite my (seemingly) conflicting obligation to do 𝜓?” In this paper, we develop a model of contrastive explanatory dialogues for the well-established defeasible reasoning formalism Input/Output logic. Our model distinguishes between successful, semi-successful, and unsuccessful deontic dialogues. We prove that the credulous and skeptical (under shared reasons) entailment relations of Input/Output logic can be characterized in formal argumentation using preferred and grounded semantics, respectively. This characterization allows us to leverage known results for dialogue models of the latter two semantics. Since this work is the first of its kind, we discuss five key challenges for deontic explanations through dialogue.
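To make the grounded semantics mentioned above concrete, here is a minimal illustrative sketch, not taken from the paper: it computes the grounded extension of a tiny abstract argumentation framework as the least fixed point of the characteristic function. The example arguments and attack relation (a conflicting obligation defeated by a priority argument) are purely hypothetical.

```python
# Illustrative sketch (assumed example, not the paper's construction):
# the grounded extension of an abstract argumentation framework,
# computed as the least fixed point of the characteristic function
# F(S) = {a | every attacker of a is counter-attacked by some member of S}.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks)."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

    def defended(a, s):
        # a is defended by s if each attacker of a is attacked by some d in s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        next_s = {a for a in arguments if defended(a, s)}
        if next_s == s:
            return s
        s = next_s

# Hypothetical toy framework: "ob_phi" (the obligation to do phi) is attacked
# by a conflicting obligation "ob_psi", which is in turn defeated by a
# "priority" argument, so "ob_phi" is reinstated in the grounded extension.
arguments = {"ob_phi", "ob_psi", "priority"}
attacks = {("ob_psi", "ob_phi"), ("priority", "ob_psi")}
print(grounded_extension(arguments, attacks))  # {'priority', 'ob_phi'}
```

In such a toy case the grounded and preferred extensions coincide; the distinction between skeptical and credulous entailment only becomes visible once the framework contains genuine, unresolved conflicts.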
Publication
ArgXAI-24: 2nd International Workshop on Argumentation for eXplainable AI
Assistant Professor (Tenure Track)
My core research interests are in logical and argumentative perspectives on normative reasoning. This involves the investigation of problems in logic, AI, and philosophy. It includes the study of norm explanations in AI, the logical analysis of meta-ethical principles in deontic agency logics, proof-theoretic approaches for nonmonotonic normative reasoning, and argumentative characterizations of defeasible deontic logic.
Professor of Logic in Philosophy and Artificial Intelligence
Christian is a full professor of logic in philosophy and artificial intelligence at the Institute for Philosophy II, Ruhr University Bochum. He is an expert on nonmonotonic logic and logical argumentation. Among other things, he has been investigating nonmonotonic approaches to handling deontic conflicts, deontic detachment principles, and proof-theoretic approaches in deontic logic.