
AI systems, including AI agents and LLMs, purport to reason and offer reasoning to explain and justify their judgments and decisions. In assessing this reasoning, what are the most useful or essential frames and frameworks to develop and apply? This workshop provides a forum for the presentation of ongoing technical, philosophical, and ethical research on this topic. The workshop will examine issues such as how to verify that representations are appropriately related to system behavior, bias and transparency in reasoning, how to express uncertainty in reasoning, and how to assess what looks like ethical reasoning (or reasoning that must take ethical constraints into account). Reflection on frameworks for assessing automated explanations in real-world, socio-technical contexts (e.g., administrative or health contexts) is welcome.
Contact persons: Carlos Zednik, Philip J. Nickel
| Time | Programme |
|-------|-----------|
| 09:30 | Arrival and Coffee |
| 09:40 | Welcome and Topic Introduction |
| Part I | Reasons-Explanations and AI regulation |
| 09:45 | Kristof Meding (Tübingen): Explainability and the AI Act |
| 10:30 | Carlos Zednik (Eindhoven): Reasons in AI: Behavioral and Cognitive Approaches |
| 11:30 | Coffee |
| 11:40 | Discussion: Can and should AI system behavior be explained by appeal to reasons? |
| 12:30 | Lunch |
| Part II | Reasoning in AI |
| 13:30 | Berker Bahceci (TU/e PhD): AI Ethical Reasoning |
| 14:00 | Zeynep Kabadere (TU/e PhD): Common-Sense Reasoning |
| 14:30 | Coffee |
| 14:45 | Gregor Betz (Karlsruhe): Ways of LLM Reasoning (hybrid/remote presentation) |
| 15:45 | Discussion: Can AI systems reason? |
| 16:30 | End of the Workshop |