Workshop - AI, reasoning, and explanation


Date: Nov 19, 2025, 9:30 AM – 4:30 PM
Location: TU Eindhoven, Atlas 8.310

AI, including AI agents and LLMs, purports to reason and offers reasoning to explain and justify its judgments and decisions. In assessing this reasoning, what are the most useful or essential frames and frameworks to develop and apply? In this workshop, we provide a forum for the presentation of ongoing technical, philosophical, and ethical research on this topic. The workshop will examine issues such as how to verify that representations are appropriately related to system behavior, bias and transparency in reasoning, how to express uncertainty in reasoning, and how to assess what looks like ethical reasoning (or reasoning that must take ethical constraints into account). Reflection on frameworks for assessing automated explanations in real-world, socio-technical contexts (e.g., administrative or health contexts) is welcome.

Contact persons: Carlos Zednik, Philip J. Nickel

Program

Part I: Reasoning in AI
09:45  Gregor Betz (Karlsruhe)
10:45  Zeynep Kabadere (Eindhoven)
11:15  Berker Baheci (Eindhoven)
11:45  Discussion: Can AI Systems Reason?
12:30  Lunch

Part II: Reasons-Explanations and AI Regulation
13:30  Carlos Zednik (Eindhoven): Reasons-Explanations: Behavioral and Cognitive Approaches
14:30  Kristof Meding (Tübingen): Reasoning, Explainability, and the AI Act
15:30  Coffee
15:40  Discussion: Can and should AI system behavior be explained by appeal to reasons?
16:30  End of the Workshop
Philip J. Nickel, Associate Professor
Carlos Zednik, Assistant Professor, Co-Director