Workshop - AI, reasoning, and explanation


Date: Nov 19, 2025, 9:30 AM — 4:30 PM
Location: TU Eindhoven, Atlas 8.310

AI, including AI agents and LLMs, purports to reason and offers reasoning to explain and justify judgments and decisions. In assessing this reasoning, what are the most useful or essential frames and frameworks to develop and apply? This workshop provides a forum for presenting ongoing technical, philosophical, and ethical research on this topic. The workshop will examine issues such as how to verify that representations are appropriately related to system behavior, bias and transparency in reasoning, how to express uncertainty in reasoning, and how to assess what looks like ethical reasoning (or reasoning that must take ethical constraints into account). Reflection on frameworks for assessing automated explanations in real-world, socio-technical contexts (e.g., administrative or health contexts) is welcome.

Contact persons: Carlos Zednik, Philip J. Nickel

Program

09:30  Arrival and Coffee
09:40  Welcome and Topic Introduction

Part I: Reasons-Explanations and AI Regulation
09:45  Kristof Meding (Tübingen): Explainability and the AI Act
10:30  Carlos Zednik (Eindhoven): Reasons in AI: Behavioral and Cognitive Approaches
11:30  Coffee
11:40  Discussion: Can and should AI system behavior be explained by appeal to reasons?
12:30  Lunch

Part II: Reasoning in AI
13:30  Berker Bahceci (TU/e PhD): AI Ethical Reasoning
14:00  Zeynep Kabadere (TU/e PhD): Common-Sense Reasoning
14:30  Coffee
14:45  Gregor Betz (Karlsruhe): Ways of LLM Reasoning (hybrid/remote presentation)
15:45  Discussion: Can AI systems reason?
16:30  End of the Workshop
Philip J. Nickel
Associate Professor
Carlos Zednik
Co-Director
Assistant Professor