Cognitive Explainability


When AI systems are opaque, explanations can be used to help render them transparent. But what form should such explanations take? Researchers in this cluster explore links between artificial intelligence and human cognition to transfer methodological and theoretical insights from psychology and neuroscience to advance the development and use of explainable AI. Methods from philosophy of science are used to evaluate the explanatory credentials of different XAI tools; insights from philosophy of mind and theoretical psychology are used to gauge the applicability of mental constructs to symbolic and data-driven AI systems; and considerations from ethics and policy are brought in to ensure that the tools being developed align with societal norms and values. Current application areas include (but are not limited to) automotive vision, natural language processing, and educational support services.

The cluster aims not just to publish philosophical research, but also to directly advance empirical research, technological development, and AI governance efforts. While empirical research on human cognition is a source of inspiration, the research conducted in this cluster also feeds back to inform theory and methodology in psychology, neuroscience, and cognitive science more generally. Through active collaborations with researchers at e.g. TU/e's Mobile Perceptual Systems lab, researchers in this cluster also contribute to the transparency and performance of state-of-the-art AI systems. Finally, through active participation in national and international standardization efforts, members of this cluster ensure that their research has a direct real-world impact.

Funded Projects

PhD Projects

  • Tacit knowledge in large language models (C. Budding)
  • Schema-based representations for automotive vision (M. Ghezzi)
  • Reasoning processes in LLMs and AI agents (Z. Kabadere)

Associated Researchers

  • Carlos Zednik
  • Céline Budding
  • Michela Ghezzi
  • Zeynep Kabadere