Projects

Explainability Lab
Explainability is a critical quality requirement for AI systems. It is threatened when systems are too complex and dynamic, and it may be ensured through interpretable design or through post-hoc explanation. Philosophical work is required, however, to understand what this actually means.
Trustworthy AI
Trustworthiness is recognized as a key quality requirement for AI systems as they become increasingly powerful and prevalent. However, trustworthiness depends on properties such as transparency, fairness, reliability, and robustness, many of which remain poorly understood and difficult to implement.
Value Alignment
Digital technologies offer great potential to facilitate better decisions and outcomes, but they also raise ethical challenges and can undercut important human values.
AI in Education
Recent progress in artificial intelligence presents a unique opportunity for higher education, but also poses challenges.