Trustworthiness is a quality requirement for increasingly powerful and prominent AI systems. It depends on a variety of properties, such as transparency, fairness, reliability, and robustness, many of which remain poorly understood and difficult to implement. For example, it remains unclear to what extent transparency can actually be achieved in high-dimensional nonlinear systems, what fairness actually consists in, and how we can assess the reliability of AI-generated content. To a large extent, these are philosophical questions that require close analysis of the relevant concepts, as well as a realistic assessment of current technical capabilities against normative and societal constraints.
Transparency: ECPAI members Carlos Zednik, Philippe Verreault-Julien, Céline Budding, and Yeji Streppel are engaged in several projects to specify norms and standards of explainability and transparency. To this end, they actively collaborate with AI researchers and industry representatives to develop, implement, and evaluate methods in explainable AI in domains such as medical decision-making, automated driving, and language production. They also participate in national and international efforts to promote explainability and transparency through regulatory means such as standardization.
Reliability: Elizabeth O’Neill aims to uncover and characterize the criteria that can be used to judge whether an AI system’s recommendations should be considered reliable. In particular, her NWO-funded research projects consider the reliability of AI systems capable of moral reasoning.
Fairness & Trust: ECPAI members Philip Nickel and Patrik Hummel investigate the conditions for trustworthiness in data-driven decision-making, with a particular focus on medical contexts. To what extent can and should AI systems be trusted to make high-stakes medical decisions? What role does fairness play in medical decision-making and in access to state-of-the-art AI?
- NWO LTP ROBUST project on explainability of machine vision for self-driving cars
- NWO XS grant on “When Computers Join the Moral Conversation”
- Carlos Zednik and Yeji Streppel contributed to technical standard DIN SPEC 92001-3 - Explainability
- Elizabeth O’Neill developed a YouTube video on “How to Evaluate the Reliability of a Source?”
- Vlasta Sikimić participated in the governmental advisory board creating the Ethical Guidelines for Safe and Reliable Use of AI in the Republic of Serbia.
- Céline Budding
- Elizabeth O’Neill
- Philip J. Nickel
- Yeji Streppel
- Philippe Verreault-Julien
- Carlos Zednik