Opacity in Machine Learning

Image created with DALL·E by OpenAI (prompt: an opaque deep learning model)

Transparency is a cornerstone of the responsible use and development of artificial intelligence. Unfortunately, many state-of-the-art AI systems are notoriously opaque: it is difficult to know what these systems are actually doing, why they do what they do, and how they do it. The Explainable AI research program is dedicated to the challenge of rendering opaque AI systems transparent. However, questions remain about what exactly opacity is, which kinds of transparency are required, when, and by whom, and how such transparency can actually be achieved.

ECPAI researchers are engaged in the project of defining normative constraints on Explainable AI, and actively collaborate with partners in industry and academia to develop methods for explaining the behavior of opaque AI systems. To this end, they study typical use cases of Explainable AI, evaluate the possibilities and limits of current explanatory practices, and participate in regulatory efforts to guide their development and use.

Associated Researchers

  • Carlos Zednik
  • Céline Budding
  • Emily Sullivan (Area Lead)
  • Philippe Verreault-Julien
  • Yeji Streppel