Eindhoven Center for the Philosophy of AI
Inductive Risk, Understanding, and Opaque Machine Learning Models
Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In …
Emily Sullivan
Data Justice and Data Solidarity
Datafication shapes and gradually transforms societies. Given this impact, issues of justice around data-driven practices have received …
Matthias Braun, Patrik Hummel
Scientific Exploration and Explainable Artificial Intelligence
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are …
Carlos Zednik, Hannes Boelsen
Trust in medical artificial intelligence: a discretionary account
This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI …
Philip J. Nickel
Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence
Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how …
Carlos Zednik
Understanding from Machine Learning Models
Emily Sullivan
Digital Objects, Digital Subjects and Digital Societies: Deontology in the Age of Digitalization
Digitalization affects the relation between human agents and technological objects. This paper looks at digital behavior change …
Andreas Spahn
Cite
Project
DOI
Cite
×