AI and Health

This cluster focuses on the philosophical and ethical issues in using AI in health (broadly construed). The integration of artificial intelligence (AI) into health care introduces significant ethical and philosophical challenges, as it transforms how care is delivered, decisions are made, and responsibilities are distributed. AI technologies such as diagnostic algorithms, robot-assisted surgery, and predictive analytics hold immense potential to improve patient outcomes, but they also raise questions about bias, fairness, and the human dimension of care.

One prominent ethical concern is bias and fairness. AI systems, such as those used in diagnostic imaging or disease prediction, are trained on large datasets that may not adequately represent diverse populations. For example, a widely used dermatology AI system was found to perform poorly on patients with darker skin tones because its training data predominantly featured lighter-skinned individuals. This raises questions about how equitable these technologies are and whether they might inadvertently exacerbate health disparities. Addressing this issue requires careful scrutiny of training data and ongoing evaluation of AI performance across different demographic groups.

Another ethical challenge is privacy and data security. AI systems like IBM Watson for Oncology and Google’s DeepMind rely on vast amounts of patient data to generate insights. While these data can improve diagnoses and treatment recommendations, they also pose risks of misuse or unauthorized access. For instance, when DeepMind partnered with the UK’s National Health Service (NHS) to develop an AI tool for kidney disease detection, the project faced criticism for accessing patient data without adequate consent. This incident underscores the need for robust safeguards to protect sensitive health information and to ensure that patients are fully informed about how their data are used.

Philosophically, the use of AI in health care challenges traditional notions of agency and responsibility. Robot-assisted surgical platforms such as the da Vinci Surgical System, for example, already perform complex procedures under a surgeon’s control, and increasingly autonomous systems are on the horizon. While these systems can enhance precision, they also blur the lines of accountability. If an AI-driven surgical robot makes an error that harms a patient, who should be held responsible: the manufacturer, the programmer, or the supervising surgeon? This raises profound questions about the distribution of moral and legal responsibility in a health care system increasingly reliant on autonomous technologies.

Finally, the dehumanization of care is a concern as AI takes on roles traditionally filled by humans. Chatbots like Babylon Health are used to triage patients or provide medical advice, but their lack of empathy and personal connection can leave patients feeling alienated. This raises philosophical questions about what it means to provide “care” and whether human interaction is a necessary component of healing. While AI can optimize efficiency, it cannot replicate the compassion and emotional understanding that many patients value in their interactions with health care providers.

Research Questions

  • Should AI systems provide explainable results, even if it means reducing accuracy?
  • What level of understanding is necessary for health care professionals and patients to trust AI systems?
  • Could over-reliance on AI erode the empathy and compassion inherent in human care?
  • Does the use of AI in medicine foster epistemic injustice in its various forms?
  • How should responsibility for decisions made by AI systems be distributed between developers, health care providers, and patients?
  • Do patients fully understand the role AI plays in their care? And does this matter?
  • How can health care providers ensure meaningful consent when using AI-driven diagnostics or treatments?
  • Will the benefits of AI-driven health care be equitably distributed, or will they widen existing gaps?
  • Should there be regulatory frameworks for addressing AI errors in health care?
  • How do biases in training data and algorithms exacerbate health care disparities?

Associated Researchers

  • Lily Frank
  • Philip Nickel
  • Matthew Dennis
  • Filippo Santoni de Sio