For AI in medicine, there is one thing on which all patients agree: Not without my (human) doctor

Amanda Shapiro & Mette Hartlev

 

The linchpin of the WISDOM project is a patient-centred approach to using artificial intelligence tools for chronic immune-mediated diseases (CIMDs) like multiple sclerosis. Work Package 1 of the project (concerned with legal and ethical frameworks) seeks in particular to understand how CIMD patients actually experience the use of such tools in their care. These perspectives matter because AI tools in medicine simply do not work if they are not tested adequately in the ‘real world’ among the patients who would benefit from them. One poignant example of the failure to incorporate patients’ experiences: some ophthalmology patients in Thailand resisted even considering an AI tool to detect diabetic retinopathy, not because they were skeptical of the tool itself, but because a positive diagnosis meant their care would be transferred to a distant, larger hospital, and they could not afford the journey. As we anticipated when beginning this research, few published studies incorporate the specific perspectives of CIMD patients. However, an overwhelming body of literature on patients in general, and their experiences of and preferences about AI in their care, does indicate agreement on one thing: they are not comfortable with the idea of artificial intelligence taking the place of a human doctor.

This aspect of ‘robophobia’ is expressed by patients in many different ways: wanting a human doctor to be the ultimate decision-maker in their care; wanting a ‘human in the loop’; preferring a human doctor’s conclusion or recommendation over that of an AI tool; or rejecting fully automated care. And it is confirmed by every type of study: in-depth interviews; focus groups; surveys; and even experimental studies. Moreover, such studies show that the view holds among all sorts of patients (and potential patients), and across different countries and health care systems.

This is not because of a literal phobia of robots, like the specter of a lifeless droid asking patients deeply personal questions about their medical history. In most studies, researchers (or providers) educate patients about how a current or potential AI tool operates (like pattern recognition in radiology scans) and patients therefore understand that robots in medicine are fairly rare. Instead, patients’ preference for human doctors seems to stem from worries that the over-involvement of AI will rob them of the very human aspects of medical care: compassion, empathy, and the consideration of patients as whole, individual persons. In the study of ophthalmology patients in Thailand, for example, the nurses who were administering the AI detection tool were keenly aware of their patients’ hardship, and felt the need to warn patients about the distance of the referred hospital if the tool detected diabetic retinopathy.

There is some evidence that CIMD patients would feel even more strongly that AI should not take over their care entirely. Patients overall are less inclined to accept AI in treatment as disease severity increases (AI for diagnosing a pollen allergy? Great. But for cancer? Not so much.) And patients with chronic conditions are more skeptical of AI in medicine than other patients. CIMD patients thus lie at the intersection of the two: greater disease severity and a chronic condition.

With all this evidence of CIMD patients’ preferences, would patients have a legal right to avoid fully automated care? Although WISDOM is a European Union project, which makes EU legal rules on the subject especially relevant, rules in the United States also matter, since some of the largest AI health tools are designed in the US and subsequently used by health care providers in the EU. The short answer: in the European Union, there might be such a legal right; in the United States, there almost certainly is not. The EU itself does not enforce patients’ rights (such rights, like informed consent, are embedded in member states’ laws). Instead, scholars have noted that the General Data Protection Regulation (GDPR) might provide a path to realizing a ‘human in the loop’ right in patient care. Article 22 of the GDPR states that ‘The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.’ (The EU’s AI Act, by contrast, rarely applies to AI tools in medicine and contains no similar right to avoid automated decision-making.) Without getting too into the weeds on this aspect of the GDPR, there are some exemptions to this right (such as the explicit consent of the data subject), and it is unclear to what extent the provision applies in the medical context. But the GDPR does apply to health data, and Article 22’s right to avoid solely automated processing offers some recognition of patients’ worries about fully automated, life-altering decisions.

By contrast, the US has neither comprehensive data protection legislation nor AI legislation at the federal level. Health data receive federal protection only through the Health Insurance Portability and Accountability Act (HIPAA) of 1996, which protects the confidentiality of patients’ health information. Under this Act, patients’ health data are protected from disclosure to non-health care entities, like social media companies, without patients’ consent. But the law has not been updated to incorporate the risks of automated processing of health data.

Although state-level legislation in the United States is generally confined to that state’s territory, state laws can serve as models for federal legislation. Only a few US states have adopted comprehensive data protection or AI legislation, most notably California and Colorado. California’s Consumer Privacy Act (CCPA) contains a right to ‘opt out’ of the ‘sale or sharing of personal information,’ seemingly including a right to opt out of a business’s ‘use of automated decisionmaking technology.’ The Act, however, appears rather limited as to AI health tools, since it largely exempts health care entities from its obligations. Colorado’s SB 24-205, An Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems, is notable for being the first comprehensive regulation of artificial intelligence in the US and for protecting against AI harms in health care services. The law is designed primarily to protect against discrimination but also includes some transparency requirements. According to § 6-1-1704 of the Act, ‘deployers’ of AI systems (persons in Colorado who use an AI system, as opposed to those who develop one) have an obligation to disclose ‘to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.’ Unlike the GDPR, however, the Colorado disclosure requirement does not give patients or consumers a right to avoid fully automated decisions.

Whatever the state of this uneven right to avoid automated health decisions, AI in medicine has not yet advanced to a level where human doctors could be left out of the system anyway. Most AI in health care is ‘narrow’: such tools ‘can fulfil very specific tasks with high accuracy’ and thus are used as a ‘support tool’ for human doctors. Patients also seem most optimistic about AI in their care when it is used as just that: a ‘complement’ or aide to a human provider, as where an AI tool acts as a ‘second opinion’. Regardless of the level of automation in patients’ care, incorporating the perspectives and needs of CIMD patients, who have frequent interactions with providers and the health care system overall, into AI tools will improve the gains from such tools in the long run.

 

 

1. Momtazmanesh, Nowroozi and Rezaei, 'Artificial Intelligence in Rheumatoid Arthritis: Current Status and Future Perspectives: A State-of-the-Art Review', 9 Rheumatology and Therapy (2022) 1249, available at https://doi.org/10.1007/s40744-022-00475-4 (noting that many papers on AI and rheumatoid arthritis suffer from a failure to test in the real world); Morrison et al., 'Assessing Multiple Sclerosis With Kinect: Designing Computer Vision Systems for Real-World Use', 31 Human-Computer Interaction (2016) 191, available at https://www.tandfonline.com/doi/pdf/10.1080/07370024.2015.1093421 (noting challenges in “seeing” what the computer “sees” when testing an MS computer diagnostic tool in the real world).

2. Beede et al., 'A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy', in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020) 1.

3. Robertson et al., 'Diverse Patients’ Attitudes towards Artificial Intelligence (AI) in Diagnosis', 2 PLOS Digital Health (2023) e0000237, p. 13, available at https://dx.plos.org/10.1371/journal.pdig.0000237.

4. Young et al., 'Patient and General Public Attitudes towards Clinical Artificial Intelligence: A Mixed Methods Systematic Review', 3 The Lancet Digital Health (2021) e599, available at https://www.sciencedirect.com/science/article/pii/S2589750021001321 (review of other studies about patients’ attitudes towards AI finds that participants strongly preferred to keep a provider in the loop).

5. Johansson et al., 'Women’s Perceptions and Attitudes towards the Use of AI in Mammography in Sweden: A Qualitative Interview Study', 14 BMJ Open (2024) e084014, available at https://bmjopen.bmj.com/content/14/2/e084014 (mammography patients preferred a human radiologist, believing them to be more skilled than an AI breast cancer screening tool); Mikkelsen et al., 'Patient Perspectives on Data Sharing Regarding Implementing and Using Artificial Intelligence in General Practice – a Qualitative Study', 23 BMC Health Services Research (2023), available at https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-023-09324-8 (patients generally positive about AI and data sharing still wanted their general practitioners to be the ultimate decision-maker in their care); Nelson et al., 'Patient Perspectives on the Use of Artificial Intelligence for Skin Cancer Screening: A Qualitative Study', 156 JAMA Dermatology (2020) 501, available at https://jamanetwork.com/journals/jamadermatology/fullarticle/2762711 (dermatology and melanoma patients generally positive about an AI tool to screen for skin cancer but wanted AI to “provide a second opinion” to their physician, rather than replace them); Tran, Riveros and Ravaud, 'Patients’ Views of Wearable Devices and AI in Healthcare: Findings from the ComPaRe e-Cohort', 2 Npj Digital Medicine (2019) 1, available at http://www.nature.com/articles/s41746-019-0132-y (patients with chronic conditions in France preferred AI as a complement to their care rather than as a replacement for a human doctor).

6. Richardson et al., 'Patient Apprehensions about the Use of Artificial Intelligence in Healthcare', 4 Npj Digital Medicine (2021), available at https://www.nature.com/articles/s41746-021-00509-1 (potential patients in the midwestern United States were fairly positive about potential AI gains, but wanted doctors to retain final discretion and responsibility for their treatment).

7. Lennartz et al., 'Use and Control of Artificial Intelligence in Patients Across the Medical Workflow: Single-Center Questionnaire Study of Patient Perspectives', 23 Journal of Medical Internet Research (2021) e24221, available at https://www.jmir.org/2021/2/e24221 (patients awaiting scans trusted doctors over AI tools in all clinical capabilities except one: keeping up with the most current clinical knowledge).

8. Robertson et al. (2023), supra note 3 (diverse group of US patients preferred physicians’ recommendations over AI tool’s); Yokoi and Nakayachi, 'Artificial Intelligence Is Trusted Less than a Doctor in Medical Treatment Decisions: Influence of Perceived Care and Value Similarity', 37 International Journal of Human–Computer Interaction (2021) 981, available at https://doi.org/10.1080/10447318.2020.1861763 (patients preferred medicine the doctor recommended over that of an AI system); Longoni, Bonezzi and Morewedge, 'Resistance to Medical Artificial Intelligence', 46 Journal of Consumer Research (2019) 629, available at https://dx.doi.org/10.1093/jcr/ucz013 (potential patients reluctant to rely on AI in a variety of medical scenarios and preferred a human provider instead); Arkes, Shaffer and Medow, 'Patients Derogate Physicians Who Use a Computer-Assisted Diagnostic Aid', Medical Decision Making (2007), available at http://journals.sagepub.com/doi/10.1177/0272989X06297391 (patients preferred doctor who did not use computer-assisted aid).

9. Johansson et al. (2024), supra note 5; Lennartz et al. (2021), supra note 7; Yin, Ngiam and Teo, 'Role of Artificial Intelligence Applications in Real-Life Clinical Practice: Systematic Review', 23 Journal of Medical Internet Research (2021) e25759, available at https://www.jmir.org/2021/4/e25759; Longoni, Bonezzi and Morewedge (2019), supra note 8.

10. Beede et al. (2020), supra note 2.

11. Lennartz et al. (2021), supra note 7; Tran, Riveros and Ravaud (2019), supra note 5.

12. Armero et al., 'A Survey of Pregnant Patients’ Perspectives on the Implementation of Artificial Intelligence in Clinical Care', 30 Journal of the American Medical Informatics Association (2022) 46, available at https://dx.doi.org/10.1093/jamia/ocac200; Tran, Riveros and Ravaud (2019), supra note 5.

13. Kolfschooten, 'A Health-Conformant Reading of the GDPR’s Right Not to Be Subject to Automated Decision-Making', 32 Medical Law Review (2024) 373, available at http://dx.doi.org/10.1093/medlaw/fwae029.

14. This state-law-as-model occurred most notably through the Patient Protection and Affordable Care Act (also known as ‘Obamacare’), which was modeled after Massachusetts state legislation that expanded access to health care coverage. Long, Stockley and Nordahl, 'Coverage, Access, and Affordability under Health Reform: Learning from the Massachusetts Model', 49 INQUIRY: The Journal of Health Care Organization, Provision, and Financing (2012) 303, available at https://journals.sagepub.com/doi/10.5034/inquiryjrnl_49.04.03.

15. Artificial Intelligence 2024 Legislation, National Conference of State Legislatures, available at https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation.

16. California Consumer Privacy Act of 2018, SB 1223, § 1798.185(a)(15) (2025) (directing the attorney general to issue “regulations governing access and opt-out rights with respect to a business’ use of automated decisionmaking technology, including profiling”).

17. California Consumer Privacy Act of 2018, SB 1223, § 1798.145(c) (2025).

18. Lennartz et al. (2021), supra note 7, p. 7.

19. Tran, Riveros and Ravaud (2019), supra note 5, p. 3; Nelson et al. (2020), supra note 5.
