AI in healthcare: legal and ethical considerations in this new frontier

Tuesday 11 February 2025

David Egan

GSK, London

Jieni Ji

A&O Shearman, Hong Kong/Shanghai

Can human beings cure all diseases in our lifetime? For centuries, humanity has strived to cure diseases. With the advent of artificial intelligence (AI), the dream of a disease-free world seems more attainable than ever before. AI holds immense potential to revolutionise healthcare, promising enhanced care quality, expedited drug development and reduced costs. Some experts predict that ‘AI will be as common in healthcare as the stethoscope’.[1] However, the rapid advancement of AI also heightens certain risks, including safety concerns, privacy issues and bias. It is therefore essential to establish robust regulatory and ethical frameworks to manage these risks effectively.

AI’s roles in healthcare

AI is revolutionising healthcare by enhancing various aspects such as drug development, disease diagnosis, treatment, patient monitoring, and administrative tasks. Notable examples include Google’s Med-PaLM, Stanford’s CheXNet, and NVIDIA’s partnership with Hippocratic AI. In addition to the advancements by the private sector, the World Health Organization (WHO) launched S.A.R.A.H. (Smart AI Resource Assistant for Health) in April 2024. This digital health promoter prototype, powered by generative AI, features enhanced empathetic responses in eight languages.

Looking ahead, we can expect a growing trend of collaboration among healthcare companies, technology firms, and research institutions. This synergy will drive further innovations and improvements in healthcare delivery and patient outcomes.

Legal frameworks governing AI in healthcare

Regulating AI in healthcare is an intricate task that involves striking a balance between fostering scientific innovation and protecting human rights and safety. Different countries may adopt various approaches to AI regulation, reflecting their unique values and priorities. For instance, jurisdictions such as the European Union and China have AI-specific laws, while others, including the United Kingdom, United States and Australia, are for now assessing how existing technology-neutral laws can be applied to AI.[2] These diverging regulatory approaches can pose challenges for companies looking to integrate AI into their products and operations.

We believe that effective regulation of AI in health requires international collaboration. By working together, countries can create a cohesive framework that enhances human welfare on a global scale. This collaborative effort can help ensure that AI technologies are used safely and ethically, while also promoting innovation and protecting human rights.

Overview of AI legal frameworks

Current AI legal frameworks

International organisations and governments are actively engaging with stakeholders to develop regulations and industry standards. Currently, most of these guidelines are principle-based, focusing on the fair and equitable use of AI. For instance:

  • The WHO has published various guidelines on AI in healthcare, emphasising ethical considerations and best practices. These guidelines stress the importance of designing and using AI systems in ways that respect patient privacy, promote equity, and mitigate biases.
  • In 2024, the Organisation for Economic Co-operation and Development (OECD) updated its AI Principles, the first intergovernmental standard on AI, originally adopted in 2019. These principles aim to balance innovation, human rights, and democratic values.

From the perspective of legislation by sovereign states, the legal landscape for AI in healthcare is still in its infancy and continues to evolve. Many countries are currently relying on existing technology-neutral laws, such as data protection and equality laws, as well as industry standards, to address AI-related matters. Additionally, some nations are taking proactive steps to develop AI-specific approaches to the issues these technologies raise.

  • In the US, the Food and Drug Administration (FDA) has recently issued several discussion papers on the use of AI in drug development, drug manufacturing and medical devices, as well as guidance on decentralised clinical trials.[3] The FDA generally supports the use of AI in healthcare development and has already reviewed and authorised over 950 AI/machine learning (ML)-enabled medical devices.[4] In addition, the FDA’s Center for Drug Evaluation and Research (CDER) has established the Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative to support the adoption of advanced manufacturing technologies that could bring benefits to patients.
  • In the EU, the AI Act is recognised as the world’s first comprehensive AI law. Although most of its requirements will only come into effect from 2 August 2026, and AI developed purely for research and development is excluded from much of its scope, the Act imposes regulatory requirements on AI systems based on four risk categories: (1) prohibited AI practices; (2) high-risk AI; (3) AI triggering transparency requirements; and (4) general-purpose AI. In the context of healthcare, the middle two categories (‘high-risk AI’ and ‘AI triggering transparency requirements’) are likely to be the most relevant, and they carry specific regulatory obligations to ensure the safe and ethical use of AI in healthcare applications.
  • Unlike the EU, the UK has, to date, chosen not to pass any AI-specific laws. Instead, it encourages regulators to first determine how existing technology-neutral legislation, such as the Medical Devices Regulations and the Data Protection Act, can be applied to AI uses. For example, the Medicines & Healthcare products Regulatory Agency (MHRA) is actively working to extend existing software regulations to encompass ‘AI as a Medical Device’ (or AIaMD). The MHRA’s new programme focuses on ensuring the explainability and interpretability of AI systems and on managing the retraining of AI models to maintain their effectiveness and safety over time.
  • In China, the National Health Commission and the National Medical Products Administration have recently published several guidelines on the registration of AI-driven medical devices and on the permissible use cases for AI in diagnosis, treatment, public health, medical education and administration. The guidelines consistently emphasise AI’s assistive role, under human supervision, in drug and medical device development and monitoring.

Leading AI developers are also setting up in-house AI ethics policies and processes, including independent ethics boards and review committees, to ensure safe and ethical AI research. These frameworks are crucial while the international landscape of legally binding regulations continues to mature.

Recommendations: scenario-based assessments for AI tools

Healthcare companies face a delicate balancing act. On one hand, their licence to operate depends on maintaining the trust of patients, which requires prioritising safety above all else. Ensuring that patients feel secure is non-negotiable in a sector where lives are at stake. On the other hand, being overly risk-averse can stifle the very innovations that have the potential to transform lives and deliver better outcomes for patients and society as a whole. Striking this balance is critical: rigorous testing and review processes must coexist with a commitment to fostering innovation, ensuring progress without compromising safety. 

In this regard, a risk-based framework is recommended for regulating AI in healthcare. This approach involves varying the approval processes based on the risk level of each application. Essentially, the higher the risks associated with the AI tools, the more controls and safeguards should be required by authorities. For instance, AI tools that conduct medical training, promote disease awareness and automate routine administrative tasks should generally be considered low risk. Conversely, AI tools that perform autonomous surgery and critical monitoring should be regarded as high risk and require greater transparency and scrutiny. By tailoring the regulatory requirements to the specific risks, we can foster innovation while ensuring that safety is adequately protected.
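
By way of illustration only, the sketch below shows how such a tiered approach might be encoded in an internal review workflow. The use-case names, tiers and control lists are assumptions made for this example rather than any regulator’s taxonomy, and unknown or novel use cases deliberately default to the high-risk tier so that they receive the fullest scrutiny until explicitly classified.

from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"

# Hypothetical controls an internal review board might require (illustrative only).
HIGH_RISK_CONTROLS = ["clinical validation", "transparency report",
                      "human oversight plan", "post-market monitoring"]
LOW_RISK_CONTROLS = ["documentation", "periodic review"]

# Hypothetical mapping of AI use cases to risk tiers and baseline controls.
RISK_POLICY = {
    "medical_training":    (RiskTier.LOW, LOW_RISK_CONTROLS),
    "disease_awareness":   (RiskTier.LOW, LOW_RISK_CONTROLS),
    "admin_automation":    (RiskTier.LOW, LOW_RISK_CONTROLS),
    "autonomous_surgery":  (RiskTier.HIGH, HIGH_RISK_CONTROLS),
    "critical_monitoring": (RiskTier.HIGH, HIGH_RISK_CONTROLS),
}

def required_controls(use_case: str) -> tuple[RiskTier, list[str]]:
    # Unknown or novel use cases default to high risk until explicitly classified.
    return RISK_POLICY.get(use_case, (RiskTier.HIGH, HIGH_RISK_CONTROLS))

tier, controls = required_controls("critical_monitoring")
print(f"Tier: {tier.value}; required controls: {', '.join(controls)}")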

Moreover, teams reviewing AI systems should consist of stakeholders representing a broad range of expertise and disciplines to ensure comprehensive oversight. For example, this may include professionals with backgrounds in healthcare, medical technology, legal and compliance, cybersecurity, ethics and other relevant fields as well as patient interest groups. By bringing together diverse perspectives, the complexities and ethical considerations of AI in healthcare can be better addressed, fostering trust and accountability.

Data protection and privacy

Data privacy requirements are a key consideration when using AI in healthcare contexts, especially given that many jurisdictions’ laws define ‘personal data’ broadly, potentially capturing a wide range of the information involved. Further, privacy regulators have been the forerunners in bringing AI-related enforcement actions. For example, Clearview AI has faced regulatory scrutiny in several jurisdictions, including the EU, UK, Canada and Australia, for alleged data privacy violations.

Privacy considerations in AI

There are several privacy considerations to navigate when using AI, including identifying a lawful basis for the processing activity. Many jurisdictions’ data privacy laws contain a legitimate interests basis or similar provisions which, when applicable, permit the data controller to process personal data without first requiring individuals’ explicit consent. However, there are diverging views on whether this basis can be used for AI-related processing.

For instance, Meta, Google, and X have faced scrutiny from the Irish Data Protection Commission (DPC) for using the legitimate interests basis when training their large language models (LLMs) on platform users’ personal data. The European Data Protection Board is expected to release its Opinion on this issue in early 2025. It is possible that different approaches emerge for training LLMs versus other post-deployment use cases.

Individual consent

Alternatively, businesses may need to obtain explicit individual consent for AI-related processing activities. Consent is already a difficult basis to rely on given the high bar for validity, and it can be particularly challenging in an AI healthcare context, where the personal data is often sensitive and public distrust of, and misunderstanding about, AI technologies persists. Further, in some jurisdictions it is common for individuals to place stringent conditions, including time restrictions, on what their personal data can be used for. This could prevent their personal data from being used in connection with AI, given that it is not always possible to delete or amend personal data once it has been ingested into an AI system.

Professional accountability

Determining fault when an AI system makes an error can be complex, especially when multiple parties are involved. Where an AI system makes decisions fully autonomously, without human supervision, liability in negligence is likely to fall on the AI developer, because it would be difficult to hold a human user responsible for a breach of a duty of care over a tool they do not control. Where an AI system instead operates with human involvement, regulators could introduce a strict liability standard for the consequences arising from AI tools; this approach would protect patients’ interests, although it may deter the advancement of the technology.

Alternatively, regulators could require AI developers and commercial users to be insured against any product liability claims. The WHO recommends setting up no-fault, no-liability compensation funds.[5] This approach ensures patients are compensated for any harm without the need to prove fault, thereby simplifying the process and providing quicker relief.

The legal responsibility for AI in healthcare is a shared and evolving domain. It involves collaboration among all parties to ensure that AI systems are safe, effective and used responsibly. As AI technology advances, ongoing dialogue and adaptation of legal frameworks will be essential to address new challenges and protect patient safety.

Ethical concerns

There are multiple ethical considerations that developers and deployers may need to address when using AI systems in healthcare. Three prominent examples are explored below.

Bias causing unjust discrimination

Bias in AI systems can lead to unjustified discriminatory treatment of certain protected groups. There are two primary types of bias that may arise in healthcare:

  • Disparate impact risk: this occurs when people are treated differently when they should be treated the same. For example, a study[6] found that Black patients in the US healthcare system were assigned significantly lower ‘risk scores’ than white patients with similar medical conditions. This discrepancy arose because the algorithm used each patient’s annual cost of care as a proxy for the complexity of their medical condition(s). However, less money is spent on Black patients due to various factors including systemic racism, lower rates of insurance and poorer access to care.[7] Consequently, using care costs as the proxy created unjustified discrepancies for Black patients.
  • Improper treatment risk: bias in AI systems can arise when training data fails to account for the diversity of patient populations, leading to suboptimal or harmful outcomes. For example, one study[8] demonstrated that facial recognition algorithms often exhibit higher error rates when identifying individuals with darker skin tones. While this study focused on facial recognition, the same principle applies in healthcare, where AI systems used for dermatological diagnoses have been found to perform less accurately on patients with darker skin.[9] This occurs because the datasets used to train these systems often contain a disproportionate number of images from lighter-skinned individuals. Such biases can lead to misdiagnoses or delays in treatment, illustrating the critical need for diverse and representative training data in healthcare AI applications (see the audit sketch following this list).
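
Both risks can often be surfaced by a simple subgroup audit of a model’s outputs before and after deployment. The following is a minimal sketch using synthetic data and invented column names: it compares the rate at which the model misses genuinely positive cases across two patient groups, and a material gap between groups would prompt closer scrutiny of the training data and of proxy variables such as cost of care.

import pandas as pd

# Synthetic predictions from a hypothetical screening model, broken down by a
# protected attribute. The data and column names are invented for illustration.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "true_label": [1,   1,   0,   0,   1,   1,   0,   0],  # 1 = condition present
    "predicted":  [1,   1,   0,   0,   1,   0,   0,   0],  # model output
})

def false_negative_rate(group_df: pd.DataFrame) -> float:
    # Share of truly positive cases within one group that the model missed.
    positives = group_df[group_df["true_label"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["predicted"] == 0).mean())

# Large gaps between groups suggest unrepresentative training data or a biased
# proxy variable (for example, cost of care standing in for clinical need).
print(df.groupby("group").apply(false_negative_rate))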

Transparency and explainability

Providing individuals with information about how healthcare decisions are made, the process used to reach that decision, and the factors considered is crucial for maintaining trust between medical professionals and their patients. Understanding the reasoning behind certain decisions is not only important for ensuring high-quality healthcare and patient safety, but also helps facilitate patients’ medical and bodily autonomy over their treatment. However, explainability can be particularly challenging for AI systems, especially generative AI, as their ‘black box’ nature means deployers may not always be able to identify exactly how an AI system produced its output. It is hoped that technological advances, including recent work on neural network interpretability,[10] will assist with practical solutions to this challenge.
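
As one hedged illustration of the tooling already available, the sketch below uses permutation importance, a model-agnostic technique in scikit-learn (not the neural-network interpretability work cited above), to rank which inputs most influence a clinical risk model. The dataset and feature names are entirely synthetic, and the result is model-level insight rather than a full explanation of any individual decision.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic clinical dataset; the feature names are invented for illustration.
feature_names = ["age", "blood_pressure", "bmi", "hba1c"]
X = rng.normal(size=(500, 4))
# In this toy example the outcome is driven mainly by the last two features.
y = (X[:, 2] + 2 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda item: item[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")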

Human review

To facilitate fair, high-quality outcomes, it is important for end users – often healthcare professionals – to understand the AI system’s intended role in their clinical workflow and whether the AI system is intended to replace user decision-making or augment it.

However, it may not always be appropriate for the human to override the AI system’s output; their involvement in the workflow will likely vary depending on what the AI tool is being used for. For example, if an AI system has been trained to detect potentially cancerous cells in skin cell samples, and the AI system flags the sample as being potentially cancerous but the healthcare professional disagrees, it may be more appropriate to escalate the test to a second-level review than to permit the healthcare professional to simply override the AI system’s decision. A false positive here is likely to be less risky than a false negative. It is therefore important to take a considered, nuanced approach when determining how any human-in-the-loop process flow should operate.
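
A minimal sketch of such a human-in-the-loop rule is set out below. The labels, actions and escalation logic are assumptions for illustration only, not a clinical protocol; the point is that disagreement over a potential positive triggers a second review rather than a silent override.

from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ROUTINE_FOLLOW_UP = "routine follow-up"
    CONFIRM_AND_TREAT = "confirm and refer to treatment pathway"
    SECOND_LEVEL_REVIEW = "escalate to second-level review"

@dataclass
class Case:
    ai_flags_cancer: bool   # AI system's output for the sample
    clinician_agrees: bool  # whether the reviewing professional agrees with that output

def triage(case: Case) -> Action:
    if case.ai_flags_cancer:
        # A disputed positive is escalated rather than silently overridden,
        # because a false negative is costlier here than a false positive.
        return Action.CONFIRM_AND_TREAT if case.clinician_agrees else Action.SECOND_LEVEL_REVIEW
    # A negative AI result that the clinician disputes is also escalated,
    # so that neither the human nor the AI output is deferred to automatically.
    return Action.ROUTINE_FOLLOW_UP if case.clinician_agrees else Action.SECOND_LEVEL_REVIEW

print(triage(Case(ai_flags_cancer=True, clinician_agrees=False)).value)
# prints: escalate to second-level review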

Conclusion

Artificial intelligence offers significant benefits in healthcare but also presents legal and ethical challenges that must be navigated. Collaborative efforts among policymakers, healthcare professionals, AI developers and legal experts are essential to establish robust frameworks that safeguard patient rights and promote equitable access to advanced healthcare technologies.


[1] Katia Savchuk, ‘AI Will Be as Common in Healthcare as the Stethoscope’ (Stanford Business, 15 May 2024), available at www.gsb.stanford.edu/insights/ai-will-be-common-healthcare-stethoscope, accessed 14 January 2025.

[2] In the US, there is no comprehensive federal legislation that regulates the development of AI to date. Several federal laws aimed at regulating AI have been proposed, such as the SAFE Innovation AI Framework and the AI Research, Innovation and Accountability Act; however, none of these proposals has been enacted. In addition, several state legislatures, such as Colorado’s, have also taken steps to regulate AI.

[3] Conducting Clinical Trials With Decentralized Elements: Guidance for Industry, Investigators, and Other Interested Parties (FDA, September 2024), available at www.fda.gov/media/167696/download, accessed 14 January 2025.

[4] US Food and Drug Administration, ‘Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices’ (FDA, 7 August 2024), available at www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices, accessed 14 January 2025.

[5] ‘Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models’ (WHO, 2024).

[6] Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan, ‘Dissecting racial bias in an algorithm used to manage the health of populations’ (2019), 366(6464), Science, 447. Available at www.science.org/doi/10.1126/science.aax2342, accessed 14 January 2025.

[7] Kelly M Hoffman, Sophie Trawalter, Jordan R Axt and M Norman Oliver, ‘Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites’ (2016), 113(16), Proceedings of the National Academy of Sciences, 4296. Available at www.pnas.org/doi/10.1073/pnas.1516047113, accessed 14 January 2025.

[8] J Buolamwini and T Gebru, ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’ (2018), 18, Proceedings of Machine Learning Research, 1. Available at www.media.mit.edu/publications/gender-shades-intersectional-accuracy-disparities-in-commercial-gender-classification/, accessed 14 January 2025.

[9] Marc Hulbert, ‘Making AI Work for People of Color: Diagnosing Melanoma and Other Skin Cancers’ (Melanoma Research Alliance, 2022), available at www.curemelanoma.org/blog/article/making-ai-work-for-people-of-color-diagnosing-melanoma-and-other-skin-cancers, accessed 14 January 2025.

[10] T Shaham, S Schwettmann, F Wang et al, ‘A Multimodal Automated Interpretability Agent’ (Forty-first International Conference on Machine Learning, 2024) available at https://arxiv.org/abs/2404.14394, accessed 14 January 2025.