Artificial intelligence and health: current challenges for the law

Monday 29 April 2024

Melisa Romero

Bomchil, Buenos Aires

melisa.romero@bomchil.com

Artificial Intelligence (AI) in the field of medicine and health is constantly expanding across all specialties, and its implementation has raised great expectations not only for improving patient care but also for facilitating access to healthcare in places where patients have difficulty reaching health professionals or medical services. The World Health Organization (WHO) has recognised that AI holds great promise for public health practice and medicine, but has emphasised that its opportunities and challenges are closely linked. It is therefore necessary to design and implement laws and policies which take the necessary ethical principles into account.[1]

From this starting point, various issues and challenges arise which require, and will continue to require, legal and ethical analysis in order to implement AI technologies successfully.

AI and its current use in the healthcare sector

AI has been defined as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.[2] It is also associated with creating computer programs that perform operations comparable to those carried out by the human mind, such as learning or logical reasoning. While the development of AI can be traced back to the early 1950s,[3] it has evolved significantly in recent years due to advances in computing and data processing, to the extent that virtually no technological development is beyond the use or implementation of AI.

The term also comprises several subcategories, such as Machine Learning (ML), which uses algorithms to detect patterns in data, and Deep Learning, an advanced ML method which uses large neural networks – networks loosely modelled on the human brain – to learn complex patterns and make predictions independently of human input. Advances in deep learning are a major reason for the recent success of AI in health.
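
By way of illustration only, the short Python sketch below shows what 'learning patterns from data' means in practice: a small neural network is fitted to a publicly available set of labelled tumour measurements and then predicts diagnoses for cases it has not seen. The dataset, model and library choices are assumptions made for this example and do not correspond to any system discussed in this article.

```python
# Minimal sketch of supervised machine learning: a small neural network learns
# patterns from labelled clinical measurements and predicts a diagnosis for
# unseen cases. The dataset, model and library are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Publicly available tumour measurements with benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature scaling plus a small multi-layer perceptron (a basic neural network).
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)  # 'learning': fitting patterns in the training data

print("Accuracy on unseen cases:", model.score(X_test, y_test))
```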

In the field of health and medicine, there have been countless advances in various areas which have made a significant impact. Indeed, in the field of medical research and diagnosis, as reported in specialised medical journals, AI has been used to systematise and process chest radiographs to identify tuberculosis cases or drug-resistant cases for tuberculosis treatment, to quantify the risk of dengue, and to identify global transmission patterns of the Zika virus.[4] AI techniques have also been used to detect cervical cancer, to predict disease outbreaks and to interpret large amounts of data from different sources, facilitating diagnosis and enhancing the ability to initiate early treatment and therefore prevent disease.

AI is also used for drug administration through robots, which supply, prepare and dispense drugs, calculate chemotherapy doses or prepare the exact syringes a patient needs once the doctor's order specifying the medication and proportions has been received. Robots are also used in intensive care units, where they move around and, under doctors' remote control, allow continuous monitoring of several patients simultaneously. The best-known example is the Da Vinci robot, used in surgery, which reduces surgical risk and facilitates postoperative care, among other benefits.

The WHO has declared that these new tools and their evolution make it necessary to assess risks so that they are implemented in a way that respects the fundamental rights of individuals, as well as the principles of autonomy, transparency and expert supervision.

The fundamental principles of the WHO

In recent years the WHO has published various reports highlighting the importance and impact of AI in providing health services, facilitating access to healthcare and giving patients greater autonomy and control over their own health.

The first such WHO report, titled ‘Ethics and Governance of Artificial Intelligence for Health’ (2021),[5] considers the various risks and challenges posed by AI. It establishes the importance of applying the following ethical and governance principles, which are fundamental to the formulation, development and deployment of AI in the health sector: (1) protect human autonomy; (2) promote the wellbeing and safety of individuals and the public interest; (3) ensure transparency, explainability and intelligibility, so that AI technologies are understandable to developers, health professionals, patients, users and regulators; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsive and sustainable.

In 2023, in light of the development of various technologies, including large language models, in support of health personnel, patients, research, and science, the WHO reaffirmed the importance of applying the aforementioned principles to promote the autonomy and wellbeing of individuals.[6]

Current legal and ethical debates

Where law and health interact, the application of AI requires, as a starting point, an analysis of the legal framework regulating the practice of medicine, patients' rights and the handling of personal health-related data, in order to ensure that the rights to autonomy, information and privacy, which are essential for everyone, remain fully effective.

Patient autonomy/informed consent

The protection of patient autonomy is a fundamental principle of medical ethics and law. It entails the right to accept or refuse particular therapies or medical or biological procedures, with or without stating a reason, as well as the patient's right to revoke that decision later. Accordingly, all professional action in the medical and health field must be backed by the patient's prior informed consent.

Traditionally, the doctor-patient relationship has been framed in the context of face-to-face interactions and marked by human intervention. AI, however, has enabled the development of medical care through robots or chatbots, as well as through telemedicine systems, whose development was accelerated by the Covid-19 pandemic.

AI poses challenges to the doctor-patient relationship because it changes the paradigm of medical care. Any AI mechanism put into practice must therefore respect the principle of patient autonomy in decision-making and ensure the necessary ‘human intervention’, so that decisions do not derive exclusively from automated processes and a balance between automated decision-making and human judgement is guaranteed.

Patient autonomy and the required informed consent pose a further ethical and legal issue when implementing AI tools, as healthcare providers must give clear, precise and adequate information about the patient's health condition, the proposed procedure, its benefits, risks, discomforts and foreseeable adverse effects, and alternative procedures, among other matters.

For instance, in the case of chatbots such as Ada, which assesses users’ most likely conditions based on their symptoms and recommends next steps to seek appropriate care, the potential to influence patient decision-making has been identified.[7] Any regulatory framework should therefore ensure the right to obtain a reasonable explanation of the logic applied to a decision based on automated data processing. However, it will be challenging to determine under what circumstances, and to what extent, healthcare professionals should disclose information about the AI system, the ML technique used, the data inputs or the possibility of bias, particularly where AI systems operate as ‘black boxes’ built on machine-learning techniques that are very difficult for doctors to interpret fully or understand.
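
As an illustration of what a ‘reasonable explanation’ of an automated output might look like in practice, the hedged Python sketch below uses permutation importance, a model-agnostic technique that ranks which inputs most affected a black-box model's performance. The dataset, model and library choices are assumptions for the example only; they are not drawn from any system or regulatory requirement discussed in this article.

```python
# Hedged sketch of a model-agnostic 'explanation' for a black-box model:
# permutation importance measures how much accuracy drops when each input is
# shuffled, ranking which patient data most influenced the automated output.
# The dataset, model and library are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()

# Treat this fitted neural network as the 'black box' under scrutiny.
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
)
black_box.fit(data.data, data.target)

# Rank the inputs by how much shuffling each one degrades the model's score.
result = permutation_importance(
    black_box, data.data, data.target, n_repeats=5, random_state=0
)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```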

Protection of personal data

A critical aspect of AI is the protection of the individual's privacy (patient information and data). This is because AI is fundamentally dependent on Big Data for both its development and deployment.

The use of mechanisms that ensure the inviolability of the data, as well as its confidentiality, is therefore critical, as AI learns from and uses data provided by patients and individuals, making anonymisation vital throughout the process of data systematisation for the implementation of these technologies.
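
To make the idea concrete, the Python sketch below shows one simple form of pseudonymisation applied before patient records enter an AI pipeline: the direct identifier is replaced with a salted hash and a quasi-identifier is coarsened. The field names and salt handling are assumptions for this illustration; salted hashing is pseudonymisation rather than full anonymisation, and on its own it does not defeat the re-identification risk discussed below.

```python
# Illustrative pseudonymisation of a patient record before it is used for AI.
# NOTE: field names and salt handling are assumptions for this sketch; salted
# hashing is pseudonymisation, not full anonymisation.
import hashlib
import os

SALT = os.urandom(16)  # in this sketch, a secret kept by the data controller


def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a salted hash and coarsen quasi-identifiers."""
    token = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()
    return {
        "patient_token": token,                  # stable but non-identifying reference
        "age_band": (record["age"] // 10) * 10,  # e.g. 47 becomes the 40-49 band
        "diagnosis_code": record["diagnosis_code"],
    }


print(pseudonymise({"patient_id": "AR-12345", "age": 47, "diagnosis_code": "C50.9"}))
```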

Although several laws are currently in place to protect individual privacy, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the European Union's General Data Protection Regulation (GDPR), not all of the information collected, nor all of the entities that collect it, fall within the scope of these legal frameworks.

The challenge will be to implement mechanisms which ensure that individuals cannot be identified when health data are collected and which prevent later re-identification, given AI's considerable capacity to establish significant relationships between datasets. Consequently, an international, cross-border regulatory framework should be evaluated to address this risk successfully.

Cybersecurity

Cybersecurity is another important issue to consider from a legal perspective when addressing AI in healthcare. In 2023, the healthcare sector reported the most expensive data breaches of any sector, at an average cost of US$10.93m.[8] Although recent legal developments in the US and the EU have been implemented to promote safety,[9] a collaborative and systemic approach is key, as cyberattacks are often a global issue. Consequently, a high-level cross-border cybersecurity framework should be ensured.

Liability

AI technologies also pose legal challenges for current liability regimes. Medical malpractice has traditionally been based on a negligence system. Where AI is involved, however, it is not clear where liability should reside, whether doctors have sufficient transparency and understanding when using a tool (because of black-box ML algorithms), or even to what extent the tool is substituting for a doctor's judgement. In contrast, some have proposed a system of strict liability, but it is not always clear who would be liable or who is best positioned to assume liability (developers or manufacturers, among others). A shared-responsibility approach to new technologies has also been proposed to ensure the safe and effective use of AI-enabled tools. Liability frameworks should be updated to deal adequately with these new technological developments.

Conclusion

AI has the potential to become an invaluable ally for healthcare professionals, as well as a tool for increasing access to healthcare. Ensuring that patients’ rights remain fully effective, and that ethical and governance principles are respected in the implementation of AI through appropriate legal frameworks, will represent a significant challenge for the various stakeholders involved.


Notes

[1] ‘WHO guidance on Artificial Intelligence to improve healthcare, mitigate risks worldwide’, UN News, 28 June 2021 https://news.un.org/en/story/2021/06/1094902 accessed 19 April 2024.

[2] Encyclopedia Britannica, ‘artificial intelligence’ https://www.britannica.com/technology/artificial-intelligence accessed 19 April 2024.

[3] J McCarthy, M L Minsky, N Rochester and C E Shannon, ‘A proposal for the Dartmouth summer research project on artificial intelligence, 31 August 1955’, AI Magazine, 2006, 27(4), 12.

[4] Nina Schwalbe and Brian Wahl, ‘Artificial intelligence and the future of global health’, The Lancet, 16 May 2020 https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30226-9/fulltext accessed 19 April 2024.

[5] WHO guidance, ‘Ethics and governance of artificial intelligence for health’ 28 June 2021 https://www.who.int/publications/i/item/9789240029200 accessed 19 April 2024.

[6] ‘WHO calls for safe and ethical AI for health’, WHO, 16 May 2023 https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health accessed 19 April 2024.

[7] M Beil, I Proft, D van Heerden, S Sviri, and P V van Heerden, ‘Ethical considerations about artificial intelligence for prognostication in intensive care’ Intensive Care Medicine Experimental 2019 (7) 70.

[8] World Economic Forum, ‘Healthcare pays the highest price of any sector for cyberattacks – that’s why cyber resilience is key’, 1 February 2024 https://www.weforum.org/agenda/2024/02/healthcare-pays-the-highest-price-of-any-sector-for-cyberattacks-that-why-cyber-resilience-is-key accessed 19 April 2024.

[9] Regulation (EU) 2019/881 (EU Cybersecurity Act); the Cybersecurity and Infrastructure Security Agency Act of 2018.