Artificial intelligence in healthcare: legal and regulatory challenges in Brazil
Rubens Granja
Lefosse Advogados, São Paulo
Natássia Misae Ueno
Lefosse Advogados, São Paulo
Luís Gozalo
Lefosse Advogados, São Paulo
Introduction
Artificial Intelligence (AI) is increasingly widespread in society, used across a wide range of research areas and for an equally wide range of purposes. The healthcare sector is no different: from the automation of administrative operations to tools that support clinical practice, AI plays an increasingly relevant role in healthcare.
It can be argued that the use of AI in healthcare started to gain traction in the context of the Covid-19 pandemic. AI tools proved relevant for patient screening, helping to reduce the number of patients in hospitals and health units, and for controlling stocks of medicines, among other uses.
Since then, and now more than two years after the end of the pandemic, the application of AI models in healthcare has continued to grow at an accelerating pace, in the form of ever more sophisticated solutions.
Despite the undeniable benefits associated with AI, the exponential advance in its use has imposed significant challenges on regulators around the world, including in Brazil. As these technologies become more sophisticated and accessible, concern about their legal, ethical and social impacts grows. Given the accelerated pace of the ongoing technological transformation, regulators still struggle to keep up with these changes, to reflect the risks accurately and to keep the relevant laws and regulations up to date.
The use of AI in healthcare: possibilities and risks
As recognised by the World Health Organization (WHO),[1] AI is an important tool in healthcare, which can be applied for a wide range of healthcare purposes, such as providing more accurate diagnosis and disease screening, improving patient care, supporting health policy decisions and allocating resources within health systems.
Generative AI systems, which are capable of creating original content such as text, images and videos based on the large volumes of data on which they have been trained, have been used in a wide range of applications, including:
- diagnosis, namely the analysis of information contained within electronic medical records and imaging tests (eg, X-rays, magnetic resonance images (MRIs) and computed tomography scans (CT scans)) to identify signs, patterns, anomalies, diseases and risks, and generate reports with clinical findings;
- new drug discovery, namely the use of generative AI models programmed to process and learn from molecular structures to assist in the creation of new molecules and the development of new drugs;
- virtual health assistants, namely creating virtual assistants that interact with patients through natural forms of dialogue to explain symptoms, provide health information and offer initial screening guidance; and
- clinical decision support, namely assisting healthcare professionals in choosing patient treatments, by providing personalised suggestions adapted to the patient’s profile.[2]
Despite the undeniable benefits they provide, the use of AI in healthcare also carries inherent risks, which must be considered and addressed. In general, three major risks stand out:
- the risk of breaching patient privacy and data security: Certain AI models, particularly those aimed at supporting diagnosis and personalising patient care, must be fed large volumes of sensitive personal data and are therefore exposed to the possible leakage of sensitive patient information;
- the risk of algorithmic bias: Biased outcomes resulting from how an AI model is trained or developed pose a particularly serious concern in the healthcare sector, as they may lead to incorrect medical decisions that directly impact patients’ lives; and
- the risk of AI being used to produce misinformation: As seen in other sectors, AI can be used to generate convincing but untrue content to deceive readers and even the wider population. This issue can also arise in the context of health-related information.
The entry into force of Brazil’s General Data Protection Law (Law No. 13709/2018, Lei Geral de Proteção de Dados Pessoais or LGPD) made it mandatory for companies to adopt technical and organisational measures to protect personal data, such as implementing information security protocols, using encryption and adopting access control mechanisms, among others. These duties help to mitigate the risk of violating patient privacy and data security.
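Purely by way of illustration (the LGPD does not prescribe any particular technology, and the field names, roles and key handling below are hypothetical simplifications), a minimal sketch of two such measures, field-level encryption of sensitive health data and a role-based access check, could look as follows in Python using the cryptography library:

```python
# Illustrative sketch only: field-level encryption plus a role-based
# access check. Field names and roles are hypothetical; the LGPD does
# not mandate any specific mechanism.
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

SENSITIVE_FIELDS = {"diagnosis"}                       # hypothetical classification
ALLOWED_ROLES = {"physician", "data_protection_officer"}  # hypothetical roles


def encrypt_record(record: dict) -> dict:
    """Encrypt only the fields classified as sensitive personal data."""
    return {
        field: cipher.encrypt(value.encode()).decode()
        if field in SENSITIVE_FIELDS
        else value
        for field, value in record.items()
    }


def read_diagnosis(encrypted_record: dict, user_role: str) -> str:
    """Access control: only authorised roles may decrypt the diagnosis."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"Role '{user_role}' may not access sensitive data")
    return cipher.decrypt(encrypted_record["diagnosis"].encode()).decode()


patient = {"name": "Jane Doe", "diagnosis": "hypertension"}
stored = encrypt_record(patient)          # diagnosis is stored encrypted
print(read_diagnosis(stored, "physician"))  # authorised role can decrypt
```

In practice, of course, key management, audit logging and the other safeguards required by the LGPD would sit around such primitives; the sketch merely shows how encryption and access control operate at the level of an individual patient record.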
Nonetheless, this is a relatively recent scenario, with few precedents capable of providing legal certainty. As a result, uncertainties persist as to how the LGPD will be applied in practice, especially in relation to enforcement and the imposition of sanctions.
The other risks outlined above, however, remain unaddressed by the current Brazilian regulatory framework.
The current AI regulatory framework in Brazil
In Brazil, there is still no specific legal framework governing the use of AI, nor are there any specific guidelines aimed at facilitating its application in the field of healthcare.
Although it was not drafted with a specific focus on AI, Law No. 13709/2018 represents a significant milestone in establishing relevant safeguards applicable to AI. In this regard, the LGPD grants individuals the right to an explanation of, and to request the review of, automated decisions, which is essential to ensuring transparency in the use of AI-based technologies, especially in the healthcare context. In addition, it classifies health-related data as sensitive personal data, subjecting it to specific rules and a higher level of protection.
On the other hand, the Brazilian Health Surveillance Agency (Agência Nacional de Vigilância Sanitária or ANVISA) does not have any specific rules that directly address the use of AI. However, in March 2022, the agency published Resolution RDC No. 657/2022, which regulates the use of software as a medical device (SaMD). This regulation signals a trend of increasing attention being paid to technological solutions in healthcare, due to their increasingly broad and relevant application.
The only rules currently in force that address AI do so superficially or within very specific contexts, none of which concerns its use in healthcare.
In order to guide the development of AI in Brazil, the Ministry of Science, Technology and Innovation (Ministério da Ciência, Tecnologia e Inovação) enacted Ordinance No. 4,617/2021, which established the Brazilian Artificial Intelligence Strategy (Estratégia Brasileira de Inteligência Artificial or EBIA). The document outlines priority areas for the advancement of AI in the country, including the healthcare sector. However, the EBIA still lacks provisions specific to healthcare.
Similarly, Ordinance of Consolidation No. 2/2017, issued by the Ministry of Health (Ministério da Saúde), which compiles the national policy guidelines for Brazil’s Unified Health System (Sistema Único de Saúde or SUS), states, within the scope of the National Policy on Health Information and Informatics (Política Nacional de Informação e Informática em Saúde or PNIIS), that AI should be promoted as a means of improving management and the services provided, without regulating how this should occur in practice.
More recently, on 14 March 2025, the National Council of Justice (Conselho Nacional de Justiça or CNJ), the body that oversees the functioning of the Brazilian judiciary, published CNJ Resolution No. 615/2025, which sets out parameters for the development, use and governance of AI solutions within the judiciary.
In view of the scarcity of normative references to the use of AI in healthcare, Bill No. 2,338/2023, which proposes the establishment of a legal framework for the use of AI in Brazil, is currently being processed by the Brazilian National Congress (Congresso Nacional).
Bill No. 2,338/2023 is inspired by the Artificial Intelligence Act approved by the European Parliament on 14 June 2023 and proposes a set of specific obligations for stakeholders responsible for AI systems that are considered to be high risk. The objective is to ensure the protection of fundamental rights, mitigate damage, reduce the potential impacts on vulnerable groups and guarantee the protection of personal data.
The classification of an AI system as high risk requires that, before placing such a system into circulation, companies adopt certain governance measures and internal procedures, in addition to carrying out an algorithmic impact assessment (ie, an analysis of the impact on fundamental rights, setting out preventive and mitigating measures, measures to reverse any negative impacts and measures to enhance the positive impacts of the AI system).
Moreover, Bill No. 2,338/2023 defines generative AI as ‘an AI model specifically designed to generate or significantly modify, with different degrees of autonomy, text, images, audio, video, or software code’, subjecting its developers to certain specific obligations. A preliminary assessment must be carried out in order to identify the expected risk levels, including potential systemic risks (ie, potential adverse effects arising from a general-purpose and generative AI system with a significant impact on individual and societal fundamental rights).
Specifically with regard to the use of AI in healthcare, Bill No. 2,338/2023 considers AI systems with ‘applications in the health area aimed at assisting diagnoses and medical procedures’ to be high risk ‘when there is a relevant risk to the physical and mental integrity of people’.
By the same logic, it can be inferred that, under Bill No. 2,338/2023, AI systems used in healthcare that do not pose significant health risks, and that do not fall within any of the other high-risk hypotheses provided for by the Bill, will not be considered high-risk AI systems and, therefore, will not be subject to the legal obligations imposed on such systems. This exemption, however, could be reviewed in the future, since the Bill assigns to the National System for Regulation and Governance of Artificial Intelligence (SIA), under the coordination of the National Data Protection Authority (Autoridade Nacional de Proteção de Dados or ANPD), the competence to identify new high-risk hypotheses.
Moreover, whenever an AI system meets the definition of generative AI proposed by the Bill, regardless of whether it is classified as high risk, its developer would have to carry out the preliminary assessment required for this type of AI.
The Bill acknowledges the accelerated pace of innovation that characterises this type of technology by providing for the possibility of establishing a regulatory sandbox, in which innovative AI systems can be developed, tested and validated for a limited period before being placed on the market. This measure will be important to ensure that innovative or even disruptive technologies do not upset the balance that must exist between protecting the right to health and developing new health-related tools.
On the other hand, it is not yet possible to predict the effectiveness of the model proposed by Bill No. 2,338/2023, or even whether it will receive the necessary approval. Further developments, and the Bill’s eventual enactment, must be awaited in order to determine whether it will be capable of adequately achieving its proposed objectives.
Notes
[1] World Health Organization, ‘Ethics and governance of artificial intelligence for health’, WHO guidance, 2021.
[2] Chen Y and Esmaeilzadeh P, ‘Generative AI in Medical Practice: In-Depth Exploration of Privacy and Security Challenges’, J Med Internet Res 2024;26:e53008, https://www.jmir.org/2024/1/e53008, DOI: 10.2196/53008, last accessed 8 April 2025.