Regulating artificial intelligence in healthcare in Brazil: role of the Professional Council of Medicine and challenges of a fragmented framework

Monday 11 May 2026

Renata Fialho de Oliveira
Veirano Advogados, São Paulo
renata.oliveira@veirano.com.br

Isabel Hering
Veirano Advogados, São Paulo
isabel.hering@veirano.com.br

Thais Cristina de Jesus
Veirano Advogados, São Paulo
thais.jesus@veirano.com.br

Nicole Knobel
Veirano Advogados, São Paulo
nicole.knobel@veirano.com.br 

Introduction

Artificial intelligence (AI) has rapidly transformed the healthcare sector, particularly in areas such as diagnostic imaging, clinical decision support systems, and predictive analytics. The global market for AI in healthcare is projected to reach approximately US$505.69bn by 2033, with a compound annual growth rate (CAGR) of 38.9 per cent between 2026 and 2033, according to a report by Grand View Research, Inc,[1] reflecting the growing reliance on data-driven technologies to enhance efficiency and patient outcomes.

Brazil has followed this trend, although adoption remains uneven. Data from the TIC Saúde 2024 survey[2] indicates that approximately 17 per cent of physicians in Brazil already use AI tools in their professional activities, while only around four per cent of healthcare establishments have implemented such solutions at an institutional level. These figures suggest that AI is increasingly integrated into individual clinical practice, even though its broader institutional adoption remains limited. The survey further indicates that physicians use generative AI tools in both public and private healthcare settings, at rates of approximately 14 per cent and 20 per cent, respectively, reinforcing their gradual incorporation into clinical routines.

Against this backdrop, Resolution No 2,454/2026 (AI CFM Resolution), issued by the Brazilian Federal Council of Medicine (Conselho Federal de Medicina, or CFM), represents a significant sectoral initiative to address the use of AI in medical practice.[3] However, whilst the Resolution reflects a proactive institutional response, it also raises important legal and structural questions. In particular, it exposes the limitations of regulating a cross-sectoral, rapidly evolving technology through a professional regulatory body, especially in the absence of a comprehensive, coordinated national framework for AI. This context reveals broader challenges related to regulatory coherence and the effective allocation of authority across multiple institutions.

The evolving regulatory landscape of AI in Brazil

Brazil currently lacks a unified legal framework governing AI. Although several legislative initiatives are underway, the most advanced proposal is Bill No 2,338/2023 (the ‘Brazilian AI Bill’),[4] which remains pending approval before the Chamber of Deputies. The proposal is largely inspired by the European Union’s AI Act, particularly its risk-based approach and emphasis on protecting fundamental rights. In the absence of a comprehensive framework, the country continues to rely on a fragmented regulatory model, which may give rise to regulatory gaps and overlapping competencies.

The Brazilian General Data Protection Law (Lei Geral de Proteção de Dados, or LGPD) establishes safeguards regarding automated decision-making, transparency, and data protection, which are particularly relevant in healthcare due to the sensitivity of medical data. In parallel, authorities such as the Brazilian National Data Protection Authority (Autoridade Nacional de Proteção de Dados, or ANPD) have begun to shape the governance of data-driven technologies.

In the healthcare sector, the Brazilian Health Regulatory Agency (Agência Nacional de Vigilância Sanitária, or ANVISA) plays a key role through its regulatory framework for software as a medical device (SaMD), under which AI-based systems may be classified and subject to requirements related to safety, efficacy, and performance. However, these instruments were not specifically designed to address the broader implications of AI, which may limit their effectiveness when applied to complex AI-driven systems.

The Brazilian AI Bill seeks to address these structural limitations by proposing a multi-layered governance system in which a central authority would coordinate sectoral regulators, allowing them to exercise their mandates within a more coherent institutional structure. While this model aims to enhance regulatory coherence, its practical implementation may present operational challenges, particularly in ensuring effective coordination among multiple authorities.

In this regulatory context, the AI CFM Resolution emerges as a sector-specific response to uncertainty, while partially anticipating elements of the broader framework that is still under discussion.

The CFM resolution: key provisions and regulatory approach

Prior to the issuance of the AI CFM Resolution,[5] physicians' use of AI in Brazil was already a reality, especially in areas such as diagnostic imaging, clinical decision support, and administrative tasks, ranging from data analysis to preparing medical records.

Existing legal frameworks, such as data protection rules and general principles of medical liability, provided some guidance but did not specifically address the role of AI in clinical practice. In particular, there was no clear definition of the limits of AI use in medical decision-making or of the allocation of responsibility between physicians and technological systems.

This lack of specific regulation resulted in legal uncertainty, particularly regarding liability and the acceptable degree of reliance on automated systems in medical practice. As AI tools became more integrated into clinical routines, the absence of clear parameters increasingly highlighted the need for a more structured regulatory approach.

The Resolution thus introduces a structured regulatory approach governing the deployment and use of AI systems in medical practice. Although it represents a significant step towards formalising the use of AI in healthcare in Brazil, it is important to recognise that many of its provisions reflect principles that have long underpinned the medical profession – namely, physician autonomy, patient safety, informed consent, and the primacy of human judgement in clinical decision-making.

At its core, the Resolution adopts a ‘human-in-the-loop’ approach, establishing that AI systems must function strictly as support tools and cannot replace medical decision-making. This principle, whilst now formally codified, is not novel; it reflects the longstanding ethical premise that physicians must exercise independent professional judgement in all clinical matters. Accordingly, physicians retain ultimate authority over diagnosis, treatment, and clinical judgement, and are entitled to accept or reject AI-generated recommendations. They may also refuse to use systems that lack adequate scientific validation or regulatory approval.

From a liability perspective, the Resolution introduces a nuanced framework. Whilst physicians remain responsible for medical decisions, liability may be mitigated where failures are attributable exclusively to AI systems, provided that physicians act with diligence and exercise critical judgement. This approach seeks to balance technological integration with the preservation of professional responsibility.

The Resolution also reinforces patient rights by requiring transparency regarding the use of AI in medical care and by prohibiting AI systems from autonomously communicating diagnoses or treatment decisions. These provisions aim to preserve the physician–patient relationship and ensure that human oversight remains central to clinical interactions.

In addition, the Resolution introduces governance obligations for healthcare institutions, including internal oversight mechanisms, continuous monitoring, and risk mitigation measures. Overall, it reflects a cautious regulatory approach that seeks to enable the use of AI whilst maintaining ethical standards, human control, and institutional accountability.

Regulatory competence: can the CFM regulate AI?

The regulatory authority of the CFM derives from Law No 3,268/1957,[6] which entrusts the body with supervising the practice of medicine and ensuring compliance with professional and ethical standards. Within this mandate, the CFM is empowered to regulate medical conduct and professional responsibilities.

From this perspective, the CFM may legitimately regulate the use of AI to the extent that it directly affects the exercise of the medical profession. The integration of AI into clinical practice directly impacts diagnosis, treatment decisions, and the physician–patient relationship, all of which fall within its traditional scope.

This expansion nonetheless gives rise to a structural tension. Whilst the CFM is well positioned to regulate the use of AI by physicians, the regulation of AI systems as technologies, including their development, validation, and market placement, falls within the remit of other authorities, such as ANVISA and ANPD. This distinction reflects the functional limits of professional self-regulation in addressing the broader technological and commercial dimensions of AI.

Accordingly, the CFM’s initiative may be understood as a legitimate and necessary response, but it also illustrates the limitations of addressing systemic regulatory challenges through isolated institutional action.

Intersection with medical device regulation: the role of ANVISA

An additional layer of complexity arises from the interaction between the AI CFM Resolution and the regulatory framework applicable to medical devices.

In Brazil, AI-based tools may qualify as medical devices, particularly when intended for diagnostic or therapeutic purposes. In such cases, they fall under the ANVISA framework, including Resolution RDC No 657/2022,[7] which governs SaMD. However, this framework was not originally designed to address the specific challenges posed by adaptive or continuously learning AI systems.

As a result, AI technologies may be subject to a dual regulatory regime. Whereas ANVISA regulates the product itself, the CFM regulates its use in clinical practice. Although conceptually clear, this division may prove difficult to operationalise where regulatory requirements intersect.

An AI system approved by ANVISA may still face restrictions under the CFM’s professional rules, particularly where its level of autonomy conflicts with the requirement of direct physician oversight. Similarly, compliance with one regime does not necessarily ensure compliance with the other, creating challenges for healthcare providers and developers and potentially increasing both compliance burdens and barriers to innovation in the healthcare sector.

Future outlook: towards a coordinated regulatory framework

The approval of the Brazilian AI Bill is expected to reshape the regulatory landscape by introducing a comprehensive framework grounded in fundamental rights and coordinated oversight. By allowing sectoral regulators to exercise their mandates within a coherent governance model, it would reduce the fragmentation that characterises the current regime. In this context, authorities such as the CFM and ANVISA would continue to play a significant role, but within a broader, more integrated institutional structure.

However, there is no guarantee that the current version of the Brazilian AI Bill will be approved, as it may undergo significant amendments or be replaced by alternative proposals. This uncertainty, especially in an election year, raises challenges regarding the interaction between future legislation and existing sectoral regulations, particularly if inconsistencies arise in areas such as risk classification, governance, and the allocation of responsibilities.

Final considerations

The AI CFM Resolution represents a timely and pragmatic response to the growing use of AI in healthcare in Brazil. By addressing concrete challenges in medical practice, it reduces uncertainty in a rapidly evolving technological environment and provides physicians and healthcare institutions with clearer parameters for the responsible integration of AI tools.

At the same time, the Resolution highlights the limitations of sectoral approaches in addressing a cross-cutting technology such as AI. The coexistence of multiple regulatory authorities – each with distinct competencies and perspectives – reinforces the need for greater coordination and clearer institutional boundaries to avoid regulatory fragmentation and ensure consistent oversight.

It is equally important to recognise, however, that the AI CFM Resolution does not represent entirely new regulatory ground. Rather, it largely codifies ethical standards and common-sense principles that have always guided the medical profession and patient care. The requirement that physicians retain ultimate decision-making authority, the emphasis on informed consent and transparency, and the insistence on human oversight are not innovations introduced by this Resolution – they are foundational tenets of medical ethics that predate the advent of AI. What the Resolution accomplishes, therefore, is the formal articulation and contextualisation of these pre-existing values within the specific framework of AI-assisted medicine.

This perspective offers reassurance to the medical community: the integration of AI into clinical practice need not be viewed as a disruptive departure from established norms, but rather as an evolution that remains firmly anchored in the ethical principles that have long defined the profession. The Resolution serves as a formal reminder that, regardless of technological advancement, the physician’s duty of care, professional responsibility, and commitment to patient welfare remain paramount.

Looking ahead, the regulation of AI in healthcare in Brazil is likely to remain dynamic, requiring continuous alignment between legal frameworks and regulatory authorities. The anticipated approval of the Brazilian AI Bill may provide a more coherent institutional structure, but the fundamental principles enshrined in the CFM Resolution – grounded as they are in enduring medical ethics – will likely continue to inform the profession’s approach to emerging technologies. In this sense, the Resolution is both a response to the present and a bridge to the future, ensuring that innovation in healthcare remains guided by the timeless values of the medical profession.

Notes

[1]  Grand View Research, Artificial Intelligence in Healthcare Market (2026–2033): www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-healthcare-market, accessed 6 May 2026.

[2]  Regional Centre for Studies for the Development of the Information Society. cetic.br, ‘Brazilian healthcare establishments are advancing in digitization, but computer skills applied to the area are still limited among professionals in the sector, research reveals’ (Estabelecimentos de saúde brasileiros avançam na digitalização, mas habilidade em informática aplicada à área ainda é reduzida entre os profissionais do setor, revela pesquisa), 11 October 2024: https://cetic.br/noticia/estabelecimentos-de-saude-brasileiros-avancam-na-digitalizacao-mas-habilidade-em-informatica-aplicada-a-area-ainda-e-reduzida-entre-os-profissionais-do-setor-revela-pesquisa, accessed 6 May 2026.

[3]  CFM, CFM regulates the use of AI in medicine: https://portal.cfm.org.br/noticias/cfm-normatiza-uso-da-ia-na-medicina, accessed 6 May 2026.

[4]  PL No 2338/2023: https://www.camara.leg.br/proposicoesWeb/prop_mostrarintegra?codteor=2868197&filename=PL%202338/2023, accessed 6 May 2026.

[5]  See CFM Resolution No 2,454 of 11 February 2026: https://www.legisweb.com.br/legislacao/?id=491437, accessed 30 March 2026.

[6]  Chamber of Deputies, Law No 3,268, of 30 September 1957: www2.camara.leg.br/legin/fed/lei/1950-1959/lei-3268-30-setembro-1957-354846-normaatualizada-pl.html, accessed 6 May 2026.

[7]  Resolution of the Collegiate Board of Directors (Resolução da Diretoria Colegiada), RDC No 657 of 24 March 2022: https://anvisalegis.datalegis.net/action/ActionDatalegis.php?acao=abrirTextoAto&tipo=RDC&numeroAto=00000657&seqAto=000&valorAno=2022&orgao=RDC/DC/ANVISA/MS&cod_menu=9434&cod_modulo=310&pesquisa=true, accessed 6 May 2026.