Between algorithms and fundamental rights: the EU’s AI Act and its impact on Latin American employment law

Tuesday 21 April 2026

Cleber Venditti
Mattos Filho, São Paulo
venditti@mattosfilho.com.br

Domingos Fortunato
Mattos Filho, São Paulo
dfortunato@mattosfilho.com.br

José Daniel Vergna
Mattos Filho, São Paulo
daniel.vergna@mattosfilho.com.br

Rafael Caetano
Mattos Filho, Brasília
rafael.caetano@mattosfilho.com.br

Introduction

Artificial intelligence has become an integral part of everyday life, increasingly relied on to simplify and accelerate tasks that were once more time-consuming. Whether applied to personal activities, corporate procedures or public administration, AI’s constant presence reinforces the fact that the technology is not only here to stay but has established itself as a powerful tool. Its influence has grown so substantially that it has triggered important debates about the potential risks associated with its use.

This symbiotic relationship with AI has naturally extended into the workplace, where employers are rapidly adopting systems and tools to support decision‑making, streamline operations and optimise daily routines. A 2025 study by the Brazilian Institute of Geography and Statistics (IBGE) indicates that the use of AI tools among companies increased by 163 per cent between 2022 and 2024.[1]

In employment law discussions, much attention is given to whether AI might replace human workers. While AI can undoubtedly facilitate and improve many aspects of work, it also introduces new vulnerabilities, raising concerns about job security, fairness and the overall quality of working conditions.

Unlike previous technological revolutions, such as the Industrial Revolution, which unfolded slowly and allowed the law to adapt over time, AI is advancing at a far faster pace. Its rapid integration into daily life and the workplace leaves little room for gradual institutional adjustment. As a result, legal systems must react quickly, developing clear and effective rules to address the immediate and significant effects of AI‑driven decision‑making in employment relations.

It is within this context that the European Union’s AI Act, enacted in 2024, stands out as a landmark regulatory instrument. Its international influence illustrates the so‑called ‘Brussels Effect’: the EU’s ability to export its regulatory model beyond its borders by crafting rules influential enough that companies, and even governments, outside the EU adopt similar standards. As a result, the Act has been shaping AI regulatory debates and legislative developments across Latin America and other regions.

The EU’s AI Act and its impact on the world of employment law

The Act is globally recognised as the first comprehensive regulatory framework built around a clear definition of what AI is and a risk‑based approach (unacceptable, high, limited and minimal risk) to uphold fundamental rights, ensure safety, and promote trustworthy AI development.

Adopted in 2024, the Act marks a decisive shift in how AI technologies are expected to operate both within the European Union and across companies that supply or use AI systems connected to the EU market.

Within the employment context, the Act significantly reshapes how employers and human resources (HR) departments may rely on AI tools. Systems commonly used in employment – such as automated recruitment software, CV (resumé)‑screening mechanisms, candidate assessment models, employee monitoring tools and technologies supporting decisions on promotions or contractual terms – can be classified as high‑risk.

As a result, organisations deploying these systems must comply with a set of obligations, including adopting risk‑mitigation procedures, comprehensive technical documentation, transparency measures towards workers, meaningful human oversight and safeguards to prevent discriminatory or otherwise harmful outcomes.

The Act explicitly prohibits AI systems deemed ‘unacceptable risk’, such as technologies enabling intrusive biometric surveillance or manipulative behavioural techniques. These exclusions underline the EU’s commitment to preventing AI from undermining fundamental rights, human dignity or democratic values.

The Act’s significance extends well beyond the EU. Due to its extraterritorial effects, the Act applies whenever AI outputs are used within the EU, even if the developers, providers or deploying companies are located elsewhere. This reality places multinational corporations, particularly those operating global HR and personnel management systems, under mounting pressure to ensure that imported AI technologies meet EU compliance standards.

Effects of the EU’s AI Act in Latin America

There is an increasingly visible movement across Latin America towards adopting, or at least drawing on, international standards such as the EU’s regulatory models for AI. This convergence towards the EU’s AI Act is driven both by political alignment with European human rights standards and by economic considerations, especially the need to ensure compatibility with AI tools developed in the EU. As a result, several countries in the region, including Argentina, Brazil, Chile and Uruguay, have begun to shape their regulatory ecosystems around principles found in the EU’s AI Act and instruments such as the Council of Europe’s Framework Convention.

Chile

Among the above-mentioned countries, Chile is most closely aligned with the EU’s model. The country has been working on an AI legal framework for several years, but since 2024 it has advanced an AI bill structured around a risk‑based classification, including categories such as unacceptable, high, limited and minimal risk. The proposal also incorporates Organisation for Economic Co-operation and Development (OECD) AI principles and UNESCO’s ethics‑based principles, and places strong emphasis on fundamental rights, transparency and human oversight of automated systems.

Brazil

Brazil has discussed AI regulation since 2020. The most recent proposal, Bill 2,338/2023, incorporates key elements of the EU approach, such as human review of automated decisions and a comprehensive risk‑based model designed not to restrict innovation but to create safeguards that protect fundamental rights and promote responsible AI development, alongside provisions on protection against discrimination, governance and civil liability. In addition to legislative debates, Brazil’s constitutional landscape regarding automation gained significant definition with the Supreme Federal Court’s recent judgment in Direct Action of Unconstitutionality by Omission No 73, which recognised that the National Congress has unconstitutionally failed to regulate the right to protection in the face of automation established in Brazil’s Federal Constitution.[2] As a result, in 2025 the Supreme Federal Court set a 24‑month deadline for the National Congress to enact legislation protecting workers from automation. The ruling does not aim to hinder technological development but rather to ensure safeguards for workers during technological transitions, including those driven by AI.

Argentina

Argentina has had data privacy legislation since 2000, which is considered one of the earliest comprehensive data‑protection frameworks in the region. Multiple legislative proposals have been introduced in recent years regarding AI‑specific regulation, but no unified AI law has yet been enacted. Initially, Argentina’s governance approach to AI was fragmented and did not incorporate the EU’s risk‑based classification model. Over time, however, newer legislative initiatives have shifted towards frameworks aligned with international standards. These bills introduce transparency obligations, mandatory audits, AI system registries, requirements for human oversight, and other accountability measures that reflect the structure and reasoning found in the EU’s AI Act.

Uruguay

Uruguay is the first country in the region to sign the Council of Europe’s Framework Convention on AI and Human Rights, Democracy and the Rule of Law, which is expected to be ratified domestically. This treaty aims to ensure that the use of AI remains compatible with human rights, democracy and the rule of law. Uruguay’s regulatory framework also reflects global best practices, aligning with the OECD AI Principles and the UNESCO Recommendation on the Ethics of AI.

Despite their differences, the four countries’ emerging legal frameworks share several features rooted in the EU’s AI Act. They:

  • acknowledge the specific risks of AI‑based HR and management systems;
  • incorporate mandatory human oversight mechanisms;
  • require transparency, documentation and accountability from deployers and developers;
  • adopt EU‑style risk‑mitigation and compliance practices; and
  • establish penalties for non-compliance.

Several factors explain why Latin American countries are moving towards regulatory models that resemble the EU’s AI Act. First, governments increasingly seek alignment with international regulatory standards to ensure compatibility with global markets and encourage trustworthy AI systems. Growing concerns about algorithmic discrimination, especially in employment, public administration and access to essential services, have also pushed policy makers to embed rights‑based safeguards. The widespread use of AI systems imported from Europe and the United States has strengthened incentives to follow EU rules. Because the EU’s AI Act applies extraterritorially, companies outside the European Union must prepare for compliance whenever high‑risk AI systems or their outputs are used within the EU, even without physical presence there.

Alongside legal convergence, the region also shows strong ethical alignment around values such as non‑discrimination, human dignity, privacy and data protection, and institutional accountability for algorithmic decisions.

These shared principles constitute the ethical foundation of Latin America’s emerging AI norms and reflect a common effort to ensure that AI adoption reinforces, rather than undermines, fundamental rights.

Practical implications in the workplace

As a result of this regulatory landscape and the region’s growing tendency to use the EU’s AI Act as a reference, companies operating in Latin America must adopt a clear and proactive approach to the use of AI in the workplace.

Auditing AI used in recruitment and performance management

Organisations need to assess and audit AI systems applied in hiring, promotion, performance evaluation and workforce analytics. These systems are typically treated as high‑risk and require careful analysis to determine how they fit within the risk categories. This is especially relevant because most HR‑related AI tools used in the region are imported and therefore already shaped by EU regulatory expectations.

Ensuring human review of automated decisions

Companies must guarantee that workers are never evaluated, selected or dismissed solely on the basis of algorithmic outputs. ‘Human-in-the-loop’ oversight is essential to prevent errors, mitigate bias and ensure the fairness of decisions that affect employment conditions or career trajectories.

Transparency towards workers

Employers should inform workers about how AI systems operate, what types of data they use and how algorithmic processes may influence decisions affecting their roles or evaluations. Maintaining open and ongoing communication builds trust and gives employees the ability to understand and contest automated decisions when necessary.

Implementing internal algorithmic governance policies

Companies need internal policies that formalise procedures for oversight, documentation, risk management and bias prevention. Such governance structures ensure organisational control over AI tools, provide clarity regarding responsibilities, and support continuous compliance as legal and ethical expectations evolve.

The main risks and opportunities of AI in employment relations

In the context of AI‑mediated employment relations, Latin American countries face a complex landscape marked by risks, tangible opportunities and structural challenges, including the following.

Risks:

  • discriminatory bias in recruitment;
  • algorithmic opacity (black‑box decision systems);
  • excessive surveillance and related impact on mental health; and
  • increasing algorithmic subordination.

Opportunities:

  • greater standardisation and rationality in job‑selection processes;
  • reduced discrimination when systems are auditable;
  • potential to expand accessibility and inclusion; and
  • new tools for performance analysis.

Challenges faced by countries in the region:

  • limited institutional capacity;
  • technological asymmetry;
  • dependence on imported systems; and
  • displacement of lower‑skilled workers.

While AI tools, such as algorithmic instruments, can bring efficiency and greater consistency to HR procedures, they also raise concerns about fairness, transparency and the preservation of workers’ autonomy. At the same time, uneven institutional capacity and technological dependence shape how effectively countries in the region can regulate and benefit from these systems.

The International Labour Organization (ILO) has also emerged as a pivotal actor in this space, examining and shaping the governance frameworks which address AI’s growing impact on the world of work. Through its initiatives, the ILO seeks to ensure that the deployment of AI in employment settings safeguards human wellbeing, upholds fundamental employment rights, and promotes equitable development on a global scale.[3]

Ensuring responsible and rights‑respecting use of AI in employment will require countries to strike a balance between promoting technological innovation and guaranteeing transparency, accountability, non-discrimination and meaningful human oversight.

Conclusion

The use of AI in the workplace demands robust, coordinated and forward‑looking regulatory frameworks capable of addressing both innovation and the protection of fundamental rights.

As global governance models continue to evolve, the EU’s AI Act has emerged as the central reference point, and it is already shaping debates and legislative trajectories across Latin America. While the EU’s AI Act is used as a benchmark, certain aspects of the legislation are still under review and discussion. Companies should closely monitor the ongoing developments and regulatory updates to ensure timely compliance and to anticipate potential impacts on their operations.

Within the region, Chile and Uruguay stand at the forefront of this convergence, adopting regulatory structures and ethical commitments closely aligned with EU standards. Argentina and Brazil, while advancing at different speeds and through distinct institutional paths, also incorporate core elements of the EU model, particularly regarding transparency obligations, risk‑based classification and human‑oversight safeguards in automated decision‑making.

The future of work in Latin America will depend on each nation’s capacity to harmonise ethical principles, legal protections and practical governance mechanisms in ways which align, directly or indirectly, with the benchmark introduced by the EU’s AI Act, while still responding to local social, economic, cultural and institutional realities.


Notes

[1] Rayane Moura, ‘Use of Artificial Intelligence at work grows 163% in Brazilian industry, says IBGE’, Globo.com, 24 September 2025, available at https://g1.globo.com/trabalho-e-carreira/noticia/2025/09/24/uso-de-inteligencia-artificial-no-trabalho-na-industria-brasileira-ibge.ghtml accessed 15 April 2026.

[2] ‘Brazil's Supreme Court gives Congress 24 months to legislate on protecting workers in the face of automation’, STF, 10 September 2025, available at https://noticias.stf.jus.br/postsnoticias/stf-da-prazo-de-24-meses-para-que-congresso-legisle-sobre-protecao-de-trabalhadores-diante-da-automacao accessed 15 April 2026.

[3] ‘Regulations.ai: The source for AI regulation. Laws. Governance. Research. Worldwide’, available at https://regulations.ai accessed 15 April 2026.