How AI can reshape anti-corruption compliance
Alessandro Musella
Vectis Legal, Milan
Introduction
The rise of artificial intelligence (AI), particularly generative AI, has captured the attention of legal and compliance professionals (1).
Concurrently with generative AI, a more sophisticated technology known as agentic AI is emerging. Agentic AI systems are engineered to make decisions and act autonomously to achieve complex objectives with limited supervision (2). These systems integrate the flexibility of large language models (LLMs) with the precision of a set of programmed actions, emulating human-like reasoning processes to solve real-time challenges (3). Key capabilities include autonomy, goal-oriented behaviour, adaptability, reasoning, planning and the ability to interact with their environment using tools and application programming interfaces (APIs).
Unlike generative AI, which primarily focuses on content creation, agentic AI ‘does’ rather than merely ‘creates’. This evolution signifies a shift from content generation to task execution and problem-solving, and carries significant implications for compliance, where many tasks involve complex workflows and big data analysis that transcend simple content generation.
Navigating the future: IBA and OECD research on AI adoption in the legal and compliance sectors
The International Bar Association (IBA) report titled The Future is Now: Artificial Intelligence and the Legal Profession has provided valuable initial insights into the anticipated adoption of AI within the legal sector (4). The report forecasts a significant impact on the legal profession, with differences based on organisations’ size and geography.
The Organisation for Economic Co-operation and Development (OECD) working paper titled ‘Generative AI for anti-corruption and integrity in government’ explores the potential of generative AI in bolstering anti-corruption efforts and integrity within government (5). The OECD acknowledges the potential of this technology to detect corruption risks, strengthen compliance controls, monitor financial transactions and uncover sophisticated bribery schemes. Based on a questionnaire administered to 59 organisations across 39 countries, the paper highlights the varying levels of AI integration among different entities, such as audit institutions, anti-corruption agencies and internal control functions. Approximately 50 per cent of the organisations surveyed were in the exploratory stage, while 24 per cent were in the development stage. Supreme audit institutions (SAIs) are generally more advanced in their use of generative AI than other types of organisations. The report underscores the potential of AI to enhance operational efficiency, support investigations and facilitate pattern analysis, thereby strengthening anti-corruption efforts.
Table 1: Summary of key forecasts from IBA and OECD reports
| Forecast area | IBA report | OECD report |
| --- | --- | --- |
| Current state of AI adoption | Widespread adoption with variations based on law firm size and region; primarily for internal use, but increasing for client-facing applications in larger firms. | Generative AI offers opportunities to enhance the work of integrity actors, particularly through LLMs; employed to detect corruption risks, strengthen compliance and monitor financial transactions. |
| Anticipated impacts | Significant impact on law firm structure, hiring practices and business models; potential shift towards fixed or value-added fees; prioritisation of AI skills in hiring. | Potential to strengthen anti-corruption and integrity in government by providing advanced tools for detection, prevention and analysis; may improve efficiency and effectiveness of anti-corruption agencies. |
| Challenges and opportunities | Significant challenges in data governance, security, intellectual property and privacy; need for human oversight and ensuring ethical standards; importance of stakeholder consultation and regulatory consistency. | Concerns regarding ‘hallucinations’ in generative models and opaque decision-making processes; need to address privacy, security and ethical use of data; importance of building trustworthy data ecosystems and advancing ethical and legal frameworks. |
| Future adoption | AI is expected to become a key factor in legal service delivery and talent acquisition; need for legal professionals to adapt to working alongside AI tools. | OECD emphasises the importance of addressing inherent risks and ethical considerations to ensure responsible and reliable implementation in the context of government integrity. |
Transforming compliance: promising application areas for AI
Generative AI can play a role in supporting the fight against corruption, spanning detection, investigation, prevention and control (6) (7).
In the detection phase, AI can monitor transactions and contract databases to spot unusual patterns that may signal fraud, such as inflated payments or non-competitive contract awards. It can identify suspicious actors and relationships, helping uncover conflicts of interest or bid-rigging schemes.
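By way of illustration only, the short Python sketch below shows one simple way such a screen could flag payments that are unusually large relative to a vendor's payment history. The vendor names, amounts, field names and threshold are hypothetical, and a production system would draw on far richer data and models.

```python
from statistics import median

# Hypothetical payment records (vendor names and amounts invented for illustration)
payments = [
    {"vendor": "Alpha Srl", "amount": 10_200},
    {"vendor": "Alpha Srl", "amount": 9_800},
    {"vendor": "Alpha Srl", "amount": 10_500},
    {"vendor": "Alpha Srl", "amount": 48_000},  # unusually large for this vendor
    {"vendor": "Beta SpA", "amount": 5_000},
    {"vendor": "Beta SpA", "amount": 5_300},
]

def flag_inflated_payments(records, multiplier=3.0):
    """Flag payments well above a vendor's median payment (a crude inflation screen)."""
    by_vendor = {}
    for record in records:
        by_vendor.setdefault(record["vendor"], []).append(record["amount"])
    flags = []
    for vendor, amounts in by_vendor.items():
        if len(amounts) < 3:
            continue  # too little history to judge this vendor
        typical = median(amounts)
        flags.extend((vendor, amount) for amount in amounts if amount > multiplier * typical)
    return flags

print(flag_inflated_payments(payments))  # [('Alpha Srl', 48000)]
```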
During investigations, AI streamlines the review of large volumes of documents and communications. It can flag key concepts, detect the tone of conversations and build timelines that visualise the flow of funds and involvement of different actors, which is essential in complex cases.
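The timeline-building step can be illustrated with a minimal sketch that extracts the dates mentioned in document excerpts and orders them chronologically. The document snippets and date format below are invented for illustration; a real investigation workflow would combine this with entity extraction and LLM-assisted review.

```python
import re
from datetime import datetime

# Invented document excerpts used purely to illustrate the approach
documents = [
    {"id": "email-042", "text": "Consulting fee approved on 2023-05-14 by regional manager."},
    {"id": "invoice-17", "text": "Payment of EUR 95,000 issued on 2023-06-02 to offshore entity."},
    {"id": "memo-003", "text": "Agent engagement discussed on 2023-04-28, no due diligence on file."},
]

DATE_PATTERN = re.compile(r"\d{4}-\d{2}-\d{2}")  # assumes ISO-formatted dates

def build_timeline(docs):
    """Return (date, document id, text) tuples sorted chronologically."""
    events = []
    for doc in docs:
        for raw in DATE_PATTERN.findall(doc["text"]):
            events.append((datetime.strptime(raw, "%Y-%m-%d").date(), doc["id"], doc["text"]))
    return sorted(events)

for date, doc_id, text in build_timeline(documents):
    print(date, doc_id, "-", text)
```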
When it comes to prevention, AI enhances compliance awareness by delivering personalised training and tracking employee progress. It also supports leadership by offering data-driven insights, simulating outcomes of regulatory changes and promoting transparent, well-informed decision-making.
Finally, AI strengthens internal controls by continuously monitoring processes, detecting anomalies and optimising audits. This proactive risk management helps organisations stay ahead of compliance issues and reinforces a culture of integrity.
The potential of agentic AI
Agentic AI offers further potential for enhancing the monitoring and management of compliance (8).
Regulatory monitoring: agentic AI can continuously monitor regulatory sources across different jurisdictions, extract new requirements and assess their potential impact on the organisation. It can also automate the process of updating policies and procedures to reflect regulatory changes, and develop personalised, adaptive training modules to improve employee understanding of compliance requirements.
Contract management: agentic AI can significantly streamline contract analysis and lifecycle management for compliance. It can autonomously check contract clauses for consistency with applicable policies and flag potential compliance issues.
Risk analytics: agentic AI can autonomously monitor compliance risks by continuously updating and analysing risk surveys from internal risk owners and collecting key data from various sources. Its ability to analyse large datasets in real time allows for more effective risk analysis and detection of issues. Agentic AI also has the potential to use predictive analytics to shift risk management from a reactive to a proactive approach, anticipating potential issues (9).
Third-party due diligence (TPDD): agentic AI can deliver a transformative approach to TPDD by automating and scaling the evaluation of thousands of suppliers and partners. Leveraging structured data and real-time monitoring from leading data providers, the agent conducts mass diagnostics, risk profiling and link analysis. By continuously scanning for red flags and reputational shifts, the agent enables proactive interventions and targeted enhanced due diligence (10).
Transaction monitoring: agentic AI is capable of analysing thousands of transactions in real time, identifying patterns indicative of suspicious activities. It can detect red flags such as excessive commissions, disproportionate discounts, vague consulting agreements and payments to offshore accounts (11).
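As a simplified illustration of the rule-based element of such transaction screening, the sketch below encodes a few of the red flags mentioned above. The field names, thresholds and offshore jurisdiction list are hypothetical examples rather than recommended settings, and an agentic system would layer pattern analysis and contextual reasoning on top of rules of this kind.

```python
# Hypothetical rule parameters, for illustration only
OFFSHORE_JURISDICTIONS = {"KY", "VG", "PA"}           # example list, not a policy
MAX_COMMISSION_RATE = 0.10                            # example threshold
VAGUE_DESCRIPTIONS = {"consulting services", "advisory fees", "miscellaneous"}

def red_flags(transaction):
    """Return the red-flag labels triggered by a single transaction record."""
    flags = []
    if transaction.get("commission_rate", 0) > MAX_COMMISSION_RATE:
        flags.append("excessive commission")
    if transaction.get("beneficiary_country") in OFFSHORE_JURISDICTIONS:
        flags.append("payment to offshore account")
    if transaction.get("description", "").strip().lower() in VAGUE_DESCRIPTIONS:
        flags.append("vague contract description")
    return flags

tx = {
    "id": "TX-9912",
    "commission_rate": 0.18,
    "beneficiary_country": "KY",
    "description": "Consulting services",
}
print(tx["id"], red_flags(tx))
```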
A test case: agent-based framework for third-party screening
As part of our exploration into the practical applications of agentic AI, Vectis and MesaGroup conducted an experimental implementation focused on TPDD, one of the most promising use cases, given the significant resources this activity consumes across many organisations.
We designed an agentic AI system for large-scale TPDD, structured around a multi-layered prompt architecture, which defines its role as a domain expert in corporate risk assessment and instructs it to operate across four key objectives: (i) mass diagnostics of over 7,500 third parties using pre-configured key risk indicators (KRIs) and a risk scoring model; (ii) deep analysis of flagged entities, including false positive sanitisation and link analysis; (iii) enhanced due diligence on a selected panel of high-risk entities; and (iv) continuous real-time alerting on reputational risks.
Prompt for AI agent – advanced TPDD
Role:
Act as an AI agent specialised in due diligence and risk analysis (reputational, financial, legal and corporate), operating at scale. Your task is to support the organisation in monitoring, analysing and evaluating third parties (active suppliers, partners and potential suppliers) through the dedicated platform integrated with the data providers.
Operational objectives
- Execute large-scale diagnostics on all active and registered third parties, assigning different priority levels. Use the predefined risk model and KRIs.
- Aggregate and analyse data from official and reputational sources (corporate data, PEP, sanctions, financials, legal and adverse events).
- Identify anomalies and risk profiles, classifying third parties as: ‘No Risk’, ‘Potential Risk’ or ‘Confirmed Risk’.
- Support false positive sanitisation, using cross-checking of identifying and reputational information (eg, name matches, missing identifiers).
- Perform link analysis to detect connections between individuals and companies, including indirect relationships (eg, shared executives or ownership structures).
- Assist in selecting a panel of 100 third parties for enhanced due diligence (EDD) based on potential legal red flags relevant under Articles 94–95 of the Italian Procurement Code (Legislative Decree 36/2023) and generate an individual report for each entity including:
- summary of findings;
- legal assessment of detected red flags; and
- recommendations for mitigation actions aligned with internal policies.
- Activate a continuous alerting system for all diagnosed third parties, providing real-time notifications of negative changes in reputational KRIs, based on SGR Compliance data.
Data sources to query
- Corporate data: business registry data (Italy and abroad), CCIAA (Camera di Commercio di Milano Monza Brianza Lodi, the Milan Chamber of Commerce) classification codes (ATECO, NACE, SIC, RAE, SAE), registered and operational addresses, PEC, company roles and shareholders, control bodies, shareholdings.
- Reputational data: sanctions lists, enforcement data, politically exposed persons and local politicians, adverse media.
- Financial data: financial statements, key financial ratios (liquidity, profitability, leverage, solvency).
- Legal and adverse events: mergers/splits, ongoing legal proceedings, early warning signals per the Business Crisis Code (Italy).
Operational mode
- Operate in batch mode (for mass diagnostics) and interactive mode (for detailed investigations).
- All data processed must be logged and traceable.
- Deliver structured and exportable outputs (PDF, Excel or via API) for legal, compliance and procurement teams.
- Use technically sound but accessible language, highlighting critical issues with severity level and mitigation suggestions.
Expected outputs
- Risk classification for each third party.
- Summary risk report for flagged subjects.
- Full individual report for entities selected for enhanced due diligence.
- Real-time alerts for reputational key risk indicator deteriorations.
The agent operates through a hybrid execution mode (batch for mass screening and interactive for detailed investigation) and is integrated via API with two leading data providers covering corporate, financial, reputational and legal data sources, both domestic and international. The output is delivered in structured, exportable formats for human analysts and decision-makers, ensuring traceability, auditability and regulatory alignment.
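By way of illustration, the scoring and classification logic underlying the batch-mode diagnostics can be sketched as follows. The KRIs, weights and thresholds shown are hypothetical placeholders that would need to be calibrated against the organisation's risk model; they are not the configuration used in the test case.

```python
# Hypothetical KRI weights, for illustration only
KRI_WEIGHTS = {
    "sanctions_hit": 10,
    "pep_link": 5,
    "adverse_media": 3,
    "financial_distress": 2,
}

def risk_score(kri_flags):
    """Sum the weights of the key risk indicators triggered for one third party."""
    return sum(KRI_WEIGHTS[kri] for kri, hit in kri_flags.items() if hit)

def classify(score, potential_threshold=3, confirmed_threshold=8):
    """Map a score to the three-tier classification used in the prompt above."""
    if score >= confirmed_threshold:
        return "Confirmed Risk"
    if score >= potential_threshold:
        return "Potential Risk"
    return "No Risk"

# Hypothetical third parties and the KRIs triggered for each
third_parties = {
    "Supplier A": {"sanctions_hit": False, "pep_link": True, "adverse_media": True, "financial_distress": False},
    "Supplier B": {"sanctions_hit": False, "pep_link": False, "adverse_media": True, "financial_distress": False},
    "Supplier C": {"sanctions_hit": False, "pep_link": False, "adverse_media": False, "financial_distress": True},
}

for name, flags in third_parties.items():  # batch-mode pass over the register
    score = risk_score(flags)
    print(f"{name}: score={score}, classification={classify(score)}")
```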
Conclusion: embracing the agentic future of compliance
Agentic AI demonstrates significant promise in advancing anti-corruption compliance, particularly through its ability to autonomously collect, synthesise and interpret information from diverse and unstructured sources. Our experimental implementation in the area of TPDD confirms the concrete value of such systems, highlighting their potential to streamline high-effort activities and uncover risk signals at scale.
However, realising this potential requires a thoughtful and rigorous approach. Identifying the right use cases is essential, as is the need for careful evaluation of the agent's outputs, iterative fine-tuning and validation before moving into production environments. Equally important is the development of a well-calibrated set of KRIs and risk models that align with the organisation’s internal standards and industry best practices.
Ultimately, the integration of agentic AI into compliance functions must be both technically sound and ethically grounded. This includes strong governance around data quality, privacy, algorithmic transparency and the preservation of human oversight (12).
When these foundations are in place, agentic AI has the potential to become a powerful and trusted ally in building more resilient and proactive compliance systems.
Bibliography
1. Valentina Lana and Nizar Ouarti, ‘The Potential and Limitations of AI in the Legal Field’, ALM Law.com (9 April 2025) https://www.law.com/international-edition/2025/04/09/the-potential-and-limitations-of-ai-in-the-legal-field-giving-back-to-humans-and-machines-what-belongs-to-them-/ Accessed 10 June 2025.
2. Teaganne Finn and Amanda Downie, ‘Agentic AI vs generative AI’, IBM https://www.ibm.com/think/topics/agentic-ai-vs-generative-ai Accessed 10 June 2025.
3. Katherine Forrest, ‘The Emerging World of AI Agents’, ALM Law.com (29 April 2024) https://www.law.com/newyorklawjournal/2024/04/29/ai-sat-down-at-the-table-and-began-to-negotiate-the-deal-the-emerging-world-of-ai-agents/ Accessed 10 June 2025.
4. International Bar Association and The Center for AI and Digital Policy, The Future is Now: Artificial Intelligence and the Legal Profession (September 2024) https://www.ibanet.org/document?id=The-future-is%20now-AI-and-the-legal-profession-report Accessed 10 June 2025.
5. OECD, ‘Generative AI for anti-corruption and integrity in government’ (22 March 2024) https://www.oecd.org/en/publications/generative-ai-for-anti-corruption-and-integrity-in-government_657a185a-en.html Accessed 10 June 2025.
6. Business at OECD, Harnessing AI for Integrity: Opportunities, Challenges, and the Business Case Against Corruption https://www.businessatoecd.org/hubfs/Harnessing%20AI%20for%20Integrity.pdf?hsLang=en Accessed 10 June 2025.
7. Nicolas Pinaud and Julia Fromholz, ‘How can cutting-edge technologies support the global fight against corruption?’, OECD (4 April 2025) https://www.oecd.org/en/blogs/2025/04/how-can-cutting-edge-technologies-support-the-global-fight-against-corruption.html Accessed 10 June 2025.
8. Jagreet Kaur, ‘Agentic AI for Compliance | The Ultimate Guide’, XenonStack (29 March 2025) https://www.xenonstack.com/blog/agentic-ai-compliance Accessed 10 June 2025.
9. Jagreet Kaur, ‘Revolutionizing Risk Management in Banking with Agentic AI’, Akira AI (4 February 2025) https://www.akira.ai/blog/risk-management-with-agentic-ai Accessed 10 June 2025.
10. CENTRL Team, ‘AI Agents in Diligence: A Look At The Future of the Industry’, CENTRL (4 April 2025) https://www.centrl.ai/resources/ai-agents-in-diligence-a-look-at-the-future-of-the-industry/ Accessed 10 June 2025.
11. Jagreet Kaur, ‘Transforming Real-Time Transactions Monitoring: AI Agents Unleashed’, Akira AI (4 February 2025) https://www.akira.ai/blog/real-time-transactions-monitoring-with-agentic-ai Accessed 10 June 2025.
12. Lucinity, ‘Ethical Considerations in Deploying Agentic AI for AML Compliance’, Lucinity (23 January 2025) https://lucinity.com/blog/ethical-considerations-in-deploying-agentic-ai-for-aml-compliance Accessed 10 June 2025.