AI and regulatory overlap

Neil Hodge
Friday 29 November 2024

A company’s use of AI can place it in the crosshairs of multiple regulators simultaneously when things go wrong. In-House Perspective examines how legal teams can mitigate the risks.

The proliferation of artificial intelligence (AI) – as well as the business case promoting its use – has led to companies globally embracing both off-the-shelf and bespoke products. Within a relatively short timeframe, AI technologies have established a track record of making processes faster and more efficient, thereby reducing costs and enabling businesses to remain competitive.

But there’s a significant problem: many companies are unaware of whether the AI technologies they’ve deployed are legally safe, or where the data these systems are trained on has come from. They may also not have realised that they as users – and not necessarily the companies that developed the technology – are in the firing line of regulators for any instance of non-compliance.

As a result, commentators warn that companies are at serious risk of facing multiple fines and other sanctions from a number of different regulators for the same misconduct if the AI systems they use harm customers or misuse their data. And the penalties can be substantial. The EU’s AI Act, for example, which entered into force in August 2024, allows regulators to impose fines of up to €35m or seven per cent of worldwide group turnover, whichever is higher. In short, says Lee Ramsay, a practice development lawyer at Lewis Silkin in London, ‘it would be folly’ to think that only tech companies are on the hook.
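To make that ‘whichever is higher’ mechanic concrete, the short Python sketch below applies the cap described above to an illustrative turnover figure; the numbers are examples only, not an assessment of any actual penalty.

```python
def ai_act_max_fine(worldwide_turnover_eur: float) -> float:
    """Ceiling described in the article: EUR 35m or 7% of worldwide
    group turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

# Illustrative figure only: a group with EUR 2bn worldwide turnover faces
# a ceiling of EUR 140m, because 7% of turnover exceeds the EUR 35m floor.
print(f"{ai_act_max_fine(2_000_000_000):,.0f}")  # 140,000,000
```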

Multi-agency perils

To get a sense of how AI use could result in multi-agency scrutiny, imagine that a large UK bank with a European presence implements an AI-driven credit scoring system to streamline loan approvals and personalise financial product offerings. The AI system is designed to assess whether an applicant is creditworthy, predict future financial behaviours and tailor marketing campaigns for different customer segments – all routine business so far.

But if the AI fails, it could lead to several regulatory breaches and draw scrutiny from multiple agencies. Apart from the EU’s AI Act, misuse could breach equality laws if the system inadvertently introduces biases and discriminates against certain demographic groups, such as by systematically assigning lower credit scores to applicants from certain backgrounds, leading to disproportionately higher rejection rates.

It could also break privacy rules such as the EU General Data Protection Regulation (GDPR) if the AI model uses extensive customer data without obtaining explicit consent for some data processing activities. There may also be a breach of the UK Financial Conduct Authority (FCA) rules if AI profiling leads to mis-selling and unfair customer treatment – for example, targeting high-risk customers with complex financial products unsuitable for their needs, resulting in financial losses and complaints.

Other regulators could also open investigations for the same misconduct. For example, misleading claims about the capabilities of AI in marketing communications could fall foul of broadcasting, advertising and marketing watchdogs, while competition and consumer protection authorities may examine how companies use algorithms and AI systems to set prices, target consumers or make personalised offers.

This scenario focuses only on the action UK regulators might take. If the organisation has operations or customers in the EU – or has used the data of the bloc’s citizens in any way – it could face the wrath of multiple authorities in individual EU Member States, too. Worse still, the enforcement mechanisms associated with these regulations may apply in parallel and lead to concurrent enforcement – though the EU AI Act states that the amount of any penalty should, in principle, take into account whether the same behaviour has already been subject to another administrative fine.

The company using AI is responsible for the technology’s use, the data it’s trained on and its outcomes. It’s clear then that companies need to take AI governance and auditability seriously. While tech companies ‘are accountable for the design and functionality of their products, companies that deploy these technologies must ensure they are used responsibly and in compliance with relevant regulations,’ says Ramsay. ‘Ignoring danger signs and failing to carry out the usual compliance and risk management checks mean potential exposure to significant legal and operational risk, which in turn can lead to regulatory action, reputational damage and loss of consumer trust. Ensuring robust oversight and governance are crucial, regardless of any perceived liability of the tech providers.’

‘Ignoring danger signs and failing to carry out the usual compliance and risk management checks mean potential exposure to significant legal and operational risk’

Lee Ramsay
Practice Development Lawyer, Lewis Silkin

Getting to grips with AI

Marlene Schreiber, Vice-Chair of the IBA Technology Law Committee and a partner at Härting in Berlin, doesn’t believe that many companies are yet aware that multiple regulators can investigate and sanction organisations for the same misuse of AI. In her opinion, most companies haven’t yet engaged with the subject of AI regulation, or – at best – have only touched upon it superficially. However, she adds that larger multinational companies are more likely to identify such legal risks early on and set up frameworks to ensure compliance across multiple jurisdictions, while smaller businesses may struggle and ‘underestimate the complexity of compliance with varying AI laws and any possible sanctions’.

She says there are several key steps companies should take to ensure that their use of AI complies with different regulatory requirements. Firstly, companies should map and make an inventory of all the AI systems in use across the organisation, including their data sources, decision-making processes and intended applications. This exercise should also include cross-checking AI usage by remote teams, as the location of workers may trigger compliance obligations in regions such as the EU, even if the business is primarily based elsewhere.
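As a rough illustration of what such an inventory might look like, the sketch below uses purely illustrative field names and an example entry based on the credit-scoring scenario above; it is not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI inventory (illustrative fields only)."""
    name: str
    business_owner: str
    data_sources: list[str]            # where training/input data comes from
    decision_process: str              # what the system decides or recommends
    target_application: str            # where and how the system is used
    user_locations: list[str] = field(default_factory=list)  # remote teams may trigger EU obligations

inventory = [
    AISystemRecord(
        name="credit-scoring-model",
        business_owner="Retail Lending",
        data_sources=["loan application forms", "credit bureau data"],
        decision_process="assigns a creditworthiness score to each applicant",
        target_application="loan approvals and personalised product offers",
        user_locations=["UK", "DE"],   # EU-based users bring EU rules into scope
    ),
]
```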

Organisations also need to retain awareness of how AI-related regulations are developing and changing, particularly in key jurisdictions such as Asia, Europe and the US, she says. Furthermore, companies should conduct dynamic risk assessments to align the use of AI by the business with the most restrictive regulations across multiple markets and ensure that regulatory compliance is considered in the design phase of AI development to achieve ‘compliance by default’.

To best utilise internal resources and to ensure the organisation is receiving adequate AI risk assurance, Schreiber says companies should align their AI compliance efforts with existing processes and systems, and ensure adherence to data protection rules, cybersecurity standards and robust internal controls. She suggests they conduct regular audits of AI systems that have been – or are about to be – implemented, focusing on transparency, explainability and bias mitigation, as well as review and – if necessary – adjust contracts with third-party vendors who are supplying AI technologies, either bespoke or off-the-shelf. They should also conduct regular training for employees to raise awareness and ensure compliance.

In-house value

In-house legal teams have plenty of scope to add value, too, says Schreiber. Firstly, they should adopt a proactive, ongoing review process to manage risks across different jurisdictions and ensure that external legal advice is sought when they believe the organisation needs to navigate complex, cross-border compliance challenges. She adds that they should prioritise the categorisation of AI systems under the EU AI Act and equivalent local regulations to determine the level of risk exposure early in the product development cycle.

Organisations should also classify AI systems into risk categories – for example, they could consider using a ‘traffic light’ system – and assess the company’s role within each system, such as ‘provider’, ‘user’ and ‘distributor’. Schreiber warns that ‘if AI applications fall within or even on the fringe of high-risk systems, the implementation effort is significantly higher and must start very early in the value chain’. She therefore believes that in-house lawyers should ‘ensure that regulatory compliance is considered in the design phase of AI development to minimise the risk of non-compliance’.
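A minimal sketch of how that ‘traffic light’ triage and role assessment might be recorded is shown below; the categories, domains and escalation rule are illustrative assumptions, not the AI Act’s formal classification.

```python
from enum import Enum

class RiskLight(Enum):
    GREEN = "minimal risk - standard controls"
    AMBER = "limited or borderline - heightened review"
    RED = "high risk - early, intensive compliance effort"

class Role(Enum):
    PROVIDER = "develops or places the system on the market"
    USER = "deploys the system in its own operations"
    DISTRIBUTOR = "makes the system available in the supply chain"

# Illustrative triage rule: anything touching credit, employment or health
# that affects individuals is treated as high risk and escalated for review.
SENSITIVE_DOMAINS = {"credit", "employment", "health"}

def triage(domain: str, affects_individuals: bool) -> RiskLight:
    if domain in SENSITIVE_DOMAINS:
        return RiskLight.RED if affects_individuals else RiskLight.AMBER
    return RiskLight.AMBER if affects_individuals else RiskLight.GREEN

print(triage("credit", affects_individuals=True).value)  # high risk - early, intensive compliance effort
```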

‘If AI applications fall within or even on the fringe of high-risk systems, the implementation effort is significantly higher and must start very early in the value chain’

Marlene Schreiber
Vice-Chair, IBA Technology Law Committee

However, Schreiber adds that the role of in-house counsel strongly depends on the company’s size, the capabilities and capacity of the legal team and the scope of AI usage. ‘To establish a sufficient AI governance framework within the company, it needs an interdisciplinary team that the in-house legal function should be a part of,’ which includes coordinating the work and expertise of the information technology (IT) and cybersecurity teams, compliance, risk management and external counsel, she says.

Raphaël Dana, Vice-Chair of the Fintech Subcommittee of the IBA Technology Law Committee, also believes the level of awareness that AI-related misconduct could trigger investigations by various regulators across different jurisdictions ‘often depends on the industry and the company’s maturity level in AI governance’. Dana, who's also the Founder of boutique tech law firm Dana Law in Paris, warns that ‘the intricacies of overlapping regulatory mandates, especially when it involves AI, can sometimes be overlooked, leading to insufficient mitigation strategies’. More worryingly, he says, ‘there is still a knowledge gap regarding the potential for enforcement actions across multiple regulators, including emerging AI-specific laws that compound existing oversight in areas like data privacy and competition’.

‘There is still a knowledge gap regarding the potential for enforcement actions across multiple regulators’

Raphaël Dana
Vice-Chair, Fintech Subcommittee of the IBA Technology Law Committee

Like Schreiber, he believes in-house counsel can play a critical role in ensuring that AI governance frameworks are comprehensive and adaptive. He says in-house legal teams should actively collaborate with their colleagues in the areas of compliance, data and technology to create a risk matrix that aligns with jurisdictional obligations, while leading efforts to assess legal risks and ensuring the technical team understands the regulatory implications of AI deployment.

Dana says conducting effective AI risk assessments requires a robust, multi-layered approach that considers both sectoral and cross-border regulatory requirements. The first step, he says, is to map out the specific AI applications the company is deploying to identify their potential risks – whether ethical, operational or compliance-related. He adds that in-house counsel should also identify all relevant AI regulations in every jurisdiction where the company operates and then establish an AI governance programme that incorporates risk management, accountability and transparency measures. They should also ensure ongoing audits and internal controls are in place to monitor AI systems and address compliance risks, as well as implement policies to identify, mitigate and prevent bias in AI decision-making processes, particularly in areas such as health, recruitment, financial services and consumer products.
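One way to capture that mapping is a simple jurisdiction-by-application risk matrix, sketched below with regimes drawn from the examples earlier in this article; it is illustrative only, not an exhaustive legal analysis.

```python
# Illustrative jurisdiction-by-application risk matrix: for each AI use case,
# list the regimes that plausibly apply and the risk themes flagged for review.
risk_matrix = {
    "credit scoring": {
        "EU": ["AI Act (high-risk)", "GDPR"],
        "UK": ["UK GDPR", "equality law", "FCA conduct rules"],
        "risks": ["bias", "explainability", "mis-selling"],
    },
    "marketing personalisation": {
        "EU": ["GDPR", "consumer protection rules"],
        "UK": ["UK GDPR", "advertising codes"],
        "risks": ["consent", "misleading claims"],
    },
}

for use_case, entry in risk_matrix.items():
    print(use_case, "->", ", ".join(entry["risks"]))
```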

AI assurance

Commentators say that a key problem for many organisations is that they’re so wary of duplicating work when setting up an AI risk management framework – especially in terms of data protection issues – that they actually leave assurance gaps, because they presume another function, such as IT, tech, compliance or risk management, is reviewing particular areas of AI use when it isn’t.

One way around this, says Adam Penman, an employment lawyer at McGuireWoods in London, is to appoint an AI ‘go to’ person within the business who is sufficiently empowered to make decisions around navigating AI risks and who will be held accountable for managing them. He believes such a role ‘will be increasingly key in this space’, much like data protection and money laundering reporting officers are in their respective areas. ‘Although not a mandated role as yet, the earlier such a contact person is put in place, the better the role will evolve with best practice and greater regulation,’ he says.

Another possibility is to set up a cross-disciplinary function to address AI and multi-jurisdictional risks both proactively and reactively, especially since data flow doesn’t respect national boundaries and ‘there is a presumption that it will be for the company or developer to prove innocence, rather than the claimant to prove guilt,’ says Richard Kerr, a senior director in the Data Insights and Forensics practice at Kroll in London. ‘This overlapping jurisdiction underscores the importance of comprehensive compliance strategies and proactive risk management for businesses, regardless of their size.’

Kerr says the outcome of a case will depend on the quality and robustness of the evidence, ‘so having a strong and defensible strategy on how the AI was trained and its actions taken will become essential to successful outcomes. It’s a question of degree but, as ever, ignorance of the law is no defence at all’.

Other commentators also suggest that in-house lawyers should deliberately zoom in on areas where AI rules and other legislation may overlap, so they can provide the organisation with assurance that areas of interest to multiple regulators are being identified, managed, mitigated and reported. Lauren Wills-Dixon, Head of Privacy at law firm Gordons in the UK, says that some provisions of the GDPR and the EU AI Act overlap, which should enable data protection impact assessments and AI risk assessments to be documented side by side. If an AI system involves the processing of personal data, for instance, the principles and provisions of the GDPR must be applied to that activity, too.

Ultimately, however, providing adequate AI assurance depends on how regularly – and well – the organisation actually reviews what technologies it’s using, how they’re being used and by whom and how their impact is assessed and monitored. Christopher Holder, Member of the IBA Technology Law Committee Advisory Board, says the idea that a company can be subject to investigations from multiple regulators for the same alleged wrongdoing is ‘nothing new’. The problem with AI, however, is that because it can change its functionality as its capabilities evolve, regulators will be keen to scrutinise how the technology is affecting operations, as well as the impact it’s having on decision-making and other outputs, and how the risk of harm is being monitored, assessed and mitigated.

‘You need to be able to explain to a regulator that the technology is not harming people, which requires thorough and timely impact assessments’

Christopher Holder
Member, IBA Technology Law Committee Advisory Board

Holder, who's also a partner at law firm Bristows in London, says companies will need to conduct regular reviews of what AI they’re using and how they’re using it. ‘These reviews can’t be quarterly – they need to be continuous. You need to be able to explain to a regulator that the technology is not harming people, which requires thorough and timely impact assessments,’ he says.

One of the key issues with AI governance, says Holder, is ‘maintaining humans in the loop’. Without ‘human oversight and the ability to intervene and push a "kill switch" if necessary, companies are simply begging for trouble,’ he says. ‘In-house counsel have a strong role in preventing harm through AI use, and if that fails, then it is vital to have processes and controls in place to limit the level of damage it can cause. There is very little case law around what actions regulators will take regarding harmful AI, but we know under the EU AI Act – as well as other legislation that covers AI use, such as the GDPR – potential sanctions are very severe.’


The IBA report, The Future is Now: Artificial Intelligence and the Legal Profession, can be viewed on the IBA website: ibanet.org/The-future-is-now-artificial-intelligence-and-the-legal-profession.


How AI can raise competition concerns among regulators

Companies may be running the risk of violating competition law if ‘self-learning’ algorithms help fix prices and encourage collusion with other industry players. According to Nazli Cansin Karga, Membership Officer – Young Lawyers within the IBA Technology Law Committee, AI poses several risks for companies in terms of competition law infringements that could see them sanctioned by multiple regulators across jurisdictions for the same misuse of AI.

‘Competition law may not be top of the list of risks for companies, but it can be just as important as data protection, product safety and other more recognised issues, depending on the particular functionality and use of the AI,’ says Karga, who's also a senior associate at Dentons in Glasgow. For example, pricing algorithms – which are commonly used – could prove particularly high risk. These algorithms, says Karga, range from those designed to follow simple rules such as matching the lowest price, to those that are complex, sophisticated and operate autonomously. In theory, these ‘self-learning’ algorithms can learn to collude without human intervention or instruction – and collusion is one of the chief risk areas of non-compliance with competition rules.

Key risks associated with pricing algorithms include competitors using them to enforce an anti-competitive agreement; competitors utilising them to facilitate and maintain collusion by programming the algorithms to monitor market prices, follow price leadership and punish deviations from a tacit agreement; and organisations subscribing to the same third-party pricing tool using commercially sensitive information from competitors – such as future prices – that could result in an unlawful exchange of information or collusion.
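At the simple, rule-based end of the spectrum Karga describes, a pricing algorithm might do little more than match the lowest observed competitor price, as in the illustrative sketch below; the function, parameters and figures are hypothetical. The competition risk Karga highlights arises when such logic is extended to monitor rivals, follow price leadership or punish deviations without human oversight.

```python
# Illustrative only: a simple, rule-based pricing algorithm that matches the
# lowest observed competitor price, subject to a cost-plus-margin floor.
def match_lowest_price(own_cost: float, competitor_prices: list[float],
                       min_margin: float = 0.05) -> float:
    """Return the lowest competitor price, but never less than cost plus a minimum margin."""
    floor = own_cost * (1 + min_margin)
    return max(min(competitor_prices), floor)

print(match_lowest_price(own_cost=80.0, competitor_prices=[99.0, 102.5, 97.0]))  # 97.0
```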

Karga says that conduct that's being investigated for data misuse by the UK’s data protection regulator, the Information Commissioner’s Office (ICO), could also be investigated by the country’s antitrust watchdog, the Competition and Markets Authority (CMA), if it has reasonable grounds for believing that the same conduct breaches competition law. In fact, the UK has specifically set up the Digital Regulation Cooperation Forum to enhance cooperation among the CMA, the ICO, communications regulator the Office of Communications (Ofcom) and financial services regulator the FCA to improve supervision of online regulatory matters. ‘If bad conduct comes to the attention of one regulator, others are likely to find out about it and will consider whether to use their own powers,’ says Karga.

‘In the same way as companies need to ensure compliance with competition law by employees, they now need to start thinking about how their AI is used or face liability for competition law infringements facilitated by AI technology,’ says Karga. ‘If a practice is illegal when implemented offline or online, there is a high risk that it will also be illegal when implemented using AI.’