Creating the legal limits for AI

Arthur Piper, IBA Technology Correspondent

The Council of Europe’s Framework Convention marks the first major international effort to define the legitimate use of artificial intelligence. Global Insight assesses the Convention’s ambition, and its limits.

The popularity of large language models (LLMs), such as ChatGPT, has helped to thrust artificial intelligence (AI) into the spotlight. Their seemingly revolutionary ability to provide human-like answers to a whole range of queries, as well as their ease of use and ubiquity, promised to boost productivity and usher in the long-awaited age of meaningful human–machine cooperation.

Not surprisingly, businesses have poured unprecedented amounts of money into AI and are reaping benefits from cost reductions and efficiencies, as well as through redesigning their operating models, according to a report published in May by consultancy firm McKinsey. Yet the technology’s shortcomings remain a cause for concern. Because the huge language corpora used to train such systems draw on historical texts, gender and racial bias are baked into them. Inaccuracy and fabrication – often referred to as ‘hallucinations’ – make LLMs inherently unreliable for serious decision-making, and inaccuracy was the most significant concern outlined in the McKinsey report.

The ethics of the AI gold rush

Realising the big promise of AI, then, largely depends on being able to improve services without creating fresh liabilities. As Angelo Anglani, a Commissioner for the IBA Future of Legal Services Commission and a partner at ADVANT Nctm in Rome, has told Global Insight previously, ‘bias, the potential for plagiarism, the possibility of even unintentional inaccuracy – due to incomplete information entered into, or instructions given to, the system – are ever-present risks’.

LLMs are, of course, just the poster-bots for a much larger infiltration of AI into the world’s systems of government, finance and business. And since, in essence, algorithms are simply human decisions embedded in software programs, AI also threatens to greatly enhance the ability of its owners to exploit, control and manipulate those who use such technologies.

What makes this a pressing ethical issue is that the very gold rush fuelled by LLMs is extending the global reach of an increasing number of technology platforms, thereby exposing more people to badly regulated AI systems. With only uncoordinated, regional regulation in places such as Asia, Europe and North America, there’s a danger that the rights of individuals will be sidelined in the rush by businesses and governments to gain a competitive advantage in AI.

A global solution may, however, be on the horizon. In September, after two years of extensive discussions, the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law opened for signature during a conference of the Council of Europe Ministers of Justice in Vilnius. Speaking at the launch, Ewelina Dobrowolska, Lithuania’s Minister of Justice, said the Convention sets out a legal framework that aims to ensure that AI operates ethically and responsibly wherever in the world it’s implemented.

Unlike extraterritorial EU laws, such as the groundbreaking General Data Protection Regulation, which took effect in 2018, and the more recent Artificial Intelligence Act, the framework text was developed in close cooperation with a wide range of non-European countries, including Australia, Japan and the US. In fact, the EU, Israel, the UK and the US were among the early signatories – although China, an AI superpower, is still pursuing its own path. In theory, the signatories have agreed that their future AI regulation will follow the prescriptions set out by the framework.

Examining the exemptions

In practice, Article 3 of the Convention says that parties to the document must ‘address the risks and impacts arising from activities within the lifecycle of artificial intelligence systems’ in order to fulfil the duties of states to protect human rights, democratic processes and respect for the rule of law.

However, there are three critical exceptions to these requirements. First, Article 3.2 waives the need to adhere to the Convention where AI is used to protect national security interests – as long as those activities are conducted in a manner consistent with international law, including international human rights law. Second, systems that are deemed to be for research and development and that aren’t yet in general use are exempt, provided that testing them doesn’t have the potential to interfere with people’s rights. Finally, AI connected to national defence falls outside the scope of the Convention.

These exceptions are reportedly the fruit of two years of wrangling between the EU and its non-European partners, with EU negotiators keen to align the framework with the bloc’s AI Act, which regulates the technology according to how harmful AI systems are to humans. But because of the prescriptive nature of such risk assessments, which are designed to ensure product safety, exemptions have been carved out in a wide range of areas that might otherwise have been covered under a less prescriptive framework.

‘The loopholes in the actual scope of the Convention have the potential to enable conduct that has a negative impact on human rights protection’, write Karolína Babická, a senior legal adviser, and Cristina Giacomin, a legal intern, both with the International Commission of Jurists. ‘States Parties will have a wide discretion in deciding whether the Convention applies to private actors, and [an] exemption for research and development activities along with two exemptions based on national security and national defence, leave out two highly relevant fields for the application of AI systems.’

While the Convention marks the first genuine effort to define global legal boundaries around the legitimate use of AI – an approach missing from the regulation of most advanced technologies – the existence of so many exemptions threatens its potential to protect individual rights.

In the absence of an overarching regime, organisations – including law firms – need to get a better grip on the patchwork of fast-evolving regulations in the jurisdictions in which they operate. That need is made clear by a recent IBA report, The Future is Now: Artificial Intelligence and the Legal Profession, which found that two-thirds of respondents had already adopted AI technologies within their organisations – and that every responding firm with more than 500 lawyers had done so. Respondents said that they were using a mix of models to help with tasks ranging from drafting newsletters and social media posts to conducting legal research and drafting contracts.

Yet worryingly, controls around the use of AI remained under-developed. ‘AI governance is still a work in progress for many firms, presenting several challenges such as data governance and distribution, AI tool requirements, security, IP [intellectual property] and privacy’, the report highlights, suggesting that firms are opening themselves up to potential risks in their rush to implement these new systems. Like regulation, governance is most effective when least fragmented.

Arthur Piper is a freelance journalist. He can be contacted at arthur@sdw.co.uk

The IBA report, The Future is Now: Artificial Intelligence and the Legal Profession, can be viewed on the IBA website.