The employability of artificial intelligence in the legal sector: should robots aid or call the shots?
Thursday 6 July 2023
Gaurav G Arora
JSA, National Capital Region
gaurav.arora@jsalaw.com
Aditi Richa Tiwary
Student, V Year, Dharmashastra National Law University, Jabalpur
Introduction
Human civilisation has come a long way from the era of the telephone to the metaverse. Technology has revolutionised the world in multiple manifestations. As its most contemporary demonstration, artificial intelligence (AI) is taking the world by storm. The AI market, currently worth several hundred billion US dollars, is projected to reach US$2tn by 2030.[1] This exponential surge reflects AI's rising demand across industries. While the advantages of AI are unparalleled, challenges such as AI-induced risks are not uncommon. Consequently, amid the global stir around GPT-4 acing the Uniform Bar Exam,[2] and predictions of AI replacing legal professionals,[3] the permissible degree of AI's intrusion into legal sectors across jurisdictions deserves ample deliberation.
Understanding AI
AI is artificially created intelligence directed at primarily two tasks: simulating human cognitive capabilities and facilitating tasks that would otherwise require human skill. While AI might appear self-reliant, much of its capability depends on machine learning.
Machine learning enables AI systems to refine their cognitive and operational capabilities continuously through constant data-driven training built on pattern-recognition algorithms.[4] In simpler terms, machine learning trains AI systems to evolve constantly by applying pattern-recognition techniques to analyse data inputs and generate the desired outputs, thereby enhancing the efficacy of manually run processes manifold. Given the number of jurisdictions welcoming AI across industries, it appears that the world has crossed the Rubicon of AI integration into manual processes: AI is here to stay.
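To make the pattern-recognition loop described above concrete, the following is a minimal, purely illustrative sketch in Python using scikit-learn. The clause snippets, labels and the classification task itself are invented for illustration only and do not represent any particular legal tech product.

```python
# A minimal, purely illustrative sketch of the pattern-recognition loop:
# text is converted into numerical features, a model learns patterns from
# human-labelled examples, and the learned patterns are applied to new input.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: short clause snippets labelled by a human reviewer.
clauses = [
    "The supplier shall indemnify the buyer against all losses",
    "This agreement is governed by the laws of England and Wales",
    "Either party may terminate on thirty days written notice",
    "The customer shall hold harmless and indemnify the vendor",
]
labels = ["indemnity", "governing_law", "termination", "indemnity"]

# Pattern recognition: convert text to numerical features, then fit a model.
vectoriser = TfidfVectorizer()
features = vectoriser.fit_transform(clauses)
model = LogisticRegression(max_iter=1000).fit(features, labels)

# The trained model generalises the learned patterns to unseen input.
new_clause = ["The contractor agrees to indemnify the client for any claim"]
print(model.predict(vectoriser.transform(new_clause)))  # likely: ['indemnity']
```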
Examples of AI use in legal sectors across the globe: how far have emerging economies caught up?
The global proliferation of AI in legal sectors is enormous. Developed economies in particular offer an environment conducive to the contemporary AI revolution. Tasks such as automated contract review, document discovery, legal research and even predictive analysis of legal outcomes are undertaken by legal professionals with AI assistance across a majority of jurisdictions, including, inter alia, those in North America, Europe and the Asia Pacific.
Reports indicate that the legal tech market is largest in North America and second largest in the European Union.[5] In addition to employing AI in routine legal tasks, US courts even use predictive analytics from AI-based tools such as Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) to predict recidivism for the purposes of granting bail.[6]
The Asia Pacific region, home to most emerging economies, accounts for a little less than a fifth of the revenue of the global legal tech sector,[7] and employs AI-based tools as assistance primarily in legal research and in procedural tasks such as record maintenance. Even India's legal sector is now proactively adapting to the AI revolution, with lawyers relying on AI-based tools for legal research and the Supreme Court of India relying on the AI-based ‘Supreme Court Portal for Assistance in Court's Efficiency’ (SUPACE) for record-keeping, processing facts and gaining legal inputs to facilitate decision-making.[8] While jurisdictions in the Asia Pacific region such as India can be seen to be catching up with the AI revolution, legal sectors, including courts, in emerging economies such as the Philippines are at a nascent stage of adapting to AI-oriented methods.[9]
In addition to emerging economies in the Asia Pacific, legal sectors in South American countries such as Brazil are also advancing, with Brazil's Supreme Court relying on the VICTOR AI system for record-keeping and for evaluating the likelihood of appeals.[10]
While legal sectors in developed states rely on AI for predictive analysis, such as forecasting recidivism, in addition to routine AI-assisted processes, the general trend among emerging economies reflects a safer approach of employing AI solely for assistive and procedural purposes, with the intent of enhancing the efficiency of legal procedures.
Understanding AI-induced risks to the legal sector
The multiple use cases of AI bring with them the possibility of countless AI-induced risks. As data is the fuel of AI, the effectiveness of every use case depends on the system's ability to analyse data inputs impartially and apply pattern-recognition techniques to improve its accuracy. Consequently, in the absence of carefully curated input datasets, AI outputs are prone to inherent biases.
The case of COMPAS and its use in the US is the most relevant instance of inherent data-driven bias in AI employed by the legal sector. While COMPAS's central function of predicting recidivism appears useful, reports continue to express concern about its inherent racial and ethnic biases. Research demonstrates the tendency of COMPAS to produce racially biased outputs, identifying African-American individuals as more than twice as likely to commit crimes as white individuals.[11] Furthermore, reports point to instances of white persons receiving a lower COMPAS score (reflecting a lower predicted probability of committing crimes) despite an undeniable criminal history. Without any explicit training on racial attributes, COMPAS picked up race and ethnicity as determinants of recidivism in defendants, clearly demonstrating the tendency of AI to acquire unintended biases while recognising patterns in data inputs.
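The mechanism can be illustrated with a deliberately simplified, hypothetical Python sketch (using numpy and scikit-learn, with entirely invented data and features; it is not COMPAS or any real risk tool). It shows how a model that is never given a protected attribute can still absorb bias from skewed historical records through a correlated proxy feature.

```python
# Hypothetical illustration of data-driven bias: the protected attribute is
# excluded from the model, but a correlated proxy feature leaks it in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                               # protected attribute; never given to the model
neighbourhood = group ^ (rng.random(n) < 0.1).astype(int)   # proxy feature, matching group ~90% of the time
prior_offences = rng.poisson(1.5, n)

# Skewed historical labels: group 1 was recorded as reoffending more often
# for the same underlying behaviour.
reoffend = (rng.random(n) < 0.2 + 0.05 * prior_offences + 0.2 * group).astype(int)

X = np.column_stack([neighbourhood, prior_offences])        # group itself is excluded from the inputs
model = LogisticRegression(max_iter=1000).fit(X, reoffend)

# Average predicted risk still differs by group, because the proxy leaks it in.
risk = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", risk[group == 0].mean())
print("mean predicted risk, group 1:", risk[group == 1].mean())
```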
Other instances of AI systems exhibiting bias in respect of attributes such as gender and nationality[12] continue to surface worldwide. Such instances have serious ramifications for the rights of citizens and are particularly harmful in the legal sector.
Such discriminatory outputs occur because AI acquires unintended biases while recognising patterns in the datasets fed to it as input. This calls for human intervention to eliminate bias, either by filtering and rationalising data inputs or by making appropriate changes to the AI's programming to neutralise the effect of biased datasets, as sketched below. The risk of bias is the primary reason AI is used as an assistive tool rather than as a replacement for legal professionals.
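Continuing the hypothetical example above (and reusing its group, X and reoffend variables), the following sketch outlines one such manual intervention: a simple pre-processing step, sometimes called ‘reweighing', that rebalances the training records so that group membership and the outcome label are statistically independent before the model is refit. This is a sketch under invented assumptions, not a definitive debiasing method; real-world mitigation is considerably more involved.

```python
# A minimal sketch of manual data rationalisation: reweight the training
# records so that P(group, label) matches P(group) * P(label), then refit.
# Reuses group, X and reoffend from the previous hypothetical sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(group, label):
    """Weight each record so the joint distribution of group and label
    matches the product of their marginal distributions."""
    weights = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            weights[mask] = expected / mask.mean()
    return weights

sample_weight = reweigh(group, reoffend)
fair_model = LogisticRegression(max_iter=1000).fit(X, reoffend, sample_weight=sample_weight)

# The gap in average predicted risk between groups should narrow
# relative to the unweighted model above.
risk = fair_model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", risk[group == 0].mean())
print("mean predicted risk, group 1:", risk[group == 1].mean())
```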
Regulation of AI: a quick glance
Currently, AI is largely regulated as an adjunct to data protection. However, with the surge in AI proliferation, there is a growing need for dedicated AI regulation, especially regulation designed to eliminate inbuilt or acquired biases. While white papers, policy documents and other regulatory guidance on AI exist in jurisdictions such as the United Kingdom[13] and the United States,[14] other jurisdictions, including the European Union,[15] have proposals in place for AI-specific regulation targeting algorithmic impact assessment and the elimination of algorithmic bias. The European Parliament's recent adoption of amendments to the proposed Artificial Intelligence Act is particularly noteworthy. The amendments adopted on 14 June 2023 inter alia concretise the categorisation of AI systems into different levels of risk, mandating varying degrees of regulatory compliance for each level; risk is identified on the basis of a combination of specified factors, namely intensity, severity, probability of occurrence and duration, weighed alongside the harm the AI inflicts on individuals or groups of people.[16] The amendments also widen the ambit of human oversight of AI systems, aim to mandate human intervention in handling the sophistication surrounding AI, and ban the substitution of human autonomy by AI.[17]
As another possible route, emerging economies such as India are considering the adoption of an international framework for the regulation of AI.[18] The global legal landscape of AI regulation remains relatively nascent, with an absence of concrete legislative solutions aimed at resolving the core concerns around AI-induced risks.
The way forward
As inbuilt and acquired biases in AI pose real threats to the legal sector, human intervention in the use of AI should not be done away with, especially where AI decisions carry ramifications for citizens' rights. Consequently, while AI assistance in the legal sector is strongly encouraged, the wholesale replacement of legal professionals by AI systems is a slippery slope. As jurisdictions forge ahead in their journey of AI regulation, provisions for compulsory human intervention in the use of AI are suggested. Moreover, the incorporation of provisions requiring the manual rationalisation of data inputs to neutralise the effect of inbuilt and acquired biases remains a necessity, particularly for the global legal sector.
Notes
[1] Statista, ‘Global artificial intelligence market size 2021-2030’, May 2023 https://www.statista.com/statistics/1365145/artificial-intelligence-market-size accessed 8 June 2023.
[2] Pablo Arredondo, ‘GPT-4 Passes the Bar Exam: What That Means for Artificial Intelligence Tools in the Legal Profession', SLS Blogs, 19 April 2023 https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry accessed 8 June 2023.
[3] Steve Lohr, ‘A.I. Is Coming for Lawyers, Again’, New York Times, 10 April 2023 https://www.nytimes.com/2023/04/10/technology/ai-is-coming-for-lawyers-again.html accessed 8 June 2023.
[4] Microsoft, ‘Artificial intelligence (AI) vs. machine learning (ML)’ https://azure.microsoft.com/en-in/resources/cloud-computing-dictionary/artificial-intelligence-vs-machine-learning/#introduction accessed 8 June 2023.
[5] Thomas Alsop, ‘Legal tech market revenue share from 2021 to 2027 by region’ Statista, 10 August 2022 https://www.statista.com/statistics/1155885/legal-tech-market-revenue-share-by-region-worldwide accessed 8 June 2023.
[6] State of Wisconsin, Department of Corrections, ‘COMPAS’ https://doc.wi.gov/Pages/AboutDOC/COMPAS.aspx accessed 8 June 2023.
[8] Swati Deshpande, ‘Can AI speed up disposal of cases? Verdict awaited’ Times of India, 2 August 2021 https://timesofindia.indiatimes.com/city/mumbai/mumbai-can-ai-speed-up-disposal-of-cases-verdict-awaited/articleshowprint/84959302.cms accessed 8 June 2023.
[9] The Supreme Court of Philippines, ‘SC to Use Artificial Intelligence to Improve Court Operations’ https://sc.judiciary.gov.ph/sc-to-use-artificial-intelligence-to-improve-court-operations accessed 8 June 2023.
[10] Daniel Becker and Isabela Ferrari, ‘VICTOR, the Brazilian Supreme Court’s Artificial Intelligence: a beauty or a beast?’ 8 June 2020 https://sifocc.org/app/uploads/2020/06/Victor-Beauty-or-the-Beast.pdf accessed 8 June 2023.
[11] Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ‘Machine Bias’ ProPublica, 23 May 2016 https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing accessed 8 June 2023.
[12] Michael McKenna, ‘Machines and Trust: How to Mitigate AI Biases’ Toptal https://www.toptal.com/artificial-intelligence/mitigating-ai-bias accessed 8 June 2023.
[13] UK Office for Artificial Intelligence, ‘A pro-innovation approach to AI regulation’ 29 March 2023 https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper accessed 8 June 2023.
[14] The White House (US), ‘Blueprint for an AI Bill of Rights’ October 2022 https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf accessed 8 June 2023.
[15] Artificial Intelligence Act, European Union, 2021/0106 (COD).
[16] Amendment 60, Artificial Intelligence Act (amendments adopted by the European Parliament on 14 June 2023), P9_TA (2023) 0236.
[17] Ibid, Amendments 15, 71, 73, 92, 254, 334, 401 and 413.
[18] ‘Not considering law to regulate AI growth in country: IT Ministry’ Business Standard (India), 6 April 2023 https://www.business-standard.com/india-news/not-considering-law-to-regulate-ai-growth-in-country-it-ministry-123040600207_1.html accessed 8 June 2023.