2023: an intelligent learning odyssey
Monday 20 November 2023
Lucía Rosso
Ferrere, Montevideo
lrosso@ferrere.com
Introduction
Just like the game theory scenario of the prisoner’s dilemma, in which two prisoners must each choose either to cooperate by remaining silent or to confess and implicate the other, the decision to adopt or reject artificial intelligence (AI) technology in education and in the legal profession involves a strategic decision-making process with significant consequences, both for those who adopt and for those who reject.
Spoiler alert: AI is here to stay. Advances in AI, and particularly the launch of ChatGPT in November 2022, are leading to the inevitable integration of chatbots and intelligent systems across the education system and the legal profession. The question now facing professors, universities, lawyers and law firms is whether to resist or embrace it.
As I reflect on this question, I am deeply grateful that the topic has emerged during my time as an educator, as it allows me to address it in a more thoughtful and comprehensive way, both for my own benefit as an academic and for that of my students.
‘The direction in which education starts a man will determine his future life’, Plato wisely said.[1] As educators, we play a major role in the education system and, consequently, in students’ futures. It is therefore imperative to get involved in matters which may revolutionise the way students learn. Without doubt, AI technology is challenging education as traditionally conceived and prompting educators to reconsider their approaches towards students. Consequently, we cannot remain oblivious to this matter.
In this article, I will explore the potential implications of AI tools for education and for the legal profession. As a starting point, I will analyse plagiarism and ethics in the use of AI technology, and then briefly refer to the perks and limitations of AI technology in the legal profession. Using this analysis as a foundation, the article will then examine the various approaches that educators and legal institutions can take to promote the responsible use of AI technology while minimising its limitations.
Plagiarism and ethics in AI
Automatic essay-writing technologies and services such as ChatGPT are facing mixed reactions within the realms of law and education. While some universities and law firms are encouraging the use of AI tools,[2] others across the world are restricting (and even banning) their use.[3]
AI technology has given rise to two major questions in the academic legal world: (1) does the use of automatic essay-writing technologies constitute plagiarism? and (2) assuming the answer to the first question is no, is such use nonetheless ethical?
Does the use of automatic essay-writing technologies constitute plagiarism?
Is AI an author?
When it comes to using AI-generated text, there is a debate over whether it constitutes plagiarism.[4] This is because, historically, plagiarism has been defined as copying text or ideas from others without proper attribution.[5] Consequently, on a literal understanding of plagiarism, text created by a machine cannot be regarded as the work of someone else.
Such a reading can be supported by the fact that, to date, under almost all copyright laws, AI itself may not be categorised as an author, as it is not a legal entity.[6] Additionally, the US Copyright Office recently rejected ownership rights in an AI-generated artwork by cancelling the copyright registration of a comic book which involved the use of AI.[7] Moreover, the OpenAI terms and conditions provide that, if there is a right in the output, such output is owned by the user.[8]
However, it is important to note that, from a legal standpoint, the ownership and authorship of AI-generated works have not been fully addressed. As such, it is possible that regulatory changes and case law will emerge in future to settle, or at least begin to settle, the legal implications of copyright in AI-generated texts.
AI models may rely on existing content to generate content
It is essential to highlight that the argument that no plagiarism is involved only holds when AI generates a text completely from scratch. This is because chatbots can also rely on pre-existing texts, combining them to create what may appear to be unique content.[9] Under this scenario, the risk of plagiarism arises.
In this sense, when questioned about the notion of plagiarism in AI-generated text, ChatGPT itself provided the following response: ‘If the AI is being used to generate original content that is not copied from existing sources, then it is not plagiarism. However, if the AI is being used to copy content from existing sources without proper attribution, then it is plagiarism.’
Such a distinction gives rise to a highly complicated problem surrounding AI-generated text: telling the difference between original content and paraphrased existing content. More importantly, it elicits the question: do students have a responsibility to ensure that AI-generated text is not plagiarised?
The post-plagiarism era
In connection with the above controversial questions, Sarah Elaine Eaton, Associate Professor at the Werklund School of Education at the University of Calgary, talks about the imminent post-plagiarism era.[10] In her view, historical definitions of plagiarism will no longer apply when using AI tools. In this new era, Eaton claims, humans and technology co-writing text will be normal, and the result will be a hybrid ‘human-technology’ output.[11]
From my point of view, Eaton introduces a very interesting view of plagiarism in AI because it transcends the definition of plagiarism as conceived today. With AI-generated text, trying to uncover the source of information is practically impossible, so the traditional boundaries of plagiarism need to be adapted to this new reality. From an ethical standpoint, however, Eaton’s concept of post-plagiarism needs to be approached with nuance. This is because, as will be discussed below, AI content may not be plagiarised, but it may still present ethical concerns.
Beyond plagiarism: is the use of automatic essay-writing technologies ethical?
Content may not be plagiarised, but it may not be ethical
While AI content creation might not fall under the traditional definition of plagiarism (or even under a modern, evolved one), AI writing still presents a major ethical problem. If students present an essay as their own when, in fact, it was created by AI technology without the slightest intervention on their part, it can be argued that this is something very close to, if not the same as, cheating. In the end, a student taking credit for something they did not create, whether or not it is considered plagiarism, is not acting ethically.
Nevertheless, it is important to consider the specific circumstances when evaluating the ethical implications surrounding AI writing because the use of AI-generated text does not necessarily imply dishonesty.
The use of AI-generated text does not necessarily imply dishonesty
The risk of cheating with AI technology is one of the main reasons why some educational institutions are restricting its use.[12]
Nevertheless, the blame should not be placed on the technology, but rather on its misuse. As will be analysed later, the use of AI language models can be beneficial if they are used properly. However, if individuals rely too heavily on AI, to the extent that AI does their work for them, such reliance can be seen as cheating, not to mention jeopardising critical thinking skills and creativity.
While this matter may seem straightforward at first glance, it is actually far more complex. There is no doubt that if a student uses ChatGPT to improve an essay’s wording into a more sophisticated style, no cheating is involved. At the end of the day, this could be achieved, in a more time-consuming manner, using a dictionary or the internet. However, there are several grey areas to consider. For instance, what if a student writes the whole essay but leaves ChatGPT to write the conclusion? And what if the student comes up with the ideas but the entire essay is written by ChatGPT? In some cases, the line between assistance and cheating when it comes to AI technology may be terribly thin. This is where educators have a critical role to play. We are the ones in charge of drawing that line, however difficult this may be.
In my opinion, there are two main aspects to consider: (1) the intention of the user; and (2) the disclosure of AI use.
Firstly, discovering the user’s intentions is essential, as they will reveal whether AI was used as an aid or as a cheating tool. Because intentions are subjective, it can be challenging to determine them objectively. One way to address this challenge, in accordance with data privacy policies and regulations, would be to review the user’s search history and analyse the initial prompts provided to the chatbot. By doing this, it would be possible to gain insight into the user’s intentions and better understand how the text was generated.

Secondly, disclosing the use of AI technology is essential. When students mention the use of a chatbot in an essay, they are acknowledging that they did not complete the work entirely on their own or, at the very least, that AI assisted them in some way. A similar approach has been taken by Springer Nature, an academic publisher, which declared that while AI chatbots cannot be authors of articles, scientists may use AI as a writing aid or research tool, provided that they disclose such use.[13]
The perks and limitations of AI in the legal profession
The significance of deciding on the legality and ethics of using AI technology rests on the advantages it offers the legal field when used correctly. The legal profession involves handling vast amounts of work within tight timeframes, all while satisfying the demanding expectations of clients. AI technology brings the possibility of automating routine tasks and thereby significantly reducing the time invested in their completion. For instance, document review, proofreading, error correction and summarising are tasks that AI technology can complete in a matter of seconds. Such time saving not only reduces costs but also enables lawyers to focus on more complex tasks. Moreover, AI technology can assist lawyers in conducting legal research, as it can process large amounts of data in a fraction of the time it would take a lawyer.
However, AI also has limitations which cannot be ignored, not least that AI content may be inaccurate. This caveat is not only set out in the OpenAI terms,[14] but the company’s CEO, Sam Altman, has himself made reference to this shortcoming.[15] AI content may also be biased if the data used to train an AI system is skewed. In this context, responses generated by AI should be regarded as initial insights which require further analysis and evaluation by lawyers.
The role of legal firms and educators
Educators and legal firms have the tough task of teaching and promoting the ethical and responsible use of AI technology, so that its benefits are exploited and its limitations avoided or, at least, lessened.
The need for a new approach
The arrival of AI technology calls for a shift in the traditional methods of legal writing and research. As I see it, law firms and educators (myself included) will have the difficult task of identifying, in the long term, tasks which:
- AI may do properly without human intervention;
- AI may not do; and
- AI and a human working together may do in a way neither of them could accomplish independently.
Such a distinction will be essential in differentiating the education we are providing from the education we should be providing. It is vital that educational institutions and legal firms focus on tasks and skills which machines cannot replicate. Critical thinking, empathy, creativity, emotional intelligence, client liaison and problem-solving are skills that need to be cultivated and exploited, as they will mark the difference between humans and machines.
There are certain tasks in which AI, when properly used, may make an invaluable difference. Research, essays, memoranda, emails and every other legal product which involves writing and research skills can and should be done with the aid of AI. This is what ‘intelligence augmentation’ means and calls for.[16] In a ground-breaking case in February 2023, a Colombian judge claimed to have used ChatGPT to facilitate the drafting of texts.[17]
Encourage and inform
Law firms and educators need to encourage lawyers and students to use AI technology and educate them on how it can be utilised in a responsible way.
It is also important to discuss with students and lawyers the advantages and disadvantages of AI, emphasising the importance of verifying chatbots’ responses. For instance, a few weeks ago, during a contracts lecture, we showed students a conversation with ChatGPT involving the regulation of donations in Uruguay. The chatbot’s answers were incorrect and had no legal basis whatsoever, and we showed them to our students precisely to highlight the limitations of AI tools and the risks of relying solely on them.
Develop a framework
As the boundary between responsible and irresponsible uses of an AI tool may be unclear, law firms and educational institutions need to introduce a set of rules or guidelines to ensure its ethical and effective use. Such rules and guidelines should cover the ways in which AI can and cannot be used, as well as setting out mechanisms to mitigate its potential risks.
Conclusion
The prisoner’s dilemma which law firms and universities now face is whether to cooperate with each other and agree on a responsible use of AI, or to reject it.
If they all cooperate and decide on a responsible use of AI technology, setting down rules and guidelines to prevent plagiarism and cheating, they will all benefit, as AI will increase efficiency and boost students’ and lawyers’ irreplaceable skills.
However, if certain law firms discard AI while others adopt it, the adopters will surpass the detractors by reducing their costs and improving productivity, therefore becoming more attractive to clients who demand faster and more efficient legal services. Similarly, law schools which embrace AI may offer students new teaching methods, preparing them for a future job market which is likely to require a combination of AI technology and human skills, and thereby become more attractive to students.
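To make the strategic structure of this dilemma concrete, the stylised payoff matrix below sets out one possible ordering of the outcomes for a given firm (or law school) and its peers. The rankings (4 = best, 1 = worst) are illustrative assumptions of my own, not empirical findings.

                    Peers adopt AI           Peers reject AI
Adopt AI            3, 3 (shared gains)      4, 1 (competitive edge)
Reject AI           1, 4 (falls behind)      2, 2 (status quo)

On this stylised reading, responsible adoption is the dominant strategy: whatever its peers do, each institution is better off adopting, and collective responsible adoption (3, 3) outperforms collective rejection (2, 2).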
As Charles Darwin is often said to have observed: it is not the strongest or most intelligent of the species that survives, but the one which is most adaptable to change.
Notes
[1] Plato, The Republic, Book IV, Chapter IV.
[2] This is the case of Allen & Overy: see David Wakeling, ‘A&O announces exclusive launch partnership with Harvey’, Allen & Overy, 15 February 2023, https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey. See also Sara Merken, ‘PwC's 4,000 legal staffers get AI assistant as law chatbots gain steam’, Reuters, 15 March 2023, https://www.reuters.com/world/uk/pwcs-4000-legal-staffers-get-ai-assistant-law-chatbots-gain-steam-2023-03-15. See also Andrew Perlman, ‘The Implications of ChatGPT for Legal Services and Society’, Center on the Legal Profession, Harvard Law School, March/April 2023, https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/. See also Michael Pelly, ‘Law firms say ChatGPT an “opportunity, not a threat” ’, Australian Financial Review, 9 February 2023, https://www.afr.com/companies/professional-services/law-firms-say-chatgpt-an-opportunity-not-a-threat-20230208-p5cj2j. All accessed 17 November 2023.
[3] eg, in New York, Samantha Kelly and Jennifer Corn, ‘New York City public [state] schools ban access to AI tool that could help students cheat’, CNN, 6 January 2023, https://edition.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html. See also in Australia, Ashleigh Davis, ‘ChatGPT banned in WA public [state] schools in time for start of school year’, ABC News, 29 January 2023, https://www.abc.net.au/news/2023-01-30/chatgpt-to-be-banned-from-wa-public-schools-amid-cheating-fears/101905616. See also in France, ‘Sciences Po bans the use of ChatGPT without transparent referencing’, Sciences Po Newsroom, 27 January 2023, https://newsroom.sciencespo.fr/sciences-po-bans-the-use-of-chatgpt. See also in India, Pathi Venkata Thadhagath, ‘Why this Bengaluru institute has restricted ChatGPT use for students’, Hindustan Times, 28 January 2023, https://www.hindustantimes.com/cities/bengaluru-news/why-this-bengaluru-institute-has-restricted-chatgpt-use-for-students-101674901758231.html. All accessed 17 November 2023.
[4] Sarah Elaine Eaton, ‘Artificial intelligence and academic integrity, post-plagiarism’, University World News, 4 March 2023, https://www.universityworldnews.com/post.php?story=20230228133041549. See also Alex Hern, ‘AI-assisted plagiarism? ChatGPT bot says it has an answer for that’, The Guardian, 31 December 2022, https://www.theguardian.com/technology/2022/dec/31/ai-assisted-plagiarism-chatgpt-bot-says-it-has-an-answer-for-that. Both accessed 17 November 2023.
[5] ‘Plagiarism: information about what plagiarism is, and how you can avoid it’, University of Oxford, https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism accessed 17 November 2023.
[6] Kristy Stewart and Hannah Smethurst, ‘Copyright and ChatGPT’, Thorntons, 1 March 2023, https://www.thorntons-law.co.uk/knowledge/copyright-and-chatgpt. See also Joe McKendrick, ‘Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms?’, Forbes, 21 December 2022, https://www.forbes.com/sites/joemckendrick/2022/12/21/who-ultimately-owns-content-generated-by-chatgpt-and-other-ai-platforms. See also Shawn Helms and Jason Krieser, ‘Copyright Chaos: Legal Implications of Generative AI’, Bloomberg Law, March 2023, https://www.bloomberglaw.com/external/document/XDDQ1PNK000000/copyrights-professional-perspective-copyright-chaos-legal-implic. See also European Commission Intellectual Property Helpdesk, ‘Intellectual Property in ChatGPT’, 20 February 2023, https://intellectual-property-helpdesk.ec.europa.eu/news-events/news/intellectual-property-chatgpt-2023-02-20_en. All accessed 17 November 2023.
[7] Menachem Kaplan, Edward Beard and Marissa Yu, ‘ChatGPT emerges as US Copyright Office rejects copyright in AI-generated artwork’, Freshfields Bruckhaus Deringer, 16 March 2023, https://technologyquotient.freshfields.com/post/102iale/chatgpt-emerges-as-us-copyright-office-rejects-copyright-in-ai-generated-artwork accessed 17 November 2023.
[8] OpenAI terms of use, s 3(a), https://openai.com/policies/terms-of-use accessed 17 November 2023.
[9] Ron N Dreben and Matthew T Julyan, ‘Generative artificial intelligence and copyright current issues’, Morgan Lewis, 23 March 2023, https://www.morganlewis.com/pubs/2023/03/generative-artificial-intelligence-and-copyright-current-issues. See also Javiera Bedilla, ‘Artificial Intelligence as a creator, it's time to talk about the intellectual property of the future’, Lex Latin, 7 December 2022, https://lexlatin.com/opinion/inteligencia-artificial-creadora-obras-imagenes-propiedad-intelectual. Both accessed 17 November 2023.
[10] Eaton, University World News, see n4, above.
[11] Sarah Elaine Eaton, ‘6 Tenets of Postplagiarism: Writing in the Age of Artificial Intelligence’, video conference, uploaded 25 February 2023, https://www.youtube.com/watch?v=NxFMMw1QZX0 accessed 17 November 2023.
[12] A representative for Seattle’s state schools told Geekwire that the district had banned ChatGPT from all school devices, as the district ‘does not allow cheating and requires original thought and work from students’: see Arianna Johnson, ‘ChatGPT In Schools: Here’s Where It’s Banned – And How It Could Potentially Help Students’, Forbes, 18 January 2023, https://www.forbes.com/sites/ariannajohnson/2023/01/18/chatgpt-in-schools-heres-where-its-banned-and-how-it-could-potentially-help-students. See also Beatrice Nolan, ‘Here are the schools and colleges that have banned the use of ChatGPT over plagiarism and misinformation fears’, Business Insider, 30 January 2023, https://www.businessinsider.com/chatgpt-schools-colleges-ban-plagiarism-misinformation-education-2023-1: Los Angeles Unified School District was one of the first districts to block the site to ‘protect academic honesty’. Both accessed 17 November 2023.
[13] Ian Sample, ‘Science journals ban listing of ChatGPT as co-author on papers’, The Guardian, 26 January 2023, https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers. See also James Vincent, ‘ChatGPT can’t be credited as an author, says world’s largest academic publisher’, The Verge, 26 January 2023, https://www.theverge.com/2023/1/26/23570967/chatgpt-author-scientific-papers-springer-nature-ban. Both accessed 17 November 2023.
[14] OpenAI terms of use, s 3(d), https://openai.com/policies/terms-of-use accessed 17 November 2023.
[15] Sam Altman tweeted on 11 December 2022: ‘ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.’ https://twitter.com/sama/status/1601731295792414720?lang=en accessed 17 November 2023.
[16] Chris Dede, Ashley Etemadi and Tessa Forshaw, ‘Intelligence Augmentation: Upskilling Humans to Complement AI’, The Next Level Lab at the Harvard Graduate School of Education, President and Fellows of Harvard College: Cambridge, MA, 2021, https://pz.harvard.edu/sites/default/files/Intelligence%20Augmentation-%20Upskilling%20Humans%20to%20Complement%20AI.pdf accessed 17 November 2023.
[17] Gabriela Quevedo, ‘ChatGPT: Colombia’s ruling and its foray into law’, Lex Latin, 8 February 2023, https://lexlatin.com/noticias/chatgpt-en-el-derecho-sentencia-de-colombia accessed 17 November 2023.