Following publication of its report The Future is Now: Artificial Intelligence and the Legal Profession, the IBA AI Task Force hosted a showcase session to discuss the report’s findings.
Almudena Arpon de Mendivil, IBA President (AM): Last year when we were in Paris, I announced that the IBA would undertake a project related to AI. This has resulted, thanks to the work of all parts of the Association, in the report – The Future is Now: Artificial Intelligence and the Legal Profession – that we are launching today. I am especially proud to present what I believe is a landmark report at a very timely moment.
Marc Rotenberg, Center for AI and Digital Policy, Washington, DC; Chair, IBA AI Task Force (MR): I’m sure many of you are familiar with this story that appeared in the press just a little more than a year ago about attorneys in New York who had taken advantage of a new AI service to prepare a filing. They didn’t review their work particularly carefully. There were citations that did not exist. And when the court became aware of these issues, it was obviously disturbed. And sanctions followed.
Almost immediately, bar associations around the world began to consider the ethical and legal obligations of this profession – to courts, to clients and to society at large. In fact, this has been a remarkable year for the development of new governance frameworks for AI, and many of these may be familiar to you. The Artificial Intelligence Act of the European Union was finalised this year. The Council of Europe AI Treaty was opened for signature just last week, and so the list goes on.
I believe that the Council of Europe AI Treaty is an excellent foundation, and I hope we will have the opportunity to work together to see it adopted in countries around the world.
If we look at this issue of AI in our present-day society, we think about it not only as lawyers, but also as leaders in a field of increasing consequence for our societies. Just in the past week, many leading experts in AI have urged national governments to do more.
This is, frankly, a message to the IBA about your important role in this future. But what are we to say about this future? Well, this, of course, is the focus of our report. The future is now. It’s not science fiction. We experience it every day in the decisions we make about the use of AI in our firms and practices, as our governments deploy AI, as our clients make use of AI and as courts consider how to integrate AI. And so we set out over the last several months, with your participation, to provide direction on these critical issues – to provide guidance not only to others in the legal profession, but also to leaders around the world who are seeking to answer these difficult questions.
Steven Cohen, Wachtell, Lipton, Rosen & Katz, New York; Legal Practice Division representative, IBA AI Task Force (SC): In the LPD, we were focusing very much on approaches to AI regulation. At first, we considered whether it would be possible to develop a model set of AI regulations. But we decided, I think pretty quickly, that given the pace of change and the expertise of our committee, it wouldn’t make sense to do that and try to compete with the EU. Instead, we thought it made more sense to find out what our membership thought about AI regulation.
Most people favour moderate or comprehensive regulation, and people see a real need to balance risk and innovation. Most favour widespread stakeholder consultation. This is important because there’s often a debate as to who should be in the room when regulation is being put together. The committee’s view is that consultation should be widespread and should include tech experts and industry as well. If you have multiple points of view from within those groups to listen to, it will be very useful in terms of getting the result right.
In a world where we have incomplete regulation – which we do now and probably always will – it’s really important, I think, that as lawyers we remember to engage in our firms and with our clients in the same process we’re describing for governments. What I mean is that, just as governments adopt regulations, we have to adopt rules and policies to govern the use of AI.
And just as governments exercise oversight, law firms, courts and companies must have oversight of how AI is used, to make sure it’s used properly – and also to build trust. This is a community approach. You have to have input from lots of different sources to get it right in your law firm.
Steven Richman, American Bar Association, Washington, DC; BIC Vice Chair (SR): What we were trying to do was look at the ethics issues and see whether or not we should recommend changes to our own IBA principles, or whether we should look to provide other guidance – but, most importantly, to see what other bars have done.
What we found in our survey among various groups – and it’s a continuing project – were various common denominators. In addition to competence, the jurisdictions that we looked at talked about supervision. We looked at bias; some jurisdictions have even addressed bias rules, to make sure that the AI program you’re using complies with their ethics rules. What we’re going to do going forward is build on what we’ve done and expand our review of jurisdictions to include not just an emphasis on common law countries, but civil law countries as well.
Shirley Pouget, DLA Piper, Paris; Co-Chair, IBA Human Rights Law Committee (SP): The first question is: are we all ready to practise law differently? Because that’s what is going to happen. And I think from a human rights standpoint, there’s a question about the future of work. What does that mean in practice? The use of AI, of course, will increase productivity [and] cost effectiveness. But ChatGPT and other software can do what we do manually, in terms of drafting contracts or analysing documents, so the first human rights impact is actually on the right to work.
So where does that leave us? I think one of the main recommendations of the report is that we must keep abreast of innovation and the use of AI. That’s very important. It will bring lots of challenges, but also lots of opportunities. The good news is that something that won’t be superseded by ChatGPT and other software is the relationship we build with our clients. Of course, there are salient human rights risks when using AI: abuses of the right to privacy, and the perpetuation of bias and stereotypes. There can be discriminatory outcomes, especially against minority groups. AI can also amplify misinformation and disinformation.
And finally, it can undermine labour rights. What can we do to mitigate those risks? It’s a question of awareness. We are all thinking about incorporating AI into our offering, but we must be aware that the use of AI and innovation can have adverse impacts on people – and just being aware of this is the easiest step forward. So when using AI, and also when dispensing advice to our clients, we as lawyers must identify, manage and mitigate human rights risks, and also provide a remedy in the form of a grievance mechanism. That’s not something that is necessarily easy.
AM: How can you advise all of us, at this stage, on identifying those risks?
SP: I think it really depends where law firms are. If you’re using software for the purpose of analysing documents and contracts, I think that’s okay. It’s likely that business models, but also hiring procedures, will be affected by AI – for cost effectiveness and productivity too. When it comes to hiring people – if you’re using AI to hire new lawyers, for example – then depending on how the AI is being trained and how diverse the data is, it can have some impact on hiring procedures and on minority groups. So I think it’s a question of awareness when it comes to business models and when it comes to hiring procedures.
For criminal lawyers, obviously if we are using AI in the context of criminal law, it can have an impact on fair trial rights. Just always advising your clients to think through the potential impacts on people is really important.
MR: Risk mitigation is rapidly evolving because we’re still trying to understand these tools. In fact, the companies themselves are still trying to understand these tools.
SR: While we’re not necessarily looking to change ethical rules, one of the fundamental points by way of mitigation has been education – not taking things for granted. That’s why you are seeing ethics opinions that address this issue. Some of it may not even be immediately intuitive, like confidentiality: you’re putting information into a system and you want to make sure there are appropriate protections. At a fundamental level, I think education – as opposed to learning all the technicalities – is an important part of the mitigation.
SC: I think about how the internet developed, and I think we weren’t as focused on how revolutionary it was at the time, the way we are with AI. Phones and the internet were obviously revolutionary, but there were a lot of smaller actors all competing. And now governments, particularly the US government, don’t like the fact that it’s kind of coalesced into a handful of large companies. The way to get to the right place is just to have many different people competing.
MR: The approach to internet governance led by the US was largely hands off. It’s very different this time. The key question is: will markets alone solve the challenges that we’re now confronting? And I think most governments have concluded that they won’t.
SC: Why would they not, if it worked for the internet?
MR: As we would say in regulatory terms, the problem is the race to the bottom. And you see this in the industry, because a year ago Anthropic, OpenAI [and] Microsoft had these big safety teams to manage misinformation and to ensure reliable and ethical use. Those teams have largely been dismantled, and the reason is the growing competition among the leaders. They don’t want the guardrails put in place. This is why I don’t think markets are going to solve this. I think they’re going to make problems more severe.
This is an abridged version of ‘The Future is Now – Artificial Intelligence and the Legal Profession’ session at the IBA Annual Conference in September 2024 in Mexico City. The filmed session can be viewed in full here.
Find out more and read The Future is Now: Artificial Intelligence and the Legal Profession report here.
Audience questions
Q: What is the preferred mode of regulation you advise – ex ante or ex post?
SC: Given the velocity of change and also the bureaucracy of government, I think it would be great to try to do it ex ante and then measure, but I don’t know that that’s achievable. The way I think about it is that it should be continuous – in a sense, always a work in progress, because it’s changing so rapidly. And if it is continuous, then you can afford to do a little bit of the ex ante, because ex ante regulation is going to have some mistakes.
SR: Some of it is also particular, and you can’t address all of it at once. For example, in New York there is a proposed piece of legislation dealing with the use of AI and the admissibility of evidence. That’s a very particular type of application that doesn’t need to wait on the broader issues, but may be useful to address as things come up.
Q: I wonder if I could have the panel’s thoughts on the need for lawyers to be transparent with fellow lawyers about their use of AI, in order to maintain those relationships and that trust?
SR: From the ethics standpoint, you have an obligation of candour and discussion with your clients. It goes both ways. If your client is insisting that you use it, you need to be able to talk about your limits. And conversely, if the client asks whether you’re using AI – in the US, at least, you’d have an obligation to be forthcoming about it. I do think there is that obligation to have a candid discussion with your client. There is also the obligation of candour to the court. Some courts are now requiring you to certify whether or not you’ve used AI, and whether you’ve had a human being check the work.
SC: I think you definitely have an obligation to talk to your client about it. But – and maybe it’s the New York perspective – whatever you do to prepare yourself for a negotiation, you don’t have to tell the other side. Inside your firm, definitely, because it gets to the question of supervision and reliability and evaluating the work of other people.
SP: But then there’s a question of equality of arms, right? If you’re using AI without necessarily notifying the other party, and the other party can’t afford the use of AI for X and Y reasons, then you have a breach of equality of arms.