An interview with Zack Kass, AI business strategist
Monday 20 November 2023
Artificial intelligence (AI) – in essence, machines and computer systems simulating human intelligence to perform various tasks – is transforming many areas of life. AI business strategist and former Head of Go-To-Market at OpenAI, Zack Kass, discusses AI and the legal profession, regulation and the potential benefits and pitfalls of the technology with Global Insight’s Managing Editor, Tom Maguire.
Tom Maguire (TM): Everyone is talking about AI now. With that has come an overwhelming spectrum of commentary from dire warnings through to promises and evangelism and all sorts of advice. How do you cut through the noise to make sense of it?
Zack Kass (ZK): Right now, a lot of people are trading on fear. People who know a lot and people who don't know a lot. And I think that has been true for a long time. It is easy to trade on the currency of fear, especially at a time when we are, as a society, more cynical than ever.
We are looking for reasons to be more cynical. That's dangerous. I think that there's also a phenomenon happening right now where everyone has become an AI expert overnight. And so you pair the two phenomena and you're dealing with a lot of overnight AI experts who are really pessimistic!
My personal opinion is there's a lot of signal in the papers themselves, in the scientific publishing. Most people don't read that. I do my best to distil that. Then there's a lot of signal in the community that is willing to express itself in places like Reddit, X [formerly known as Twitter] and blogs. And if you are thoughtful enough, you can find those people and those thinkers.
TM: You've been at the IBA Annual Conference for a couple of days now, surrounded by lawyers. You've probably noticed that we've had many working sessions relating to AI. How have you found the general feeling of the people you've talked to about AI?
ZK: Here's the thing about professional services in my experience. Lawyers, doctors to an extent, consultants, accountants – all of these people who trade in knowledge are often very self-disparaging, in that they are self-aware enough to know that they are stereotyped, appropriately.
What I have found very uplifting is that the people I have met with are willing to admit that their industry is stodgy and that they are conservative in nature. And they have come to the conversations I've had with an open mind. At my keynote, there were 25 questions afterwards, maybe 30. A big group lined up to chat with me after, and the reception was very warm.
I was heartened to see how receptive an otherwise conservative audience was to a very optimistic message.
TM: One aspect that many lawyers are interested in is how AI can transform law firm management strategy. Do you have any views on where AI can fit into that and how firms can go about doing that?
ZK: The good thought experiment is ‘does the next major law firm exist today’? Meaning, in order to build the next great professional services company, should you take an existing professional services company and inject AI into it, or should you build it AI first from the ground up?
TM: Which is a scary proposition.
ZK: Well, certainly it would mean cannibalising one's firm. And I think the question is, can traditional businesses actually fully embrace the AI revolution? I don't have a strong opinion on it. For example, I think it's just as likely that the next Orrick is going to be born from a technology company as it is from Orrick itself, which is exciting in many respects.
I think that law firms should expect to radically change how they do administrative work, hiring and firing, billing, client management – it's not going to be a couple of things that they tweak. It's going to be everything and it's going to be all at once.
I had a funny conversation today with the IBA leadership team. I was basically proposing that law firms go from billing hourly to billing on performance, and in large part not because I thought of that, but because it was the only way that it would make sense in a future where attorneys are working so much less.
I do think it's going to be really hard, really complicated. But if a firm gets it right, other firms will follow suit. And that's exciting. That can bring the industry forward, but a couple of firms are going to have to be brave enough to [go first].
TM: So, they have to be willing to put themselves out there and risk failure.
ZK: This is called the innovator's dilemma, and it's a very real phenomenon which basically says that at some point you get so big and so drunk on a certain kind of success that it's hard to actually adopt the next wave. What got you ‘here’ won't necessarily get you ‘there’.
Google, by the way, is facing this right now with search. Search is a pretty big source of ad revenue for them, so they'd have to say, ‘actually, we're not going to be an ad revenue company anymore. We're going to be a different kind of product company’.
I think we're going to see that across industries. The paradigm across industries for how we price goods and services is going to change.
TM: Another aspect that's of great importance to lawyers is obviously laws and regulation around AI. Just recently we've seen [US President] Joe Biden sign an Executive Order on AI in the US. We've seen progress on regulation from China and the EU. How crucial is effective regulation and what does that look like? How do you regulate without stifling the innovation?
ZK: You don't need to look much further than nuclear to see what it looks like when you get regulation wrong. You can stifle an entire industry with overbearing regulation. And that has some catastrophic consequences – in the case of nuclear energy, we have basically just kept burning trillions of tonnes of coal instead.
That being said, I think effective AI regulation is existentially important. I advise the policymakers I speak with to start with the alignment problem [of ensuring AI and human objectives are aligned] and the explainability problem [why AI systems make particular decisions and operate as they do]. I think those are paramount to success and the hardest to ‘game’ from a special interest standpoint.
My criteria for regulation are: is it crucial, is it discrete and is it measurable? If it meets those criteria, then we should feel comfortable exploring regulation. A lot of what's being discussed right now is fairly esoteric and I think it just lends itself to special interests. Whereas alignment and explainability are very discrete, very measurable, and we have to get them right.
TM: When it comes to concerns around AI, some minds go straight to the doomsday scenarios of Skynet from the Terminator films, with machines turning on their creators. For others, it's practical things like loss of jobs, potential copyright issues or privacy issues. How do you calm the fears?
ZK: I put the risks of AI to the human experience into a few buckets, the first being what I call idiocracy – the idea that we will surrender our critical thinking skills to AI for a long enough period that we as humans just become less intelligent. And that over time, that will sort of shape our way of life.
The second is job displacement. Although I don't think job displacement is the actual risk – it's identity displacement.
If I said ‘you're going to have food on the table, but you're no longer going to [practise your profession]’, well, that's fine. You're not going to starve. But you've put a lot into this way of life, and your identity, then, in a world where you're constantly being asked to change what you're doing, becomes pretty beaten up. I worry a lot about identity displacement and how we as a species respond to constant change.
I encourage young people, especially, to ground themselves and their identities in things that are immutable to the human condition: courage, vision, wisdom, curiosity and empathy. Things that AI cannot strip from us.
The last is the existentialism problem, and the existentialism problem is very real. And it's grounded in the alignment problem. And it basically says, can we train a machine to do a task and care about the human condition? And this is the problem that exists when we raise children. Can you, as a parent, raise a child who is brave and smart and courageous and also kind and empathetic and thoughtful and works well in society?
The thing about AI is we only get one shot. And so, if we reach artificial general intelligence and we have trained an angry adolescent, we can't go back. But the good news is that we basically understand the problem. We just have to be really focused on getting it right.
TM: What do you think are the key benefits we're going to see from AI in the short term?
ZK: Scientific breakthroughs. We're going to cure cancer. We're going to cure Alzheimer's disease. I think we'll do it in the next ten years. Interstellar travel is somewhere along the way.
We're going to discard a lot of the computationally intensive work that we do and recapture what it means to be human.
I think we're going to have an incredible shift with luxuries becoming staples. This happens in every industrial revolution. We're going to turn things like exceptional education and exceptional medical access into staples. Then there's going to be a massive deflationary event. Technology is inherently deflationary, and there's no reason to think that AI won't be chief among such technologies. I think the cost of goods and services is just going to plummet. Everyone except the central bankers wins, basically.
TM: Looking further to the future, what sort of innovations will we be seeing?
ZK: Here's the interesting thought experiment that I will leave you with. If I asked you to explain the internet to someone in 1900, you would struggle. It would probably be an impossible task. Similarly, if I asked you to explain the internet to a caveman, you would struggle.
You could argue that someone from 1900 and a caveman have a more similar comprehension of modern internet technology than someone from 1900 and someone from the present day do. Imagine what someone in 50 or 25 years would struggle to explain to you today.
We as a species have so many strengths. We are ingenious, we are resourceful, we are survivalists, but we really struggle to imagine a future that we have never experienced. It is very likely that we’ll live in a future in our lifetime that is unrecognisable from the one today, to the degree that we could not have explained it to anyone today. That's pretty fascinating.
To a lot of people it is terrifying. But if you actually examine how much suffering there is in the world and what a blight it is on the human experience, many people are able to suspend the disconcerting effects of uncertainty and recognise that we have so much room to grow as a species – and that this AI event stands to have far more upsides than downsides.
This is an abridged version of Zack Kass’ interview at the IBA Annual Conference in Paris. The filmed interview can be viewed in full online.