EP. 34
Alex Ustych
Barrister, 5 Essex Chambers
The Question of AI at the Bar
A barrister specialising in AI and data protection law examines what AI really means for legal practice: where it adds genuine value, where it introduces serious risk, and the three rules the Master of the Rolls recommends every lawyer follow.
Artificial intelligence has arrived at the Bar whether chambers are ready or not. But how barristers engage with it, and whether they do so safely and responsibly, remains an open question. Alex Ustych has been thinking about this longer than most. Called to the Bar in 2010, he developed a specialism in data protection and emerging technology law before AI became a mainstream concern, and he brings a forensic legal perspective to a debate that is often driven more by hype than analysis.
This conversation covers the practical realities of using AI tools in legal work: where they add genuine value, where they introduce serious risk, and why the problem of AI "sycophancy" (the tendency of AI systems to tell users what they want to hear rather than what is accurate) is particularly dangerous in a professional context where accuracy is everything.
The danger is not just that AI gets things wrong. It is that it gets things wrong confidently, in a way that is very hard to detect unless you already know the answer.
Alex Ustych, Barrister, 5 Essex Chambers
Alex also addresses the intersection of AI and data protection law, a particularly pressing issue for chambers that handle sensitive personal data as a matter of course. Using AI tools that process client information raises real questions under UK GDPR that many chambers have not yet worked through. He also sets out the three rules published by Sir Geoffrey Vos, Master of the Rolls, for responsible use of AI in legal work.
In this episode
- How barristers should engage with AI tools in practice, and where the boundaries lie
- The problem of AI "sycophancy" and why it is particularly dangerous in legal work
- The intersection of AI and data protection law, and what it means for chambers handling client data
- The biggest risks and opportunities AI presents to chambers right now
- Three core rules for using AI responsibly in legal work
From this episode
Alex argues that the most serious risk of AI in legal practice is not that it gets things wrong, but that it does so confidently and in a way that is very difficult to detect. The problem of sycophancy — AI systems designed to tell users what they want to hear — is especially acute in a profession where accuracy is non-negotiable. Chambers using AI tools that process client information also face real data protection obligations under UK GDPR that require careful consideration before any tool is deployed. Alex highlights three rules for responsible AI use set out by Sir Geoffrey Vos, Master of the Rolls:
- Understand what a large language model is doing before you use it.
- Avoid putting private data into a public engine.
- Check what comes out before you use it for any purpose at all.
AI at chambers creates real UK GDPR obligations.
Using AI tools that process client data requires chambers to address their UK GDPR obligations before deployment. Briefed's training covers the legal and regulatory implications of AI at chambers, including those obligations, and produces a documented policy framework for staff.
About the guest
Alex Ustych
Barrister, 5 Essex Chambers
Alex Ustych specialises in data protection and information law, with a particular focus on the impact of emerging technologies on individuals' data protection and privacy rights. Called to the Bar in 2010 by Gray's Inn, he is ranked as a leading junior in both Chambers and Partners and The Legal 500 for Information Law, Data Protection, and Inquests and Public Inquiries, and is on the Attorney General's B Panel of Counsel. A member of the Society for Computers and Law, he advises both public bodies and private companies on major digital projects and has a long-standing interest in the legal challenges posed by artificial intelligence.
Transcript
Orlagh Kelly: Alex Ustych from Five Essex Chambers. Thank you for joining me on the Get Briefed podcast. We're delighted to have you. I know you were called to the Bar in 2010. So you're 15 years in, you're ranked in the Legal 500 and specialising in data protection, inquiries, public law, police law, et cetera. I note that you're also counsel on the Grenfell Tower inquiry and on the COVID-19 inquiry, is that right?
Alex Ustych: Well, the Grenfell Inquiry is finished now. That was my life for a number of years, where I was specialising in data protection and disclosure issues. But yeah, onwards and upwards now.
Orlagh Kelly: And can you tell us and tell our audience a little bit about how you got started at the Bar, your background and what made you want to become a barrister?
Alex Ustych: Yes, I think I've got a bit of an unusual journey in that I started off in Ukraine where I grew up and then I moved to Europe as a teenager and then I came to study law at Durham. So that was the first time I came to the UK. I was inspired to do law by a John Grisham novel I was reading when deciding which university course to pick — probably not the soundest basis for career choices, there you go. And then after I did my Bar course, I joined Five Essex Court as it was then, now Five Essex Chambers, as a pupil. And I've been there for those 15 years, as you point out — makes me feel a bit old, actually.
I was very keen to talk to you about this particular topic. I've always been a massive geek since I was a kid. I was opening up computers, putting together computers, reading science fiction in the 90s — things like Isaac Asimov, a lot of it to do with the challenges of AI in the future when that technology just wasn't there yet. So when I came to the Bar, after some years I veered towards data protection, cybersecurity, privacy and human rights issues. The sort of areas that really interact most with AI technologies, along with intellectual property. I'm also on the Public Tech Committee of the Society for Computers and Law, which has been around for decades but is certainly becoming even more prominent now with the advent of AI — and anyone interested who's listening should look into joining, because that's the best resource to learn about these advances and the implications for lawyers.
I've been talking about AI for two or three years now and I've done some really interesting events, like the Bar Council Kenya visit where I had an event about AI. I've been speaking both from a legal perspective and alongside AI developers, because even though I'm fairly technical, they are massively more so — and there's no better way to find out about it than talking to someone who actually does the AI work as opposed to just talking about it.
Orlagh Kelly: Very good. And so you can essentially talk to AI from two perspectives, I think. One, as a practitioner and how, regardless of what area of law you practise in, AI will intersect with that. And I know that a lot of our audience are interested because they might find that they won't ever practise in the world of data protection or AI in and of itself or IP, but that they do have to think about AI in their practice. But also you obviously are a practitioner moving towards developing your expertise and probably your practice towards specialising in AI to some extent, even if it's just where it intersects with data protection and your work there. Is that correct?
Alex Ustych: Yes. At the moment, I don't think there's a deluge of AI-related litigation. It has started in America already — New York Times and other cases — but at the moment it's mostly advisory, though it is starting to crop up. And as you probably know, the UK government is very much all guns blazing on AI. It's adopted a very softly-softly approach to regulation compared to Europe, to try and facilitate the technology developing. So yeah, I think it is going to be a day-to-day issue for people in my field to advise on, more so than it is now. But primarily my interaction with that is looking at the implications, the legality, and the practicality of it: how is it going to impact my practice? How do you not get left behind by the rapid advances in technology?
Orlagh Kelly: And what's the answer to that, Alex? Do you, for example, use AI tools in your practice to help support your advocacy and representation of your clients?
Alex Ustych: I think being a data practitioner means I have a pretty conservative view of using AI with client data and legally professionally privileged data. I have seen individuals become unstuck by trying to use tools they don't fully understand. What I have done so far is experiment with and try out, both myself and in the context of my chambers, a number of the legal-specialised AI engines. This is not your ChatGPT. These are systems you pay for — sometimes quite a lot — that draw on vetted material. Westlaw and Lexis, for example, have systems trained on law reports, their databases, articles from practitioners, academics, textbooks. As distinct from what Steve from Reading said on Reddit and ChatGPT picked up when it scooped up the internet. So I do use, and my colleagues use, AI-assisted research from one of those specialised paid systems.
Another useful resource is a document analyser from one of those same systems, where you can plug in the section you've written — your skeleton argument or your opponent's skeleton argument about the law and precedents — and it can point out: is this actually the most relevant authority? Has it been overtaken? Again, this is not client data. This is a useful aid.
I've noticed in the last couple of months that Word has started popping up Copilot every time I open any document, which is super annoying. It feels like you're being pushed towards just putting data into these generalised systems, which I have not started doing, because I am not yet satisfied that any of them are sufficiently secure to put any volume of personal client data into.
Orlagh Kelly: There's obviously a significant risk for any barrister trying to speed up using the tools but putting professionally privileged information or personal data into systems that aren't where they need to be yet. Certainly, we advise a lot on GDPR here at Briefed as well, and as time has gone on I've become a little bit cautious. AI has definitely taken things to a whole new level and it does bring a lot of risk. Is there any other way, other than research, that you find AI useful — that you think it's moving towards being a genuinely helpful technology?
Alex Ustych: As far as my personal life and my own data are concerned, I've probably put more of it into ChatGPT than I should, because I find it immensely useful. Planning a trip, or trying to get my head around pension contributions and the impact on tax liability. I'm a huge convert to its day-to-day use. But I think perhaps because I'm not an accountant, I don't spot all the errors it might produce in that context.
Orlagh Kelly: I did wonder whether the accountants would be jumping up and down saying don't use ChatGPT for tax advice.
Alex Ustych: Just to be clear, I do have a proper accountant who does all the proper things. But sometimes, in the middle of an afternoon having a chat with my wife about family finances, we want to ask: what is this? What if we change this? I'm not going to ring up my accountant every time. The difference is, I think, using AI for low-stakes stuff — if AI recommends me a hotel in Romania that turns out to have been closed for five years, that's going to be inconvenient. It's not going to end my career.
It's very different when, as we've seen across the pond, lawyers have relied on AI output containing hallucinations without properly understanding it. It's now come across to this side of the ocean. There have been cases here. I have real sympathy for the people affected by this, because you open the news and read about how AI is the most brilliant thing ever — and I get a bit of AI fatigue. I don't know about you, but I feel like everything is AI now. I have a friend who works in venture capital, looking at proposals of companies for funding, and the real problem is everyone frames everything as AI now, even though a lot of the time it's exactly the same technology — basic algorithms, basic software that was used 10 years ago, now rebranded as AI. And it's part of the problem, because you need to be educated enough about AI to know what is actual AI and what is just rebranding.
I can see how, if you read all that coverage as a lawyer who maybe hasn't taken the time to do the research, you could take whatever ChatGPT produces as being the gospel. You can get really unstuck. And I do feel for those people.
Orlagh Kelly: Is it a generational difference? I'm thinking about a particular case where I believe it was a pupil or someone just post-pupillage who was found to have used ChatGPT. There were suggestions that possibly they weren't supervised as well as they could have been by their pupil supervisor. Is it that young people are the ones who get caught out by this — because older barristers know the law better already and have been through the trenches, or simply because they're more conservative about technology?
Alex Ustych: As part of our implementation of AI in chambers, in these general AI-assisted research roles, we realised there can be no preconceptions about who's going to find AI useful practically. People who are not particularly tech-savvy have taken it up quite easily. Because if you go onto the front page of the system and instead of the usual search by subject, search by citation, you just put in whatever fairly obscure legal point you want as a question in natural language — you don't have to understand the syntax of how the subject fields work. The hours I spent as a pupil trying to find the answer to a really obscure point using the subject search in Westlaw. It helped no one really. And this is a massive step forward. I think a lot of people who started using that particular, highly vetted feature — not ChatGPT — would struggle to go back. And I think that applies across all ages and technology capabilities, because actually it is easier.
But you hit on a very good point, which is: what about pupils and junior barristers? Even though I'm very pro-technology advancement, there is a lot to be said — and I stress that none of this represents my chambers' views, this is all my own opinion — for pupils and junior barristers learning how to go to the library like I did. They should know how to use law reports, which ones are the good ones, which are the good textbooks, how to check that a case has not been superseded. Because even with the system I use, in spite of paying good money for it, there have been cases when it's said that a case decided something which it actually didn't. Now I know how to check. But if you don't learn that skill as a pupil or a junior barrister, you might not be able to bring that critical approach. And I think it is a real issue: what if we lose the skills to do our own thinking and analysis? Everyone, regardless of how good these systems get, should start off by having those skills. Then you can make your life easier — but you have to go through the grind first.
Orlagh Kelly: Absolutely. That's a very fair point. You mentioned a startling case involving a litigant recently?
Alex Ustych: Yes. Another dimension worth remembering is that it's not only our own use of AI that drives AI into the judicial system. It's what our opponents or the other party does, which is completely out of your control. And there has been a startling change in how litigating works since ChatGPT and others became prominent. I've talked to a lot of solicitors about this. There's been a significant increase in letters of claim and claims. I don't have statistics from the Ministry of Justice to back this up. But what is pretty clear is that my clients are getting more claims. They tend to be of lower quality, with less merit on average. They tend to be much more confusing, prolix, longer, often containing — as we all know — wrong case law.
The first issue is that the barrier to drafting something which looks like a credible claim, at least to someone who doesn't really know what a credible claim looks like, is much lower. You describe your situation to ChatGPT, it gives you a document. You have no way of assessing if it's good or not.
The second problem is AI sycophancy — I'm not sure if you've heard the term. It's the fact that AI likes to tell you that you're right.
Orlagh Kelly: I haven't heard it described as that, but it does resonate with me. When I've asked it some questions I just know the answers are too good to be true.
Alex Ustych: Exactly. I can't tell you how many bad restaurants I've gone to around the country because AI wanted to please me when I asked for a good steak place in a particular town. It'll say, yes, there's a great place here — and it turns out it's been closed for five years or it's awful, but the AI just wanted to please me.
That's much more serious when people put the facts of their case, often with their own subjective slant, into AI — and get back something saying they've definitely got a great case. That spurs on claims. The example I was alluding to: I was in a court hearing and a litigant in person who had clearly used AI in their pleadings — it was clear from the text — was asked some questions by a senior judge. In response, they asked if they could just have a second to check their notes. And they started typing. About 30 seconds later they started reading out from something that was quite clearly generated — largely nonsense, making serious claims about wasted costs with no possible application to the case, no relevance. They were reading it out because the AI told them to. There is no rule against it at the moment, at least not explicitly, and this case was less obvious than some. But I am satisfied that even if they had applied their own judgment, they would not have made those claims. They were, I believe, a victim of AI sycophancy, assuming that the AI knows better than they do.
Is our job in 10 years going to be turning up in some cases and doing advocacy against an AI opponent? I don't particularly want to be doing that job, to be honest. And I think this is an area where there will be more guidance from the judiciary in future.
Orlagh Kelly: You can really imagine a scenario where counsel, a solicitor, or a personal litigant could be allowed to stand in court and just generate answers from a computer to parrot to the judge. You couldn't prepare a case in advance if new things are being generated. Even if AI became more reliable, it just seems — well, it doesn't seem like justice.
Alex Ustych: In this case, the individual did come up with an entirely new point not in any pleadings — I'm fairly sure in the half hour before the hearing. But you're absolutely right: as far as solicitors and barristers are concerned, if we're just reading off a screen without applying any judgment, we might as well not be doing the job. What's the point of us?
It's more nuanced with litigants in person and access to justice. It's easy enough for us to say just go and use a lawyer. But the cost of legal representation is so prohibitively high now. The hollowing out of civil legal aid means genuinely serious cases — family matters, custody matters — are not covered. I find it quite shocking. And so you can understand that people might actually be better off using some form of AI to help them understand the law and present their case to an extent. The problem is that those people, much like some professionals, don't understand what is a suitable system and what safeguards are needed.
Sir Geoffrey Vos, the Master of the Rolls — head of civil justice — is a hugely switched-on person as far as AI and technology is concerned. He's been a big driver of digital transformation. You can find a number of his speeches online about the impact of AI on lawyers and the judicial system, including judicial decision-making. There is one from October which is well worth reading. He has three rules for using AI that I think are good for lawyers, for litigants in person, for anyone:
First, you need to understand what a large language model is doing before you use it. Second, you need to avoid putting private data into a public engine. Third, you need to check what comes out before you use it for any purpose at all.
The litigant I mentioned, I believe, failed on all three counts. But if someone understands those rules and is offered a reliable, vetted system — like the ones we use for legal research — I think that could help unlock some access to justice issues. The court system has massive backlogs. What if the Ministry of Justice could provide, in exchange for your issue fee, access to a system that looks at the facts of your case and gives you some basic advice? Not binding, not denying access to justice, but providing a reality check — without the AI sycophancy, because you're using a proper, rigorous system. It might deter some unmeritorious claims from proceeding and improve the quality of what does reach the courts.
Orlagh Kelly: That's very exciting and interesting if there were a mechanism where that could be introduced. Certainly at Briefed we've noticed that subject access requests to chambers have become not only more frequent but more voluminous and more professionally worded — clearly generated by AI. Once you understand GDPR, you can read them and see that these people have essentially been given language that makes them think they have more rights than they do under the legislation. It requires a genuine expert to push back on that. It's making life more difficult and expensive for chambers. And that's a pattern that will appear across a lot of areas of law — in family law especially, where someone understandably can't afford legal representation but is in a custody battle and is being told by AI that they have a stronger case than they do.
Alex Ustych: Yes. And I think there's a place for professional regulators — the Bar Council, the SRA, the Law Society, the Information Commissioner's Office, the UK AI Office — to look at having some sort of transparent criteria around different sectors, and saying: these are the markers we expect a reliable, responsible, and dependable AI system to have. Perhaps even some sort of licensing or accreditation scheme, so that both as a legal professional and as an individual you could say: this AI system has been vetted and approved. That doesn't mean everything it says will be absolutely right — my own view is that AI doesn't really think; it's a predictive model, a very advanced version of your mobile phone's text prediction, but a predictive model nonetheless. It is not, at this point, sentient. But there needs to be guidance given to individuals. At the moment it's difficult to blame someone who uses ChatGPT because they've not really been offered either guidance or a reliable alternative at a consumer price point.
And on one view, we've created as a society a situation where people in desperate need of legal help are left without it. It's perhaps unfair to then complain about the ramifications of that. These people are probably doing their best because they don't have other options.
Orlagh Kelly: Certainly, there's a confluence of events over the past couple of decades that's led to a significant gap in the availability of cost-effective legal services. And it's talked about over and over again. Legal aid continues to be reduced. There's literally a gap in the market, which AI is clearly going to fill one way or another.
Alex Ustych: On the subject of tools and how people will use them, I'd suggest listeners watch a Channel 4 programme called Will AI Take My Job? — a Dispatches programme. They set up a fascinating experiment where four or five different professionals, including a GP, a fashion photographer, and most relevantly a trainee solicitor, were effectively competing against an AI version of themselves. There was a blind test assessed by an expert in the field.
The trainee solicitor was up against Garfield AI, which I think is the UK's first SRA-regulated AI-driven law firm. It was a small claim dispute worth £4,500 — small claims, no cost consequences, a bit of a free-for-all. But the performance was fascinating. The particulars of claim created by the AI were judged to be good enough, even though the solicitor's work was considered better and showed more judgment. A couple of details were missed, but it was absolutely usable. The speed: the solicitor took something like six or seven hours; the AI produced it in 10 minutes. The cost: £100 for the Garfield AI versus £1,000 for the human. The client, given both options with full knowledge of which was which, said he would choose the AI over the human for similar cases in future. So that technology is already there at the lower end of the market. And given that tens of billions are being invested into AI, it will get better. If we were having this chat in a year's time, it's quite possible I would be giving a more pessimistic view of the future of the legal profession than I am now. But at the moment, I don't think solicitors and barristers in the more high-value, more sensitive cases — including those involving judgment, assessment of witnesses, and certainly advocacy — are at risk. Going forward, it will be fascinating to see where the technology goes.
Orlagh Kelly: Bearing that in mind, what advice would you give to barristers or chambers that are thinking about starting to use AI? Should they stay away because there are too many risks? Should they plough ahead in an effort to stay current?
Alex Ustych: I don't think burying your head in the sand as to AI is a viable commercial option in the legal market over the next five years. Unless there's some major AI meltdown, which seems unlikely, clients — solicitor clients and clients of solicitor clients — will probably expect legal services providers to be using AI to do at least the things it does easily. Producing a chronology from a bundle of documents, for example, doesn't require much judgment, but it does require many hours of work. Do I think in five years' time a solicitor will be happy to pay a barrister five or ten hours to do that, given the tools available? I think not. And if a chambers hasn't got the AI capability to be more efficient, I can see it losing work to other sets that do.
So I think every chambers needs to be looking at: what are we going to do? What is viable in our situation? What would our clients expect, not today, not tomorrow, but in a year's time, in two years' time? And how do we get ahead of it?
Practical steps to take: I would look at the legal research products I mentioned — I do find them very useful. There is guidance now from the Bar Council, updated in November last year, called Considerations When Using ChatGPT and Generative Artificial Intelligence Software Based on Large Language Models. It explains the issues of these systems, some of the steps you can take individually to make sure your output to clients is sound, and of course your professional duties when using AI. I would look at that with a view to potentially adopting a chambers AI policy based on what the Bar Council says — to ensure that everyone in chambers has at least a basic understanding of AI and the safeguards needed. It would be very unfortunate if some members of chambers with less tech awareness ended up putting personal data into public systems because there isn't a policy against it.
I would follow Sir Geoffrey Vos's three rules, which I mentioned earlier. I would look at potentially having training on AI, either from AI-savvy people internally or from external organisations like Briefed and others. And there is additional guidance coming through the pipeline — the Civil Justice Council last year started working on guidance for legal representatives on using AI to prepare core documents. Chambers need a way of having that information circulated to them when it comes out.
From a personal data standpoint, if you start using personal data with AI, you really need to review and document the security around it in an auditable way, in case there's ever a complaint or a cyber attack. You may need to tweak your privacy policy to ensure transparency. But subject to all of that, I don't think AI is something we need to shy away from or fear using. You just have to follow the rules, follow the guidance, be responsible. A question that's often helpful to ask is: if something goes wrong, what am I going to say I did to make sure it worked safely?
Orlagh Kelly: Absolutely. And I think for anyone possibly afraid of AI, it would be worth trying AI tools in your personal life first — finding restaurants, recommending holidays, asking a question. Once you've started to understand the power and you can assess as an individual the quality of the answers, that's where the ideas start to flow about how AI can assist in your professional life as well. Start on a more personal level where there's far less risk, as you said earlier.
Alex Ustych: Yes. There's an interesting gap between something like 70% of the adult public using AI now in their day-to-day lives, and I think it's considerably lower — around 25% sometime last year, probably a bit higher now — among legal professionals. It takes a while to get barristers going on new technology. But we've got the ability to use tech, and we don't want to be left behind. I wholeheartedly endorse your advice: familiarise yourself with how it works using your own data that you're responsible for. Then once you're more familiar, read the guidance, make sure your chambers has thought about it, and start using it in a limited way at first.
I do believe it is the future. I don't think it's going to be AI Alex in 10 years from now, but I think it will be an Alex with a lot of help from AI. And a quote I quite like to end on: your job's not going to be taken over by AI — it's going to be taken over by someone who knows how to use AI.
Orlagh Kelly: Absolutely, and I couldn't agree more. Thank you so much for joining me. I could geek out on AI and data with you for much longer. But that is the end for today. Thank you very much, Alex. I might have you back in a year to reflect on this conversation and see if any of the predictions came true. So thank you — we will see you again.
Alex Ustych: Thanks for having me. Bye.
Listen and subscribe
New episodes published monthly.