In Episode 10 of TIA Talks, host Jason Mizen was joined by Tristan Marot (Norton Rose Fulbright) and Gary Tiesendorf (Sapiens) for a candid conversation at the intersection of artificial intelligence, ethics, and the rapidly evolving insurance landscape.
From small and medium enterprises to global market dynamics, the episode breaks through the noise to explore what AI adoption really looks like on the ground. Following the conversation, Tristan expands on these ideas, offering a deeper, more personal reflection on how large language models (LLMs) are reshaping legal practice – and what that could mean for other skilled professions.
Tristan continues…
Over the past two years I have been part of the team helping a global law firm thread artificial intelligence into our daily work. The closer we look, the clearer it becomes that powerful large language models are almost ready to take over the basic research that once kept junior lawyers at their desks until midnight.
A typical legal research pipeline runs in four predictable beats. First a query lands, whether from a client, a partner or a colleague. Next the junior disappears into every source that could hold the answer: reported judgments, statutes, commentary, even the firm’s own precedent banks. Then they boil those findings down to a memo framed for the matter at hand. Finally, a senior lawyer reviews the draft, and the polished advice goes out the door. Each stage is a repeat‑after‑me routine of retrieve, read, filter and write, which is exactly the pattern today’s large‑language‑model tools are engineered to replicate at speed. They still make mistakes, sometimes inventing cases that do not exist, a glitch some call a “hallucination”. But every month those errors shrink, and when they dip a little further the drudgery of finding and summarising authorities will happen in minutes, not hours.
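To make that pattern concrete, here is a minimal sketch of the retrieve, read, filter and write loop in Python. Everything in it is illustrative: the function names, the toy relevance tests and the sample sources (including the fictional case) are assumptions for the example, not any real research tool's API. The point is simply that each of the four beats reduces to a mechanical step, with human review still sitting at the end.

```python
# Illustrative sketch only: toy stand-ins for the four beats of the pipeline.
# A real tool would query case-law databases and call an LLM, not these placeholders.

def retrieve(query: str, sources: list[str]) -> list[str]:
    """Beat 2: pull every document that might hold the answer."""
    return [doc for doc in sources if query.lower() in doc.lower()]

def read_and_filter(documents: list[str]) -> list[str]:
    """Beats 2-3: keep only the passages that actually decide the point (toy test)."""
    return [doc for doc in documents if "held" in doc.lower()]

def write_memo(question: str, relevant: list[str]) -> str:
    """Beat 3: boil the findings down into a draft memo for the matter at hand."""
    authorities = "\n".join(f"- {passage}" for passage in relevant)
    return f"Question: {question}\nAuthorities:\n{authorities}\nDraft advice: ..."

# Beat 1: a query lands; beats 2-3 run mechanically; beat 4 stays human.
sources = [
    "Smith v Jones: the court held that notice must be given in writing.",
    "Commentary on the notice provisions of the Companies Act.",
]
draft = write_memo("Is written notice required?",
                   read_and_filter(retrieve("notice", sources)))
print(draft)  # a senior lawyer still reviews this draft before it goes out
```

In the real tools, the toy string tests are replaced by retrieval over legal databases and LLM summarisation, but the structure is the same four beats.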
That prospect excites many, yet as someone immersed both in the technology and in training our young professionals, I have some major concerns. Long searches are not just a line on a bill. They are where juniors stumble across unfamiliar rules, learn that headnotes can mislead and practise weighing which points really matter. It is where young lawyers develop their legal reasoning. Take that grind away too early and tomorrow’s mid‑level associate may command perfect prompts without ever absorbing the feel of the law itself. And it is not just the legal profession that is worried. Auditors, engineers, doctors and, soon, underwriters voice the same concern, because their early careers are also built on slow, repetitive tasks that sharpen judgment before real responsibility arrives.
However, we have heard this fear before. When handheld calculators arrived in the nineteen‑seventies, schools banned them, convinced pupils would forget long division. Within a decade the ban had vanished, lessons shifted to tougher ideas and maths thrived. The comparison is comforting, yet imperfect. Calculators removed only one step, the actual arithmetic. LLMs sweep away several at once: search, reading, first drafts and, most importantly (once the technology advances), reasoning. They do not only give an answer; they wrap it in fluent prose that can hide the shortcut they just took.
If we do nothing, the workforce could split into a few high‑level strategists and a large pool of administrators, with little training ground in between. Big firms will still be able to afford special development schemes; smaller outfits may not, and the skills gap will widen. Professional bodies will need to rethink what it means to show “reasonable skill” when a human now checks the machine instead of the other way round.
Banning AI outright from trainee work is not the answer, yet I am unsure what is. A more practical path may be to build scaffolding around the technology. Juniors might be asked to challenge the model’s reasoning, redo key searches by hand, or produce a draft both with and without AI and compare the outcomes. Unfortunately, the relentless push for efficiency and profit means these exercises rarely get the time they need to pay off. Law schools should teach AI literacy while preserving live‑client clinics where students grapple with messy, unstructured files. Partners who reclaim hours by delegating the grind to machines must reinvest part of that dividend in mentoring, not simply chase additional billables.
Technology should free humans for higher‑value work, yet value in a profession depends on sound judgment, and judgment is forged by doing the work. Calculators did not ruin mathematics; they changed what counted as maths worth teaching. LLMs will not doom the next generation of lawyers, auditors or underwriters, but unless we build deliberate apprenticeships for the AI era we may end up with flawless documents written by people who have never really thought like professionals. That would be a bigger hallucination than any chatbot can dream up.