
AI is mainstream: reimagining conventional risk management and insurance practice
As generative AI continues to revolutionise how businesses operate, the insurance industry is navigating a fast-changing landscape. AI’s potential to increase efficiency is undeniable, but it’s also raising serious questions about risk, responsibility, and the very nature of professional value.
In a recent panel discussion at the Airmic Annual Conference in Liverpool, David Pryce, Senior Partner at Fenchurch Law; Jonathan Nichols, Head of RMIS Operations at Archer; and Vincent Plantard, Head Strategy & Analytics Claims, Director at Swiss Re Corporate Solutions, shared how their businesses are adapting to AI, the challenges they’re facing, and what responsible use looks like in high-stakes, regulated sectors.
From “let’s wait” to full-scale adoption
“A year ago, if you’d asked me about AI, I would’ve said, ‘We’re a small firm, we’ll wait and see what the bigger players do.’ But within about eight months, that completely changed.” – David Pryce
What drove the shift?
The realisation that clients are unlikely to continue paying for tasks that AI can perform quickly, cheaply, or even for free.
David and his team began a firm-wide review of every task they perform, categorising each into five types of data interaction:
- Capturing
- Retrieving
- Processing
- Analysing
- Creating
The focus is on using AI to free up more time for meaningful, high-value client engagement. The view is that if a task isn’t central to what clients truly value, it should be automated wherever possible. For the work that is core, AI should be used to enhance the way you operate, not to replace it. The ultimate aim is to spend more time applying the judgment, creativity, and specialist knowledge that only humans can offer.
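To make that review concrete, here is a minimal, hypothetical sketch in Python of how a firm might record such an exercise: each task is tagged with one of the five data-interaction types and flagged as an automation candidate when it is not core to client value. The Task class, the flag, and the example entries are illustrative assumptions, not Fenchurch Law’s actual methodology.

```python
# Hypothetical sketch of a firm-wide task review: tag each task with the type of
# data interaction it involves and flag non-core tasks as automation candidates.
from dataclasses import dataclass
from enum import Enum


class DataInteraction(Enum):
    CAPTURING = "capturing"
    RETRIEVING = "retrieving"
    PROCESSING = "processing"
    ANALYSING = "analysing"
    CREATING = "creating"


@dataclass
class Task:
    name: str
    interaction: DataInteraction
    core_to_client_value: bool  # a judgment call made by the reviewing team

    def automation_candidate(self) -> bool:
        # Non-core tasks are candidates for automation; core tasks keep a human
        # lead, with AI used to enhance rather than replace the work.
        return not self.core_to_client_value


tasks = [
    Task("Log incoming claim correspondence", DataInteraction.CAPTURING, False),
    Task("Draft coverage advice for a client", DataInteraction.CREATING, True),
]

for task in tasks:
    print(f"{task.name}: automate -> {task.automation_candidate()}")
```

In this framing, the code only records the outcome of the review; the judgment about what clients truly value stays with the professionals doing the exercise.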
From process to process improvement
Jonathan opened with a stark reality. At a recent industry event, a student asked him, “Where should my career be in five years?” The honest answer: many current jobs will no longer exist as AI rapidly replaces routine processes.
AI is not just advancing; it’s accelerating. In Jonathan’s view, current systems already exceed average human IQ, and within a few years he expects them to surpass Einstein-level intelligence. AI will solve complex problems at a scale and speed humans can’t match. However, AI cannot decide which problems to solve. That remains the essential human role.
The real value in future careers won’t come from doing the process but from improving it. To stay relevant, professionals must actively learn AI, work closely with IT teams and vendors, and develop strong data strategies, as AI can only deliver results with the right information. Companies that don’t embrace this shift will quickly be left behind.
Becoming data-led professionals
The shift isn’t just about technology; it’s about mindset. Swiss Re’s Vincent shared that one of the biggest challenges is helping professionals evolve from intuition-based decision-making to data-informed thinking.
“We’re asking people who’ve built their careers on experience to now use data and AI insights as a critical part of their decision-making. It’s a cultural change.”
The focus is on automating low-value, repetitive work, like routine marketing tasks and claims processing, so that teams can focus on what matters most: serving clients, making strategic decisions, and adding value.
“AI isn’t about cutting heads. It’s about giving people the space to focus on higher-order work.”
David echoed this theme, emphasising that the profession has already moved beyond the question of whether AI will have an impact: it already does. The real focus now is on how to integrate AI into processes in ways that strengthen service and build even deeper trust with clients.
Risk and responsibility
David highlighted that while AI is a powerful tool, it does not remove professional responsibility.
“When you delegate a task to a junior colleague, you don’t sign it off without reviewing it. It’s the same with AI. You must check the output; you’re still accountable.”
In most cases, existing insurance policies will respond to AI-related errors if the professional has taken reasonable care. But there’s a fine line.
“Sending AI-generated work to a client without checking it would likely be seen as negligent, or even reckless. And if you act recklessly, you could find yourself uninsured.”
David also raised an important emerging issue: whether insurance policies are fully equipped to address the rapidly evolving AI landscape. He noted that some cyber exclusions may unintentionally restrict legitimate AI use, while insurers are beginning to consider AI-specific risks, such as hallucination errors. However, the fundamental principle remains that insurance is there to protect those who act responsibly.
Crucially, that responsibility doesn’t rest solely with individuals; it is an organisational duty. Firms must ensure their teams are properly trained to use AI safely and effectively. Simply blaming the tool for mistakes will not suffice; courts, regulators, and insurers will still hold businesses accountable.
David shared a cautionary story: a lawyer who submitted AI-generated court documents containing fake legal citations. The lawyer now faces professional sanctions and possible prosecution.
Responsible AI use is not just about risk management; it’s about maintaining trust with clients, regulators, and insurers. By using AI to enhance, rather than shortcut, professional work, firms can better serve their clients while staying firmly within regulatory and ethical boundaries.
Integrating AI seamlessly
Many professionals still think of AI as something they actively prompt, typing questions into tools like ChatGPT. But as Vincent pointed out, AI is increasingly embedded into everyday systems.
“When you get personalised dashboards, search suggestions, or email summaries, AI is working behind the scenes. You’re probably using it already.”
Jonathan emphasised the importance of taking a proactive, hands-on approach to learning AI. He recommended a three-pronged strategy: first, get familiar with AI personally by using tools like ChatGPT, Copilot, or any accessible AI platform. By experimenting with these tools, professionals can better understand their capabilities and limitations. He noted that the number of people using AI regularly has grown significantly over the past six months, signalling how quickly adoption is accelerating.
Second, Jonathan encouraged working closely with internal IT teams, who often already have access to advanced AI tools. Understanding what is available within the organisation is key to unlocking potential solutions.
Third, he advised engaging directly with technology vendors. By learning what providers can offer and asking how AI might help solve specific problems, professionals can better integrate AI into their workflows. His core message was clear: don’t shy away from AI. Much like past technological shifts, such as the move from fax machines to email, embracing AI will be essential to staying effective and competitive in the evolving workplace.
Getting started
For professionals wondering where to begin, David offers this:
“Don’t try to build your own AI. Start with an off-the-shelf solution from a reputable, secure provider.”
He also recommends integrating AI into workflows in a deliberate and balanced way: use AI to automate routine, repetitive tasks and free up time, but he cautions against letting automation distract from the core value clients seek.
Evolving, not replacing
The takeaway from all three speakers is clear: AI isn’t about replacing professionals. It’s about elevating them.
“We’re not preparing for some distant AI future. The tools we have now are already changing how we work. It’s not about whether we use AI, it’s about using it well.”
The key risks? Not using AI at all, or using it irresponsibly.
AI is here to stay. The professionals who thrive will be those who embrace it, understand its limits, and use it to strengthen, not erode, the trust their clients place in them.
David Pryce is a Senior Partner at Fenchurch Law.