Risk, Regulation and Rewards: Regulatory Developments in Artificial Intelligence
With the Government’s White Paper consultation – “A pro-innovation approach to AI regulation” – having closed at the end of June, and the UK scheduled to host the first global summit on AI regulation at Bletchley Park in early November, now is an appropriate time to assess the regulatory lay of the land in relation to this nascent technology.
White Paper
The White Paper was originally published on 29 March 2023. It sets out a roadmap for AI regulation in the UK, focusing on a “pro-innovation” and “context-specific” approach based on adaptability and autonomy. To this end, the Government did not propose any specific requirements or rules (indeed, the White Paper does not give a specific definition of AI), but provided high-level guidance, based on five ‘Principles’:
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance;
- Contestability and redress.
The White Paper is, in essence, an attempt to control the use of AI but not so overbearingly as to stifle business or technological growth. Interestingly, the proposals will not create far-reaching legislation to impose restrictions and limits on the use of AI, but rather empower regulators (such as the FCA, CMA, ICO and PRA) to issue guidance and potential penalties to their stakeholders. Perhaps surprisingly, the application of the Principles will be at the discretion of the various regulators.
Motives
The White Paper is markedly different to the EU’s draft AI Act, which takes a more conservative and risk-averse approach. The proposed EU legislation is detail- and rule-heavy, with strict requirements for the supply and use of AI by companies and organisations.
It would appear that the Government is keen on demonstrating a market-friendly approach to AI regulation, in an effort to draw investment and enhance the UK’s AI credentials. The Government wants the UK to be at the forefront of the AI landscape, and there are understandable reasons for that. The White Paper excitedly predicts that AI could have “as much impact as electricity or the internet”. Certainly the AI sector already contributes nearly £4 billion to the UK economy and employs some 50,000 people.
And the UK does have pedigree in this field – Google DeepMind (a frontier AI lab) was started in London in 2010 by three UCL graduates. The Government is optimistically throwing money at the situation, having already made a £900 million commitment to developing compute capacity and an exascale supercomputer in the UK. Furthermore, the Government is increasing the number of Marshall scholarships by 25%, and funding five new Fulbright scholarships a year. Crucially, these new scholarships will focus on STEM subjects, in the hope of cultivating the Turings of tomorrow.
Ignorantly Pollyannish?
It all seems like it could be too good to be true. And in terms of regulation, it very well may be. The UK approach to AI regulation is intended to be nimble and able to react pragmatically to a rapidly evolving landscape, but questions arise about the devolved responsibility of various regulators. The vague and open-ended Principles may well prove difficult to translate into meaningful, straightforward frameworks for businesses to understand, and in any event are subjective to the individual regulator. It is unclear what would happen where a large company introduces various AI processes to its business functions but is subject to the jurisdiction of more than one regulator. How would there be a consistent and coordinated approach, especially given that some regulators have far more heavy-handed sanction/punishment powers than others? The Government does intend to create a central function to support the proposed framework, but given that the central function is likely over 18 months away, any dispute/contradiction between regulators before its implementation is going to be a can of worms. Furthermore, when it does arrive, is having a centralised, authoritative Government body to deal with AI not in complete contradiction to the desired regulator-led, bottom-up approach?
And with every day that passes, AI becomes more powerful, sophisticated and complex. It could be the case that all discussions of AI regulation are irrelevant, as there is no way for governments and international organisations to control it. While this risks slipping into a catastrophising and histrionic “AI will end humanity” narrative, it is certainly hard to see how regulation can keep pace with the technology. Consider the difficulty that governments and international organisations have had in regulating ‘Big Tech’ and social media companies in the past two decades, given their (predominantly) online presence and global ambit, and then consider how much more difficult it would be to regulate an entirely digital technology that can (effectively) think for itself.
November Summit
In light of these considerations, it will be interesting to see what comes out of the AI Safety Summit in early November. The stated goal of the summit is to provide a “platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risk of AI”. There seems an inherent tension, however, between international cooperation in relation to ‘rules of the game’ around AI and the soft power arms race in which nations are involved for supremacy of the technology. In May, Elon Musk pessimistically opined that governments will use AI to boost their weapons systems before they use it for anything else. It may be the case that curtailing the dangers of AI will need a public and private consensus – in March, an open letter titled ‘Pause Giant AI Experiments’ was published by the Future of Life Institute. Citing the risks and dangers of AI, the letter called for at least a six-month pause to training AI systems more powerful than GPT-4. It was signed by over 20,000 people, including AI academic researchers and industry CEOs (Elon Musk, Steve Wozniak and Yuval Noah Harari to name three of the most famous).
In defence of global governments, the Bletchley Park Summit is not emerging from a vacuum – there have been previous efforts by the international community to establish an AI regulatory framework. Earlier this year, the OECD announced a Working Party on Artificial Intelligence Governance, to oversee the organisation’s work on AI policy and governance for member states. In early July, the Council of Europe’s newly formed Committee on Artificial Intelligence published its draft ‘Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law’. And as recently as early September, the G7 agreed to establish a Code of Conduct for the use of AI. It would be unbalanced to suggest the international community is sitting on its hands (although note the non-binding nature of some of the above initiatives, which are nevertheless widely publicised by their signatories with great alacrity).
Conclusion
It is hard to predict how nations and international organisations will regulate AI, given that we are grappling with an emergent technology. It is true, however, that broad patterns have emerged. It seems the UK is taking a less risk-averse approach to AI regulation than the EU, and hoping that it can unlock both the economic and revolutionising power of the tech. The round table at Bletchley Park will be fascinating, given that it will most likely involve a melting pot of opinions around AI regulation. A sobering final thought is that at the end of July the White House released a statement that the Biden-Harris Administration had secured “voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI”: if the USA – the AI world leader – is only subjecting its companies to optional obligations, where does that leave the rest of us?
Dru Corfield is an associate at Fenchurch Law
Bubble Trouble: Aerated Concrete Claims and Coverage
Reinforced autoclaved aerated concrete (“RAAC”) is a lightweight cementitious material pioneered in Sweden and used extensively in walls and floors of UK buildings from the 1950s to 1990s. Mixed without aggregate, RAAC is ‘bubbly’ in texture and much less durable than standard concrete, with an estimated lifespan of 30 years. The air bubbles can promote water ingress, causing decay to the rebar and structural instability.
RAAC is often coated with other materials and may be difficult to detect from a visual inspection. Invasive testing will often be required to investigate the condition of affected areas and evaluate operational risks. In some instances RAAC structures have failed with little or no warning, posing a significant risk to owners, employees, visitors and occupants. Ageing flat roof panels are especially vulnerable to rainwater pooling above.
Buildings insurance is designed to cover damage caused by sudden and unforeseen events, whilst ordinary ‘wear and tear’ is treated as an aspect of inevitability and usually expressly excluded. Where damage occurs, it will be a matter of expert evidence as to the relative impact of contributing factors. English law recognises a critical distinction between failure due to inherent weakness of insured property, and accidental loss partly caused by external influences. Depending on the specific policy wording, unexpected consequences of a design defect or flawed system adopted by contractors may provide the requisite element of fortuity, notwithstanding the concurrent effects of gradual deterioration under ordinary usage (Versloot Dredging BV v HDI Gerling (The DC Merwestone) [2012]; Prudent Tankers SA v Dominion Insurance Co (The Caribbean Sea) [1980]).
Original designers and contractors responsible for RAAC elements in affected buildings in many cases will no longer exist, adding further complexity to potential liabilities. Given that the widespread use of RAAC ended in the 1990s, it is likely that limitation (even under the new 30-year period for Defective Premises Act claims, if applicable) will have expired, though a fresh period for bringing such claims can be triggered where subsequent refurbishment works have been carried out. To the extent that RAAC related claims are not time barred, professional indemnity insurance may respond subject to operation of any relevant policy exclusions.
Structural problems associated with RAAC were first identified in the 1980’s and multiple collapses have been reported in recent years at public buildings including schools, courts and hospitals. The Institution of Structural Engineers has advised that many high rise buildings in the private sector with flat roofing constructed in the late 20th century may contain RAAC, which could include residential blocks, offices, retail premises and hotels. Landlords and designated duty holders responsible for ‘higher risk buildings’ should factor RAAC assessments into safety case reports pursuant to the Building Safety Act 2022.
RAAC represents another unfortunate legacy issue in the UK construction landscape requiring urgent steps from government and industry stakeholders, to implement a coordinated and transparent approach to proactively manage safety risks.
Amy Lacey is a Partner at Fenchurch Law
Insurance for fees claims: RSA & Ors v Tughans
Introduction
This Court of Appeal decision, in which our firm represented the successful respondents, considered the scope of a professional indemnity policy written on a full “civil liability” basis. Will such a policy respond to a claim against a firm (in this case, a firm of Solicitors) for damages referable to its fee, for which the firm had performed the contractually agreed work, but where the fee was only paid by the client following a misrepresentation by the firm?
That was the issue in Royal and Sun Alliance Insurance Limited & Ors v Tughans (a firm) [2023] EWCA Civ 999 (31 August 2023), although it is important to stress that the Court of Appeal hearing, like the Commercial Court before it, proceeded on the assumption that there had in fact been a misrepresentation. Whether that was or was not the case remains to be determined in the underlying proceedings against the Solicitors.
The underlying facts of the case were complex, but the appellant Insurers’ argument was summarised by the Court of Appeal as follows:
“Because the fee was procured by misrepresentation, Tughans had no right to retain it; and if it was obliged to return it, as part of a damages claim, it had not lost something to which as a matter of substance it was entitled, just as much as if the contract were avoided and it was obliged to return it or its value in a restitutionary claim… Tughans had not suffered a loss in the amount of the fee, and cover for that element of a damages claim would violate the indemnity principle.”
That argument failed at first instance before Foxton J, and failed again in the Court of Appeal.
The indemnity principle
At the heart of Insurers’ argument was reliance on the indemnity principle, the principle that a policy of indemnity insurance (as distinct from contingency insurance) will only indemnify an insured’s actual loss, and no more than that. The Court of Appeal held that Insurers’ reliance on the indemnity principle here was misplaced, for a number of reasons.
First, a professional who has done the contractually agreed work, and has earned the contractually agreed fee, does suffer a loss if he is ordered to return the fee because the retainer had been procured by a misrepresentation.
Secondly, the Insurers’ argument was inconsistent with the public interest in there being compulsory PI cover for certain professionals, so that, if a firm and its partners were not good for the money, a client would be unprotected where its damages claim included the fee which it had paid.
Thirdly, the implication of the Insurers’ argument would leave uninsured those partners, and potentially also those employees, who had no involvement with the misrepresentation and/or who had not benefited in any way from the fee.
Restitutionary claims
In RSA v Tughans, the underlying claim was one for damages, albeit damages calculated by reference to the fee which the client had paid. The Insurers argued that, since a claim framed in restitution would certainly not (they said) be covered, the same must be true of an analogous damages claim.
The Court of Appeal was unpersuaded. First, a damages claim is different from a restitutionary one. In any case, the Court of Appeal held that not only would a professional indemnity policy cover a restitutionary claim in respect of a fee which had been earned, it might in some circumstances also cover a fee which had not been earned. Thus, said Popplewell LJ, if a professional “… receives money on account of fees, and an employee steals them from the client account, or negligently transfers them to a third party, before the work is done to earn the fee, a claim by the client for the money, advanced as a restitutionary claim, would seem to me to give rise to a liability which constitutes a loss; and would, moreover, appear to fall squarely within the intended scope of PII cover, and be a necessary part of cover if the PII policy is to fulfil the public protection function of a compulsory insurance scheme”.
Conclusion
This is a very welcome decision for professional firms facing claims which extend to the fees they have received, in circumstances where PI insurers would previously have asserted, almost inevitably, that their policy did not cover such a claim.
Jonathan Corman is a partner at Fenchurch Law