Rome wasn’t built in a day – first thoughts on the Bletchley Park AI Safety Summit

The dust is beginning to settle on the much-hyped (albeit nebulously orientated) Bletchley Park Artificial Intelligence Summit. Although it will take time for meaningful directives to filter out of the sleekly edited videos and beaming group photos of world leaders, some high-level observations can be made.

The first thing to note is the tension inherent in bringing together a group of self-interested parties for the purpose of collective wellbeing. The ‘Bletchley Declaration’, signed by 28 countries, is an opaque commitment to co-ordinate international efforts on safeguarding against the dangers posed by AI. But a tension exists between nation states competing against each other for supremacy in the technology (and all the potential economic, technological and societal benefits that could entail) and their recognition that in the wrong hands AI could have nefarious consequences and therefore needs a degree of regulation. In light of that fundamental conflict, the Bletchley Declaration can be viewed as a good start. It does not signal a new global regulatory framework, but it may be the blueprint for such an achievement in the future.

Perhaps the starkest demonstration of nations jostling for the title of world leader/global referee of AI development is the fact that on the first day of the Summit Kamala Harris, US Vice President, gave a speech at the US Embassy in London unveiling the ‘United States AI Safety Institute’, a body dedicated to the responsible use of AI. This somewhat took the wind out of the sails of Rishi Sunak, who was hoping that the Bletchley Summit would be a springboard for the UK’s own Global AI Safety Institute. The UK Safety Institute is still going ahead, with various international partners, industry participants and academics, but significantly the USA has made clear that it will not be joining. Notably, the US Institute has attracted 30 signatories, one of which is the UK.

Beyond typical Great Power rivalries, the Bletchley Summit was also forced to grapple with the tension between futuristic, dystopian deployments of AI on the one hand and real-world, ‘already happening’ AI risk on the other. Critics of the Summit pointed out that much of the discussion around AI safety and regulation focused on the former, at the expense of the latter. For example, the Prime Minister had an hour-long sit-down with Elon Musk, whose “AI could end humanity” narrative is well documented – a position on which, among the tech community, the jury is still out. But little thought or discussion was given to the potential short-term impacts of AI, such as the warning by Nick Clegg of Meta that there is a real chance that AI-generated disinformation could affect next year’s elections in the US, India, Indonesia, Mexico and the UK. Likewise, little thought was given to the danger of discriminatory bias in AI’s deep learning algorithms – be it racial, geographical or class bias – which has already been shown to have undesirable effects, for example in automated underwriting within the insurance industry.

Similarly, the TUC was one of a dozen signatories to a letter to the Prime Minister that outlined concerns about the narrow range of interests represented at the Summit. The opinions and concerns of small businesses (documented as being among those most worried about the threat of AI) were almost entirely overlooked in favour of the big tech firms. The argument was that the power and influence of companies like Meta, Google and X created a narrow interest group for the Summit, whereby some of the parties most concerned about AI safety did not get representation, let alone a seat at the table.

Perhaps focusing on the criticisms levied at the Summit is unfair, given the old mantra ‘if you try to please everyone, you’ll please no one’: given the complexities of AI and the number of interest groups involved, it was almost inevitable that there would be grumblings. In some sense, any agreement should be modestly heralded – it could be argued that managing to get China and the USA to attend the same Summit was a diplomatic win, let alone having both of them sign the Declaration. And as mentioned above, the Bletchley Summit is only a starting point: the Republic of Korea will co-host a virtual summit within the next six months before France hosts the next in-person event in 12 months. While it is true that few concrete commitments or directives emerged from Buckinghamshire, Roma uno die non est condita.

Dru Corfield is an associate at Fenchurch Law


Risk, Regulation and Rewards: Regulatory Developments in Artificial Intelligence

With the Government’s White Paper consultation – “A pro-innovation approach to AI regulation” – having closed at the end of June, and the UK scheduled to host the first global summit on AI regulation at Bletchley Park in early November, now is an appropriate time to assess the regulatory lay of the land in relation to this nascent technology.

White Paper

The White Paper was originally published on 29 March 2023. It sets out a roadmap for AI regulation in the UK, focusing on a “pro-innovation” and “context-specific” approach based on adaptability and autonomy. To this end, the Government did not propose any specific requirements or rules (indeed, the White Paper does not give a specific definition of AI), but provided high-level guidance, based on five ‘Principles’:

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance;
  • Contestability and redress.

The White Paper is, in essence, an attempt to control the use of AI without being so overbearing as to stifle business or technological growth. Interestingly, the proposals will not create far-reaching legislation to impose restrictions and limits on the use of AI, but will instead empower regulators (such as the FCA, CMA, ICO and PRA) to issue guidance to – and potentially penalties against – their stakeholders. Perhaps surprisingly, the application of the Principles will be at the discretion of the various regulators.

Motives

The White Paper is markedly different to the EU’s draft AI Act, which takes a more conservative and risk-averse approach. The proposed EU legislation is detail- and rule-heavy, with strict requirements for the supply and use of AI by companies and organisations.

It would appear that the Government is keen to demonstrate a market-friendly approach to AI regulation, in an effort to draw investment and enhance the UK’s AI credentials. The Government wants the UK to be at the forefront of the AI landscape, and there are understandable reasons for that. The White Paper excitedly predicts that AI could have “as much impact as electricity or the internet”. Certainly, the AI sector already contributes nearly £4 billion to the UK economy and employs some 50,000 people.

And the UK does have pedigree in this field – Google DeepMind (a frontier AI lab) was founded in London in 2010 by three UCL graduates. The Government is optimistically throwing money at the situation, having already made a £900 million commitment to developing compute capacity and an exascale supercomputer in the UK. Furthermore, the Government is increasing the number of Marshall scholarships by 25% and funding five new Fulbright scholarships a year. Crucially, these new scholarships will focus on STEM subjects, in the hope of cultivating the Turings of tomorrow.

Ignorantly Pollyannish?

It all seems like it could be too good to be true. And in terms of regulation, it very well may be. The UK approach to AI regulation is intended to be nimble and able to react pragmatically to a rapidly evolving landscape, but questions arise about the devolved responsibility of the various regulators. The vague and open-ended Principles may well prove difficult to translate into meaningful, straightforward frameworks for businesses to understand, and are in any event subject to the interpretation of each individual regulator. It is unclear what would happen where a large company introduces various AI processes into its business functions but is subject to the jurisdiction of more than one regulator. How would there be a consistent and coordinated approach, especially given that some regulators have far more heavy-handed sanction powers than others? The Government does intend to create a central function to support the proposed framework, but given that the central function is likely over 18 months away, any dispute or contradiction between regulators before its implementation is going to be a can of worms. Furthermore, when it does arrive, is a centralised, authoritative Government body for AI not in complete contradiction to the desired regulator-led, bottom-up approach?

And with every day that passes, AI becomes more powerful, sophisticated and complex. It could be the case that all discussion of AI regulation is irrelevant, because there is simply no way for governments and international organisations to control the technology. While this risks slipping into a catastrophising and histrionic “AI will end humanity” narrative, it is certainly hard to see how regulation can keep pace with the technology. Consider the difficulty that governments and international organisations have had in regulating ‘Big Tech’ and social media companies in the past two decades, given their (predominantly) online presence and global ambit, and then consider how much more difficult it would be to regulate an entirely digital technology that can (effectively) think for itself.

November Summit

In light of these considerations, it will be interesting to see what comes out of the AI Safety Summit in early November. The stated goal of the summit is to provide a “platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risk of AI”. There seems an inherent tension, however, between international cooperation on the ‘rules of the game’ around AI and the soft-power arms race in which nations are engaged for supremacy in the technology. In May, Elon Musk pessimistically opined that governments will use AI to boost their weapons systems before they use it for anything else. It may be the case that curtailing the dangers of AI will need a public and private consensus – in March, an open letter titled ‘Pause Giant AI Experiments’ was published by the Future of Life Institute. Citing the risks and dangers of AI, the letter called for at least a six-month pause on training AI systems more powerful than GPT-4. It was signed by over 20,000 people, including academic AI researchers and industry leaders (Elon Musk, Steve Wozniak and Yuval Noah Harari, to name three of the most famous).

In defence of global governments, the Bletchley Park Summit is not emerging from a vacuum – there have been previous efforts by the international community to establish an AI regulatory framework. Earlier this year, the OECD announced a Working Party on Artificial Intelligence Governance to oversee the organisation’s work on AI policy and governance for member states. In early July, the Council of Europe’s newly formed Committee on Artificial Intelligence published its draft ‘Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law’. And as recently as early September, the G7 agreed to establish a Code of Conduct for the use of AI. It would be unbalanced to suggest the international community is sitting on its hands (although note the non-binding nature of some of the above initiatives, which are nevertheless publicised by their signatories with great alacrity).

Conclusion

It is hard to predict how nations and international organisations will regulate AI, given that we are grappling with an emergent technology. Broad patterns have emerged, however. The UK is taking a less risk-averse approach to AI regulation than the EU, hoping that it can unlock both the economic and the revolutionary power of the technology. The round table at Bletchley Park will be fascinating, given that it will most likely involve a melting pot of opinions on AI regulation. A sobering final thought: at the end of July the White House released a statement that the Biden-Harris Administration had secured “voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI”. If the USA – the AI world leader – is only subjecting its companies to optional obligations, where does that leave the rest of us?

Dru Corfield is an associate at Fenchurch Law


AI: The Wizard behind the Data Curtain?

“What is ChatGPT?” is a frequently heard question this year. “What is AI? How does it work?” is occasionally the follow-up. And for the sceptics, “Will it take my job? Is it dangerous?” One cheerful BBC News headline recently read “Artificial Intelligence could lead to extinction, experts warn”.

Artificial Intelligence (AI) and Machine Learning Technologies (MLTs) have rapidly gone from the stuff of science fiction to real world usage and deployment. But how will they affect the insurance industry, what are the legal implications, and is the whole issue really that much of a concern?

Taking the final question first, the evidence suggests that jobs are already being lost to this new technological revolution. In March 2023, Rackspace surveyed IT decision-makers within 52 insurance companies across the Americas, Europe, Asia and the Middle East. 62% of the companies said that they had cut staff in the last 12 months owing to the implementation of AI and MLTs. In the same period, 90% of respondents said they had grown their AI and MLT workforce.

It is worth drilling into the specifics of what AI and MLTs actually are. McKinsey & Company define AI as “a machine’s ability to perform the cognitive functions we usually associate with human minds”. MLTs, according to IBM, are best considered a branch of AI in which computers “use data and algorithms to imitate the ways that humans learn, gradually improving their accuracy”. So, taking ChatGPT (released 30 November 2022) as an example, OpenAI (the developer) has trained ChatGPT on billions of documents that exist online – from news to books, social media, TV scripts and song lyrics. As explained by Boston Consulting Group, the “trained model leverages around 175 billion parameters to predict the most likely sequence of words for a given question”. In many ways, it has to be seen to be believed. If you haven’t already, it is worth signing up to ChatGPT: it’s free and takes moments. The author has just asked it to write a Jay-Z song about the London Insurance Market and to write a story about Sun Tzu waking up in the world of Charles Dickens – both with instant, detailed results. The requests were deliberately absurd, to demonstrate the power of the MLT: it is quite remarkable.
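For the technically curious, that core mechanic – scoring candidate next words and picking the most probable – can be illustrated in a few lines of Python. The toy ‘model’ below is a hand-written score table standing in for ChatGPT’s billions of learned parameters; it is emphatically not OpenAI’s implementation, merely a minimal sketch of the ‘predict the most likely next word’ principle:

    import math

    # Toy scores ("logits") a model might assign to candidate next words
    # after the prompt "The insurance policy was ...". In a real MLT these
    # scores are produced by billions of trained parameters.
    logits = {"renewed": 2.1, "cancelled": 1.4, "delicious": -3.0}

    def softmax(scores):
        """Convert raw scores into a probability distribution."""
        exps = {word: math.exp(s) for word, s in scores.items()}
        total = sum(exps.values())
        return {word: e / total for word, e in exps.items()}

    probs = softmax(logits)
    print(max(probs, key=probs.get))  # 'renewed' - the likeliest next word

Generating an entire Jay-Z verse is simply this step repeated, word after word, with each chosen word fed back in as part of the prompt.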

What is exciting, or scary, depending on your position, is that GPT-4 (the next version of ChatGPT) has 1 trillion parameters. It was released in March and sits behind a paywall. The point is that the never-seen-before power of the technology released only at the end of last year has, in around four months, become nearly six times more advanced (1 trillion parameters against 175 billion). Not unlike the Sorcerer’s Apprentice doubling and redoubling the broomsticks carrying pails of water with each swing of his axe, MLTs scythe through and consume data at an exponential rate.

So, what does it all mean for the insurance industry? MLTs can sift through data vastly faster than humans, and with far greater accuracy. The tech poses the most immediate threat to lower-level underwriters and claims handlers, as well as general administrative roles. But what about the wider London Market?

As this firm’s David Pryce has explained*, the London Market does not have as many generalised wordings as other insurance markets around the world. The policies written here are highly sophisticated and frequently geared towards bespoke risks. That specialism means that senior underwriters, while they may be informed by AI/MLTs, exercise judgement gained through experience, and their roles are therefore fairly safe – machine learning tech is far better at analysing past knowns than at conceptualising future risks.

Similarly, for the high-value, sophisticated non-consumer insurance contracts that are the norm within Lloyd’s, questions arise about the potential of AI/MLTs to remove the broker role. Consider a cutting-edge ChatGPT equivalent that performs the role of a broker but is developed by an insurer for an insured. There is an inherent conflict between acting as an agent of an insured and seeking to maximise profit for the insurer, and there would undoubtedly be a data bias in this hypothetical ChatGPT. In the same vein, a ChatGPT equivalent could be developed by London Market brokers, but this would miss the personalised touch that (human) brokers bring to the table (and policyholders enjoy). James Benham, insurtech guru and podcast host, recently said that AI could free brokers from menial form-filling to spend more time doing what policyholders want – stress-testing the insured’s policy and giving thought to what cover they need but had not considered. So while in the short term low-level work will likely be made more efficient by AI/MLTs, leading to reductions in staff, a wholesale revolution or eradication of vast swathes of the London Market broking sector remains unlikely.

Finally, it is worth noting the speech given on 14 June 2023 by Sir Geoffrey Vos, Master of the Rolls, to the Law Society of Scotland’s Law and Technology Conference. After highlighting a recent example of an American lawyer who used ChatGPT for his legal submissions – in which ChatGPT not only grossly misunderstood or misrepresented the facts of some cases but actually invented another for the purposes of the submissions – he cautioned against the use of the MLT in legal proceedings. He further observed dryly that “clients are unlikely to pay for things they can get for free”. Perhaps specific MLTs will be successfully developed in the near future to assist lawyers or stress test their approaches (for example, Robin AI is a London-based startup that uses MLTs to assist lawyers with contract drafting), but ChatGPT is not there yet. A similar point could be made about its application to the insurance industry – simple, concise deployment of the technology will remove grunt work and simplify data effectively and cheaply, but human experience will not be replaced just yet. As with blockchain, in the short term we are likely to see more of an impact on consumer insurance contracts than on high-value, bespoke London Market insurance.

Dru Corfield is an Associate at Fenchurch Law

* See The Potential impact of ChatGPT on insurance policy wordings, Insurance Business Mag


Still on the starting block? Implications of blockchain for the Insurance Industry

Blockchain is a digital ledger technology that allows for secure and transparent record-keeping of transactions without the need for a centralised intermediary. It is a distributed database used to maintain a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp and transaction data, and is open to inspection by all participants in the ledger. Once a block is added to the chain, it cannot be altered or deleted, making the blockchain tamper-resistant and immutable.
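For the technically minded, the hash-chaining described above can be sketched in a few lines of Python using only the standard library. This is a toy illustration, not a production design – real blockchains add consensus mechanisms, peer-to-peer networking and digital signatures – but it shows why tampering with an earlier block is immediately detectable:

    import hashlib
    import json
    import time

    def make_block(prev_hash, transactions):
        """Create a block whose identity depends on the previous block's hash."""
        block = {
            "prev_hash": prev_hash,
            "timestamp": time.time(),
            "transactions": transactions,
        }
        # The block's hash covers everything above, including prev_hash, so
        # altering any earlier block changes every hash that follows it.
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        return block

    genesis = make_block("0" * 64, [{"policy": "P-001", "premium": 1200}])
    block2 = make_block(genesis["hash"], [{"policy": "P-002", "premium": 950}])

    # Any participant can re-verify the link between blocks:
    assert block2["prev_hash"] == genesis["hash"]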

Blockchain technology is best known for its use in cryptocurrencies, such as Bitcoin, where it is used to securely record and verify transactions. However, the nascent technology has the potential to enhance the business model of insurers, brokers and policyholders.

Potential benefits of blockchain

  1. Transparency

One of the primary benefits of blockchain for the insurance industry is increased transparency. Policies can be complex and confusing for consumer policyholders, and blockchain can be used to expedite and simplify claims handling. For example, the UK start-up InsurETH is developing a flight insurance policy that utilises blockchain and smart contracts. When a verified flight data source signals that a flight has been delayed or cancelled, the smart contract pays out automatically. This type of policy can improve trust between the insurer and customer, as the policy exists on a shared ledger that is accessible to both and there is little or no scope for dispute as to when an indemnity should be provided.
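In miniature, that parametric logic might look like the sketch below. The payout figure, delay threshold and data format are invented for illustration, and a real smart contract would run on-chain against a trusted data feed (an ‘oracle’) rather than as ordinary Python:

    PAYOUT_GBP = 150
    DELAY_THRESHOLD_MINUTES = 180

    def settle(flight_status):
        """Return the automatic payout due for a reported flight event."""
        if flight_status["cancelled"]:
            return PAYOUT_GBP
        if flight_status["delay_minutes"] >= DELAY_THRESHOLD_MINUTES:
            return PAYOUT_GBP
        return 0  # flight ran (near enough) on time: no indemnity due

    # The data feed reports a four-hour delay on the insured flight:
    print(settle({"cancelled": False, "delay_minutes": 240}))  # 150

There is no claims form and no adjuster: the parties simply agree in advance which verifiable events trigger an indemnity.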

  2. Fraud Prevention

Another issue within insurance, especially consumer insurance, is fraudulent claims. The ABI detected over 95,000 dishonest insurance claims in 2020 alone, with an estimated combined value of £1.1 billion. Blockchain could significantly assist in preventing fraud by providing a secure and transparent way to record and verify the claims made. While the process would require extensive cooperation between parties, it is perfectly plausible (and indeed likely in the future) that blockchain could substantially reduce fraud by cross-referencing police reports in theft claims, verifying documents such as medical reports in healthcare claims, and authenticating individual identities across all claims.
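Purely hypothetically, such cross-referencing might look like the following sketch, with plain Python sets standing in for the shared, tamper-resistant registries that insurers, police forces and healthcare providers would maintain on a ledger:

    POLICE_REPORTS = {"PR-2020-4411"}       # theft reports logged on-ledger
    VERIFIED_IDENTITIES = {"ID-JANE-DOE"}   # identities already authenticated

    def screen_claim(claim):
        """Return any fraud flags raised against a claim."""
        flags = []
        if claim["type"] == "theft" and claim["police_report"] not in POLICE_REPORTS:
            flags.append("no matching police report")
        if claim["claimant_id"] not in VERIFIED_IDENTITIES:
            flags.append("unverified identity")
        return flags

    print(screen_claim({"type": "theft",
                        "police_report": "PR-2020-9999",
                        "claimant_id": "ID-JANE-DOE"}))
    # ['no matching police report']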

  3. Efficiency

Underpinning the above two points is blockchain’s clear potential to reduce operational costs. The technology’s automated nature can cut out middlemen and streamline the insurance process. Another UK start-up, Tradle, has developed a blockchain solution that expedites ‘Know Your Customer’ checks – a time-consuming process for companies and a source of annoyance for clients. Tradle’s technology verifies the information once, and the customer can then pass a secure ‘key’ to whoever else may have a regulatory requirement to verify identity and source of funds. This simple utilisation of blockchain saves time and money in what is usually a tedious process.
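As a purely illustrative sketch of the ‘verify once, pass a key’ idea: the verifier signs a digest of the checked identity data, and any later institution checks that signature instead of repeating the KYC process. A keyed hash (HMAC) stands in below for the public-key signatures a real system such as Tradle’s would use:

    import hashlib
    import hmac

    VERIFIER_SECRET = b"kyc-verifier-signing-key"  # invented for illustration

    def attest(identity_record):
        """The verifier's one-off attestation over the checked record."""
        return hmac.new(VERIFIER_SECRET, identity_record.encode(),
                        hashlib.sha256).hexdigest()

    def accept(identity_record, attestation):
        """A later institution checks the attestation, not the customer."""
        return hmac.compare_digest(attest(identity_record), attestation)

    key = attest("Jane Doe|1980-01-01|passport:123456789")
    print(accept("Jane Doe|1980-01-01|passport:123456789", key))  # True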

Possible issues

  1. Participants (standardisation)

Blockchain’s potential impact may be impeded, however, by various issues preventing a revolutionary deployment of the technology. As noted above, there needs to be a wide level of consensus for blockchain to work properly. There is currently no standardisation in the Market as to how and when the technology should be implemented, and participants are understandably cautious about investing in blockchain when, given the current state of play, there is no guarantee that it will deliver efficient solutions. Consensus among competitors will take time to evolve. It is telling that the two companies mentioned above as users of blockchain are start-ups – the traditional Market is somewhat glacial.

  2. Scalability

And even if the London Market effected a wholesale adoption of blockchain tomorrow, there could be issues of scalability. Blockchain relies on an ever-increasing store of data, meaning that the longer the chain becomes, the more demanding the requirements for bandwidth, storage and computing power. The data firm iDiscovery Solutions found that 90% of the world’s data was created in the last two years, and that 10 times as much data will be created this year as last. Some firms may be faced with the reality that they do not have the computational hardware and capacity to support the technology, especially when the blockchains will be fed by data that is ever-increasing in quantity and complexity.

  3. London Market use?

Finally, questions arise about exactly which insurance contracts stand to gain the most from blockchain. Consumers would certainly benefit from smart contracts within their home/health/travel insurance policies. But within sophisticated non-consumer insurance, where the figures are large but the number of parties involved is limited, it is questionable whether current transaction models need blockchain. Where there is trust between a policyholder and broker, and a personal relationship between the broker and the underwriter (as is often the case at Lloyd’s), it is unclear what blockchain would really add to the process. It is worth mentioning that in late 2016 Aegon, Allianz, Munich Re and Swiss Re formed a joint venture known as B3i to explore the potential use of Distributed Ledger Technologies within the industry; B3i filed for insolvency in July 2022 after failing to raise new capital in recent funding rounds. It seems that, at least in relation to the London Market, blockchain will have a slower, organic impact rather than revolutionising the industry.

Dru Corfield is an Associate at Fenchurch Law


The Good, the Bad & the Ugly: #18 (The Good). Carter v Boehm (1766)

Welcome to the latest in the series of blogs from Fenchurch Law: 100 cases every policyholder needs to know. An opinionated and practical guide to the most important insurance decisions relating to the London / English insurance markets, all looked at from a pro-policyholder perspective.

Some cases are correctly decided and positive for policyholders. We celebrate those cases as The Good.

Some cases are, in our view, bad for policyholders, wrongly decided, and in need of being overturned. We highlight those decisions as The Bad.

Other cases are bad for policyholders but seem (even to our policyholder-tinted eyes) to be correctly decided. Those cases can trip up even the most honest policyholder with the most genuine claim. We put the hazard lights on those cases as The Ugly.

#18 (The Good): Carter v Boehm (1766)

Carter v Boehm is a landmark case in English contract law. The judgment by Lord Mansfield established the duty of utmost good faith on each party to a contract of insurance. The duty is placed on both the insured and the insurer, and as such the case (and establishment of the principle) can be considered ‘Good’ for policyholders.

Facts

The case concerned Fort Marlborough in Sumatra. Mr Carter was the Governor of the Fort and took out an insurance policy with Boehm against the risk of attack by a foreign enemy. Carter knew that the Fort was not capable of resisting an attack by a European enemy, and further knew that the French were likely to attack, but did not disclose this information to Boehm at the formation of the policy. The French duly took the Fort, and Carter claimed under the policy. Boehm refused to indemnify him, and Carter subsequently sued.

Judgment

As to the actual decision, Lord Mansfield found in favour of Carter. The reasoning was rooted in the context of 18th-century geopolitics and the state of affairs between Britain and France at the time: the two nations had been at war, and Lord Mansfield held that Boehm knew (or ought to have known) the political situation. As the conflict was public knowledge, Carter’s failure to inform Boehm of the likely attack could not amount to non-disclosure:

“There was not a word said to him, of the affairs of India, or the state of the war there, or the condition of Fort Marlborough. If he thought that omission an objection at the time, he ought not to have signed the policy with a secret reserve in his own mind to make it void.”

More significantly, however, the case established the duty of utmost good faith in insurance contracts, specifically in regard to disclosure, which Lord Mansfield explained as follows:

Insurance is a contract upon speculation. The special facts, upon which the contingent chance is to be computed, lie most commonly in the knowledge of the insured only; the under-writer trusts to his representation, and proceeds upon confidence that he does not keep back any circumstance in his knowledge, to mislead the under-writer into a belief that the circumstance does not exist, and to induce him to estimate the risk, as if it did not exist.

The keeping back of such circumstance is a fraud, and therefore the policy is void. Although the suppression could happen through mistake, without any fraudulent intention; yet still the under-writer is deceived, and the policy is void; because the risk run is really different from the risk understood and intended to be run, at the time of the agreement.

The policy would equally be void, against the under-writer, if he concealed; as, if he insured a ship on her voyage, which he privately knew to be arrived: and an action would lie to recover the premium. The governing principle is applicable to all contracts and dealings.

Good faith forbids either party by concealing what he privately knows, to draw the other into a bargain, from his ignorance of that fact, and his believing the contrary.

Analysis

The standard position in English contract law is ‘caveat emptor’ – buyer beware. There is no implied duty of good faith, unlike, for example, under the French Civil Code. The position differs, however, in insurance law, and Carter v Boehm is the case that established the difference.

The case is Good for policyholders because it established the contractual environment in which insurance policies could successfully operate. The historical context is important here: for the 17th, 18th and 19th centuries and the majority of the 20th, there was no way for the London Market to know the specific details of risks in far-flung corners of the world (albeit Lord Mansfield distinguished well-known geopolitical realities in this specific case). The insurer had to rely on honest disclosure by the insured. Carter v Boehm provided the legal framework under which the insured was under a duty to disclose facts that only he knew but which would be material to an insurer when assessing a risk. Lord Mansfield made clear that this duty went both ways – an insurer could not “insure a ship on her voyage which he privately knew to be arrived”. Without the principle established in Carter v Boehm, it is arguable that the placing of insurance would for a long period have been weighted so heavily in favour of insureds that insurance would not have been a commercially viable business.

It is important to note how the information imbalance between an insured and insurer has shifted dramatically since Carter. In 1766, an insurer was heavily, if not entirely, reliant on the open and honest disclosure of an insured when considering a risk (especially in an overseas context). Unfortunately for policyholders in the 21st century, insurers have considerable ability and appetite to scrutinise what the insured knew or ought to have known at the formation of the policy, with the means and resources to question whether the policyholder had indeed complied with his duty of disclosure.

The fact that the pendulum had swung too far towards the interests of insurers explains why it became necessary to ameliorate the position in the shape of the Insurance Act 2015 and its well-known reforms of the scope of disclosure and, even more so, of the consequences of a non-disclosure.

Dru Corfield is an Associate at Fenchurch Law