Risk, Regulation and Rewards: Regulatory Developments in Artificial Intelligence
With the Government’s White Paper consultation – “A pro-innovation approach to AI regulation” – having closed at the end of June, and the UK scheduled to host the first global summit on AI safety at Bletchley Park in early November, now is an appropriate time to assess the regulatory lay of the land in relation to this nascent technology.
White Paper
The White Paper was originally published on 29 March 2023. It sets out a roadmap for AI regulation in the UK, focusing on a “pro-innovation” and “context-specific” approach, and characterising AI by its adaptability and autonomy rather than by a fixed definition. To this end, the Government did not propose any specific requirements or rules, but instead provided high-level guidance based on five ‘Principles’:
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance;
- Contestability and redress.
The White Paper is, in essence, an attempt to control the use of AI, but not so overbearingly as to stifle business or technological growth. Interestingly, the proposals will not create far-reaching legislation imposing restrictions and limits on the use of AI, but will instead empower regulators (such as the FCA, CMA, ICO and PRA) to issue guidance to, and potentially impose penalties on, their stakeholders. Perhaps surprisingly, the application of the Principles will be at the discretion of the various regulators.
Motives
The White Paper is markedly different to the EU’s draft AI Act, which takes a more conservative and risk-averse approach. The proposed EU legislation is detail- and rule-heavy, with strict requirements for the supply and use of AI by companies and organisations.
It would appear that the Government is keen to demonstrate a market-friendly approach to AI regulation, in an effort to attract investment and enhance the UK’s AI credentials. The Government wants the UK to be at the forefront of the AI landscape, and there are understandable reasons for that. The White Paper excitedly predicts that AI could have “as much impact as electricity or the internet”. Certainly, the AI sector already contributes nearly £4 billion to the UK economy and employs some 50,000 people.
And the UK does have pedigree in this field – Google DeepMind (a frontier AI lab) was founded in London in 2010 by a team with roots at UCL. The Government is optimistically throwing money at the situation, having already made a £900 million commitment to developing compute capacity, including an exascale supercomputer in the UK. Furthermore, the Government is increasing the number of Marshall scholarships by 25% and funding five new Fulbright scholarships a year. Crucially, these new scholarships will focus on STEM subjects, in the hope of cultivating the Turings of tomorrow.
Ignorantly Pollyannaish?
It all seems like it could be too good to be true. And in terms of regulation, it very well may be. The UK approach to AI regulation is intended to be nimble and able to react pragmatically to a rapidly evolving landscape, but questions arise about the devolved responsibility of various regulators. The vague and open-ended Principles may well prove difficult to translate into meaningful, straightforward frameworks for businesses to understand, and in any event will be interpreted differently by each individual regulator. It is unclear what would happen where a large company introduces various AI processes into its business functions but is subject to the jurisdiction of more than one regulator. How would there be a consistent and coordinated approach, especially given that some regulators have far more heavy-handed sanction/punishment powers than others? The Government does intend to create a central function to support the proposed framework, but given that the central function is likely over 18 months away, any dispute/contradiction between regulators before its implementation is going to be a can of worms. Furthermore, when it does arrive, is having a centralised, authoritative Government body to deal with AI not in complete contradiction to the desired regulator-led, bottom-up approach?
And with every day that passes, AI becomes more powerful, sophisticated and complex. It could be the case that all discussion of AI regulation is irrelevant, as there is simply no way for governments and international organisations to control it. While this risks slipping into a catastrophising and histrionic “AI will end humanity” narrative, it is certainly hard to see how regulation can keep pace with the technology. Consider the difficulty that governments and international organisations have had in regulating ‘Big Tech’ and social media companies over the past two decades, given their (predominantly) online presence and global ambit, and then consider how much more difficult it would be to regulate an entirely digital technology that can (effectively) think for itself.
November Summit
In light of these considerations, it will be interesting to see what comes out of the AI Safety Summit in early November. The stated goal of the summit is to provide a “platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risk of AI”. There seems to be an inherent tension, however, between international cooperation on the ‘rules of the game’ around AI and the soft-power arms race in which nations are engaged for supremacy in the technology. In May, Elon Musk pessimistically opined that governments will use AI to boost their weapons systems before they use it for anything else. It may be the case that curtailing the dangers of AI will require both public and private consensus – in March, an open letter titled ‘Pause Giant AI Experiments’ was published by the Future of Life Institute. Citing the risks and dangers of AI, the letter called for at least a six-month pause on training AI systems more powerful than GPT-4. It was signed by over 20,000 people, including AI academic researchers and industry CEOs (Elon Musk, Steve Wozniak and Yuval Noah Harari to name three of the most famous).
In defence of global governments, the Bletchley Park Summit is not emerging from a vacuum – there have been previous efforts by the international community to establish an AI regulatory framework. Earlier this year, the OECD announced a Working Party on Artificial Intelligence Governance, to oversee the organisation’s work on AI policy and governance for member states. In early July, the Council of Europe’s newly formed Committee on Artificial Intelligence published its draft ‘Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law’. And as recently as early September, the G7 agreed to establish a Code of Conduct for the use of AI. It would be unbalanced to suggest the international community is sitting on its hands (although note the non-binding nature of some of the above initiatives, which are nevertheless widely publicised by their signatories with great alacrity).
Conclusion
It is hard to predict how nations and international organisations will regulate AI, given that we are grappling with an emergent technology. It is true, however, that broad patterns have emerged. It seems the UK is taking a less risk-averse approach to AI regulation than the EU, hoping that it can unlock both the economic and revolutionary power of the technology. The round table at Bletchley Park will be fascinating, given that it will most likely involve a melting pot of opinions on AI regulation. A sobering final thought: at the end of July, the White House released a statement that the Biden-Harris Administration had secured “voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI”. If the USA – the AI world leader – is only subjecting its companies to optional obligations, where does that leave the rest of us?
Dru Corfield is an associate at Fenchurch Law