
No AI-specific regulation in the UK – yet

As the EU's AI Act moves rapidly towards enactment, the UK is holding firm in adopting an alternative approach to regulating AI. On 6 February 2024, the Department for Science, Innovation and Technology published the UK government's response to the consultation on its March 2023 AI White Paper.

The government confirms that its overall strategy has not changed as a result of the consultation. For now, the intention is to rely on sector-based regulation informed by five (unchanged) AI principles, rather than introduce AI-specific legislation. Relevant regulators, including Ofcom, the ICO, the FCA, the CMA and the MHRA, are to publish an outline of their regulatory approach by 30 April 2024, supported by new government guidelines (some of which were published alongside the response) and the AI Standards Hub. £100m has been allocated for new AI innovation and to enhance the capability of the regulators.

The government does, however, make clear that legislation may be needed in future, notably in relation to the most advanced (or highly capable) general purpose AI. This is in keeping with messaging, particularly around the November 2023 AI Safety Summit, when calls for an international oversight body and some form of global consensus on AI regulation came to the fore. We are, however, some way from that point - there isn't even agreement on terminology at the moment. 

One area where the government does propose to legislate is in relation to automated decision-making. However, this will be done through the Data Protection and Digital Information Bill where the government proposes to expand the lawful bases for processing personal data to reach solely automated decisions which have a legal or similarly significant effect on individuals. 

The consultation response includes a detailed 2024 roadmap, but this is set out against the background of an upcoming general election, currently expected (but not certain) to take place in November 2024. There are already divisions emerging between the approaches of the Conservative and Labour Parties. At the AI Safety Summit, leading AI companies agreed to voluntary cooperation with governments on testing advanced AI models and, on 9 February 2024, the government published guidance on the AI Safety Institute's (AISI) approach to evaluations and testing of advanced AI systems. Should the Labour Party win the next general election, however, it seems likely to pursue a more interventionist strategy. Labour recently said it plans to introduce a statutory regime which would replace the current voluntary arrangement. It proposes requiring firms to tell the government when they are developing AI systems over a specified capability level and to conduct safety testing with independent oversight. 

There is also a difference in approach between the strategy outlined in the consultation response and that advocated by the House of Lords Communications and Digital Committee, which published its report on 'Large language models and generative AI' on 2 February 2024. The report concludes that the government's strategy focuses too much on AI safety and not enough on up-skilling, commercial opportunity and technical skills. It says the UK needs to rebalance towards boosting opportunities or risk losing influence and becoming too dependent on overseas tech firms. It is also critical of the government's stance on copyright and generative AI, and urges it to produce clear guidance, if not legislation, to protect rights holders.

Perhaps this lack of consensus adds weight to the ‘watching brief’ approach to legislating adopted by the current government. Time will tell whether using existing law and regulators will be more effective in managing AI risks without stifling innovation, than the EU's more prescriptive approach.


Tags

technology media & communications, copyright & media law, data protection & cyber, information technology, artificial intelligence & machine learning, ai, financial services regulatory