
The AI Safety Summit – a little less conversation, a little more action?

The AI Safety Summit, hosted by the UK at Bletchley Park, has certainly attracted key political heavyweights including the EU's Ursula von der Leyen, UN Secretary-General António Guterres, US Vice President Harris (although not President Biden himself), as well as representatives from China's Ministry of Science and Technology. Academics and tech leaders, notably OpenAI's Sam Altman and X's Elon Musk, were also in attendance. There were, though, notable absences including the President of France and the German Chancellor, and there have been complaints that civil society and campaign groups were not afforded a sufficient presence.

Many will agree with UK Prime Minister Sunak's view that global consensus is the only genuinely effective path to managing potential AI-related doomsday scenarios, but it's important to ask what the summit has really achieved. Getting a wide range of power brokers to sit down and discuss the issues is certainly an important step, and the positioning of the UK as rainmaker has been moderately successful. However, Sunak's communiqué, now signed by politicians from a wide range of countries including the US, China, Nigeria, Canada and Singapore, stops short of calling for specific AI regulation.

This is in line with the UK government's policy outlined in its 2023 White Paper on AI, but at odds with the EU's approach, which is to introduce AI-specific legislation. The communiqué is ambitious (calling for international co-operation and for inclusivity), but there is no call for specific AI regulation or enforcement, and the 20+ signatory countries obviously fall far short of global coverage.

However, the summit does appear to be the start of something big – a change in mood music, perhaps. For example, there are commitments for further summits in the years ahead. Significantly, the pledges to establish AI Safety Institutes in the UK and the US to test AI technology before its release onto the market also indicate a desire for cross-border collaboration on evaluating risks and promoting safety, as well as – in theory at least – collaboration between Big Tech and governments. 

Getting to a place of global agreement on AI regulation at this point was always going to be a tough ask. In the first place, there is disagreement as to the nature of the safety issues posed by AI and whether we should be focusing on future existential threats or on the currently destabilising potential of deepfakes and disinformation (or indeed how to effectively focus on both concerns). It's also hard to envisage progress on AI safety regulation keeping up with the pace of technological advances. 

The Prime Minister himself acknowledged that the rapid development of technology is in tension with the time and resources required to consult, draft and implement legislation, but there are strong voices calling for some form of international oversight body. Perhaps what form such a body should take will be high on the agenda of the next summit, but for the foreseeable future, a fragmented approach to the safety concerns around AI will persist.

