In response to the AI Regulation White Paper consultation, government is backing regulators with the skills and tools needed to address the risks and opportunities of AI.
Government has set aside £10 million to prepare and upskill regulators, enabling them to develop research and practical tools to monitor and address the associated challenges and opportunities in their sectors, from telecoms and healthcare to finance and education. For example, this might include new technical tools for examining AI systems.
Almost £90 million will go towards launching nine new research hubs across the UK and a partnership with the US on responsible AI. The hubs will support British AI expertise in using the technology across areas including healthcare, chemistry and mathematics.
A further £2 million of Arts & Humanities Research Council (AHRC) funding has also been announced, which will support new research projects that will help to define responsible AI across sectors such as education, policing and the creative industries. These projects are part of the AHRC’s Bridging Responsible AI Divides programme.
Another £19 million will go towards 21 projects to develop innovative trusted and responsible AI and machine learning solutions to accelerate deployment of these technologies and drive productivity. This will be funded through the Accelerating Trustworthy AI Phase 2 competition, supported through the UKRI Technology Missions Fund and delivered by the Innovate UK BridgeAI programme.
Many regulators have already taken action on AI. The Information Commissioner’s Office, for instance, has updated its guidance on how the UK’s strong data protection laws apply to AI systems that process personal data, including on fairness, and has continued to hold organisations to account, such as by issuing enforcement notices.
However, government wants to build on this by further equipping regulators to deal with AI as its use ramps up. The regulatory system will also allow regulators to respond rapidly to emerging risks, while giving developers room to innovate and grow in the UK.
In a drive to boost transparency and provide confidence to British businesses and citizens, key regulators, including Ofcom and the Competition and Markets Authority, have been asked to publish their approach to managing the technology by April 30. They will be required to set out AI-related risks in their areas, and detail their current skillset and expertise to address them, as well as a plan to regulate AI over the coming year.
This forms part of the AI Regulation White Paper consultation response, published today, which carves out the UK’s own approach to regulation and will ensure it can quickly adapt to emerging issues while avoiding burdens on business that could stifle innovation.
This approach to AI regulation will mean the UK can be more agile than competitor nations, while also taking the lead on AI safety research and evaluation, putting it ahead in safe, responsible AI innovation.
Government’s context-based approach will enable regulators to address AI risks in a targeted way.
It has also set out its initial vision for future binding requirements, which could be introduced for developers building the most advanced AI systems, to ensure they are accountable for making these technologies sufficiently safe.
Secretary of State for Science, Innovation, and Technology, Michelle Donelan said, “The UK's innovative approach to AI regulation has made us a world leader in both AI safety and AI development.
“I am personally driven by AI's potential to transform our public services and the economy for the better – leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.
“AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”
Government will also launch a steering committee in the spring to support and guide the activities of a formal regulator coordination structure within government.
These measures are alongside the £100 million invested by government in the AI Safety Institute, which evaluates the risks of new AI models.
The International Scientific Report on Advanced AI Safety, unveiled at the AI safety summit at Bletchley Park in November 2023, will also help to build a shared, evidence-based understanding of frontier AI. Meanwhile, the work of the AI Safety Institute will enable the UK to work closely with international partners to boost its ability to evaluate and research AI models.
Government has further committed to that approach with a £9 million investment through its International Science Partnerships Fund, bringing together researchers and innovators in the UK and the US to focus on developing safe, responsible and trustworthy AI.
Hugh Milward, vice-president of external affairs at Microsoft UK, said, “The decisions we take now will determine AI’s potential to grow our economy, revolutionise public services and tackle major societal challenges, and we welcome the government’s response to the AI White Paper.”
Tommy Shaffer Shane, AI policy advisor at the Centre for Long-Term Resilience, said, “We’re pleased to see this update to the government’s thinking on AI regulation, and especially the firm recognition that new legislation will be needed to address the risks posed by rapid developments in highly-capable general purpose systems.”
Julian David, CEO of TechUK, said, “TechUK welcomes the government’s commitment to the pro-innovation and pro-safety approach set out in the AI White Paper. We now need to move forward at speed, delivering the additional funding for regulators and getting the central function up and running. Our next steps must also include bringing a range of expertise into government, identifying the gaps in our regulatory system and assessing the immediate risks.”
Kate Jones, chief executive of the Digital Regulation Cooperation Forum (DRCF), said, “The DRCF member regulators are all keen to maximise the benefits of AI for individuals, society and the economy, while managing its risks effectively and proportionately.
“To that end, we are taking significant steps to implement the White Paper principles, and are collaborating closely on areas of shared interest including our forthcoming AI and digital hub pilot service for innovators.”