On 28 May, the Science, Innovation and Technology Committee published its report on “Governance of artificial intelligence (AI)”. 

The report “examines domestic and international developments in the governance and regulation of AI since the publication of our interim Report. It also revisits the Twelve Challenges of AI Governance we identified in our interim Report and suggests how they might be addressed by policymakers. Our conclusions and recommendations apply to whoever is in Government after the General Election”.

Some of the key recommendations to the Government made in the report include:

  1. Increase public trust in AI as it becomes a bigger part of our lives
  2. Address the 12 challenges of AI governance (which the Committee set out in its interim report and which we discuss in our previous article here)
  3. Introduce legislation as and when appropriate (namely, if the current approach to AI regulation appears insufficient), learning lessons from the regulatory approaches adopted by other jurisdictions and clarifying the position on liability for harmful uses of AI
  4. Publish the results of its regulatory gap analysis “to determine whether regulators require new powers to respond properly to the growing use of AI”
  5. Provide additional financial support to regulators tasked with overseeing AI risks
  6. Support the “safe” adoption of AI in the public sector via i.AI, the AI incubator which “help[s] departments harness the potential of AI to improve lives and the delivery of public services”
  7. Require deployers of AI models and tools to “submit them [to] … robust, independent testing and performance analysis prior to deployment”, as well as “summarise what steps they have taken to account for bias in datasets used to train models, and to statistically report on the levels of bias present in outputs produced using AI tools”
  8. Take steps to address the consequences of AI-assisted misrepresentation, including in relation to the upcoming general election
  9. Adopt a regulatory approach which recognises the “black box challenge” associated with the use of AI and encourage regulators to “prioritise testing and verifying their outputs, as well [as] seeking to establish—whilst accepting the difficulty of doing so with absolute certainty—how they arrived at them”
  10. Conclude discussions regarding “the use of copyrighted works to train and run AI models” (which the Committee believes will inevitably “involve the agreement of a financial settlement for past infringements by AI developers, the negotiation of a licensing framework to govern future uses, and in all likelihood the establishment of a new authority to operationalise the agreement”)

The UK Government has two months to respond to the report and its recommendations. With the general election only a few weeks away, it is not clear whether whoever is in power after 4 July will be able to meet this deadline.

Meanwhile, as yet another report calls on the UK Government to take action on specific AI governance issues, across the Channel the EU’s AI Act enters its final stretch. It has been reported that the Act will be signed off later this week, though it will only be published in July 2024 due to a significant backlog of legislation.