Another week, another plethora of AI regulation-related updates – and this time, we’re not talking about the EU AI Act.

Communications and Digital Committee report on LLMs and generative AI

On 2 February (the same day that the Council of the EU's Committee of Permanent Representatives endorsed the EU’s draft AI Act – yes, we know we promised we wouldn’t mention it…), the Communications and Digital Committee of the House of Lords published its report on large language models (LLMs) and generative AI. In it, the Committee confirms its support for “the Government’s overall approach and welcome[s] its successes in positioning the UK among the world’s AI leaders” but notes that the Government “has recently pivoted too far towards a narrow focus on high‑stakes AI safety”. Accordingly, the Committee calls for a “rebalance … involving a more positive vision for the opportunities and a more deliberate focus on near‑term risks”.

What does this “rebalance” look like according to the Committee? Well, the recommendations proposed in the report are wide-ranging and include:

  1. Preparation: The UK must prepare for a “period of protracted international competition and technological turbulence as it seeks to take advantage of the opportunities provided by LLMs”.
  2. Guard against regulatory capture: The Government must prioritise competition in the AI market and guard against “regulatory capture”. This refers to a situation where certain influential players manipulate a regulatory framework to their advantage, either through lobbying or because officials lack technical knowledge and come to rely on a limited group of private sector actors.
  3. Nuanced approach to models: The risks of open and closed LLMs should be balanced and a “nuanced approach” implemented. While open models provide greater access and competition, they also carry the risk of uncontrollable proliferation of dangerous capabilities. Closed models, on the other hand, attract less scrutiny and pose a higher risk of concentrated power, but are more easily controlled.
  4. Rebalance strategy: To date, the Government’s objectives in the AI space have focused on a narrow view of AI safety. This must be rebalanced to take advantage of the opportunities presented by LLMs.
  5. Boost opportunities: The report calls for “a suite of measures to boost computing power and infrastructure, skills, and support for academic spinouts”, as well as “a sovereign LLM capability, built to the highest security and ethical standards”. 
  6. Support copyright: The Government must support copyright by “prioritis[ing] fairness and responsible innovation … resolv[ing] disputes definitively (including through updated legislation if needed); empower[ing] rightsholders to check if their data has been used without permission; and invest[ing] in large, high‑quality training datasets to encourage tech firms to use licenced material”.
  7. Address security risks: The risks posed by LLMs are high and include threats to public safety and financial security. Accordingly, additional protection and mitigation measures are required, including measures to tackle harms related to discrimination, bias, and data privacy.
  8. Review catastrophic risks: The report concludes that catastrophic risks are “not likely within [the next] three years but cannot be ruled out”. Therefore, the Committee recommends that “[m]andatory safety tests for high‑risk high‑impact models are also needed”.
  9. Empower regulators: Sector regulators should be better resourced to deliver on the objectives of the Government’s previously published AI Regulation White Paper.
  10. Regulate proportionately: The report espouses the benefits of “strategic flexibility” in regulation, with the priority being to “develop accredited standards and common auditing methods at pace to ensure responsible innovation, support business adoption, and enable meaningful regulatory oversight”.

Commenting on the report, the Chairman of the House of Lords Communications and Digital Committee, Baroness Stowell of Beeston, said that “we expect the Government to act on the concerns we have raised and take the steps necessary to make the most of the opportunities in front of us.”

Government response to AI Regulation White Paper

Helpfully, mere days later (on 6 February), the Government published its response to the AI Regulation White Paper consultation, indirectly addressing some of the concerns raised by the House of Lords. In particular, the Government has pledged to:

  1. Invest in regulators and research: The Government has promised a raft of new funding in the AI space. First, £10 million will be invested to “prepare and upskill regulators to address the risks and harness the opportunities” of AI. This will be followed by nearly £90 million to launch nine new research hubs in the UK, which will support experts in “harnessing the technology across areas including healthcare, chemistry, and mathematics”. Finally, £2 million of Arts and Humanities Research Council funding will be provided to support research projects exploring “what responsible AI looks like across sectors such as education, policing and the creative industries”.
  2. Promote proportionate, context-based, agile regulation: The UK’s approach to AI regulation will, according to the Government, “ensure it can quickly adapt to emerging issues and avoid placing burdens on business which could stifle innovation. This approach to AI regulation will mean the UK can be more agile than competitor nations…”. Indeed, the Government confirms that it will “not rush to legislate” and will instead implement a context-based approach which “means existing regulators are empowered to address AI risks in a targeted way”. In particular, the Government has not ruled out introducing “targeted binding requirements on the small number of organisations that are currently developing highly capable general-purpose AI systems, to ensure that they are accountable for making these technologies sufficiently safe”.
  3. Invest in safe and trustworthy AI (particularly highly capable, general-purpose AI): Following the successful launch of the AI Safety Institute and the AI Safety Summit in November last year, £9 million will be invested in working with international counterparts to develop “safe, responsible, and trustworthy AI”. A further £19 million will be invested in 21 projects to develop “innovative trusted and responsible AI and machine learning solutions”.
  4. Collaborate with international partners: AI is a global technology with far-reaching opportunities. As such, the Government has committed to “continue to act through bilateral partnerships and multilateral initiatives – including future AI Safety Summits – to promote safe, secure, and trustworthy AI, underpinned by effective international AI governance”. 

As the EU inches closer to approving a groundbreaking AI regulatory framework with the introduction of the EU AI Act (oops – we’ve mentioned it twice…), the UK government insists on remaining “agile”. Interestingly, this agility is intended to be demonstrated through regulators issuing guidance on best practice for responsible AI design. Nevertheless, earlier this week the UK Intellectual Property Office (IPO) scrapped its plan to publish a code of conduct for training AI models on copyright-protected content. The Government now says that it wants to work closely with rightsholders and AI developers to explore mechanisms for providing greater transparency, so that rightsholders can better understand whether the content they produce is being used as input for AI models. It is not clear why the Government thinks it can achieve a better result where the IPO has failed, particularly as it has been criticised by the House of Lords for failing to articulate its legal understanding of the AI and copyright issue.

However, what is clear is that this debate cannot continue indefinitely. While the Government says that further proposals are on the way, it remains unclear how or when it may choose to legislate. Perhaps the Government is waiting to see how the EU’s AI Act (third time’s a charm!) plays out in practice before putting pen to paper. Whatever the reason may be, it looks like we’ll need to wait a little longer for any more definitive steps.