On 22 November, the Artificial Intelligence (Regulation) Bill - a private members' bill - had its first reading in the House of Lords.

It's important to remember that only a minority of private members' bills become law, and at just eight pages the Bill doesn’t contain a huge amount of content. However, it does propose that:

  1. The government should create a body called the AI Authority which will monitor the use and regulation of AI in the UK based on a number of key principles, including:
    1. regulation of AI should deliver (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; (v) contestability and redress; 
    2. any business which develops, deploys or uses AI should (i) be transparent about it; (ii) test it thoroughly and transparently; (iii) comply with applicable laws, including in relation to data protection, privacy and intellectual property; 
    3. AI and its applications should (i) comply with equalities legislation; (ii) be inclusive by design; (iii) be designed so as neither to discriminate unlawfully among individuals nor, so far as reasonably practicable, to perpetuate unlawful discrimination arising in input data; (iv) meet the needs of those from lower socio-economic groups, older people and disabled people; (v) generate data that are findable, accessible, interoperable and reusable; and
    4. a burden or restriction which is imposed on a person, or on the carrying on of an activity, in respect of AI should be proportionate to the benefits.
  2. The AI Authority should collaborate with other regulators to construct regulatory sandboxes for AI. Further, it must implement a programme for public engagement about the opportunities and risks presented by AI.
  3. Any business which develops, deploys or uses AI should have a designated AI officer to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business (including by ensuring that data used by the business in any AI technology is unbiased). In addition, information about the development, deployment or use of AI by that business should be included in the company’s strategic report under section 414C of the Companies Act 2006.
  4. Any person involved in training AI should:
    1. supply a record of all third-party data and IP used in that training to the AI Authority; and
    2. assure the AI Authority that all such data and IP are used with consent (either express or implied) and in compliance with all applicable IP and copyright obligations.
  5. Any person supplying a product or service involving AI should give customers clear and unambiguous health warnings, labelling and opportunities to give or withhold consent (either express or implied) in advance.
  6. Any business which develops, deploys, or uses AI should allow independent third parties accredited by the AI Authority to audit its processes and systems.

As far as wish lists for AI regulation go, the Bill is a pretty good start, covering everything from macro-level international regulatory collaboration (via the introduction of an AI Authority with a broad supervisory remit) to more targeted governance of AI usage within businesses (via the new requirement for businesses to appoint an AI officer). However, there are still obvious gaps in the Bill’s remit: it does not, for example, address whether fines will apply to businesses that breach these new rules, or how the use of AI by bad actors will be monitored and, potentially, sanctioned.

The key principles contained in the Bill may remind you of those set out in a number of international initiatives and frameworks, including the OECD AI Principles (2019), the EU’s Ethics Guidelines for Trustworthy AI (2019) and the EU’s White Paper on Artificial Intelligence (2020). These policy documents have heavily influenced the ongoing development of the EU’s AI Act (currently being finalised by the EU’s lawmakers) as well as legislative initiatives relating to AI elsewhere in the world. However, translating these high-level principles into concrete, enforceable legal requirements can be tricky, as the EU’s experience has shown. The key question for the UK now is how its legislators will reconcile the tension between the UK government’s preference for a ‘light-touch’, pro-innovation approach to AI regulation (see the UK’s White Paper from earlier this year) and the need to turn the principles of the Bill into practical, actionable regulation. It may be that they can't, and the Bill doesn't garner enough support to pass, even if modified.

Although it's unlikely the Bill will become law in its current form, it will be interesting to see whether its publication influences a public bill in the coming months.