Artificial intelligence is the buzzword of the moment, and rightly so: in the first half of 2023 alone, hundreds of AI-powered tools were released, each promising to transform the way we live and work. However, as the popularity and functionality of AI grow, so too do calls for the regulation and governance of this evolving technology.
On 31 August 2023, the House of Commons Science, Innovation and Technology Committee published an interim report exploring both the current position of AI governance and the challenges it is expected to face (see the Governance of Artificial Intelligence: Interim Report). Whilst the report acknowledges the benefits of AI (calling it “a general-purpose, ubiquitous technology [that] has affected many areas of society and the economy”), it details 12 challenges that the Government’s approach to AI governance and regulation should address in order to ensure that “the public interest is safeguarded and, specifically, potential harms to individuals and society are prevented”.
- The Bias Challenge: Because most AI tools are trained on datasets compiled by humans, there is a risk that bias is encoded into the tools themselves, which could replicate the biases of, and perpetuate discrimination in, society.
- The Privacy Challenge: The report acknowledges that, because AI tools rely on varied and often detailed datasets, “AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants”.
- The Misrepresentation Challenge: AI tools can create material that “misrepresents behaviour, opinions, and character”, including via the phenomenon of “fake news”.
- The Access to Data Challenge: AI tools require very large pools of data for training. Access to such huge datasets can be difficult and is often only available to the best-resourced developers, raising potential competition concerns.
- The Access to Compute Challenge: In addition to data, AI tools require significant computing power (“compute”) for their development. The cost of accessing such power means that availability is limited to a handful of organisations.
- The Black Box Challenge: The decision-making processes underlying some AI technologies cannot be explained, raising transparency concerns. There are calls for AI systems, rather than remaining black boxes, to detail the processes by which they generate an output.
- The Open-Source Challenge: There are disagreements within the industry regarding whether the code used to create AI tools should be open-source. Those in favour of open-source code argue that it can encourage innovation, whilst opponents suggest that proprietary code can prevent misuse of the models.
- The Intellectual Property and Copyright Challenge: There is ongoing concern regarding the use (or “scraping”) of copyrighted material to train AI tools without the permission of the copyright holder.
- The Liability Challenge: As the industry moves towards more complex and international supply chains for AI tools, policy must establish who is liable for any harm arising from such tools.
- The Employment Challenge: As the capabilities of AI evolve, there is concern that jobs (particularly those that involve routine, repetitive labour, and low-level decision-making) may become automated.
- The International Coordination Challenge: Given the geographically far-reaching potential of AI technologies, there will need to be an internationally coordinated response to its governance.
- The Existential Challenge: Experts have raised concerns that the “use of AI models and tools could threaten or undermine national and/or international security” including via “the development of novel biological or chemical weapons, the risk of unintended escalation and the undermining of nuclear deterrence”. If this is a possibility, governance needs to provide protections for national security.
This interim report follows the UK Government’s release of the AI White Paper in March 2023, which recommended that the UK adopt a flexible, “pro-innovation” approach to AI regulation, in contrast to the more prescriptive regime put forward by the European Union.
However, the 12 key challenges outlined in this interim report are a strong indication that the UK Government is changing tack on its recommended approach to AI regulation, seemingly now favouring a more comprehensive regime to tackle the varied and complex challenges posed by AI. Indeed, the report concludes: “The AI white paper should be welcomed as an initial effort to engage with a complex task. However, the approach outlined [in the White Paper] is already risking falling behind the pace of development of AI”.
The UK Government certainly aims to be seen as a world leader in regulating AI, given the UK’s long history of technological innovation and regulatory expertise. We’ll be keeping a close eye on how the UK’s domestic AI regulation progresses, and we expect a more detailed framework to emerge as the UK develops its legislative approach (which is unlikely to be in force for at least another 24 months).
Governance of Artificial Intelligence: Interim Report: https://publications.parliament.uk/pa/cm5803/cmselect/cmsctech/1769/report.html