The CMA has announced its initial review into certain AI models (foundation models), looking at the markets for these models and how their development might be helped and regulated. It is due to report its findings in September.

In March, the government unveiled a White Paper (i.e. a policy document issued by the government setting out proposals for future legislation) which it titled ‘A pro-innovation approach to AI regulation’. The White Paper sets out the UK’s proposals for enabling the progress of AI in a way that promotes innovation while protecting our society. The government is also keen to show that the UK is more innovative and business friendly than the EU, and what better way than to take a different approach to the EU on regulation of the topic du jour.

We looked at the White Paper in a separate article here, but the key takeaway from the White Paper was that the development of AI would need to occur in line with five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The government called on regulators such as the Competition and Markets Authority (CMA) to consider how to apply the principles in creating guidance, documentation and if necessary regulation in relation to the use of AI in the UK.

The CMA has responded by launching its review of the market for foundation models. Foundation models are AI computing models that are trained on vast amounts of data, and include large language models that power tools such as ChatGPT and other forms of ‘generative AI’.

The aim of this review is to develop an understanding of:

  • How the competitive markets for foundation models and their use could evolve;
  • What opportunities and risks these scenarios could bring for competition and consumer protection;
  • Which principles can best guide the ongoing development of these markets so that the vibrant innovation that has characterised the current emerging phase is sustained, and the resulting benefits continue to flow for people, businesses and the economy.

The findings are designed to enable the CMA to best support the development of AI markets and their use going forward, while ensuring that effective competition is maintained and consumers are protected. The CMA will also look to understand the ways in which the market for AI foundation models may develop in the near future.

The focus of the review

  • Competition and barriers to entry in the development of foundation models

The CMA is looking to understand how competition works within the market for foundation models, including exploring the barriers to entry and the roles of the models in the ecosystem. The CMA seems concerned that control over key inputs such as access to data, computing power, talent and funding may lead to the concentration of power in the hands of a few very large businesses. So far, the development of AI, and of foundation models in particular, has shown how dynamic markets can be, leading to ‘a thousand flowers blooming’, but the CMA seems wary that the initial period of dynamism might be followed by concentration of market share in the hands of a few giants holding the data, knowhow and cash.

  • The impact foundation models may have on competition in other markets

The CMA says that there are concerns (which it presumably shares) that the market might develop in ways that “frustrate competition”. There are a number of ways in which that might happen, but understanding how foundation models are being used, how accessible they are, and how important they are becoming in certain markets will be a key focus.

  • Consumer protection

The CMA calls out risks of foundation models to consumers, including the proliferation of false and misleading information and other issues of compliance with consumer protection law.

The CMA is not looking at the wider implications of AI, including the potential impact on labour markets, online safety, compliance with intellectual property laws, the effect on publishers and other content creators, or data protection and privacy.

Comment

This review comes amid concerns over the safety of generative AI, particularly misleading and false news and disruption to markets that are at the heart of our economy and society.

The CMA’s review is not looking at these issues other than through the lens of consumer law (i.e. it is not looking at risks such as those to the political process arising from AI-generated fake news, and has expressly ruled out looking at labour markets and the impact of AI on publishers). Many of these issues are being looked at by other regulators.

It is instead focusing on the broader market implications, and arguably trying to learn from what it sees as mistakes in the approach to the development of the internet and, in particular, social media.

The CMA’s review will engage a range of AI stakeholders and is not targeting specific companies, although the early leaders in the field will inevitably come under more scrutiny. The CMA’s drive towards understanding AI and its use is part of a wider trend, with the US Federal Trade Commission now “focusing intensely on… AI tools, in ways that can have actual and substantial impact on consumers”.

The CMA is accepting submissions on the review until 2 June 2023, with a report outlining the findings being due in September 2023.