Over the past few weeks there have been various scare stories about the rise of AI, as well as calls for its development to be paused, with some claiming that AI will kill humans. However, Margrethe Vestager, who has the snappy job title of Executive Vice-President of the European Commission for a Europe fit for the Digital Age (Competition), has said that AI's potential to amplify bias or discrimination is a more pressing concern.

Against this backdrop, consumer groups from 13 European countries have called on their national regulators to launch urgent investigations into the risks of generative AI, such as ChatGPT, and to enforce existing legislation to protect consumers.

This coincides with the publication of a new report by Forbrukerrådet (a Norwegian consumer organisation), which sets out what it perceives to be the many risks of generative AI, the existing rules that can protect consumers, and the rules that still need to be developed.

BEUC, the European consumer organisation, had already written to consumer safety and consumer protection authorities in April, calling on them to launch investigations because of the breadth and speed of the rollout of generative AI models such as ChatGPT, and the possible harms resulting from their deployment. The European Data Protection Board has also created a taskforce to look into ChatGPT.

Summary of the report

The report published by Forbrukerrådet summarises various current and emerging challenges, risks, and harms of generative AI. These include:

  • power, transparency, and accountability, where certain AI developers have closed off their systems from external scrutiny, making it very hard to understand how data has been collected or how decisions are made.
  • wrong or inaccurate output, where generative AI systems have failed to understand context or have even made up non-existent sources to support the content generated.
  • using technology to manipulate or mislead consumers, for example, by emulating human speech patterns and using emotive language.
  • bias and discrimination. For example, image generators tend to portray CEOs as white males, and people who wash up dishes in restaurants as women of colour.
  • privacy and personal integrity. For example, image generators can use datasets taken from search engines or social media without a lawful basis or the knowledge of the people in the pictures. Text generators could include personal data from individuals which may be taken out of context.
  • security vulnerabilities. Generative AI systems could be used by scammers to generate large amounts of convincing-looking text to deceive victims.

The report covers proposed EU legislation, such as the AI Act, the revised Product Liability Directive and the AI Liability Directive. It also mentions industry codes of conduct, but argues that these are not sufficient on their own, as they usually cater to the lowest common denominator.

The EU's approach is arguably unworkable, as most generative AI in use now would fall foul of the new laws. The UK is currently planning to go it alone with its pro-innovation approach to AI regulation (ie regulation lite), emphasising the benefits of AI in areas such as medicine and climate change, among others. It has also set up a Foundation Model Taskforce, with £100m of initial funding. Since the government issued its White Paper in March, Ofcom has produced its own statement on generative AI in the communications sector.

One other issue is the environmental impact - training models on large datasets and running them is very energy intensive and adds significantly to the carbon footprint. It is well known that crypto-mining is energy intensive, but people are perhaps less cognisant of the equivalent issues with generative AI.

Watch this space!