As we wait for the final version of the codes of practice on illegal harms, due to be published next month, the new Data (Use and Access) Bill has been introduced to Parliament, making some changes to the online safety regime. Ofcom is consulting on researchers’ access to information about online safety, and it has also published an open letter to online service providers operating in the UK about how the UK’s Online Safety Act 2023 (OSA) will apply to Generative AI and chatbots.
The Data (Use and Access) Bill was introduced to Parliament on 23 October 2024. From an online safety perspective, it contains provisions about the retention of information by providers of internet services in connection with investigations into child deaths.
The Bill also includes a power to create a researcher data access regime. This aims to help researchers access data held by online platforms so that they can conduct robust, independent research into online safety trends. It also aims to improve transparency and strengthen the evidence base on the scale of online harms and on which measures are effective in tackling them.
With this in mind, Ofcom has published a call for evidence on independent researchers’ access to information about online safety from providers of services regulated under the OSA. Under the OSA, Ofcom must report on how, and to what extent, independent researchers access information about online safety matters from providers of regulated services. The call for evidence accordingly asks how and to what extent independent researchers currently access such information, what challenges currently constrain information sharing for these purposes, and how greater access might be achieved. It closes on 17 January 2025.
Finally, Ofcom has published a very useful open letter about how the OSA applies to Generative AI. It mentions recent incidents of concern, including the tragic death of an American teenager who had developed a relationship with a chatbot based on a Game of Thrones character, and another in which chatbots acting as ‘virtual clones’ of real people, including the deceased children Molly Russell and Brianna Ghey, appeared on a Generative AI chatbot platform.
The OSA covers websites and apps that allow their users to interact with each other by sharing images, videos, messages, comments or data with other users of the platform. AI-generated text, audio, images or videos that users share on a user-to-user service count as user-generated content and are regulated in the same way as human-generated content. For example, deepfake fraud material is regulated no differently from human-generated fraud material. It does not matter whether the content was created on the platform where it is shared or was uploaded by a user from elsewhere.
The OSA also regulates Generative AI tools and content in other ways, including:
- Generative AI tools that enable the search of more than one website and/or database are ‘search services’ under the OSA. This includes tools that modify, augment or facilitate the delivery of search results on an existing search engine, or which provide ‘live’ internet results to users on a standalone platform. For example, in response to a user query about health information, a standalone Generative AI tool might serve up live results drawn from health advice websites and patient chat forums – this would make it a search service regulated by the OSA.
- Sites and apps that include Generative AI tools that can generate pornographic material are also regulated under the OSA. These services are required to use highly effective age assurance to ensure children cannot normally access pornographic material.
Ofcom strongly encourages services to start preparing now to comply with the relevant duties. For providers of user-to-user services and search services, this means, among other requirements, undertaking risk assessments to understand the risk of users encountering harmful content; implementing proportionate measures to mitigate and manage those risks; and enabling users to easily report illegal posts and material that is harmful to children. Ofcom mentions the following steps, among others:
- having a named person accountable for compliance with the OSA;
- having a content moderation function that allows for the swift takedown of illegal posts once identified, and for children to be protected from material that is harmful to them;
- having a content moderation function that is adequately resourced and well trained;
- using highly effective age assurance to prevent children from encountering the most harmful types of content, where such content is allowed on the platform; and
- having easy-to-access and easy-to-use reporting and complaints processes.
The OSA’s duties are mandatory. If companies fail to meet them, Ofcom is prepared to take enforcement action, which may include issuing fines. For background on the timelines, see here.