Ofcom has published an Illegal Harms Further Consultation to inform its enforcement of the Online Safety Act 2023 (the “Act”) in respect of content depicting animal cruelty and human torture when the legislation comes into full effect next year.

The Act, which is being implemented in phases through to early 2025, requires all in-scope service providers to protect their users from encountering illegal content – meaning content that relates to a “relevant offence” – on their platforms. A “relevant offence” is either a “priority” offence or a “non-priority” offence. The former category comprises a list of more than 130 specific offences that Parliament identified as giving rise to the most harmful content, and therefore as demanding particularly stringent user protections; these priority offences are specified in Schedules 5, 6 and 7 to the Act.

The animal cruelty offence

In a late-stage amendment to the Online Safety Bill, as it then was, the Government added section 4(1) of the Animal Welfare Act 2006 (the “animal cruelty offence”) to the prescribed list of priority offences. Ofcom is concerned, however, that the offence, which criminalises behaviour that causes or is likely to cause unnecessary suffering to animals, does not cover the full range of damaging online content depicting animal cruelty.

The animal cruelty offence is committed by a person where they know, or ought reasonably to know, that their conduct would cause, or would be likely to cause, unnecessary suffering to a protected animal. Ofcom considers that the difficulty with this particular offence is that posting online content which depicts animal cruelty does not itself cause an animal to suffer, so while the content may be harmful and dangerous, it will not necessarily amount to an offence.

While the Act makes it an offence to conspire with, assist or encourage other people to commit a priority offence, Ofcom’s consultation notes that the animal cruelty offence still does not cover all of the kinds of online content associated with animal cruelty: there is a risk, it says, that “taken in isolation, the priority animal cruelty offence does not deal with pre-recorded animal cruelty in a suitably robust way… because not all pre-recorded depictions of cruelty to animals will amount to encouraging, assisting or conspiring to commit an action that would cause, or be likely to cause, unnecessary suffering to an animal.”

The resulting risk is that in-scope providers will not be obliged to take down content depicting animal cruelty: in Ofcom’s words, a “deeply undesirable policy outcome”, given the wealth of evidence that such content is “deeply damaging, and unacceptable to any right-thinking person”.

Content depicting human torture

In a similar vein, Ofcom’s consultation voices concern about depictions of real-life human torture which do not fall within the remit of a priority offence, such as terrorism or extreme pornography, since service providers may decide against taking action in respect of that content.

Ofcom’s consultation acknowledges that democracy demands that extreme cruelty to both humans and animals is depicted from time to time, for example in war reporting or in dramatisations of violent historical events. Ofcom clarifies that it is instead concerned with ensuring the regulation of “a class of content which goes well beyond what is acceptable in a pluralistic society, which is created and shared by or for those who enjoy sadism.”

The improper use of public electronic communications networks

Having identified these weak spots in the current list of priority offences, Ofcom considers that section 127(1) of the Communications Act 2003, a non-priority offence, is best placed to ensure that providers deal appropriately with content depicting animal cruelty or human torture. The offence prohibits the improper use of public electronic communications networks by sending, or causing to be sent, a message which is “grossly offensive or of an indecent, obscene or menacing character”. Subject to the consultation’s findings, Ofcom plans to publish guidance on the application of s.127(1) in this context, so that content which falls through the gaps between the priority offences can still be taken down efficiently.

Non-priority offences: what does compliance look like?

Relevant non-priority offences under the Act are offences under UK law which are not priority offences but which meet the following criteria:

  • the victim of the offence is an individual;
  • the offence is created by the Act or another piece of legislation, an Order in Council or another relevant instrument; and
  • the offence does not concern IP infringement, the safety or quality of goods, or the performance of a service by an unqualified person, and is not an offence under the Digital Markets, Competition and Consumers Act 2024 (which replaces the Consumer Protection from Unfair Trading Regulations 2008).

Ofcom’s November 2023 Online Harms Consultation paper sets out the compliance thresholds for priority and non-priority offences respectively. User-to-user platforms must carry out risk assessments which separately assess each of the 15 kinds of illegal harm covered by the list of priority offences, and must confirm the level of risk that their service will be used to commit or facilitate the commission of those offences. This is not required for non-priority offences: providers need only assess the risk of harm where they have reason to believe that specific non-priority offences are likely to be committed on their platform.

In respect of non-priority offences, user-to-user platforms must also:

  • take down non-priority illegal content swiftly when they are made aware of it; and 
  • effectively mitigate and manage the risks of harm to individuals in line with their risk assessments.

For both priority and non-priority illegal content, search services must use proportionate systems and processes to effectively mitigate and manage the risks of harm to individuals. In respect of priority offences, they must also operate their service so as to proportionately minimise the risk of individuals encountering illegal content; this latter requirement extends to non-priority offences only where the service has been made aware of the relevant illegal content.

The consultation will close on 13 September 2024, and Ofcom’s guidance will follow in its December 2024 Illegal Harms Statement.