From police facial recognition software to algorithms targeting benefit fraud, the Equality and Human Rights Commission (EHRC) has made challenging discrimination in the public sector’s use of artificial intelligence (AI) a major strand of its new three-year strategy.
The EHRC has published guidance to help public bodies avoid breaches of equality law, including the public sector equality duty. While the guidance is aimed at public sector bodies, private sector companies that provide services and software to such organisations should also take note.
It gives practical examples of how AI systems may produce discriminatory outcomes, including:
- a police force training facial recognition software on photos of suspects, where those photos are disproportionately of younger people and Black people because the force arrests people with these protected characteristics more often than others, thereby reinforcing an existing bias; and
- a public body using an online portal to decide how to allocate grants for community projects, which finds that grant applications for Bangladeshi and Pakistani community projects are turned down more often than those for other groups. On investigation, it finds that the algorithm identifies certain postcode areas as posing a greater risk of project failure, and that these postcodes turn out to have higher proportions of Bangladeshi and Pakistani residents. This may amount to indirect discrimination because the AI applies to everyone but, without a justifiable reason, disadvantages a group of people who share a protected characteristic. A simple sketch of how such a disparity might be surfaced in the decision data is set out below.
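By way of illustration only, the sketch below shows one way a grants team might check for the kind of disparate outcome described in the second example. The decision records, group labels and the 0.8 screening threshold are hypothetical assumptions, not part of the EHRC guidance or any real public body's system.

```python
# Illustrative sketch only: hypothetical data and column names.
# It shows one way to check whether an algorithm's postcode-based
# risk banding is producing disparate rejection rates across groups.
from collections import defaultdict

# Hypothetical decision log: (ethnic_group, postcode_risk_band, approved)
decisions = [
    ("Bangladeshi",   "high",   False),
    ("Bangladeshi",   "high",   False),
    ("Pakistani",     "high",   False),
    ("Pakistani",     "medium", True),
    ("White British", "low",    True),
    ("White British", "low",    True),
    ("White British", "medium", True),
    ("Indian",        "low",    True),
]

def approval_rates(rows):
    """Approval rate per ethnic group, revealing any disparate impact."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, _band, ok in rows:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
benchmark = max(rates.values())
for group, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    # Flag groups approved well below the best-performing group;
    # the 0.8 threshold is a common screening heuristic, not a legal test.
    flag = "REVIEW" if rate < 0.8 * benchmark else "ok"
    print(f"{group:15s} approval rate {rate:.0%}  [{flag}]")
```

In practice a public body would also want to look behind the headline rates, for example at whether a postcode-based risk score is acting as a proxy for ethnicity, as in the EHRC's example.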
The guidance makes the point that some types of service will be more relevant to some protected characteristics than others.
However, it also identifies scenarios where AI can be used to improve outcomes, for example by identifying people at greater risk of missing medical appointments, such as some disabled people and people who speak different languages.
The guidance highlights that when public bodies buy these kinds of systems there is a risk that their staff don’t understand the AI, both in terms of how it works and the data sets it uses to make decisions. If public bodies don’t know how the AI works, it will be difficult to make sure that it is working as intended and making fair decisions. The EHRC advises public sector organisations to consider how the public sector equality duty applies to automated processes, to be transparent about how the technology is used and to keep systems under constant review, and the guidance includes a useful checklist of points to consider when developing new software.
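As a minimal sketch of what keeping a system "under constant review" might look like in practice, the snippet below compares how often an automated tool flags each group against the overall flag rate and logs anything that drifts beyond a chosen tolerance. The group names, figures and tolerance are hypothetical assumptions, and this is not a method prescribed by the EHRC guidance.

```python
# Hypothetical ongoing-review check: compare per-group adverse-flag rates
# (e.g. "fraud risk" flags) against the overall rate and log any group
# flagged noticeably more often. Data and thresholds are placeholders.
import logging
from typing import Dict

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def review_flag_rates(rates: Dict[str, float], overall: float,
                      tolerance: float = 0.05) -> None:
    """Warn about any group flagged more often than the overall rate plus tolerance."""
    for group, rate in rates.items():
        if rate > overall + tolerance:
            logging.warning("Group %s flagged at %.1f%% vs overall %.1f%% - investigate",
                            group, rate * 100, overall * 100)
        else:
            logging.info("Group %s flagged at %.1f%% - within tolerance",
                         group, rate * 100)

# Hypothetical monthly figures from an automated screening tool.
review_flag_rates(
    rates={"Group A": 0.04, "Group B": 0.11, "Group C": 0.05},
    overall=0.05,
)
```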
Private companies supplying AI tech should therefore be prepared for public sector customers to start asking more detailed questions about how the tech works and requiring more support in reviewing how it is being used. We may also see public sector organisations pushing for enhanced contractual protection in this area, for example warranties that the AI systems are not just compliant with applicable law, but are specifically compliant with equality law including the public sector equality duty.
Next steps
From October, the EHRC says that it will work with about thirty local authorities to understand how they are using AI to deliver essential services, such as benefits payments, amid concerns that automated systems are inappropriately flagging certain families with protected characteristics as a fraud risk.
The EHRC is also exploring how best to use its powers to examine how organisations are using facial recognition technology, following concerns that software may be disproportionately affecting people from ethnic minorities. It hopes that its interventions will improve how organisations use AI and encourage public bodies to take action to address any negative effects on equality and human rights. The monitoring projects will last several months and the EHRC will report initial findings early next year.
AI systems may lead to discrimination and deepen inequalities. Discrimination may happen because the data used to help the AI make decisions already contains bias. Bias may also be introduced in the way the system is developed and programmed to use that data and make decisions.
https://www.equalityhumanrights.com/en/advice-and-guidance/artificial-intelligence-public-services