We have identified four general principles that any organisation wanting to use facial recognition technology should consider following this judgement (bearing in mind that the context and grounds of appeal are to some extent specific to the use of the technology by the police):
- Any use of facial recognition technology must undergo a Data Protection Impact Assessment which should assume that the right to privacy under the European Convention on Human Rights is engaged and likely to be infringed by the use of the technology.
- The benefits of facial recognition technology can outweigh an individual’s right to privacy. This will clearly be context-specific, but the police’s use in this instance was held to be proportionate. The balancing exercise must be clearly documented (in the Data Protection Impact Assessment) to show that the effect on individuals’ right to privacy under the Human Rights Act has been considered and that risks have been mitigated where possible.
- Facial recognition technology therefore plainly interferes with an individual’s right to privacy, so any use must be “in accordance with the law”. Although the use of the technology was proportionate overall, giving individuals (in this case, police officers) discretion over when it is used and whom it identifies is unlikely to be “in accordance with the law” unless there is very clear guidance.
- Consider whether the facial recognition technology will result in any discrimination or bias. In this case, specific public sector equality legislation was engaged. Although this will not apply to private sector organisations, the courts are clearly alive to the potential for discrimination and expect users of these systems to satisfy themselves that there will be no bias.
A more detailed analysis of this judgement is available on the Gowling WLG website.