HR, rightly, has an uneasy relationship with AI. To the equalities practitioner, the downsides of a machine taking decisions based on a formula are only too apparent. How is the formula generated? What values, in every sense, are used? How does the machine learn?
For a large employer, the benefits of AI in, say, recruitment appear obvious to the Finance and IT Directors. The HR Director, though, is instinctively wary. What is saved in costs risks being lost many times over in compensation and reputational damage if the algorithm turns out to discriminate against those with protected characteristics. In theory, AI should make recruitment fairer: in practice, as the old adage has it, to err is human but to really screw up takes a computer. Any errors in the coding may affect far more people than one errant manager ever could.
Employment and Equalities lawyers have been aware of this for some time. In introducing AI into recruitment, HR, Legal and IT have to work closely together, ruthlessly probing the software and modelling how it will really behave for the particular organisation. We are a long way from this stuff working straight out of the box. That said, a 'long way' in AI terms might mean only months, given the pace of development.
The Centre for Data Ethics & Innovation, part of the Department for Digital, Culture, Media & Sport, has just published its review into bias in algorithmic decision making. My Partner, and AI legal expert, Matt Hervey has blogged the headlines here. Recruitment is one of the four 'sectors' on which it focusses, and it leads the summary of the report.
The findings and recommendations are not surprising, but will add considerable weight to HR's views around the Board table. The report calls for more guidance, greater scrutiny and "higher and clearer standards of good governance". It identifies many of the problems in working with AI in the HR context and is helpful in identifying what might be best practice in addressing them. Perhaps mercifully, it does not recommend a new regulator: we continue to rely on the EHRC and the ICO.
It is worth HR reading the whole report, not just the recruitment section. One knotty problem it identifies comes under the heading of 'local government procurement' but, as the report acknowledges, is of much broader importance: if a supplier relies on customer data, who is responsible for system performance? How in turn is that responsibility passed to a sub-contractor, or does it remain with, well, whom? Given that few organisations are going to produce and manage their own AI, the governance arrangements are going to be crucial across the whole supply chain. In the employment context, the buck is likely to stop with the employer: where there might be responsibility, it is a good idea to have control.
About the author(s)
Jonathan Chamberlain leads for the Technology Sector in Gowling WLG's UK Employment, Labour & Equalities Team. He is a member and past Chair of the Legislative & Policy Committee of the Employment Lawyers' Association, but blogs in a personal capacity.