
The CDEI reports on the use of data and AI in financial services

The Centre for Data Ethics and Innovation (“CDEI”) published a report on its Review Into Bias In Algorithmic Decision-making on 27 November 2020 (“the Report”). The Report examined the use of data and AI in recruitment, financial services, policing and local government. In this blog, I examine the CDEI’s key findings in respect of financial services.

Use of AI in Financial Services – not a new phenomenon

The Report highlights that financial services firms have long used data to support their decision-making, with attitudes to new algorithmic approaches ranging from highly innovative to more risk-averse. Given this historical use of algorithms, the CDEI concluded that the financial services sector is ‘well-placed’ to adopt ‘the most advanced data-driven technology to make better decisions about which products to offer to which customers’.

However, the CDEI also recognised a serious risk that, against a backdrop of societal inequality, historic biases could become further entrenched through algorithmic systems. The Report discusses the credit sector as a prime example: there is documented evidence of historical inequalities experienced by women and ethnic minorities in accessing credit, and the Report warns of the risks of firms relying on algorithmic solutions based on traditional data to make accurate predictions about people’s behaviour, for example how likely they are to repay debts, where specific individuals or groups are historically underrepresented in the financial system.

In the Report, the CDEI explores the current landscape and discusses how machine learning is being used in financial services, drawing on the findings of the 2019 joint survey conducted by the Bank of England (“BoE”) and the Financial Conduct Authority (“FCA”). I have already summarised key use cases in a previous blog but, as the Report confirms, financial services firms are increasingly using complex machine learning algorithms in back-office functions such as risk management and compliance.

Data limitations?

The Report identifies that using more data from non-traditional sources could, in future, improve access for population groups who have historically found it difficult to access financial services such as credit because there is less data about them from traditional sources. Still focusing on the credit sector, the Report discusses the example of ‘credit-worthiness by association’: the move from credit-scoring algorithms that use data from an individual’s credit history to algorithms that also draw on additional data about an individual, for example their rent repayment history or their wider social network. However, many firms are not using social media data and are, according to the CDEI, sceptical of its value.
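To make ‘credit-worthiness by association’ concrete, the sketch below (my own illustration, not taken from the Report) shows how a traditional bureau file might be joined with a non-traditional data source such as rent repayment records. All tables, data sources and column names are hypothetical.

```python
# Illustrative sketch only: joining a traditional bureau file with a
# non-traditional data source. All tables and column names are hypothetical.
import pandas as pd

bureau = pd.DataFrame({            # traditional credit-history data
    "applicant_id": [1, 2, 3],
    "credit_score": [620, 710, 580],
    "defaults_5y":  [1, 0, 2],
})

rent = pd.DataFrame({              # non-traditional source: rent repayments
    "applicant_id": [1, 2, 3],
    "months_observed":     [24, 12, 36],
    "months_paid_on_time": [23, 12, 30],
})

# Broaden the feature set: applicants with a thin bureau file may still
# have a meaningful rent repayment record.
features = bureau.merge(rent, on="applicant_id", how="left")
features["rent_on_time_rate"] = (
    features["months_paid_on_time"] / features["months_observed"]
)
print(features[["applicant_id", "credit_score",
                "defaults_5y", "rent_on_time_rate"]])
```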

There are also concerns that, while more data could make datasets more inclusive and representative, more data and more complex algorithms could increase the potential for indirect bias to be introduced via proxy variables, and limit the ability to detect and mitigate it. Opaque algorithms may unintentionally replicate systemically discriminatory results: a data point such as salary may simply become a substitute for a protected characteristic because of the gender pay gap, for instance, salary can act as a proxy for gender even where the model never sees that characteristic directly. The Report emphasises that the tension between the need to create algorithms that are blind to protected characteristics while also checking for bias against those same characteristics creates a challenge for organisations seeking to use data responsibly. As such, firms adopting more complex data feeds and algorithms need to test their models’ accuracy through validation techniques and ensure that there is sufficient human oversight of the process as a way to manage bias in the development of algorithmic models.
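The proxy problem can be illustrated with a toy experiment (again my own sketch, not from the Report): a credit model trained without any protected characteristic can still produce different approval rates across groups when a feature such as salary is correlated with gender. All data below is synthetic, and the demographic-parity check shown is only one of several possible fairness metrics.

```python
# Illustrative sketch only: a credit model that never sees gender can still
# treat groups differently when a feature (salary) correlates with gender.
# All data is synthetic; thresholds and coefficients are arbitrary.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical population: a pay gap bakes a salary/gender correlation in.
gender = rng.integers(0, 2, n)                      # protected attribute (0/1)
salary = rng.normal(30 + 5 * gender, 6)             # annual salary, £000s
rent_history = rng.normal(0.8, 0.1, n).clip(0, 1)   # share of rent paid on time
repaid = (0.04 * salary + 2.0 * rent_history
          + rng.normal(0, 0.5, n)) > 2.9            # synthetic outcome

X = pd.DataFrame({"salary": salary, "rent_history": rent_history})
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, repaid, gender, test_size=0.3, random_state=0)

# The model is 'blind' to gender: trained on salary and rent history only.
model = LogisticRegression().fit(X_tr, y_tr)
approved = model.predict(X_te)

# Bias check on held-out data: compare approval rates by group.
rate_0 = approved[g_te == 0].mean()
rate_1 = approved[g_te == 1].mean()
print(f"approval rate, group 0: {rate_0:.1%}")
print(f"approval rate, group 1: {rate_1:.1%}")
print(f"parity difference:      {abs(rate_0 - rate_1):.1%}")

# Proxy check: the excluded attribute is still visible through salary.
print(f"salary/gender correlation: {np.corrcoef(salary, gender)[0, 1]:.2f}")
```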

The CDEI reported that most financial organisations it interviewed agreed that the key obstacles to further innovation in the sector were as follows:

The importance of explainability in financial services

The CDEI refers to ‘explainability’ as the ability to understand and summarise the inner workings of a model, including the factors that have gone into it. As discussed above, explainability is key to understanding the factors causing variation in the outcomes of decision-making systems between different groups, and to assessing whether or not that variation is regarded as fair.
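As a hypothetical illustration of what explainability can look like in practice (not drawn from the Report), an inherently interpretable model such as logistic regression lets a firm read off both the overall weight of each factor and each factor’s contribution to an individual decision. The feature names below are invented for the example.

```python
# Illustrative sketch only: reading the 'factors that have gone into the
# model' out of an interpretable credit model. Feature names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
X = pd.DataFrame({
    "salary_k":        rng.normal(35, 8, n),     # annual salary, £000s
    "rent_on_time":    rng.uniform(0.5, 1.0, n), # share of rent paid on time
    "existing_debt_k": rng.normal(5, 2, n),      # outstanding debt, £000s
})
y = (0.05 * X["salary_k"] + 2 * X["rent_on_time"]
     - 0.1 * X["existing_debt_k"] + rng.normal(0, 0.4, n)) > 2.8

model = LogisticRegression().fit(X, y)

# Global explanation: direction and weight of each factor across the book.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name:>16}: {coef:+.3f}")

# Local explanation for a single applicant: each feature's contribution
# (coefficient x value) to the log-odds of approval.
applicant = X.iloc[0]
for name, contrib in zip(X.columns, model.coef_[0] * applicant.values):
    print(f"{name:>16} contributes {contrib:+.2f} to the log-odds")
```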

Under the FCA’s Principles for Businesses, the FCA expects firms to treat customers fairly (Principle 6) and to take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems (Principle 3). The FCA is very much an outcomes-focused regulator and expects firms to ensure that they are achieving the right outcomes for customers and mitigating any risks of harm that may be posed by their business, including the way in which it is operated.

As such, the FCA expects all firms to understand how algorithmically-assisted decisions are reached, so that they can ensure they are treating customers fairly, achieving the right outcomes for customers, and meeting their legal and regulatory obligations. Explainability is therefore crucial in financial services: until firms can gain comfort that more advanced, complex algorithms are sufficiently transparent to enable them to understand how decisions were reached, there may be a natural limit to the extent to which such algorithms can be developed.

The final word

If financial services firms and regulators are to identify and mitigate discriminatory outcomes and build consumer confidence in the use of algorithms, it is crucial that firms have transparent and explainable algorithmic models, particularly for customer-facing decisions.

The FCA and BoE are undertaking work to assess the impact and opportunities of innovative data use and AI in the financial services sector as they recognise the benefits that AI can bring, including:

The CDEI will be an observer on the FCA and BoE’s AI Public-Private Forum, which will explore ways to support the safe adoption of machine learning and AI within financial services.

Sushil Kuner is a London-based principal associate who advises on all aspects of financial services regulatory law, having spent eight years working within the Supervision and Enforcement Divisions of the Financial Conduct Authority (FCA).
