Financial services firms have been increasingly incorporating Artificial Intelligence (“AI”) into their strategies to drive operational and cost efficiencies. Firms must ensure effective governance of any use of AI. The Financial Conduct Authority (“FCA”) is active in this area, currently collaborating with The Alan Turing Institute to examine a potential framework for transparency in the use of AI in financial markets.
In simple terms, AI involves algorithms that can make human-like decisions, often on the basis of large volumes of data, but typically at a much faster and more efficient rate. In 2019, the FCA and the Bank of England (“BoE”) issued a survey to almost 300 firms, including banks, credit brokers, e-money institutions, financial market infrastructure firms, investment managers, insurers, non-bank lenders and principal trading firms, to understand the extent to which they were using Machine Learning (“ML”), a sub-category of AI. While AI is a broad concept, ML is a methodology whereby a computer programme learns to recognise patterns in data without being explicitly programmed.
The key findings included:
- ML is increasingly being used in financial services, with two thirds of respondents reporting that they already use it in some form. However, the FCA and BoE described the adoption of ML as ‘nascent’, as the technology is employed mainly for back-office functions, with customer-facing applications still largely at the exploration stage (see use cases below).
- Regulation was not seen by respondents as an unjustified barrier to the use of ML, but some firms stressed the need for additional guidance on how to interpret current regulation. The biggest constraints firms faced in the wider adoption of ML were legacy IT systems and data limitations.
- Firms validated ML applications before and after deployment – the most common methods were outcome-focused monitoring and testing against benchmarks. However, many firms noted that ML validation frameworks would need to evolve in line with the nature, scale and complexity of ML applications.
- Firms used a variety of safeguards to manage the risks associated with ML, including alert systems, which flag unusual or unexpected actions to employees, and ‘human-in-the-loop’ mechanisms, where decisions made by the ML application are only executed after review or approval by a human (a minimal sketch of this pattern follows this list).
- Firms mostly designed and developed ML applications in-house but sometimes relied on third-party providers for the underlying platforms and infrastructure.
- The majority of users applied their existing model risk management frameworks to ML applications.
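To make the ‘human-in-the-loop’ safeguard concrete, the short sketch below is a purely hypothetical illustration in Python (not taken from the survey; the thresholds and names are invented): a model’s proposal is executed only once a human reviewer approves it, and an unusually high risk score additionally raises an alert to staff.

```python
from dataclasses import dataclass

# Hypothetical threshold for illustration only.
ALERT_THRESHOLD = 0.9   # risk scores above this also raise an alert to staff

@dataclass
class Proposal:
    action: str    # e.g. "approve_payment"
    score: float   # the model's risk score for the proposed action

def human_in_the_loop(proposal: Proposal, reviewer_approves) -> str:
    """Execute a model's proposal only after human review and approval."""
    if proposal.score >= ALERT_THRESHOLD:
        print(f"ALERT: unusual score {proposal.score:.2f} for {proposal.action}")
    # The proposal is never executed automatically: a human must sign off.
    if reviewer_approves(proposal):
        return f"executed: {proposal.action}"
    return f"rejected by reviewer: {proposal.action}"

# Example usage with a stand-in reviewer callback.
outcome = human_in_the_loop(
    Proposal(action="approve_payment", score=0.95),
    reviewer_approves=lambda p: p.score < 0.99,   # placeholder review logic
)
print(outcome)
```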
The use cases for ML identified by the FCA and BoE were largely focused on the following areas:
Anti-money laundering and countering the financing of terrorism
Financial institutions have to analyse customer data continuously from a wide range of sources in order to comply with their anti-money laundering (“AML”) obligations. The FCA and BoE found that ML was being used at several stages within the process to:
- analyse millions of documents and check details against blacklists for know-your-customer checks during the onboarding stage; and
- identify suspicious activity as customers transfer money or make payments, flagging potential cases of money laundering for review by human analysts.
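As a simplified, hypothetical illustration of the second stage (the z-score rule and thresholds below are invented for this sketch and are not the methods reported by the survey respondents), a payment far outside a customer’s recent pattern can be queued for review by a human analyst:

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, z_threshold=3.0):
    """Flag a new payment if it is far outside the customer's recent pattern.

    history: list of the customer's recent payment amounts.
    Returns True when the payment should be queued for human review.
    """
    if len(history) < 5:               # too little history: always review
        return True
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    z_score = (new_amount - mu) / sigma
    return z_score > z_threshold       # unusually large payments are escalated

# Example: a £9,500 payment against a history of small payments gets flagged.
history = [120.0, 80.5, 95.0, 110.0, 60.0, 130.0]
print(flag_suspicious(history, 9500.0))   # True -> route to a human analyst
```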
Customer engagement
Firms were increasingly using ‘Chatbots’, which enable customers to contact firms without having to go through human agents via call centres or customer support. Chatbots can reduce the time and resources needed to resolve consumer queries.
ML can facilitate faster identification of user intent and recommend associated content, which can help address consumers’ issues. For more complex matters which cannot be addressed by the Chatbot, the ML will transfer the consumer to a human agent, who should be better placed to deal with the query.
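The hand-off logic can be illustrated with a deliberately simple sketch (keyword matching rather than a trained model; the intents and responses are invented): the bot answers only when it is reasonably confident about the user’s intent, and otherwise routes the query to a human agent.

```python
# Hypothetical intents and canned responses, for illustration only.
INTENT_KEYWORDS = {
    "reset_password": {"password", "reset", "login"},
    "card_lost": {"card", "lost", "stolen"},
    "balance_query": {"balance", "statement"},
}
CANNED_RESPONSES = {
    "reset_password": "You can reset your password from the login page.",
    "card_lost": "We have frozen your card; a replacement is on its way.",
    "balance_query": "Your balance is shown on the accounts screen.",
}

def respond(message: str) -> str:
    words = set(message.lower().split())
    # Score each intent by keyword overlap and pick the best match.
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    intent, score = max(scores.items(), key=lambda kv: kv[1])
    if score < 2:   # low confidence: escalate to a human agent
        return "Transferring you to a human agent who can help further."
    return CANNED_RESPONSES[intent]

print(respond("I lost my card yesterday"))        # handled by the bot
print(respond("why was my mortgage declined?"))   # handed off to a human
```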
Sales and trading
The FCA and BoE reported that ML use cases in sales and trading broadly fell under three categories ranging from client-facing to pricing and execution:
- for client-facing activities, ML was used to increase the speed and accuracy of processing orders, thereby allowing shorter response times;
- in pricing, ML models combined a larger number of market time series to arrive at an estimate of a short-term fair value; and
- in execution, ML evaluated venue, timing and order size choices. ML was also used for calculating the probability of an order being filled on the basis of the available characteristics of the order.
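By way of a purely illustrative sketch of the last point (the features and data are invented, scikit-learn is assumed to be available, and this is not any firm’s actual model), a simple classifier can estimate the probability that a new order will be filled from its characteristics:

```python
# Invented historical orders: [order_size, limit_distance_bps, volatility]
from sklearn.linear_model import LogisticRegression

X = [
    [100, 1, 0.2], [5000, 10, 0.8], [250, 2, 0.3], [8000, 15, 1.2],
    [150, 1, 0.1], [6000, 12, 0.9], [300, 3, 0.4], [9000, 20, 1.5],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = order was filled, 0 = not filled

model = LogisticRegression().fit(X, y)

# Estimate the fill probability for a new order before routing it.
new_order = [[400, 2, 0.3]]
print(f"estimated fill probability: {model.predict_proba(new_order)[0][1]:.2f}")
```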
Insurance pricing
The majority of respondents in the insurance sector used ML to price general insurance products, including motor, marine, flight, building and contents insurance. In particular, ML applications were used for:
- risk cost modelling where firms use ML to analyse new data sources and build the underlying risk cost models to gain an understanding of the expected claims cost of an underwritten policy; and
- propensity modelling where ML is used to predict product add-on selections, customer demands and estimated future claims costs, which can influence renewal premiums offered to existing policyholders.
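A minimal, hypothetical sketch of propensity modelling is set out below (the features and weights are invented for illustration): a logistic score estimates how likely an existing policyholder is to take an add-on at renewal, which could then sit alongside the renewal quote.

```python
import math

# Invented weights for a toy propensity model: how likely is an existing
# policyholder to take a breakdown-cover add-on at renewal?
WEIGHTS = {"years_as_customer": 0.15, "prior_claims": -0.4, "has_multi_car": 0.8}
BIAS = -1.0

def addon_propensity(customer: dict) -> float:
    """Return a probability in [0, 1] via a logistic over weighted features."""
    score = BIAS + sum(WEIGHTS[k] * customer.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))

customer = {"years_as_customer": 6, "prior_claims": 1, "has_multi_car": 1}
print(f"add-on propensity: {addon_propensity(customer):.2f}")
```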
Insurance claims management
Of the respondents in the general insurance sector, 83% used ML for claims management in the following scenarios:
- ML applications analyse photographs and unstructured data sources (i.e. data provided by policyholders, including location and any sensor data) to extract the relevant management information from the raw data and predict the estimated repair cost. The ML application then uses historical data to compare this against the predicted total loss cost and makes a decision as to the correct route for the claims handler to follow (e.g. to write off or repair a vehicle).
- ML applications use predictive analytics to target claims that have a high likelihood of customer dissatisfaction or complaint, in which case they are flagged so that a human can monitor the claim and intervene if required.
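The second scenario can be sketched, under invented scoring weights, as a simple triage rule: score open claims for the risk of customer dissatisfaction and flag the highest-risk ones so that a human handler monitors them.

```python
# Hypothetical risk factors and weights, for illustration only.
def dissatisfaction_risk(claim: dict) -> float:
    risk = 0.0
    risk += 0.05 * claim["days_open"]           # long-running claims
    risk += 0.3 * claim["handler_changes"]      # repeated hand-offs
    risk += 0.4 if claim["complaint_history"] else 0.0
    return risk

claims = [
    {"id": "C1", "days_open": 3,  "handler_changes": 0, "complaint_history": False},
    {"id": "C2", "days_open": 40, "handler_changes": 3, "complaint_history": True},
]

FLAG_THRESHOLD = 1.0
for claim in claims:
    if dissatisfaction_risk(claim) >= FLAG_THRESHOLD:
        print(f"claim {claim['id']}: flag for human monitoring")
```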
Asset management
ML currently appears to play only a supporting role in the asset management sector. Systems are often used to provide suggestions to fund managers (whether for portfolio decision-making or execution-only trades) by:
- analysing large amounts of data from diverse sources and in different formats;
- digesting a large selection of inputs to assist in establishing a fair market price for a security;
- supporting decision-making processes by linking data points and finding relationships across a large number of sources; and
- sifting through vast amounts of news feeds and extracting useful insights.
All of these applications have back-up systems and human-in-the-loop safeguards. They are aimed at providing fund managers with suggestions, with a human in charge of the decision making and trade execution.
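As a minimal, hypothetical sketch of the final bullet (the tickers, headlines and word lists are invented), a news feed can be sifted for items mentioning securities in the portfolio and surfaced to the fund manager as suggestions only:

```python
# Invented portfolio and headlines, for illustration only.
PORTFOLIO = {"ACME", "GLOBEX"}
NEGATIVE_WORDS = {"probe", "lawsuit", "recall", "downgrade"}

headlines = [
    "ACME announces record quarterly revenue",
    "Regulator opens probe into GLOBEX accounting",
    "Unrelated company wins industry award",
]

for headline in headlines:
    tickers = PORTFOLIO & set(headline.upper().split())
    if tickers:
        tone = "negative" if NEGATIVE_WORDS & set(headline.lower().split()) else "neutral/positive"
        # Surfaced as a suggestion only; the fund manager reviews and decides.
        print(f"suggestion for {', '.join(tickers)}: '{headline}' ({tone})")
```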
Regulatory obligations
Although there is no overarching legal framework which governs the use of AI in financial services, Principle 3 of the FCA’s Principles for Businesses makes clear that firms must take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems. If regulated activities being conducted by firms are increasingly dependent on ML or, more broadly, AI, firms will need to ensure that there is effective governance around the use of AI and that systems and controls adequately ensure that the use of ML and AI is not causing harm to consumers or the markets.
There are a number of risks in adopting AI, for example, algorithmic bias caused by insufficient or inaccurate data (data limitations being one of the biggest constraints on wider adoption identified in the survey) and a lack of training of systems and AI users, which could lead to poor decisions being made. It is therefore imperative that firms fully understand the design of the ML, have stress-tested the technology prior to its roll-out in business areas and have effective quality assurance and system feedback measures in place to detect and prevent poor outcomes.
Clear records should be kept of the data used by the ML, the decision making around the use of ML and how systems are trained and tested. Ultimately, firms should be able to explain how the ML reached a particular decision.
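One way to approach such record-keeping, sketched below purely for illustration (the field names and values are invented), is to log every ML-driven decision together with its inputs, the model version and the main reasons, so that a particular decision can later be explained:

```python
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, decision: str, reasons: list) -> str:
    """Serialise a single ML-driven decision for the audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which trained model produced the decision
        "inputs": inputs,                 # the data the model actually saw
        "decision": decision,
        "reasons": reasons,               # e.g. the main contributing factors
    }
    return json.dumps(record)             # in practice, written to an audit store

print(log_decision(
    model_version="credit-risk-v12",
    inputs={"income": 32000, "missed_payments": 2},
    decision="refer to underwriter",
    reasons=["missed_payments above threshold"],
))
```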
Where firms outsource to AI service providers, they retain the regulatory risk if things go wrong. As such, the regulated firm should carry out sufficient due diligence on the service provider, ensure that it understands the underlying decision-making process of the service provider’s AI and, where the AI services are important in the context of the firm’s regulated business, ensure that the contract includes adequate monitoring and oversight mechanisms as well as appropriate termination provisions.
The FCA announced in July 2019 that it is working with The Alan Turing Institute on a year-long collaboration on AI transparency, in which they will propose a high-level framework for thinking about transparency needs concerning uses of AI in financial markets. The Alan Turing Institute has already completed a project on “explainable” AI with the Information Commissioner in the context of data protection. A recent blog published by the FCA stated:
“the need or desire to access information about a given AI system may be motivated by a variety of reasons – there are a diverse range of concerns that may be addressed through transparency measures…. one important function of transparency is to demonstrate trustworthiness which, in turn, is a key factor for the adoption and public acceptance of AI systems… transparency may [also] enable customers to understand and – where appropriate – challenge the basis of particular outcomes.”