Continued support for AI from the US financial services regulators

On March 29, 2021, the primary federal bank regulators (the Federal Reserve, CFPB, the FDIC, the NCUA, and the OCC) issued a request for information to gain input on the growing use of Artificial Intelligence (“AI”) by financial institutions. Recognizing that financial institutions have been exploring AI-based applications in a variety of fields, the agencies noted that the use of innovative technologies and techniques, such as those involving AI, “has the potential to augment business decision-making, and enhance services available to consumers and businesses.”

In the RFI, the agencies seek information on financial institutions’ risk management practices related to the use of AI, on the challenges institutions face when developing, adopting, and managing AI, and on the risks and benefits that the use of AI presents to financial institutions and their customers.

The RFI is notable because the agencies not only catalog the current uses of AI they are observing at financial institutions but also outline the risks associated with those uses. It therefore offers those looking to deploy the technology an opportunity both to see what their peers are doing and to align their risk controls with what the agencies are seeing. The uses identified in the RFI include:

  • Flagging unusual transactions. This involves employing AI to identify potentially suspicious, anomalous, or outlier transactions (e.g., fraud detection and financial crime monitoring). It involves using different forms of data (e.g., email text, audio data – both structured and unstructured), with the aim of identifying fraud or anomalous transactions with greater accuracy and timeliness. It also includes identifying transactions for Bank Secrecy Act/anti-money laundering investigations, monitoring employees for improper practices, and detecting data anomalies.
  • Personalization of customer services. AI technologies, such as voice recognition and natural language processing, are used to improve customer experience and to gain efficiencies in the allocation of financial institution resources. One example is the use of chatbots to automate routine customer interactions, such as account opening activities and general customer inquiries. AI is leveraged at call centers to process and triage customer calls to provide customized service. These technologies are also used to better target marketing and customize trade recommendations.
  • Credit decisions. This involves the use of AI to inform credit decisions in order to enhance or supplement existing techniques. This application of AI may use traditional data or employ alternative data (such as cash flow transactional information from a bank account).
  • Risk management. AI may be used to augment risk management and control practices. For example, an AI approach might be used to complement and provide a check on another, more traditional credit model. Financial institutions may also use AI to enhance credit monitoring (including through early warning alerts), payment collections, loan restructuring and recovery, and loss forecasting. AI can assist internal audit and independent risk management to increase sample size (such as for testing), evaluate risk and refer higher-risk issues to human analysts. AI may also be used in liquidity risk management, for example, to enhance monitoring of market conditions or collateral management.
  • Textual analysis. Textual analysis refers to the use of natural language processing for handling unstructured data (generally text) and obtaining insights from that data or improving efficiency of existing processes. Applications include analysis of regulations, news flow, earnings reports, consumer complaints, analyst ratings changes, and legal documents.
  • Cybersecurity. AI may be used to detect threats and malicious activity, reveal attackers, identify compromised systems, and support threat mitigation. Examples include real time investigation of potential attacks, the use of behavior-based detection to collect network metadata, flagging and blocking of new ransomware and other malicious attacks, identifying compromised accounts and files involved in exfiltration, and deep forensic analysis of malicious files.

As always, the agencies stress that appropriate governance, risk management, and compliance management are all necessary to manage the risks associated with adopting and deploying any innovation, including AI. In the request for comment, the agencies went so far as to highlight heightened consumer protection risks, including ECOA and UDAAP risks, and to outline areas where the use of AI may present particular risk management challenges to financial institutions, such as:

  • Explainability. For the purposes of this RFI, explainability refers to how an AI approach uses inputs to produce outputs. Some AI approaches can exhibit a “lack of explainability” in their overall functioning (sometimes known as global explainability) or in how they arrive at an individual outcome in a given situation (sometimes referred to as local explainability). Lack of explainability can pose different challenges in different contexts. It can inhibit financial institution management’s understanding of the conceptual soundness of an AI approach, which can increase uncertainty about the approach’s reliability and increase risk when it is used in new contexts. It can also inhibit independent review and audit, and make compliance with laws and regulations, including consumer protection requirements, more challenging.
  • Broader or More Intensive Data Usage. In many cases, AI algorithms identify patterns and correlations in training data without human context or intervention, and then use that information to generate predictions or categorizations. Because the AI algorithm is dependent upon the training data, an AI system generally reflects any limitations of that dataset. As a result, as with other systems, AI may perpetuate or even amplify bias or inaccuracies inherent in the training data or make incorrect predictions if that data set is incomplete or non-representative.
  • Dynamic Updating. Monitoring and tracking an AI approach that evolves on its own can present challenges in review and validation, particularly when a change in external circumstances (e.g., economic downturns and financial crises) may cause inputs to vary materially from the original training data. Dynamic updating techniques can produce changes that range from minor adjustments to existing elements of a model to the introduction of entirely new elements.

While the federal bank regulators will continue to support innovation within the financial services space, they will also continue to expand their oversight of the operational and risk areas that deploy these same innovations. It is important to understand not only how deploying technology such as AI can improve your customer experience and lower internal costs, but also how it will affect your existing compliance controls and risk assessments, and how you will have to evolve your compliance program to capture any new risks presented. The results of this RFI should be watched closely: they will offer a glimpse into how your peers are handling their modernization efforts, as well as a look at how the regulators view the risks presented by the technology and their opinion as to the best practices for managing those risks.