UK watchdogs to clamp down on banks using discriminatory AI in loan applications

The news: UK regulators have signaled that they will clamp down on banks’ use of artificial intelligence (AI) that could discriminate against people, per the FT.

Banks that use AI to approve loan applications must be able to prove the technology will not worsen discrimination against minorities.

The bigger picture: AI is a significant growth area in banking. The global market is projected to soar from $3.88 billion in 2020 to $64.03 billion in 2030, a compound annual growth rate (CAGR) of 32.6%, per a Research and Markets report.
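As a quick sanity check, the growth rate implied by those endpoints can be computed from the standard CAGR formula. The snippet below is purely illustrative and uses only the dollar figures quoted above.

```python
# Illustrative check of the implied compound annual growth rate (CAGR),
# using only the figures quoted from the Research and Markets report.
start, end, years = 3.88, 64.03, 10  # $B in 2020, $B in 2030, elapsed years

cagr = (end / start) ** (1 / years) - 1  # CAGR = (end/start)^(1/years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~32.4%; the report's 32.6% presumably
# reflects rounding or a slightly different base year.
```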

AI in banking is maturing, and as data analysis improves, so does the potential for more accurate decision-making. But concerns about misuse have led to heightened regulatory scrutiny:

  • US: Earlier this month, the Consumer Financial Protection Bureau (CFPB) warned it would get tougher on AI misuse in banking. CFPB Director Rohit Chopra cautioned that AI could be abused to advance “digital redlining” and “robo discrimination.” The chairs of two US congressional committees last year asked regulators to make sure the country’s lenders implemented safeguards ensuring AI improved access to credit for low- and middle-income families and people of color.
  • Europe: EU regulators last week urged lawmakers to consider “further analysing the use of data in AI/machine learning models and potential bias leading to discrimination and exclusion.”
  • UK: The Office for AI will release its white paper on governing and regulating AI in early 2022. This could lead to a shift from the government’s current sector-led approach to blanket AI-specific regulations.

The problem: Banks using AI need to be aware of the risks that come with the technology:

  • The complexity of AI models can create a “black box” problem in which decisions are made with little transparency into how the model reached its conclusions, making accountability and error detection a challenge.
  • Baked-in biases that are difficult to root out are another risk. For example, Apple and Goldman Sachs found themselves in hot water in 2019 over claims that technology used to measure creditworthiness might be biased against women.
  • Flawed training data is another source of unintended bias: algorithms trained on incomplete, skewed, or extraneous data can produce distorted judgments, including inadvertent discrimination (a hypothetical audit for this kind of disparity is sketched after this list).
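To make that risk concrete, here is a minimal, hypothetical sketch of the kind of audit a bank might run over its model’s logged decisions: it compares approval rates across two groups and flags a disparity above a tolerance. The data, group labels, and 20-point tolerance are all invented for illustration, not a regulatory standard.

```python
# Hypothetical audit: compare a model's approval rates across two groups.
# All data here is invented for illustration.

def approval_rate(decisions):
    """Fraction of applications approved (decision == 1)."""
    return sum(decisions) / len(decisions)

# (group, model decision) pairs a bank might log for each application
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

by_group = {}
for group, decision in decisions:
    by_group.setdefault(group, []).append(decision)

rates = {g: approval_rate(d) for g, d in by_group.items()}
gap = max(rates.values()) - min(rates.values())

print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(f"approval-rate gap: {gap:.0%}")

# A large gap does not prove discrimination on its own, but it is the
# kind of disparity regulators would expect a bank to detect and explain.
if gap > 0.20:  # illustrative tolerance, not a regulatory standard
    print("Flag for review: disparity exceeds tolerance")
```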

The solution: Clear and consistent guidance from regulators will give banks a framework to work within, helping them minimize potential problems arising from AI use. Banks must recognize the inherent flaws in AI, improve transparency, and take responsibility for problems. Both banks and watchdogs must introduce policies to minimize the risk of bias and discrimination:

  • For robo-advice, humans should sign off on outputs from algorithms before they are delivered as advice to customers, a practice known as having a “human-in-the-loop” (a minimal sketch follows this list).
  • Regulators should offer examples of best practices and poor practices when banks deploy AI.
  • Tools that are heavily reliant on training data may require new processes to manage data quality.
  • Reverse-engineering can sometimes be used to draw conclusions about black-box algorithms, improving transparency and documentation (see the second sketch below).
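Here is a minimal sketch of the human-in-the-loop pattern from the first bullet, assuming a simple review queue: the algorithm’s output is held as pending and only released to the customer once a named human reviewer signs off. All class and field names here are hypothetical.

```python
# Hypothetical human-in-the-loop gate for robo-advice: the algorithm
# proposes, but nothing reaches the customer without human sign-off.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    customer_id: str
    advice: str
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, rec: Recommendation):
        """Algorithm output enters the queue instead of going straight out."""
        self.pending.append(rec)

    def sign_off(self, reviewer: str, rec: Recommendation) -> Recommendation:
        """A named human reviewer releases the advice, creating an audit trail."""
        print(f"{reviewer} approved advice for {rec.customer_id}")
        rec.approved = True
        self.pending.remove(rec)
        return rec

queue = ReviewQueue()
queue.submit(Recommendation("cust-42", "Rebalance toward index funds"))

for rec in list(queue.pending):
    released = queue.sign_off("j.smith", rec)
    assert released.approved  # only approved advice is ever delivered
```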
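And a minimal sketch of the reverse-engineering idea in the last bullet: one common approach is to probe the black box with controlled inputs, varying one feature at a time and recording where the decision flips, to approximate the rule it applies. The stand-in model and thresholds below are invented for illustration.

```python
# Hypothetical probe of a black-box scoring function: by querying it with
# controlled inputs and varying one feature at a time, we can approximate
# the rule it applies, even without access to its internals.

def black_box(income, debt_ratio):
    """Stand-in for an opaque vendor model (its logic is hidden in practice)."""
    return 1 if income > 30_000 and debt_ratio < 0.4 else 0

# Vary income while holding the debt ratio fixed, and record where the
# decision flips; repeating this per feature yields an approximate rule.
fixed_debt = 0.2
flip_point = None
previous = black_box(0, fixed_debt)
for income in range(0, 100_001, 1_000):
    decision = black_box(income, fixed_debt)
    if decision != previous:
        flip_point = income
        break

print(f"Approvals appear to start near income = {flip_point}")
# Documenting such inferred thresholds improves transparency, though an
# inferred rule only approximates the real model's behavior.
```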

"Behind the Numbers" Podcast