FinDev Blog

Artificial Intelligence in Financial Inclusion: How Should Supervisors Respond?

A look at the risks and opportunities presented by artificial intelligence and machine learning

Insurance premiums and coverage that change dynamically according to a driver’s behavior? A financial adviser powered by algorithms? The use of artificial intelligence (AI) and machine learning (ML) in the financial sector is spreading rapidly across advanced and emerging economies. AI and ML are used in areas ranging from fraud detection to credit scoring, as the figure below shows.

Figure 1: To what extent does your organization use artificial intelligence for the following business units?
Source: Economist Intelligence Unit Survey (global survey with senior banking executives)

While we still do not have an exact picture of the full financial inclusion potential of AI and ML in emerging economies, there is no doubt that these technologies bring both opportunities and risks in a wide array of areas, which need to be carefully considered by financial supervisors. But what are these risks and opportunities and what should supervisors do?

According to a Toronto Centre Note, AI and ML create new risks and affect old ones, both positively and negatively, in three main areas of concern for supervisors (Figure 2). More familiar risks include credit, data security and money laundering risks. Newer risks include the difficulty of making AI and ML models fully transparent, so that their decisions can be readily explained to the general public, and of ensuring their robustness, fairness and ethical use. For instance, supervisors should be concerned with the potential of these models to amplify or create unfair biases and discrimination.

Figure 2: Transmission mechanisms whereby AI and ML impact risks
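To make the explainability challenge concrete, here is a minimal sketch of how a supervisory data science team might probe an opaque credit model with a model-agnostic technique such as permutation importance. Everything here is an assumption for illustration: the data is synthetic and the feature names are hypothetical stand-ins for the inputs a real lender might use.

```python
# Illustrative only: probing an opaque model with permutation importance.
# Synthetic data and hypothetical feature names, not a real underwriting system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "loan application" data with five hypothetical features.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
feature_names = ["income", "mobile_topups", "utility_payments", "age", "tenure"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a rough, model-agnostic gauge of which inputs drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, drop in sorted(zip(feature_names, result.importances_mean),
                         key=lambda pair: -pair[1]):
    print(f"{name}: accuracy drop {drop:.3f}")
```

A ranking like this does not fully explain individual decisions, but it gives supervisors a starting point for asking which inputs a model actually relies on.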

Before taking hasty action, supervisors should ask themselves:

  1. Where are the biggest risks from the use of AI and ML in the financial sector, and how significant are they? 
  2. What can be done to control and mitigate these risks?
  3. Are new regulations required or are the existing ones adaptable?
  4. What are the implications for supervisory resources – numbers, skills and expertise?

Supervisors need to understand the models and use cases in their jurisdictions and respond according to the significance of the related risks. They may find it necessary, for instance, to impose specific regulatory obligations on financial institutions to control and mitigate risks, such as ensuring that underwriting models do not produce unfair discrimination and exclusion. They may also need to adapt their supervisory approach, skills and expertise. For example, data science expertise can help supervisors evaluate critical AI and ML models.
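As one illustration of what such an obligation could look like in practice (a sketch under assumed conventions, not a prescribed methodology), a supervisor might ask institutions to report an adverse impact ratio on underwriting decisions: each group’s approval rate divided by the highest group’s approval rate. The toy data, group labels and the 0.8 threshold (borrowed from the “four-fifths rule” used in US employment contexts) are all illustrative assumptions.

```python
# Illustrative only: a simple adverse impact ratio check on approval decisions.
# Groups, data and the 0.8 threshold are assumptions for this sketch.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Approval rate per group, relative to the best-treated group.
rates = decisions.groupby("group")["approved"].mean()
ratios = rates / rates.max()
print(ratios)

# Flag groups approved at less than 80% of the best-treated group's rate.
flagged = ratios[ratios < 0.8]
print("Potential disparate impact:", list(flagged.index))
```

A screen like this is only a first filter; assessing fairness properly also requires examining proxies for protected attributes in the input data and outcomes over time.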

But AI and ML do not only mean risks; there are many opportunities as well (Figure 3). They are already offering value in advancing financial inclusion, for example through new credit and insurance underwriting models that use alternative data to serve low-income clients. AI and ML can also help enhance consumer protection, with chatbots that facilitate consumer complaints and with disclosures and product features customized to the needs and characteristics of individual clients or client segments. Another area of opportunity opened by AI and ML is supervisory technology, or suptech, which can help improve the effectiveness and efficiency of financial supervision.

Figure 3: Main areas of opportunity opened by AI and ML
Source: Toronto Centre

The main question, therefore, is: how can supervisors address the risks of AI and ML while maximizing the opportunities? They can combine three types of responses:

  1. Apply high-level principles and guidelines for trustworthy AI, such as those issued by the OECD and by the European Union, by converting them into regulatory requirements and/or supervisory expectations.
  2. Apply and adjust existing regulatory requirements that govern the use of statistical models by financial institutions, to the use of AI and ML models. The Financial Stability Institute provides a useful guide for this step.
  3. Consider replicating and adapting standards that have been recently issued to deal with AI and ML models, including those issued by standard-setting bodies (e.g., IOSCO), national authorities (e.g., the UK Prudential Regulation Authority) and international authorities (e.g., the European Banking Authority).

It is clear that, as in so many other areas of financial innovation, supervisors need to strike a delicate balance between the risks and benefits of AI and ML. This can only be done if they succeed in applying the concept of proportionality to juggle their multiple – and sometimes conflicting – mandates. The task is especially difficult in emerging and developing economies, where supervisory capacity is lower and legal and regulatory frameworks are weaker. There is a long road ahead!
