Machine Learning Models in Risk Management

Despite the more restrictive rules in Basel IV, machine learning models will still contribute to reducing the risk of default in the banking sector.


The use of machine learning models and techniques in the banking system is becoming increasingly widespread, particularly in risk management and in the regulation of financial institutions, especially credit institutions.

Although machine learning increases the predictive power of credit and financial risk models, it also makes those models more complex. Knowing how to explain the role, direction, and weight of the variables involved in a model's outcomes, as well as the model's sensitivity to each variable it uses, is essential. Different techniques have therefore been, and continue to be, developed to address this challenge. But how did we get here?
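To make the idea of "role, direction, and weight" concrete, here is a minimal sketch of a logistic default model whose coefficients can be read directly. All variable names and coefficient values are hypothetical, chosen only for illustration; real credit models are far larger and typically need dedicated explainability techniques.

```python
import math

# Hypothetical coefficients of a simple logistic default model.
# The sign gives the direction of each variable's effect; the magnitude
# (on standardized inputs) gives its weight in the outcome.
coefficients = {
    "debt_to_income": 2.1,    # higher leverage -> higher default risk
    "payment_delays": 1.4,    # past delays -> higher default risk
    "years_employed": -0.6,   # job stability -> lower default risk
}
intercept = -3.0

def probability_of_default(borrower):
    """Logistic model: PD = 1 / (1 + exp(-(intercept + sum(w_i * x_i))))."""
    score = intercept + sum(coefficients[k] * borrower[k] for k in coefficients)
    return 1.0 / (1.0 + math.exp(-score))

for name, weight in coefficients.items():
    direction = "increases" if weight > 0 else "decreases"
    print(f"{name}: {direction} default risk (weight {weight:+.1f})")

borrower = {"debt_to_income": 0.8, "payment_delays": 1.0, "years_employed": 0.5}
print(f"PD = {probability_of_default(borrower):.2%}")
```

In a transparent model like this one, sensitivity to each variable is read off the coefficients; in the highly nonlinear machine learning models the article discusses, that reading is precisely what becomes difficult.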

The Basel regulation

The use of internal models (such as those enabled by machine learning) for financial risk management was promoted extensively in the early stages of Basel regulation (Basel II), which fostered risk sensitivity in the regulatory capital requirements for each portfolio.

In 1988, the Bank for International Settlements (BIS), the "regulator" of financial regulators, defined the minimum capital requirements for banks in the first stage, Basel I. It established standard risk weights for the main credit counterparties (sovereign, financial institutions, corporate, mortgages, etc.). However, the risk sensitivity of this approach was low. In 2004, Basel II incorporated the use of internal models to improve risk sensitivity in capital allocation, primarily in determining the parameters that drive the use of more or less capital. In 2010, Basel III focused on the problems brought to light by the 2008-2009 crisis: liquidity, capital buffers to cushion the effects of crises and serve countercyclical purposes (scaled to the systemic importance of each entity), as well as other elements such as large exposures, the leverage ratio, counterparty risk, and total loss-absorbing capital.

Machine learning and complexity

One of the focal points of Basel II was that the required capital should be sensitive to the critical variables of each credit portfolio. Such variables include the probability of default (PD), the loss severity in the event of default (loss given default, LGD), and the exposure at default (EAD). Capturing the key variables behind these risk parameters, and sizing the capital needed to cover the resulting risk, makes the models more complex. In fact, the complexity can become undesirable ("undue," in the language of the Basel documents), reducing the comparability of risk between banks, not only within a country but also at the global level. A homogeneous metric for allocating capital was, in fact, one of the Basel Committee's aims.

The comparability of risk metrics (and of capital) is further reduced by differences in the speed at which regulation is implemented across countries. For example, the Basel III components have been implemented on different schedules and with varying degrees of discretion, since countries can produce local versions of the standard (so-called "national discretion").

What does Basel IV have in store for us?

Basel IV, which will be implemented in 2023, marks a regulatory turnaround: an invitation to reduce model complexity just when the use of highly complex models (machine learning) has become standard.

The Basel IV regulatory package is the response to the increase in complexity. One of its objectives is to discourage the use of internal models in order to reduce undue complexity and improve the comparability of regulatory capital allocated by the global banking system.

This, then, is the environment in which financial risk management models developed with machine learning methodologies must operate in order to comply with the regulation.

However, let's end on a positive note. When the Basel Committee proposed the internal models, such models were already common practice in financial institutions; the regulation merely defined how they should be incorporated into the regulatory framework. Machine learning models will continue to advance in banks' internal processes because they are such a powerful tool, and their regulatory recognition will remain on the agenda.

* The content of this article is based on the chapter we wrote for the book Data Analytics Applications in Emerging Markets (Springer 2022).

The authors are research professors at EGADE Business School.
