
AI and financial crises

The rapid adoption of artificial intelligence is transforming the financial industry. This first of a two-column series argues that AI may either increase systemic financial risk or act to stabilise the system, depending on endogenous responses, strategic complementarities, the severity of events it faces, and the objectives it is given. Stress that might have taken days or weeks to unfold can now happen in minutes or hours. AI’s ability to master complexity and respond rapidly to shocks means future crises will likely be more intense than those we have seen so far.

Both the private and the public financial sectors are expanding their use of artificial intelligence (AI). Because AI processes information much faster than humans, it may help cause more frequent and more intense financial crises than those we have seen so far. But it could also do the opposite and act to stabilise the system.

In Russell and Norvig’s (2021) classification, we see AI as a “rational maximising agent”. This definition resonates with typical economic analyses of financial stability. What distinguishes AI from purely statistical modelling is that it does not only use quantitative data to provide numerical advice; it also applies goal-driven learning, training itself on both quantitative and qualitative data. Thus, it can provide advice and even make decisions.

It is difficult to gauge the extent of AI use in the financial services industry. The Financial Times reports that only 6% of banks plan substantial AI use, citing concerns about its reliability, job losses, regulatory aspects, and inertia. Some surveys concur, but others differ. Finance is a highly competitive industry. When start-up financial institutions and certain large banks enjoy significant cost and efficiency improvements by using modern technology stacks and hiring staff attuned to AI, more conservative institutions probably have no choice but to follow.

The rapid adoption of AI might make the delivery of financial services more efficient while reducing costs. Most of us will benefit.

But it is not all positive. There are widespread concerns about the impact of AI on the labour market, productivity and the like (Albanesi et al. 2023, Filippucci et al. 2024). Of particular concern to us is how AI affects the potential for systemic financial crises, those disruptive events that cost large economies trillions of dollars and upend society. This has been the focus of our recent work (Danielsson and Uthemann 2024).

The roots of financial instability

We surmise that AI will not create new fundamental causes of crises but will amplify the existing ones: excessive leverage that renders financial institutions vulnerable to even small shocks; self-preservation in times of crisis that drives market participants to prefer the most liquid assets; and system opacity, complexity and asymmetric information that make market participants mistrust one another during stress. These three fundamental vulnerabilities have been behind almost every financial crisis in the past 261 years, ever since the first modern one in 1763 (Danielsson 2022).

However, although the same three fundamental factors drive all crises, preventing and containing them is not easy because each crisis differs significantly from the last. That is to be expected: if financial regulations are effective, crises in the areas the authorities monitor are prevented in the first place. Consequently, it is almost axiomatic that crises happen where the authorities are not looking. And since the financial system is all but infinitely complex, there are many areas where risk can build up.

The key to understanding financial crises lies in how financial institutions optimise – they aim to maximise profits given the acceptable risk. When translating that into how they behave operationally, Roy’s (1952) criterion is useful – stated succinctly, maximising profits subject to not going bankrupt. That means financial institutions optimise for profits most of the time, perhaps 999 days out of 1,000. However, on that one last day, when great upheaval hits the system and a crisis is on the horizon, survival, rather than profit, is what they care most about – the ‘one day out of a thousand’ problem.
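To see the switch in objectives concretely, here is a minimal sketch in standard mean-variance notation (ours, not the authors’): on ordinary days the institution maximises expected profit within a risk budget, while on the one day out of a thousand it applies Roy’s safety-first criterion, where d is the return level at which the institution fails.

```latex
% Normal times (999 days out of 1,000): maximise expected profit
% subject to an acceptable risk budget \bar{\sigma}
\max_{w}\ \mathbb{E}[R_w] \quad \text{s.t.} \quad \sigma_w \le \bar{\sigma}

% The one day out of a thousand: Roy's (1952) safety-first criterion,
% with d the return level at which the institution fails
\min_{w}\ \Pr(R_w < d)
\;\Longleftrightarrow\;
\max_{w}\ \frac{\mathbb{E}[R_w] - d}{\sigma_w}
\quad \text{(the equivalence assumes normally distributed returns)}
```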

When financial institutions prioritise survival, their behaviour changes rapidly and drastically. They hoard liquidity and choose the most secure and liquid assets, such as central bank reserves. This leads to bank runs, fire sales, credit crunches, and all the other undesirable behaviours associated with crises. There is nothing untoward about such behaviour, but it cannot be easily regulated.

When AI gets involved

These drivers of financial instability are well understood and have always been a concern, long before the advent of computers. As technology was increasingly adopted in the financial system, it brought efficiency and benefited the system, but also amplified existing channels of instability. We expect AI to do the same.

When identifying how this happens, it is useful to consider the societal risks arising from the use of AI (e.g. Weidinger et al. 2022, Bengio et al. 2023, Shevlane et al. 2023) and how these interact with financial stability. When doing so, we arrive at four channels through which the economy is vulnerable to AI:

  1. The misinformation channel emerges because the users of AI do not understand its limitations, but become increasingly dependent on it.
  2. The malicious use channel arises because the system is replete with highly resourced economic agents who want to maximise their profit and are not too concerned about the social consequences of their activities.
  3. The misalignment channel emerges from difficulties in ensuring that AI follows the objectives desired by its human operators.
  4. The oligopolistic market structure channel emanates from the business models of companies that design and run AI engines. These companies enjoy increasing returns to scale, which can prevent market entry and increase homogeneity and risk monoculture.

How AI can destabilise the system

AI needs data to be effective, even more so than humans. That should not be an issue because the system generates plenty of data for it to work with, terabytes daily. The problem is that almost all that data comes from the middle of the distribution of system outcomes rather than from the tails. Crises are all about the tails.
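A small simulation makes the point. This is our own sketch rather than anything in the underlying analysis, and the heavy-tailed Student-t ‘truth’ is an assumption chosen purely for illustration.

```python
# A minimal sketch of why data from the middle of the distribution
# says little about the tails: fit a normal model to returns that are
# actually heavy-tailed, then compare tail-risk estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
returns = rng.standard_t(df=3, size=100_000)  # heavy-tailed 'true' returns

# The fitted normal matches the bulk of the data well...
mu, sigma = returns.mean(), returns.std()

threshold = mu - 5 * sigma  # a crisis-sized loss, deep in the tail
p_model = stats.norm.cdf(threshold, loc=mu, scale=sigma)  # what the model expects
p_true = (returns < threshold).mean()                     # what actually happens

print(f"model tail probability:     {p_model:.1e}")  # about 3e-07
print(f"empirical tail probability: {p_true:.1e}")   # thousands of times larger
```

A model that fits the centre of the distribution almost perfectly can still be off by orders of magnitude exactly where crises live.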

There are four reasons why we have little data from the tails.

The first is the endogenous response to control by market participants; this relates to the AI misinformation channel. A helpful way to understand it is Lucas’s (1976) critique and Goodhart’s (1974) law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”. Market participants do not stoically accept regulations; they respond strategically. They do not tell anybody beforehand how they plan to respond to regulations and stress. They probably do not even know. Consequently, the reaction functions of market participants are hidden. And something that is hidden is not in a dataset.

The second reason, which follows from the malicious use channel, is the strategic complementarities at the heart of how market participants behave during crises: they feel compelled to withdraw liquidity because their competitors are doing so. Meanwhile, strategic complementarities can lead to multiple equilibria, where wildly different market outcomes can result from random chance. Both these consequences of strategic complementarities mean that observations of past crises are not all that informative about future ones. This is another reason we do not have many observations from the tails.
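A stylised run game shows how strategic complementarities generate multiple equilibria. This is our illustration, not a model from the column; the panic thresholds and the two starting beliefs are arbitrary.

```python
# A minimal sketch of a stylised run game: each institution withdraws
# liquidity once it believes enough others are withdrawing. The same
# fundamentals support two very different equilibria.
import numpy as np

# Illustrative panic thresholds: institution i withdraws if it expects
# more than thresholds[i] of the others to withdraw.
thresholds = np.linspace(0.2, 0.6, 100)

def best_response_dynamics(belief, steps=50):
    """Iterate best responses from an initial belief about the
    fraction of institutions that will withdraw."""
    for _ in range(steps):
        belief = (thresholds < belief).mean()  # withdraw if threshold breached
    return belief

print(best_response_dynamics(0.10))  # -> 0.0: calm equilibrium, nobody runs
print(best_response_dynamics(0.45))  # -> 1.0: run equilibrium, everybody runs
```

Which equilibrium obtains depends only on beliefs, not on fundamentals, which is one reason data on past crises pins down so little about future ones.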

At the root of the problem are two characteristics of AI: it excels at extracting complex patterns from data, and it quickly learns from the environment in which it operates. Current AI engines observe what their competitors do, and it would not be difficult for them to use those observations to improve their own models of how the world works. What this means in practice is that future AI engines in private firms and public organisations will train, and hence optimise, to influence one another.

Aligning the incentives of AI with those of its owner is a hard problem – the misalignment channel. It can get worse during crises, when speed is of the essence and there might be no time for the AI to elicit human feedback to fine-tune its objectives. The traditional ways the system acts to prevent run equilibria might no longer work. The ever-present misalignment between individually rational behaviour and socially desirable outcomes might be exacerbated if human regulators can no longer coordinate rescue efforts and ‘twist arms’. The AI might already have liquidated its positions, and hence caused a crisis, before its human owner can pick up the phone to take the Fed chair’s call.

AI will probably exacerbate the oligopolistic market structure channel for financial instability. As financial institutions come to see and react to the world in increasingly similar ways, they coordinate in buying and selling, leading to bubbles and crashes. More generally, risk monoculture is an important driver of booms and busts in the financial system. Machine learning design, input data and compute affect the ability of AI engines to manage risk, and these are increasingly controlled by a few technology and information companies, which continue to merge, leading to an oligopolistic market.

The main concern from this market concentration is the likelihood that many financial institutions, including those in the public sector, get their view of the world from the same vendor. That implies that they will see opportunities and risk similarly, including how those are affected by current or hypothetical stress. In crises, this homogenising effect of AI use can reduce strategic uncertainty and facilitate coordination on run equilibria.
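The homogenising mechanism can be sketched in a few lines. This is our illustration; the noise scales, the risk limit and the 90% cut-off are arbitrary assumptions.

```python
# A minimal sketch of risk monoculture: institutions sell when their risk
# estimate breaches a limit. With one shared vendor model they all breach
# together; with heterogeneous models their mistakes average out.
import numpy as np

rng = np.random.default_rng(1)
n_institutions, n_days = 50, 250
true_risk = np.abs(rng.normal(1.0, 0.5, size=n_days))  # daily market risk
limit = 2.0

# Heterogeneous models: each institution mis-measures risk in its own way.
own_estimates = true_risk + rng.normal(0, 0.4, size=(n_institutions, n_days))
# Monoculture: everyone buys the same vendor estimate (one shared error).
vendor_estimate = true_risk + rng.normal(0, 0.4, size=n_days)

sell_hetero = own_estimates > limit
sell_mono = np.tile(vendor_estimate > limit, (n_institutions, 1))

# Days on which more than 90% of institutions sell simultaneously:
print("heterogeneous:", (sell_hetero.mean(axis=0) > 0.9).sum())  # usually 0
print("monoculture:  ", (sell_mono.mean(axis=0) > 0.9).sum())    # typically 10+
```

With a shared model, selling is all-or-nothing: either nobody’s estimate breaches the limit or everybody’s does at once.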

Given the recent wave of data vendor mergers, it is a concern that neither the competition authorities nor the financial authorities appear to have fully appreciated the potential for increased systemic risk that could arise from oligopolistic AI technology.

Summary

If faced with existential threats to the institution, AI optimises for survival. But it is here that the very speed and efficiency of AI work against the system. If other financial institutions do the same, they coordinate on a crisis equilibrium: all the institutions affect one another because they collectively make the same decision. They all try to react as quickly as possible, as the first to dispose of risky assets is best placed to weather the storm.

The consequence is increased uncertainty, leading to extreme market volatility, as well as vicious feedback loops, such as fire sales, liquidity withdrawals and bank runs. Thanks to AI, stress that might have taken days or weeks to unfold can now happen in minutes or hours.
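A toy fire-sale loop shows how quickly such feedback can compound once selling itself moves the price. All parameters below – the stop-loss levels, the initial shock and the price impact per seller – are invented for illustration.

```python
# A minimal sketch of a fire-sale feedback loop: a shock breaches some
# stop-loss levels, the triggered selling moves the price down, and the
# lower price breaches further stops.
import numpy as np

stops = np.linspace(0.70, 0.97, 60)  # institutions' stop-loss price levels
price = 1.0
impact = 0.004                        # price impact per forced seller
sold = np.zeros_like(stops, dtype=bool)

price -= 0.05                         # a modest initial shock
for _ in range(30):
    triggered = ~sold & (price < stops)  # whose stop is now breached?
    if not triggered.any():
        break
    sold |= triggered
    price -= impact * triggered.sum()    # their selling deepens the fall

print(f"{sold.sum()} of {len(stops)} institutions forced to sell; "
      f"price down {1 - price:.0%} after an initial 5% shock")
```

In this run, a 5% shock cascades into a fall of roughly 19%; the faster the participants react, the faster the loop completes.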

The AI engines might also do the opposite. After all, just because AI can react faster does not mean it will. Empirical evidence suggests that, although asset prices might fall below fundamental values in a crisis, they often recover quickly. That means buying opportunities. If the AI engines are not too concerned about survival and converge in aggregate on a recovery equilibrium, they will absorb the shock and no crisis will ensue.

Taken together, we surmise that AI will act to lower volatility and fatten the tails. It could smooth out short-term fluctuations at the expense of more extreme events.
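The statistical signature of that trade-off is easy to demonstrate. In this sketch the volatilities and the one-in-a-thousand crisis frequency are assumptions chosen for illustration, not estimates.

```python
# A minimal sketch of 'lower volatility, fatter tails': calmer ordinary
# days plus rare violent days give a LOWER standard deviation but a far
# HIGHER kurtosis than a constant-volatility benchmark.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 1_000_000

benchmark = rng.normal(0, 1.0, size=n)            # constant volatility

crisis = rng.random(n) < 1 / 1000                  # 'one day out of a thousand'
smoothed = np.where(crisis,
                    rng.normal(0, 12.0, size=n),   # rare, violent crisis days
                    rng.normal(0, 0.8, size=n))    # smoother ordinary days

for name, r in [("benchmark", benchmark), ("smoothed ", smoothed)]:
    print(f"{name}: std={r.std():.2f}, excess kurtosis={stats.kurtosis(r):.1f}")
# the smoothed series has std ~0.89 (lower volatility) but excess
# kurtosis near 100 (far fatter tails), versus ~0 for the benchmark
```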

Of particular importance is how prepared the financial authorities are for an AI crisis. We discuss this in a VoxEU piece appearing next week, titled “How the financial authorities can respond to AI threats to financial stability”.

Source: VoxEU
