Artificial intelligence is rapidly reshaping the financial sector. This column introduces the seventh report in The Future of Banking series, which examines the fundamental transformations induced by AI and the policy challenges it raises. It focuses on three main themes: the use of AI in financial intermediation, central banking and policy, and regulatory challenges; the implications of data abundance and algorithmic trading for financial markets; and the effects of AI on corporate finance, contracting, and governance. While AI has the potential to improve efficiency, inclusion, and resilience across these domains, it also poses new vulnerabilities — ranging from inequities in access to systemic risks — that call for adaptive regulatory responses.
Artificial intelligence (AI), particularly generative AI (GenAI), is reshaping financial intermediation, asset management, payments, and insurance. Since the 2010s, machine learning (ML) has had a profound impact on various fields, including credit risk analysis, algorithmic trading, and anti-money laundering (AML) compliance. Nowadays, financial institutions are increasingly leveraging AI to streamline back-office operations, enhance customer support through chatbots, and improve risk management through predictive analytics. The fusion of finance and technology has become a transformative force, bringing both challenges and opportunities to financial practices and research (Carletti et al. 2020, Duffie et al. 2022).
In the financial sector, AI offers avenues for enhanced data analysis, risk management, and capital allocation. The big data revolution can result in significant welfare gains for consumers of financial services (households, firms, and governments). However, these gains might not be fully realised because of market failures stemming from frictions in financial markets – asymmetric information, market power, and externalities – that AI may exacerbate or modify. As AI systems become more widespread, they introduce new challenges for regulators tasked with balancing the benefits of innovation against the need to maintain financial stability and market integrity, protect consumers, and ensure fair competition.
The risks associated with the use of AI/GenAI are extensive: privacy concerns and fairness issues (e.g. undesirable discrimination and algorithmic bias arising from imperfect training data), security threats (e.g. facilitating cyberattacks or malicious output), intellectual property violations (e.g. infringing on legally protected materials), lack of explainability (e.g. uncertainty over how an answer is produced), reliability issues (e.g. stochastic outputs leading to hallucinations), and environmental impacts (e.g. CO2 emissions and water consumption).
Furthermore, AI introduces new sources of systemic risk. The opacity and lack of explainability of AI models make it difficult to anticipate or understand systemic risks until they materialise, underscoring the need for diversity in model design and robust stress-testing protocols. Additionally, the use of AI models may increase correlations in predictions and strategies, heightening the risk of flash crashes, amplified by the speed, complexity, and opacity of AI-driven trading. Finally, increasing returns to scale in AI services may concentrate the provision of some services to financial intermediaries (e.g. cloud computing) in a few providers, adding a further source of systemic risk.
The seventh report in The Future of Banking series, part of the Banking Initiative from the IESE Business School, discusses what is old and what is new in the use of AI in finance, examines the transformations and challenges that AI poses for the financial sector (in intermediation, markets, regulation and central banking), analyses the implications of data abundance and the new trading techniques in financial markets, and studies the implications of AI for corporate finance. The report strives to highlight policy implications.
Artificial intelligence and the financial sector: Transformations, challenges, and regulatory responses
The application of AI in financial intermediation has led to significant improvements in screening, monitoring, and credit allocation. ML models outperform traditional credit scoring, especially in volatile or rapidly changing environments. They excel in utilising large, unstructured datasets – transaction records, digital footprints, behavioural cues – thereby enabling a better assessment of borrower risk. Empirical evidence from fintech platforms in China and the US demonstrates that AI-enhanced models may not only accelerate loan approval but also expand access to credit, particularly among thin-file borrowers. Moreover, by reducing reliance on collateral, AI can help channel capital to high-productivity startups that might otherwise be constrained.
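To see the screening channel concretely, the sketch below compares a score-only logit with a gradient-boosted model that also sees alternative data. Everything is synthetic: the "transaction volatility" and "device age" features are hypothetical stand-ins for the digital-footprint signals described above, not features from any actual lender's model.

```python
# Illustrative only: synthetic data, hypothetical features. The point is the
# mechanism (alternative data lifts default prediction), not the numbers.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
bureau_score = rng.normal(650, 60, n)        # traditional credit score
txn_volatility = rng.gamma(2.0, 1.0, n)      # alternative: cash-flow noisiness
device_age_months = rng.exponential(24, n)   # alternative: digital footprint

# Synthetic "truth": default risk loads on all three signals.
logit = -4 + 0.012 * (650 - bureau_score) + 0.5 * txn_volatility - 0.01 * device_age_months
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X_trad = bureau_score.reshape(-1, 1)
X_alt = np.column_stack([bureau_score, txn_volatility, device_age_months])
Xtr_t, Xte_t, Xtr_a, Xte_a, ytr, yte = train_test_split(
    X_trad, X_alt, default, test_size=0.3, random_state=0)

auc_trad = roc_auc_score(
    yte, LogisticRegression(max_iter=1000).fit(Xtr_t, ytr).predict_proba(Xte_t)[:, 1])
auc_ml = roc_auc_score(
    yte, GradientBoostingClassifier().fit(Xtr_a, ytr).predict_proba(Xte_a)[:, 1])
print(f"AUC, score-only logit: {auc_trad:.3f} | AUC, ML with alternative data: {auc_ml:.3f}")
```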
However, these efficiency gains are neither uniformly distributed nor guaranteed to enhance welfare. Fintech lenders often charge higher interest rates than traditional banks despite superior screening capabilities. This premium may reflect higher risk, technology costs, or weak competition in certain borrower segments. In some cases, it could arise from the strategic use of AI to price discriminate based on inferred willingness to pay, thereby shifting informational rents from consumers to lenders. The implication is that while AI improves allocative efficiency, it does not necessarily reduce financial intermediation costs for end users.
The increasing prevalence of AI-based lending models may also weaken the traditional channels of monetary transmission. The decoupling of lending from collateral values and the diminished role of relationship lending may reduce the sensitivity of credit flows to interest rate changes. This has implications for both macroeconomic policy effectiveness and systemic risk. Furthermore, the opacity and non-linearity of many AI models complicate supervisory oversight, particularly when their underlying logic cannot be readily interpreted or audited.
Central banks are deploying AI in core functions. ML tools are used to track economic activity, detect anomalies in payment systems, and process vast volumes of supervisory text. These tools offer gains in speed and scope, allowing supervisors to identify early warning signs and enhance macroprudential monitoring. However, they also introduce a new form of risk: model convergence and interpretive homogeneity. As central banks and market participants increasingly adopt similar AI systems, the risk of shared blind spots and procyclical amplification grows, particularly during periods of market stress.
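A minimal sketch of the payment-anomaly use case, assuming synthetic settlement data and an off-the-shelf isolation forest; a central bank's production system would differ in data, features, and calibration.

```python
# Minimal sketch: unsupervised anomaly detection on synthetic payment-system
# data. Feature choices and the contamination rate are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# One row per bank-hour: payment value, payment count, net position (synthetic).
normal = rng.normal(loc=[100.0, 50.0, 0.0], scale=[10.0, 5.0, 8.0], size=(2_000, 3))
# A few stressed observations: outsized values, fewer payments, large net outflow.
stressed = rng.normal(loc=[180.0, 20.0, -60.0], scale=[10.0, 5.0, 8.0], size=(20, 3))
X = np.vstack([normal, stressed])

detector = IsolationForest(contamination=0.01, random_state=1).fit(X)
flags = detector.predict(X)   # -1 = flagged as anomalous, 1 = normal
print(f"Flagged {int((flags == -1).sum())} of {len(X)} bank-hours for supervisory review")
```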
In sum, the reconfiguration of intermediation through AI enhances predictive capacity and operational efficiency but complicates monetary policy, alters competitive dynamics with a potential dominant role of big techs in the value chain, and introduces new sources of model risk. The challenge lies in fostering AI-driven innovation while mitigating risks related to financial instability, monopolistic behaviour, and privacy violations. Addressing these issues may require rethinking supervisory frameworks, possibly including model auditability protocols and broader stress-testing practices.
Data abundance, AI, and financial markets: Implications and risks
A second domain of AI transformation lies in capital markets, where data abundance and algorithmic intermediation have reshaped the mechanisms of price discovery, market making, and asset management. The proliferation of alternative data – ranging from satellite imagery and credit card flows to social media and geolocation – has created new sources of information outside of traditional financial disclosures. AI models, trained on these high-dimensional datasets, extract predictive signals that were previously inaccessible or prohibitively costly to obtain. The marginal cost of producing actionable financial insight has dropped sharply, shifting the locus of informational advantage from access to processing.
This transformation has generated efficiency gains. Bid-ask spreads have narrowed, liquidity provision has become more automated, and forecasting accuracy in earnings, credit events, and volatility has improved. However, these benefits are accompanied by new risks. First, algorithmic trading strategies could converge toward similar patterns when trained on overlapping data, increasing the risk of synchronised behaviour and flash crashes. Furthermore, reinforcement learning agents, which optimise through trial and error, may develop strategies that are unstable or exploitative in equilibrium.
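The convergence mechanism can be shown in a toy simulation: two "firms" independently fit models to overlapping alternative data and end up holding nearly identical positions. The regression below stands in for a full ML trading model; all data is synthetic.

```python
# Toy illustration: training on overlapping data yields correlated strategies.
import numpy as np

rng = np.random.default_rng(2)
T = 1_000
common = rng.normal(size=T)                       # signal both firms can buy
returns = 0.5 * common + rng.normal(scale=0.5, size=T)

positions = []
for seed in (10, 11):                             # two "independent" firms
    noise = np.random.default_rng(seed).normal(scale=0.3, size=T)
    features = common + noise                     # overlapping alternative data
    beta = np.cov(features, returns)[0, 1] / features.var()   # OLS slope
    positions.append(np.sign(beta * features))    # go long/short each period

corr = np.corrcoef(positions[0], positions[1])[0, 1]
print(f"Correlation of the two firms' positions: {corr:.2f}")   # far above zero
```

If both firms also unwind on the same signal, their exits synchronise, which is the flash-crash mechanism in miniature.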
Second, AI can intensify informational asymmetries among market participants. While disclosures are nominally public, only those with sufficient computational resources and model sophistication can process them effectively. Empirical studies show that analysts at AI-equipped institutions significantly outperform their peers when alternative data becomes available. As a result, AI may reinforce market power and widen participation gaps.
Third, AI enables new forms of tacit collusion and strategic opacity. Pricing algorithms can learn to coordinate without explicit communication, reducing competitive pressure and increasing margins. The line between legitimate dynamic pricing and algorithmic collusion becomes blurred, especially in markets where a few dominant platforms set terms for thousands of users. Furthermore, because many AI models are non-interpretable, their behaviour can evade both market scrutiny and regulatory detection until after harm has occurred.
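A stylised version of the algorithmic-pricing experiments in the economics literature (e.g. Calvano et al. 2020) fits in a few lines: two Q-learning agents repeatedly set prices in a Bertrand game, each conditioning on last period's prices. All parameters are illustrative, and whether play settles above the competitive benchmark is sensitive to them and to the random seed.

```python
# Toy Bertrand game with two Q-learning price setters (one-period memory).
# Cost = 1, so the static competitive benchmark is a price near 1.0.
import numpy as np

rng = np.random.default_rng(3)
prices = np.array([1.0, 1.5, 2.0, 2.5, 3.0])   # discrete price grid
K = len(prices)

def profits(i, j):
    """Unit-demand Bertrand: the low-price firm sells; ties split the market."""
    pi, pj = prices[i], prices[j]
    if pi < pj:
        return pi - 1.0, 0.0
    if pi > pj:
        return 0.0, pj - 1.0
    return (pi - 1.0) / 2, (pj - 1.0) / 2

Q = np.zeros((2, K, K, K))       # Q[agent, own_last, rival_last, action]
state = (0, 0)
alpha, gamma = 0.1, 0.95
for t in range(200_000):
    eps = np.exp(-1e-4 * t)      # slowly decaying exploration
    acts = []
    for a in range(2):
        own, rival = state[a], state[1 - a]
        greedy = int(np.argmax(Q[a, own, rival]))
        acts.append(int(rng.integers(K)) if rng.random() < eps else greedy)
    rewards = profits(acts[0], acts[1])
    for a in range(2):
        own, rival = state[a], state[1 - a]
        nxt = Q[a, acts[a], acts[1 - a]].max()   # value of the next state
        Q[a, own, rival, acts[a]] += alpha * (rewards[a] + gamma * nxt
                                              - Q[a, own, rival, acts[a]])
    state = (acts[0], acts[1])

print(f"Long-run prices: {prices[state[0]]}, {prices[state[1]]} (competitive benchmark ~1.0)")
```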
Finally, the arms race for speed and signal extraction has diverted capital and talent into zero-sum competition. The social return to shaving microseconds off execution times or exploiting ephemeral data anomalies is limited, yet firms invest heavily in such capabilities because private returns are high. This misalignment between private incentives and social value raises questions about the allocative efficiency of AI in financial markets.
Possible regulatory responses may include introducing latency-aware circuit breakers, mandating public access to baseline pricing data, and requiring disclosures of model architectures in certain trading contexts. Their design and effectiveness will hinge on careful experimentation, cross-jurisdictional learning, and ongoing dialogue between market participants and regulators.
Taken together, these developments point to a financial system in which information is more abundant but also potentially more unevenly distributed; in which trading is faster but also more fragile; and in which transparency is technically feasible but practically elusive. A policy response must go beyond disclosure and address issues such as infrastructure access, model auditability, and incentive alignment.
Corporate finance and governance with AI: Old and new
A third domain of AI transformation relates to corporate finance, contracting, and governance. AI alters foundational elements of corporate control, reshaping agency dynamics, information asymmetries, and the nature of financial contracting. While AI systems are not self-interested in the human sense, they introduce a distinct form of agency problem: optimisation misalignment. Autonomous agents trained via reinforcement learning may satisfy narrow objectives in ways that undermine broader regulatory or ethical goals. An AI tasked with minimising loan defaults, for instance, might engage in discriminatory behaviour or exploit data proxies that regulators deem unacceptable. Because these systems are adaptive and opaque, detecting and correcting such behaviours after deployment is costly and uncertain.
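The proxy problem can be reproduced on synthetic data: withhold the protected attribute, include a correlated "neighbourhood" variable, and a model trained purely to minimise defaults recreates the group disparity. Every variable below is hypothetical.

```python
# Proxy discrimination in miniature: the protected attribute is never an
# input, yet approval rates diverge across groups. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 10_000
group = rng.integers(0, 2, n)                          # protected attribute (withheld)
neighbourhood = group + rng.normal(scale=0.5, size=n)  # proxy correlated with group
income = rng.normal(50, 10, n)

# Synthetic truth: default falls with income and (unfairly) correlates with group.
z = 0.1 * (income - 50) - 1.0 * group + 1.0
default = rng.random(n) < 1 / (1 + np.exp(z))

X = np.column_stack([income, neighbourhood])           # group itself is NOT a feature
model = LogisticRegression(max_iter=1000).fit(X, default)
approve = model.predict_proba(X)[:, 1] < 0.35          # approve if predicted risk is low

for g in (0, 1):
    print(f"Approval rate, group {g}: {approve[group == g].mean():.1%}")
```

Because the disparity emerges without the attribute ever entering the model, it cannot be detected from the feature list alone; it has to be found in outcomes.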
This raises deep accountability questions. Traditional corporate governance rests on the attribution of intent and the assignment of responsibility. But when decisions are made by systems that learn and evolve independently of direct instruction, legal and institutional mechanisms for enforcement begin to fray. The difficulty of auditing complex ML models compounds this challenge. Without robust interpretability requirements or embedded traceability mechanisms, financial institutions risk deploying systems whose behaviour they cannot fully explain, let alone predict or justify.
Information asymmetry is also being reconfigured. In the past, insiders held informational advantages derived from privileged access to internal records and forecasts. Today, AI enables outsiders to infer enterprise conditions from external data streams, undermining that asymmetry. Sophisticated investors use alternative data and natural language processing tools to analyse supply chains, sentiment, and behavioural signals, and may anticipate corporate disclosures. In response, firms have begun tailoring their communications for algorithmic consumption, further shifting the information environment. Rules such as the US Regulation Fair Disclosure (Reg FD) may therefore need to evolve to ensure not just equal access to public information, but also equal ability to use it.
On the contracting front, AI is accelerating the adoption of smart contracts – automated agreements that self-execute based on real-time data inputs. These contracts reduce enforcement costs and limit the scope for opportunistic renegotiation. However, they also introduce rigidity. Automated margin calls or trigger events can cascade through markets, especially when multiple actors rely on similar models and thresholds. The absence of discretion or context can make smart contracts a source of systemic risk in times of stress.
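The cascade dynamic is easy to see in a toy simulation: fifty positions share the same automated margin trigger, and a modest price shock sets off rounds of forced liquidation, each round depressing the price and tripping the next. The threshold and price-impact parameter below are illustrative.

```python
# Toy fire-sale cascade from identical automated margin triggers.
import numpy as np

units = np.full(50, 10.0)                 # collateral units per position
debt = np.linspace(700.0, 830.0, 50)      # heterogeneous leverage
alive = np.ones(50, dtype=bool)
THRESHOLD = 1.2                           # the same margin trigger for everyone
IMPACT = 0.002                            # price drop per forced liquidation

price = 100.0 * 0.95                      # initial shock: -5%
rounds = 0
while True:
    margin = units * price / debt
    hit = alive & (margin < THRESHOLD)
    if not hit.any():
        break
    alive[hit] = False                    # automated: no discretion, no delay
    price *= (1.0 - IMPACT) ** hit.sum()  # fire-sale price impact of forced sales
    rounds += 1

print(f"Liquidation rounds: {rounds}; survivors: {int(alive.sum())}/50; "
      f"final price: {price:.1f}")
```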
The solution may lie in hybrid governance models. Contracts might embed flexibility ex ante, through macro-sensitive renegotiation clauses, human override options, and clear audit trails. AI systems could be subjected to accountability principles analogous to those applied to human agents: comprehensibility, traceability, and bounded autonomy. Legal frameworks might shift gradually from subjective intent to outcome-based liability, and from fixed contractual forms to adaptive governance protocols.
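As a sketch of what such a clause might look like in code (names and thresholds are hypothetical), the contract below self-executes in normal conditions but routes to human review once a predefined macro stress trigger fires, leaving an auditable decision path.

```python
# Hypothetical hybrid clause: automated execution with a macro-sensitive
# human-override trigger. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class HybridLoanClause:
    ltv_limit: float = 0.9          # automated trigger: loan-to-value cap
    stress_limit: float = 2.0       # renegotiation trigger: macro stress index

    def on_revaluation(self, loan: float, collateral: float, stress: float) -> str:
        ltv = loan / collateral
        if ltv <= self.ltv_limit:
            return "no action"
        if stress >= self.stress_limit:
            # Macro-sensitive clause: suspend automation, escalate to a human.
            return "escalated to human review (renegotiation trigger active)"
        return "automated margin call"

clause = HybridLoanClause()
print(clause.on_revaluation(loan=95, collateral=100, stress=0.5))  # automated margin call
print(clause.on_revaluation(loan=95, collateral=100, stress=2.5))  # escalated to human review
```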
Conclusion
The integration of AI into financial systems constitutes a structural transformation, not a marginal adjustment. The benefits – in terms of efficiency, precision, and inclusion – are substantial, but so too are the risks to stability, equity, and governance. If policymakers rise to the challenge, AI can be harnessed to improve the financial system’s performance and inclusiveness. If they do not, the same technologies may undermine the very foundations on which financial trust depends. Three examples illustrate the challenges for policy.
AI systems acting as agents
AI systems acting as autonomous agents present challenges such as misalignment of objectives, opacity, and capacity for emergent misbehaviour. These challenges call for targeted regulatory responses, including outcome-based liability (holding firms accountable for AI-driven harm, regardless of intent), mandatory interpretability of decisions (ensuring consumers can contest unfair outcomes), system stress testing, and governance standardisation (hybrid human-machine governance). Allowing opaque AI models to dictate financial access without accountability risks eroding public trust while also exacerbating economic exclusion.
New information asymmetry in public information use
AI has amplified disparities in data processing and access to alternative data, necessitating updates to traditional regulatory frameworks (e.g., the US Reg FD). Policymakers should redefine equal access by standardising corporate disclosures, promoting fair use of alternative data, and addressing algorithmic behaviour to prevent distortions (e.g. algorithmic collusion). For example, policymakers should strive to limit trading on information that clearly has no social value.
Commitment and flexibility with AI contracting
Balancing the value of commitment with the need for flexibility in AI-driven contracts can be achieved through predefined renegotiation triggers (e.g., macroeconomic indicators), hybrid AI-human contracting (e.g., on mortgages), and incentive-compatible mechanisms that deter strategic behaviour while allowing the necessary flexibility.
Source: VoxEU