
Big techs’ AI empire

Advances in AI are transforming the economy. Behind this wave of visible innovation lies a less visible but significant trend: the role of large technology firms – commonly referred to as ‘big techs’ – across the AI supply chain. This column analyses the AI supply chain and the market structure of every input layer, highlighting the economic forces shaping the provision of AI and the role of big techs in each input market. The authors illuminate challenges that big techs pose to consumer choice, innovation, operational resilience, cyber security, and financial stability.

Advances in artificial intelligence (AI) are poised to transform the economy and society. From chatbots and image generators to financial forecasting tools, AI applications are becoming ubiquitous, promising to revolutionise the way we live and work. In particular, generative AI (GenAI) is being adopted at a much faster pace than other transformative technologies (Bick et al. 2024). Recent evidence already points to AI’s wide-ranging impact on labour markets and productivity, local economies, women’s employment, capital markets, public finances, and the broader financial sector (Gambacorta et al. 2024, Aldasoro et al. 2024, Albanesi et al. 2025, Andreadis et al. 2025, Frey and Llanos-Parades 2025, Kelly et al. 2025).

Behind this wave of innovation lies a less visible but significant trend: the growing role of large technology firms – commonly referred to as ‘big techs’ – across the AI supply chain. Big techs have been consistently investing in AI: in 2023, they accounted for 33% of the total capital raised by AI firms and nearly 67% of the capital raised by generative AI firms (Financial Times 2023). While big techs have undoubtedly accelerated the development of AI, their expanding influence over how AI is provided raises critical questions about competition, innovation, operational resilience, and financial stability.

In a recent paper (Gambacorta and Shreeti 2025), we explain the AI supply chain and the market structure of each of its input layers. We highlight the economic forces shaping the provision of AI today, and the role of big tech in each input market. We also outline the potential impact of the current market structure on economic outcomes and highlight challenges for regulation.

Big techs in the AI supply chain

The AI supply chain comprises five key layers: hardware, cloud computing, training data, foundation models, and user-facing AI applications (see Figure 1). Each of these layers is essential to powering the AI systems we use today, and big techs are active in all of them.

Consider cloud computing, the backbone of AI development. AI models require immense computational resources for training and deployment, and cloud platforms provide the infrastructure to make this possible. Globally, the cloud market is dominated by three big techs: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.

Figure 1 The AI supply chain

Source: Gambacorta and Shreeti (2025).

Together, the three big tech firms control nearly 75% of the infrastructure-as-a-service market, the segment most relevant for AI. Their dominance is rooted in both the economic forces shaping the cloud computing market and their strategic actions. High fixed costs, economies of scale, and network effects make it difficult for smaller players to compete in this market. Moreover, cloud providers charge egress fees to transfer data out of their platforms. The egress fees charged by big techs exceed both the incremental cost of transferring data and the fees charged by smaller competitors (Biglaiser et al. 2024). Big techs also provide vertically integrated service ecosystems on their platforms, often at a discount.
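To give a rough sense of what a combined share of this size means for standard concentration measures, the sketch below computes a Herfindahl-Hirschman Index (HHI) for a stylised infrastructure-as-a-service market. The individual provider shares and the competitive fringe are illustrative assumptions, chosen only so that the three largest providers jointly hold about 75%; they are not figures from Gambacorta and Shreeti (2025). For reference, US merger guidelines have generally treated an HHI above roughly 1,800 to 2,500 as indicating a highly concentrated market.

# Illustrative Python sketch: concentration in a stylised IaaS market.
# Individual shares are assumptions, not data from the column or the paper.
shares_pct = {
    "Provider A": 34,  # assumed share, in percentage points
    "Provider B": 23,  # assumed share
    "Provider C": 18,  # assumed share
}
fringe = [1] * 25      # assumed competitive fringe: 25 small firms with 1% each

def hhi(shares):
    # HHI = sum of squared market shares, expressed in percentage points
    return sum(s ** 2 for s in shares)

index = hhi(list(shares_pct.values()) + fringe)
print(f"Illustrative HHI: {index}")  # 1156 + 529 + 324 + 25 = 2034

Under these assumptions the index lands above 2,000 even with a fragmented fringe of small providers, which is why concentration of this order attracts antitrust attention.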

But big techs’ influence extends far beyond cloud computing. Training data is the lifeblood of AI, and big tech firms have access to some of the richest pools of user-generated data in the world. Meta has Instagram, Facebook, and WhatsApp; Google has Gmail, Maps, Play Store, and Google Search; Microsoft has Bing, LinkedIn, and Microsoft 365 (Hagiu and Wright 2025). To use these data for AI training, big techs have been quietly updating their terms of use and privacy policies. In addition to exploiting their existing data reserves, big tech companies are actively acquiring or partnering with data-rich firms. As the stock of high-quality public data dwindles, such proprietary data sources will become even more valuable. The increasing returns to every additional unit of data can further entrench their influence over the AI supply chain.

The foundation model layer of the AI supply chain – home to large pre-trained models like OpenAI’s GPT-4 or Google’s Gemini – is another area where big techs are increasingly active. Foundation models are expensive to develop, with training costs often exceeding $100 million (The Economist 2023). These high fixed costs can create significant barriers to entry, favouring firms with deep pockets and access to computational resources. Unsurprisingly, big tech firms are not only developing their own foundation models but are also integrating them into their consumer-facing products. Microsoft offers its AI-powered Copilot across its suite of applications, while Google embeds its Gemini model into search results. At the same time, they are producing AI hardware (chips) and even securing their own supply of nuclear fuel to power data centres (CNBC 2024). Such vertical integration allows big techs to capture value at multiple points in the supply chain.

Figure 2 Big techs in the AI supply chain

Source: Gambacorta and Shreeti (2025).

This vertical integration can create a self-reinforcing ‘cloud-model-data loop’ (see Figure 2). By controlling cloud computing resources, big tech firms can produce better AI models. These models, in turn, generate more data, which can be fed back into their systems to improve subsequent iterations. The loop is further strengthened if there are substantial network effects associated with AI applications provided by big techs. As more users adopt a particular AI model or platform, its value increases, attracting even more users. The strength of this loop will depend on the quality of big techs’ proprietary data, the extent of network effects arising from AI use, and the returns to scale of each additional unit of data in the AI training process.
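To make the mechanism concrete, the toy simulation below iterates a stylised version of the loop: model quality rises with accumulated data and compute, user adoption rises with quality (network effects), and usage feeds new data back into training. The functional forms and parameter values are illustrative assumptions rather than estimates from the paper; the point is only that a firm starting with more compute and proprietary data pulls further ahead with every round.

# Toy Python simulation of the cloud-model-data loop (illustrative assumptions only).
def simulate(data0, compute, rounds=5, returns_to_data=0.5, network_effect=1.2):
    # data -> model quality -> users -> more data, repeated each round
    data, quality_path = data0, []
    for _ in range(rounds):
        quality = compute * data ** returns_to_data   # better inputs yield a better model
        users = quality ** network_effect             # adoption increases with model quality
        data += users                                 # usage generates fresh training data
        quality_path.append(round(quality, 1))
    return quality_path

# A firm with large compute and data endowments versus a smaller entrant (hypothetical values).
print("incumbent:", simulate(data0=100, compute=10))
print("entrant:  ", simulate(data0=10, compute=2))

In this stylised run the quality gap between the two firms widens each round, mirroring the self-reinforcing dynamic described above; weaker network effects or sharper diminishing returns to data would dampen the loop.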

Implications for economic and social outcomes

The control exerted by a few firms over how AI is provided can have profound consequences for economic and social outcomes. In general, limited competition can create room for dominant firms to raise prices, limit consumer choice, suppress wages, and stifle innovation (Impullitti and Rendahl 2025). Perhaps even more crucially, control by a few big techs over AI provision gives these firms disproportionate power over the direction of innovation, raising the risk of privately profitable advancements diverging from socially desirable ones (Acemoglu 2021).

In more practical terms, a concentrated AI supply chain also creates operational vulnerabilities. Relying on a small number of providers for critical AI components introduces single points of failure. As a result, disruptions – whether due to operational errors, malevolence, regulatory changes, or geopolitical conflicts – could have cascading effects across industries. Moreover, centralising critical infrastructure and data in the hands of a few players also makes them attractive targets for cyberattacks, with potentially widespread consequences. Concentration in the AI supply chain increases systemic risk and threatens financial stability (Aldasoro et al. 2024, Danielsson 2025).

Why is regulation hard?

Addressing concentration in AI provision is far from straightforward. The AI supply chain spans multiple markets – even domestically – each governed by different regulatory authorities that may have competing goals. International cooperation on regulation or standards is even more elusive given the differences in legal frameworks, geopolitical interests, and regulatory appetites. Moreover, technological progress in AI often evolves faster than regulatory capacity, making it difficult to design effective policy remedies where needed.

Despite these obstacles, several policy responses are on the table. Encouraging data-sharing agreements between firms and creating public datasets for AI training could help level the playing field. Ensuring fair access to cloud infrastructure and foundation models for all firms could reduce barriers to entry. Promoting interoperability and reducing switching costs in the cloud market is another avenue worth exploring. In any scenario, regulatory authorities need to monitor the provision of AI carefully, gathering evidence on market conduct, operational vulnerabilities, and concentration.

Conclusion

As with other general-purpose technologies, AI has the potential to drive economic growth and improve economic and social outcomes. But to realise this potential, the AI ecosystem must remain competitive and fair. Excessive influence by a handful of technology companies over how AI is provided will risk stifling socially beneficial innovation – exacerbating inequalities, harming consumer welfare, and creating systemic vulnerabilities.

Source: VOXeu
