Forecasts from professionals (economists, analysts, brokers, academics) are a key input into economic decision-making. This column highlights that professional forecasts are ‘lumpy’, often remaining unchanged for several periods before shifting in large, infrequent jumps. It argues that this reflects ‘rational inaction’, as frequent adjustments or constant swings could undermine credibility. It suggests a natural distinction between forecasts reported in surveys and the beliefs held internally by forecasters. Finally, it proposes a simple two-stage procedure for recovering more accurate measures of underlying beliefs.
Forecasts from professionals – economists, analysts, brokers, and academics – are a key input into economic decision-making. Businesses use them to plan investment and pricing strategies, while governments and central banks rely on them to design fiscal and monetary policies.
Since the influential work of Coibion and Gorodnichenko (2012), survey data from professional forecasters have become central to testing theories of expectation formation, informational frictions, and biases. Existing research highlights various frictions and behavioural biases, including inattentiveness (Andrade and Le Bihan 2013), diagnostic expectations (Bordalo et al. 2022), and overconfidence in private information (Broer and Kohlhas 2024, Adam et al. 2024). Strategic incentives — how forecasters’ reputations influence their reported forecasts — have also garnered considerable attention (Ottaviani and Sørensen 2006, Gemmi and Valchev 2023).
Our recent paper (Baley and Turen 2025) highlights a striking yet underexplored aspect of professional forecasts. These forecasts often remain unchanged for several periods, only to shift abruptly in large, infrequent jumps. We argue that these ‘lumpy forecasts’ do not indicate irrationality or bias; rather, they reflect rational inaction. Forecasters value stability in their published predictions and aim to avoid appearing erratic to their clients or the public. Forecasts might seem overly inertial not because forecasters fail to process information – ultimately, they are professionals – but because frequent adjustments or constant swings could undermine credibility. Our work suggests a natural distinction between forecasts (what is reported in surveys) and beliefs (what forecasters hold internally).
This preference for stability leads forecasters to hold off on revising their predictions until enough evidence accumulates to justify an update. In practice, this results in large, occasional revisions (rational ‘catch-ups’) that may appear to be an overreaction to information. Strategic concerns further amplify this tendency: forecasters seek stability and alignment with the consensus (the average forecast), avoiding the reputational risk of being outliers. These two forces – stability and alignment with others – generate and magnify the observed lumpiness and overreaction.
To better understand these frictions, we explore the heterogeneity among forecasting institutions by comparing banks, financial institutions, consulting firms, and universities. Each group faces distinct costs and reputational incentives, allowing us to identify each friction’s role more clearly. Finally, we propose a straightforward two-stage procedure to ‘cleanse’ survey data – isolating informative revisions and correcting for strategic alignment – to provide a more accurate proxy of forecasters’ actual beliefs.
Using detailed US inflation forecast data from Bloomberg’s Economic Forecast (ECFC) panel and a fixed-event forecasting framework, we monitor the progression of forecasts about a fixed target (end-of-year inflation) across forecasting periods. Our study reveals several patterns: forecasts frequently remain unchanged from one period to the next; when revisions do occur, they are large; and the probability of a non-zero revision rises with the gap between a forecaster’s current forecast and the consensus.
Figure 1 Average probability of a non-zero forecast revision
Figure 2 Gap to the consensus triggers revisions
To explain these empirical patterns, we develop a structural Ss model analogous to those used in price setting (with menu costs) or investment (with capital adjustment costs) but specifically tailored to forecast revisions. In our framework, forecasters continuously update their internal beliefs through Bayesian learning from public and private signals. However, two key frictions shape how and when these beliefs are reflected in reported forecasts: a cost of revising the published forecast, which creates an inaction band around current beliefs, and a strategic concern for staying close to the consensus, which pulls any revision towards the average forecast.
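The mechanism can be sketched in a few lines: internal beliefs adjust smoothly through a constant-gain (Kalman-style) update on noisy signals, while the reported forecast changes only when its gap to the belief leaves an inaction band, and each revision is pulled towards the consensus. All parameter values below are illustrative assumptions, not the paper’s calibration, and the consensus is held fixed for simplicity rather than determined in equilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters -- assumed values, not the paper's calibration
T = 60               # forecasting periods
pi_target = 2.0      # true end-of-year inflation (the fixed event)
sigma = 0.4          # std. dev. of the noisy private signal
gain = 0.2           # constant (Kalman-style) gain on new signals
kappa = 0.35         # inaction band: revise only if |belief - forecast| > kappa
lam = 0.3            # weight on the consensus when a revision is made

belief = 3.0         # internal belief: updated every period
forecast = 3.0       # reported forecast: updated only occasionally
consensus = 3.0      # stand-in for a (here fixed) average peer forecast

revisions = 0
for t in range(T):
    signal = pi_target + sigma * rng.standard_normal()
    belief += gain * (signal - belief)                   # beliefs adjust smoothly
    if abs(belief - forecast) > kappa:                   # friction 1: inaction band
        forecast = (1 - lam) * belief + lam * consensus  # friction 2: consensus pull
        revisions += 1

# Beliefs track the target; reported forecasts move in infrequent jumps
print(f"{revisions} revisions in {T} periods; final belief {belief:.2f}")
```

Because the consensus here is stale, the reported forecast settles between the internal belief and the consensus, illustrating the wedge between what forecasters believe and what they report.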
The model, calibrated to align with the frequency and size of revisions in the Bloomberg data, replicates the observed irregularities, consensus-seeking, and apparent overreaction in professional forecasts. Figure 3 illustrates a simulated year with 100 forecasters (in light grey), their average (blue), and their average internal belief (green). While reported forecasts adjust infrequently, internal beliefs respond more smoothly and closely track the actual inflation target (dashed line). This highlights the mismatch between what forecasters believe and what they report – a central insight of the model.
To address the challenges of solving the equilibrium with heterogeneity, strategic concerns, and aggregate shocks, we employ a Restricted Perceptions Equilibrium (RPE). This approach draws on the work of Marcet and Nicolini (2003) and Adam and Marcet (2011) and was more recently advocated by Moll (2024). RPE enables us to capture rich dynamics in a way that would be computationally infeasible under full rational expectations.
Figure 3 Simulated beliefs and forecasts in the model
We examine differences among types of institutions to better understand the sources of lumpiness, strategic concerns, and heterogeneity in forecasting behaviour. We categorise Bloomberg’s forecasters into four groups – financial institutions, banks, consulting companies, and universities – and estimate frictions for each type. Table 1 presents these estimates, normalised relative to financial institutions, which serve as the benchmark group.
Table 1 Estimated frictions by forecaster type (relative to financial institutions)
Financial institutions and consultancies exhibit the lowest revision costs and the strongest strategic concerns, likely reflecting a desire to stay relevant or to avoid alienating clients by deviating too far from their peers. In contrast, universities show the highest revision costs, the weakest strategic concerns, and the noisiest signals, consistent with institutional preferences for stability, independence, and more diverse internal views.
These frictions likely extend to households (D’Acunto et al. 2024) and firms (Thwaites et al. 2022). Strategic concerns are likely less significant for these groups, while revision costs may stem from inattention, cognitive constraints, or personal experience (Malmendier et al. 2022). Recent evidence also highlights substantial heterogeneity in how agents process information (Meeks and Monti 2024). Understanding how frictions differ across economic agents remains an open question.
Our findings carry important implications for policymakers who rely on survey-based forecasts. As discussed, reported forecasts are shaped not only by information but also by strategic and frictional distortions. We propose a simple two-stage procedure for recovering more accurate measures of underlying beliefs: first, discard observations in which the forecast was not revised, since stale forecasts carry no new information; second, strip out the component of revisions driven by strategic alignment with the consensus.
This approach significantly reduces the apparent overreaction in the data and improves the interpretation of expectations used to inform policy. Figure 4 presents the estimated OLS coefficients from regressing forecast errors on forecast revisions, using all forecasts (red line), only updaters (black dashed line), and a sample corrected for lumpy behaviour and strategic bias (pink line); each step brings the coefficient progressively closer to zero. This shows that the observed overreaction is not entirely behavioural but is amplified by rational inaction.
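The mechanics of this two-stage cleansing can be illustrated on simulated lumpy data. The data-generating process below, the inaction-band and consensus-weight values, and the choice to correct strategic alignment by residualising revisions on each forecaster’s gap to the consensus are all simplifying assumptions for this sketch, not the paper’s estimation; the magnitudes of the resulting coefficients depend on the assumed process.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_slope(y, x):
    """Slope from regressing y on x with an intercept."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    xd = x - x.mean()
    return float(xd @ (y - y.mean()) / (xd @ xd))

# Simulate a panel of lumpy forecasters (illustrative DGP, not the paper's model)
N, T = 200, 12
pi = 2.0                       # realised fixed-event inflation
kappa, lam = 0.3, 0.25         # inaction band and consensus weight (assumed values)

forecasts = np.full(N, 3.0)
consensus = forecasts.mean()
rows = []                      # (revision, forecast error, gap to consensus)
for t in range(T):
    beliefs = pi + 0.5 * rng.standard_normal(N)   # noisy private signals
    for i in range(N):
        target = (1 - lam) * beliefs[i] + lam * consensus
        rev = target - forecasts[i]
        if abs(beliefs[i] - forecasts[i]) > kappa:  # revise only outside the band
            forecasts[i] = target
        else:
            rev = 0.0                               # stale forecast this period
        rows.append((rev, pi - forecasts[i], forecasts[i] - consensus))
    consensus = forecasts.mean()                    # consensus updates with a lag

rev, err, gap = map(np.array, zip(*rows))

beta_all = ols_slope(err, rev)                # pooled: zero revisions included
upd = rev != 0.0
beta_upd = ols_slope(err[upd], rev[upd])      # stage 1: updaters only
# stage 2: strip the consensus-alignment component from revisions
rev_clean = rev[upd] - ols_slope(rev[upd], gap[upd]) * gap[upd]
beta_clean = ols_slope(err[upd], rev_clean)
print(beta_all, beta_upd, beta_clean)
```

A negative coefficient is conventionally read as overreaction; the point of the procedure is that the pooled estimate mixes stale observations and strategically tilted revisions, so the cleaned sample is the better proxy for underlying beliefs.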
Figure 4 Estimated coefficient of forecast errors on forecast revisions
What may appear to be overreaction or inertia in forecast data often reflects rational responses to frictions such as adjustment costs and reputational concerns. Recognising this ‘lumpiness’ helps us interpret survey forecasts correctly. By separating reported forecasts from underlying beliefs and enhancing survey design and incentives (Gaglianone et al. 2022), we can sharpen our understanding of expectations – and strengthen the foundations of macroeconomic policy.
Source: VoxEU