The Bank of England's November 2021 Monetary Policy Report included a meticulous, data-driven prediction that UK inflation would reach 3.4% by the end of 2022. The European Central Bank, with its own advanced models and teams of qualified economists, came to a similar conclusion: 3.2% for the eurozone. Both institutions had access to massive datasets, decades of institutional knowledge, and computing infrastructure that would have astounded the economists who built the frameworks these models descend from. Both were off by about six percentage points. By the end of 2022, actual inflation in both economies stood at 9.2%.
The difference between 3% and 9% is not a rounding error. It is the difference between a manageable policy challenge and a genuine cost-of-living crisis, one that forced central bankers into emergency rate decisions, squeezed households across two continents, and left those same central bankers scrambling to explain publicly why everything they had predicted turned out so differently. The justifications duly arrived; they were varied and only partially convincing. Together, they revealed something more profound than a single bad forecast cycle.
Economic Inflation Forecasting — Key Information
| Item | Detail |
| --- | --- |
| Topic | Why economic inflation forecasting models failed — and what is being built to replace them |
| Key Research Institution | UC Berkeley Haas School of Business (Prof. Don Moore — overconfidence & decision-making) |
| Key Survey Analyzed | Survey of Professional Forecasters — conducted by the Federal Reserve Bank of Philadelphia since 1968 |
| Core Finding | Forecasters reported 53% confidence in accuracy — but were correct only 23% of the time (16,559 forecasts analyzed) |
| Notable Failure — Bank of England | Forecast UK inflation at 3.4% (end of 2022); actual rate: 9.2% |
| Notable Failure — ECB | Forecast eurozone inflation at 3.2% (end of 2022); actual rate: 9.2% |
| Primary Problem | Over-precision (“over-certainty”) — not consistent directional bias, but too-narrow confidence intervals |
| Model Type Most Used | Dynamic Stochastic General Equilibrium (DSGE) models; also autoregressive statistical models (a minimal sketch follows this table) |
| Known Structural Flaw | “Hedgehog graphs” — forecasters keep projecting old trends after structural breaks have already occurred |
| Experience Paradox | More experienced forecasters are slightly more accurate — but also significantly more over-precise; the net effect cancels out |
| Bright Spot | Aggregate forecasts (averaged across many forecasters) tend to fall in the correct range, even when individual forecasts fail |
| Official Reference | UC Berkeley Haas — Why Forecasts Are So Often Wrong |
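As a concrete illustration of the autoregressive models named in the table, here is a deliberately minimal Python sketch: an AR(1) forecaster estimates persistence from past data and iterates the fitted equation forward. The inflation history below is invented, and production forecasting specifications are far richer than this.

```python
import numpy as np

def fit_ar1(series):
    """Estimate y_t = c + phi * y_{t-1} + e_t by least squares."""
    y, y_lag = series[1:], series[:-1]
    X = np.column_stack([np.ones_like(y_lag), y_lag])
    c, phi = np.linalg.lstsq(X, y, rcond=None)[0]
    return c, phi

def forecast_ar1(last_value, c, phi, horizon):
    """Iterate the fitted equation forward `horizon` steps."""
    path, y = [], last_value
    for _ in range(horizon):
        y = c + phi * y
        path.append(y)
    return np.array(path)

# Invented quarterly inflation history (percent, annualized).
history = np.array([1.8, 2.0, 2.1, 1.9, 2.2, 2.4, 2.3, 2.5])
c, phi = fit_ar1(history)
print(forecast_ar1(history[-1], c, phi, horizon=4))
```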
The most unsettling results come not from the central banks themselves but from academic researchers who have examined professional forecasting over far longer timescales. Don Moore, a professor at UC Berkeley’s Haas School of Business who studies overconfidence and decision-making, and his former PhD student Sandy Campbell analyzed 16,559 forecasts from the Survey of Professional Forecasters, a quarterly survey conducted by the Federal Reserve Bank of Philadelphia since 1968.
The people completing this survey are not amateurs. They are senior economists at large banks and corporations, with decades of experience making precisely these kinds of forecasts and clear standards for what counts as good forecasting. Moore and Campbell discovered something startling: on average, these forecasters expressed 53% confidence that they had made the correct call. They were right 23% of the time.
Moore is careful to note that the economists were not consistently too optimistic or too pessimistic. Some predicted indicators ran high while others ran low; there was no consistent directional bias. The real problem was over-precision.
They were overconfident in a specific way: they placed their predictions inside intervals that were too narrow, underestimating the true range of possible outcomes. And here is the unsettling irony: more experienced forecasters were marginally more likely to get the answer right, but they were also more over-certain, and the extra certainty nearly cancelled out the accuracy gain. Put another way, experience builds confidence faster than it builds accuracy.
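Over-precision is directly measurable: if a forecaster's stated intervals carry 53% confidence, realized outcomes should land inside them about 53% of the time. Below is a minimal Python sketch of that calibration check. All numbers are invented for illustration (this is not the actual SPF data or Moore and Campbell's method); the assumption baked in is that point estimates are unbiased but the world is twice as volatile as the stated intervals imply.

```python
import numpy as np

def interval_coverage(centers, half_width, outcomes):
    """Fraction of realized outcomes inside center +/- half_width."""
    hits = np.abs(outcomes - centers) <= half_width
    return hits.mean()

rng = np.random.default_rng(0)
n = 100_000

# Forecasters' point predictions are unbiased (no directional bias)...
centers = rng.normal(loc=3.0, scale=1.0, size=n)
# ...but outcomes are twice as volatile as the forecasters believe.
outcomes = centers + rng.normal(loc=0.0, scale=2.0, size=n)

# Half-width giving ~53% two-sided coverage under a standard normal,
# i.e. sized for the *believed* volatility of 1.0.
half_width = 0.7225

realized = interval_coverage(centers, half_width, outcomes)
print(f"stated confidence: 53%, realized hit rate: {realized:.0%}")
```

Under those assumptions, the nominal 53% intervals capture the outcome only about 28% of the time: the over-precision pattern in miniature.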
This structural problem in how economists express uncertainty has real repercussions. Data from the Survey of Professional Forecasters feeds into the Federal Reserve’s monetary policy decisions, interest rate settings, and market guidance.
When the Fed treats the narrow range of outcomes those forecasters collectively project as a reasonable description of the future, the resulting policy leaves very little room for error. For an institution that deliberately communicates with precision in order to anchor expectations, Moore’s suggestion that it consider monetary policies with more wiggle room and hedging, given genuine uncertainty, sounds almost dangerously modest. But the underlying data is difficult to ignore.
The inflation forecasting failures of 2021 and 2022 exposed a particular flaw economists call a “location shift”: the average value of an economic indicator changes unexpectedly after a forecast has already been made. The shift can be abrupt, as when COVID-19 lockdowns produced one of the largest GDP contractions in modern history and made the confidence intervals on December 2019 forecasts look almost comically thin in retrospect. More subtly, location shifts can also unfold gradually.
UK productivity growth shifted onto a structurally worse path after the 2007–2009 global financial crisis. For years after that change had solidified, the Office for Budget Responsibility kept projecting a return to pre-crisis growth rates, producing what scholars now call “hedgehog graphs”: the actual data keeps moving sideways while each forecast spine points optimistically upward. The forecasters were projecting the past, not forecasting the future.
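The hedgehog pattern is easy to reproduce in a stylized simulation. The Python sketch below uses invented numbers: it fits the pre-break trend of a series, then keeps projecting that trend from each new starting point even though the data has gone flat, so successive forecast "spines" all point upward from a sideways line.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented productivity index: ~2% trend growth, then a break to roughly flat.
pre = 100 * 1.02 ** np.arange(20) + rng.normal(0, 0.5, 20)
post = pre[-1] + rng.normal(0, 0.5, 20)
series = np.concatenate([pre, post])

# Pre-break trend growth, which the forecaster keeps believing will resume.
old_growth = np.diff(np.log(pre)).mean()

def hedgehog_forecast(history, horizon):
    """Project the old trend from wherever the series currently sits."""
    return history[-1] * np.exp(old_growth * np.arange(1, horizon + 1))

# Each successive forecast vintage points optimistically upward:
for t in (22, 26, 30, 34, 38):
    print(f"vintage t={t}: {hedgehog_forecast(series[:t], 5).round(1)}")
print(f"actual post-break values hover near {post.mean():.1f}")
```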
It would be an exaggeration to say the profession has settled on a definitive alternative framework; what is being built to address this is still a work in progress. Ensemble approaches, which aggregate across many models and forecasters rather than relying on any single projection, are gaining popularity because the average of diverse forecasts tends to land closer to the truth than the most confident individual call.
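The statistical logic is that independent errors partially cancel in an average. The toy Python simulation below (all parameters invented) gives each of 50 hypothetical forecasters an idiosyncratic bias plus noise and compares individual error with the error of the ensemble mean.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 9.2            # the outcome nobody nailed individually
n_forecasters = 50
n_trials = 10_000

# Each forecaster's error: a personal bias plus trial-by-trial noise.
biases = rng.normal(0, 1.0, n_forecasters)
errors = biases + rng.normal(0, 2.0, (n_trials, n_forecasters))
forecasts = truth + errors

individual_rmse = np.sqrt((errors ** 2).mean())
ensemble_error = forecasts.mean(axis=1) - truth
ensemble_rmse = np.sqrt((ensemble_error ** 2).mean())

# Idiosyncratic noise cancels in the average; a bias shared by
# everyone (as in 2021-22) would survive the averaging.
print(f"typical individual RMSE: {individual_rmse:.2f}")
print(f"ensemble-mean RMSE:      {ensemble_rmse:.2f}")
```

The caveat, noted in the code, is that only idiosyncratic error cancels; a bias shared by every forecaster, like the one exposed in 2021 and 2022, survives the averaging.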
There is also growing interest in real-time data assimilation, which borrows from meteorology techniques that update predictions continuously as new information arrives, rather than locking in a quarterly forecast and waiting to see what happens. Weather forecasters have used this discipline for decades to produce hurricane track forecasts that improve with each new observation.
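The workhorse of meteorological data assimilation is the Kalman filter family. The one-dimensional Python sketch below uses invented numbers (it is not any central bank's actual procedure) and shows the core update: each new observation pulls the estimate toward it, weighted by how uncertain the prior estimate is relative to the observation.

```python
def kalman_update(estimate, variance, observation, obs_variance):
    """Blend a prior estimate with a new observation, weighting by precision."""
    gain = variance / (variance + obs_variance)   # Kalman gain in [0, 1]
    new_estimate = estimate + gain * (observation - estimate)
    new_variance = (1 - gain) * variance
    return new_estimate, new_variance

# Invented example: start from a confident 3.4% inflation forecast,
# then assimilate a stream of surprisingly high monthly readings.
# (A full filter would also widen the variance between observations
# to reflect a changing world, which matters under location shifts.)
estimate, variance = 3.4, 0.25
for obs in (4.1, 5.0, 6.2, 7.4, 8.6):
    estimate, variance = kalman_update(estimate, variance, obs, obs_variance=1.0)
    print(f"after observing {obs:.1f}: estimate = {estimate:.2f}")
```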
Whether economic systems can be forecast with the same rigor remains an open question, given how deeply they are entangled with human behavior, political choices, and the kind of feedback loops that do not show up in any equation. But compared with ten years ago, the profession is at least asking the right questions more honestly. Acknowledging that 23% accuracy paired with 53% confidence is a problem is, at the very least, a good place to start.
