In 2007, just before the outbreak of the global financial crisis, many of the world’s most sophisticated risk-management systems indicated that risk was under control. Investment banks, insurance companies, and large funds relied on advanced quantitative models that treated extreme losses as highly improbable events. Yet once the system began to fracture, losses followed in rapid succession.
The 2008 financial crisis was not only a crisis of credit or liquidity. It was also a crisis of risk measurement. Many institutions depended on metrics such as Value-at-Risk (VaR), often computed with models that were ill-suited to the task and that treated extreme losses as exceedingly rare. In practice, these events occurred far more frequently than predicted. What was interpreted as a black swan was, to a large extent, the result of a persistent failure in how probabilities were assigned to risk.
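To see the scale of the mismatch, consider a simple illustration (not taken from our paper): how often a five-standard-deviation daily loss should occur under a normal model versus under a heavy-tailed alternative, here a Student-t with three degrees of freedom, a common stand-in for fat-tailed returns.

```python
# Illustrative only: how often a 5-sigma daily loss "should" occur under a
# normal model versus a heavy-tailed alternative. The Student-t with 3
# degrees of freedom is a generic stand-in for fat tails, not the
# distribution estimated in the paper.
import numpy as np
from scipy import stats

sigma_move = 5.0  # size of the loss, in standard deviations

# Tail probability under the normal model.
p_normal = stats.norm.sf(sigma_move)

# A Student-t with df=3 has standard deviation sqrt(df / (df - 2)) = sqrt(3),
# so a 5-standard-deviation move corresponds to 5 * sqrt(3) in t units.
df = 3
p_heavy = stats.t.sf(sigma_move * np.sqrt(df / (df - 2)), df)

trading_days = 252
print(f"Normal model:  once every {1 / p_normal / trading_days:,.0f} years")
print(f"Heavy-tailed:  once every {1 / p_heavy / trading_days:,.1f} years")
```

Under the normal model such a loss is a once-in-millennia event; under even this simple heavy-tailed alternative, it recurs every few years.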
When the error lies in the model
After 2008, the crisis was attributed to the complexity of the system, the opacity of derivatives, banking interconnections, and the lack of supervision, among other factors. However, a less visible yet equally decisive element was the fragility of the models used to quantify risk. Our recent publication, “Return Distributions, Not Levels” (SSRN, 2026), offers a deeper explanation of a problem that goes beyond specific crises.
After analyzing more than a century of financial data, we show that asset returns follow a stable statistical pattern that traditional models fail to capture properly. Once standardized, financial returns display a symmetric, wave-shaped pattern that appears almost identically across equities, exchange rates, and other asset classes.
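As a rough sketch of what standardization involves (the paper’s exact procedure may differ in detail), returns can be demeaned and scaled by an estimate of their time-varying volatility before their shape is compared across assets:

```python
# A minimal sketch of return standardization, assuming a rolling-window
# volatility estimate; the paper's own standardization may differ in detail.
import numpy as np
import pandas as pd

def standardize_returns(prices: pd.Series, window: int = 63) -> pd.Series:
    """Demean log returns and scale them by a rolling volatility estimate."""
    returns = np.log(prices).diff().dropna()
    mu = returns.rolling(window).mean()
    sigma = returns.rolling(window).std()
    return ((returns - mu) / sigma).dropna()

# Usage: pass a price series for any asset; once standardized, the resulting
# series can be pooled and their distributional shape compared across markets.
```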
This pattern has persisted for more than a century, cutting across economic crises, regulatory changes, and technological transformations, and it remains even when portfolios are diversified or time horizons are aggregated. Our results imply that if an institutional investor holds a USD 100 billion portfolio and estimates Value-at-Risk using standard models (such as GARCH), the amount of risk being underestimated ranges between USD 2.2 and 3.4 billion, roughly equivalent to the net worth of Michael Jordan.
A structural property of the financial system
This finding implies that extreme events are not simple anomalies. On the contrary, they are part of a distributive architecture that repeats systematically. Ignoring this structure leads to underestimating risk.
Our paper estimates that models assuming normality can undervalue extreme losses by amounts equivalent to billions of dollars for institutional portfolios. In practice, the models used by financial institutions do incorporate heavy tails, but the risk in those tails is still underestimated. These are not marginal errors or minor technical deviations, but a persistent distortion in how financial risk is measured.
More importantly, this pattern does not depend on a specific asset class or a particular historical period. It appears during the Great Depression, the collapse of Bretton Woods, the 2008 crisis, and the digital era, among others. This suggests that extreme market behavior is not a historical accident, but a deep property of the financial system.
Our study also finds that this structure is dominated by a single systematic factor that explains most of the cross-sectional variation across assets. In economic terms, this factor can be interpreted through the so-called stochastic discount factor linked to consumption. In other words, standard economic theory is not invalidated; on the contrary, it provides tools to understand the distribution of risk.
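As an illustration of what a dominant systematic factor means (a generic principal-component check, not the estimation method used in the paper):

```python
# Generic check for a dominant systematic factor: the share of total variance
# explained by the first principal component of the cross-section of
# standardized returns. An illustration, not the paper's procedure.
import numpy as np

def first_factor_share(standardized_returns: np.ndarray) -> float:
    """standardized_returns: T x N matrix (T dates, N assets)."""
    cov = np.cov(standardized_returns, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(cov)  # sorted in ascending order
    return eigenvalues[-1] / eigenvalues.sum()

# A value close to 1 indicates that a single factor drives most of the
# common variation across assets.
```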
Better use of models, not their replacement
Although one might conclude that traditional models are of limited use, our article argues the opposite. Models such as GARCH remain useful for forecasting volatility, while jump-diffusion models are essential for pricing derivatives. The problem lies in applying them outside their appropriate domain.
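For concreteness, here is a minimal GARCH(1,1) variance recursion, the workhorse behind such volatility forecasts; the parameter values are illustrative, not estimates from our paper:

```python
# Minimal GARCH(1,1) variance recursion for one-step-ahead volatility
# forecasting. Parameters (omega, alpha, beta) are illustrative; in practice
# they are estimated by maximum likelihood.
import numpy as np

def garch_volatility_forecast(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Return the one-step-ahead volatility forecast from past returns."""
    variance = np.var(returns)  # initialize at the sample variance
    for r in returns:
        variance = omega + alpha * r**2 + beta * variance
    return np.sqrt(variance)
```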
A pragmatic solution is to separate two tasks that are often conflated: forecasting volatility on the one hand, and modeling the shape of the return distribution on the other. Adjusting this second dimension through a wave-based correction makes it possible to estimate extreme losses far more accurately, without replacing existing infrastructure.
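To make the separation concrete, here is a hedged sketch in which VaR is the product of a volatility forecast and a quantile of the standardized return distribution; the Student-t quantile below stands in for the wave-based correction, whose actual form is developed in the paper.

```python
# Two-step VaR: (1) forecast volatility with an existing model such as GARCH;
# (2) multiply by a quantile of the standardized return distribution.
# The Student-t quantile is a stand-in for the paper's wave-based
# correction, not its actual form.
import numpy as np
from scipy import stats

def value_at_risk(vol_forecast: float, level: float = 0.99,
                  heavy_tails: bool = True, df: int = 5) -> float:
    if heavy_tails:
        # Student-t quantile, rescaled to unit variance.
        q = stats.t.ppf(level, df) / np.sqrt(df / (df - 2))
    else:
        q = stats.norm.ppf(level)  # Gaussian quantile
    return vol_forecast * q

# Example: the same volatility forecast under two distributional assumptions.
sigma = 0.015  # 1.5% daily volatility, illustrative
print(value_at_risk(sigma, heavy_tails=False))  # Gaussian VaR
print(value_at_risk(sigma, heavy_tails=True))   # heavier-tailed VaR
```

The further into the tail one looks, the wider the gap between the two quantiles grows, which is precisely where standard models underestimate risk most.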
This small incremental improvement can have large effects. According to our study, the correction reduces VaR errors by more than 90% in large portfolios. In practical terms, this means moving from errors measured in billions to errors measured in hundreds of millions.
Lessons from recent history
In hindsight, these types of errors help explain why certain episodes of financial stress have been so destructive. When models systematically underestimate the probability of extreme losses, institutions accumulate more risk than they believe they are taking on. The system appears stable—until it is not.
The global financial crisis of 2008 can, in part, be read through this lens. It was not only a crisis of toxic assets or banking interconnections. It was also a crisis of measurement. Many sophisticated systems indicated that risk was under control, but it was not. Losses deemed improbable occurred in cascading fashion.
The lesson is not that markets are inherently unpredictable; it is that the shape of risk was poorly represented. Today, we have better statistical maps of financial danger, but the question is not whether markets can fail—it is whether we are willing to measure risk with the precision that available data allow.