This entry pertains to the NBER Summer Institute 2025 — Asset Pricing. Unlike other recaps, I organize my thoughts around the themes that ran through the session.


Theme 1: Measurement Debates

A very large fraction of debates in asset pricing amounts to “Here’s an interesting relationship that I found between X and Y.” Inevitably, to provide evidence of this relationship empirically, one requires empirical measures of both X and Y. Two papers and ensuing discussions centered around this fundamental task of measurement.

SVIX

Annette Vissing-Jorgensen presented the paper “Equity Premium Events” in which the authors leverage the information in daily S&P 500 option expirations to construct forward-looking measures of equity risk premia. In particular, one of the key measures they employ is the Ian Martin (2017) SVIX measure, which uses option prices to estimate expected returns by exploiting the relationship between risk-neutral and physical return moments.
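For intuition, here is a minimal sketch of the SVIX calculation as I understand it from Martin (2017): a discretized integral of out-of-the-money option prices across strikes, with puts below the forward and calls above it. The option prices below are synthetic placeholders, not real data, and the exact normalization should be checked against the paper.

```python
import numpy as np

def svix_squared(strikes, put_prices, call_prices, spot, rf_gross, horizon):
    """Discretized SVIX^2: integrate out-of-the-money option prices over
    the strike grid, using puts below the forward and calls above it."""
    forward = spot * rf_gross
    otm = np.where(strikes < forward, put_prices, call_prices)
    # trapezoidal rule over the strike grid
    integral = np.sum(0.5 * (otm[1:] + otm[:-1]) * np.diff(strikes))
    return 2.0 * integral / (horizon * rf_gross * spot**2)

# Synthetic, roughly tent-shaped OTM price profile around the forward
strikes = np.linspace(60.0, 140.0, 81)
spot, rf_gross, horizon = 100.0, 1.02, 1.0
puts = np.maximum(0.0, 8.0 - 0.2 * (spot * rf_gross - strikes))
calls = np.maximum(0.0, 8.0 - 0.2 * (strikes - spot * rf_gross))
svix2 = svix_squared(strikes, puts, calls, spot, rf_gross, horizon)

# Martin's bound ties the equity premium to (gross rate) * SVIX^2
premium_lower_bound = rf_gross * svix2
```

The appeal of the measure is that every input is observable at time $t$, which is what lets the authors read off a forward-looking risk premium at a daily horizon.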

However, the discussant Alan Moreira pointed out that these options-based measures do not successfully forecast returns in practice, making it difficult to claim that we can reliably observe expected returns by examining implied volatilities. In this paper’s case, this is a particularly important point of discussion because the measurement is central to all subsequent parts of the paper.

Treasury Convenience Yield

Another paper centered on measurement was “The US Treasury Funding Advantage Since 1860,” presented by Balint Szoke. Long story short, the authors use historical data to construct a term structure of convenience yields, carefully adjusting for tax benefits and the optionality embedded in flower bonds. With this arguably higher-quality data, the authors show that for long-term spreads, the relationship between Treasury supply and the funding advantage is less clear-cut than previously believed.
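To fix ideas, here is a stylized sketch of how such adjustments might enter a convenience-yield calculation. All numbers and adjustment names are hypothetical, chosen only to illustrate the accounting, not taken from the paper:

```python
# Hypothetical convenience-yield measurement (illustrative numbers only).
benchmark_yield = 0.045   # yield on a comparable non-Treasury bond
treasury_yield = 0.035    # observed Treasury yield
tax_adjustment = 0.002    # value of Treasuries' tax advantages, in yield terms
option_adjustment = 0.001 # value of flower-bond optionality, in yield terms

# The raw spread overstates the funding advantage; strip out the parts that
# reflect taxes and embedded options rather than pure convenience.
raw_spread = benchmark_yield - treasury_yield              # ~ 0.010
convenience_yield = raw_spread - tax_adjustment - option_adjustment  # ~ 0.007
```

The point of the paper's careful historical work is precisely that these correction terms are not small, so the unadjusted spread can be a misleading measure of the funding advantage.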

The discussant Annette Vissing-Jorgensen, who has a seminal paper with Arvind Krishnamurthy on Treasury convenience yields, largely agreed that the paper improves measurement. However, she disagreed with the conclusion regarding the role of Treasury supply. In general, I found the discussion quite productive: because both parties agreed on the superior measurement approach, the debate could center on what truly constitutes a funding advantage and what these findings imply for the fiscal outlook.


Theme 2: Bottom-Up vs. Top-Down

Another popular task in finance is the following. Suppose you observe $A$ and you’re interested in figuring out the value of $B$:

$$ A=B+C $$

For simplicity, let’s suppose $B=\kappa D$. One way to proceed, which I’ll refer to as the bottom-up approach, is to get a number for $\kappa$ and another number for $D$ and combine them to get $B$. The other route, which I’ll refer to as the top-down approach, is to take $A$ (which we observe) and subtract $C$ (for which we’ll have to come up with a number).
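Numerically, with made-up numbers, the two routes look like this:

```python
# Toy numbers, purely illustrative: A = B + C, with B = kappa * D.
A = 100.0             # the observable
kappa, D = 0.8, 75.0  # primitives for the bottom-up route
C = 40.0              # estimate needed for the top-down route

B_bottom_up = kappa * D  # build B from its pieces
B_top_down = A - C       # strip C away from the observable

# Both routes target the same B; they differ in which inputs must be estimated.
```

The substantive question in any application is which set of inputs ($\kappa$ and $D$, or $C$) can be estimated more credibly.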

Equity Valuation

Thummim Cho presented the paper “Equity Valuation Without DCF” which nicely illustrates the top-down approach to equity valuation. Specifically, the idea is that you start with the price ($A$ in the above expression) and then estimate discounted alphas ($C$ in the above expression). Clearly, this contrasts with the most traditional approach of using DCF methods to come up with the value of the firm ($B$ in the above expression), which requires inputs on the firm’s cash flow, growth rate, and the discount rate.

Duration

Another example came in the discussion of “Interest Rates and Equity Valuations” presented by Eben Lazarus. The paper tackles the question of how much asset prices change when interest rates change. An immediate challenge in answering this question stems from the fact that there are many drivers of interest rates, each of which could affect asset prices in different ways:

Source: Authors’ presentation

The paper takes the bottom-up approach of (1) decomposing changes in interest rates into different components, and (2) using only one of those components to see how interest rates affect equities. The discussant Dan Greenwald argued that a top-down approach is better: start from the duration of the asset (the standard measure of the sensitivity of asset prices to interest rates) and then adjust it for the size of the growth shocks and the pass-through of output growth to stock cash-flow growth.
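A stylized version of that adjustment might look like the sketch below. The notation and parameter names are mine, not the discussant's, and the linear form is only meant to convey the idea that growth comoving with rates offsets part of the discount-rate effect:

```python
# Stylized top-down duration adjustment; parameter names are hypothetical.
def rate_sensitivity(duration, growth_per_rate, passthrough):
    """Approximate percent price change per 1pp rate increase:
    dP/P ~= -duration * (dr - dg), where the cash-flow growth response is
    dg = passthrough * growth_per_rate * dr."""
    return -duration * (1.0 - passthrough * growth_per_rate)

naive = rate_sensitivity(duration=25.0, growth_per_rate=0.0, passthrough=0.0)
adjusted = rate_sensitivity(duration=25.0, growth_per_rate=0.5, passthrough=0.8)
# Growth that comoves with rates partially offsets the discount-rate effect,
# so the adjusted sensitivity is smaller in magnitude than plain duration.
```

Under these made-up parameters the adjustment cuts the measured sensitivity meaningfully, which is exactly why the choice between the bottom-up and top-down routes matters for the paper's headline numbers.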


Theme 3: Embracing New Paradigms

When new paradigms are proposed, it is generally a messy process. And this makes sense: paradigm shifts require dismantling established frameworks, all while grappling with the inherent uncertainty about whether the proposed alternative actually represents genuine progress. This challenge is by no means specific to the social sciences, but it unfolds more slowly in fields like finance because, as one famous finance professor quipped, "we are glorified storytellers." It’s extremely difficult to adjudicate which narrative more accurately captures the underlying economic reality when competing theories can often explain the same empirical patterns through vastly different causal mechanisms.

Virtue of Complexity

Stefan Nagel presented the paper “Seemingly Virtuous Complexity in Return Prediction,” which is essentially a response to the discussant Bryan Kelly’s earlier work “The Virtue of Complexity in Return Prediction.” The basic point of the paper is that the complex regression in Kelly’s work (at least in one of the main specifications) closely resembles volatility-timed momentum, which generates splendid performance. More provocatively, Nagel shows that even under an alternative data-generating process characterized by negative momentum, the complex regression still mechanically yields a volatility-timed momentum strategy, suggesting that the apparent success may reflect an algorithmic artifact rather than genuine economic insight. It is this simulation that really calls into question whether the complexity is truly virtuous or merely serendipitous.
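For readers unfamiliar with the object under debate, here is a bare-bones volatility-timed momentum strategy on synthetic data. This is only meant to illustrate the benchmark Nagel compares against, not his or Kelly's actual implementation; the window length and scaling are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.005, 0.04, size=600)  # synthetic monthly market returns

window = 12
positions = np.zeros(len(returns))
for t in range(window, len(returns)):
    past = returns[t - window:t]
    # position proportional to the trailing mean return, scaled down
    # when trailing volatility is high
    positions[t] = past.mean() / past.var()

strategy_returns = positions * returns  # returns of the timed strategy
```

The crux of the debate is whether a heavily parameterized regression that ends up tracing out positions like these has discovered something, or has merely rediscovered this simple rule by construction.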

Kelly pretty much disagreed with every aspect of this critique, arguing that (i) the virtue of complexity represents a genuine methodological advance that captures real economic relationships, (ii) researchers should focus on relative outperformance against benchmarks rather than absolute performance metrics, and (iii) the path forward involves improving absolute performance through ensemble methods that combine multiple sophisticated approaches.

It is still unclear to me after the discussion whether the equivalence to a volatility-timed momentum strategy is legitimate or not, but one thing that was clear to me is that there seems to be some disagreement about what constitutes meaningful progress in empirical asset pricing. It’s a helpful reminder that paradigm shifts often involve not just technical disagreements but deeper epistemological divides about how we should evaluate competing claims to knowledge.

Demand System Asset Pricing

Another debate over new paradigms started with Valentin Haddad’s presentation of “Causal Inference for Asset Pricing.” The paper specifies the set of assumptions needed to draw causal inferences about demand elasticities. The debate on this paper got a bit heated, and I won’t go into the details here.