Spencer Lyon

Schorfheide, Song, & Yaron (2014), “Identifying Long-Run Risks: A Bayesian Mixed-Frequency Approach” (Schorfheide, Song, and Yaron 2014)


Outline

This paper is more empirical than theoretical.

The theoretical contribution is to specify a more flexible stochastic environment than in other long-run risk models à la Bansal et al. (2007).

Empirical contributions are

  • Form a linear approximation of the stochastic environment so that state space methods can be applied within an MCMC algorithm
  • Show how to incorporate data of varying frequency and accuracy in the inference algorithm

Overview of this talk:

  • Background: consumption data is available at varying frequencies and with varying degrees of measurement error. The authors want to use all available data to identify innovations to consumption and dividend growth as well as asset returns.
  • They build a simple representative agent exchange economy that includes short- and long-run components of consumption growth and stochastic volatility
  • They use a semi-linearized solution of this model to define a non-linear state space system
  • They use a Gibbs sampler to characterize the posterior distribution of the parameters, which in turn characterizes the consumption growth innovations
  • They compare results of this posterior to moments in the data

Model

Preferences

An endowment economy with a representative agent who has Epstein-Zin preferences.

The agent maximizes lifetime utility subject to the simple budget constraint $W_{t+1} = (W_t - C_t) R_{c,t+1}$, where $R_{c,t+1}$ is the return on all invested wealth.

They also consider an extension with a rate of time preference shock (a shock to the flow utility of consumption). The growth rate of this shock follows an AR(1) with innovations independent of all other processes in the model.
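
For reference, the standard Epstein-Zin recursion and the stochastic discount factor it implies are shown below (a generic statement, so the notation may differ slightly from the paper's; the preference-shock extension multiplies the flow utility term by the shock):

$$
V_t = \Big[(1-\delta)\,C_t^{1-1/\psi} + \delta\,\big(E_t\!\left[V_{t+1}^{1-\gamma}\right]\big)^{\frac{1-1/\psi}{1-\gamma}}\Big]^{\frac{1}{1-1/\psi}},
\qquad
M_{t+1} = \delta^{\theta}\left(\frac{C_{t+1}}{C_t}\right)^{-\theta/\psi} R_{c,t+1}^{\,\theta-1},
\quad \theta \equiv \frac{1-\gamma}{1-1/\psi}.
$$

Here $\gamma$ governs risk aversion and $\psi$ the elasticity of intertemporal substitution; the long-run risk mechanism relies on $\gamma > 1/\psi$ (preference for early resolution of uncertainty).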

Technology/Endowment

The authors save on notation by jumping to the equilibrium outcome $C_t = Y_t$ and describing the growth process for consumption directly.

  • They decompose consumption growth g into a persistent component x and a transitory shock that has stochastic volatility
  • The persistent component follows an AR(1) with its own stochastic volatility process
  • They also model a dividend stream with levered exposure to (i.e., linear in) both the persistent and transitory components of consumption growth, plus its own stochastic volatility process.

All stochastic volatility processes are AR(1) in logs with Gaussian innovations, so each volatility level is log-normal (see the simulation sketch below).
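
A minimal simulation sketch of an endowment process with this structure is below, written in Python; the parameter names, values, and exact normalizations are illustrative and are not the paper's calibration.

```python
import numpy as np

def simulate_endowment(T=600, seed=0,
                       mu_c=0.0016, mu_d=0.0010,   # mean consumption / dividend growth (illustrative)
                       rho_x=0.98,                  # persistence of the long-run component x
                       phi=3.0, pi=2.0,             # dividend leverage on x and on the transitory shock
                       sigma_bar=0.003,             # average volatility level
                       rho_h=0.98, sigma_h=0.2):    # persistence / std of the log-volatility AR(1)s
    """Simulate consumption growth g, dividend growth gd, and the latent persistent
    component x, each with its own stochastic volatility.  Illustrative parameterization."""
    rng = np.random.default_rng(seed)
    g, gd, x = np.zeros(T), np.zeros(T), np.zeros(T)
    h = np.zeros((T, 3))                             # log-volatility states for (c, x, d)
    for t in range(1, T):
        # log-volatilities are AR(1) with Gaussian innovations, so volatilities are log-normal
        h[t] = rho_h * h[t - 1] + sigma_h * rng.standard_normal(3)
        sig_c, sig_x, sig_d = sigma_bar * np.exp(h[t])
        eta_c, eta_x, eta_d = rng.standard_normal(3)
        x[t] = rho_x * x[t - 1] + sig_x * eta_x                              # persistent component with SV
        g[t] = mu_c + x[t - 1] + sig_c * eta_c                               # consumption growth
        gd[t] = mu_d + phi * x[t - 1] + pi * sig_c * eta_c + sig_d * eta_d   # levered dividend growth
    return g, gd, x

g, gd, x = simulate_endowment()
```

The key features it reproduces are the persistent component feeding both consumption and (with leverage) dividends, and three independent log-normal volatility processes.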

Information

The agent observes current wealth, consumption growth, dividend growth, and asset returns in every period.

Solution

The focus of their work is on a novel estimation procedure. To facilitate this, they want a closed-form solution for the model.

The non-Gaussian dynamics of the volatility processes prevent them from obtaining a closed-form solution.

They therefore use a linear approximation to the log-normal volatility process; the approximated process is Gaussian.
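
One way to see the idea (the paper's exact expansion may differ): if a volatility level is $\sigma_t = \bar{\sigma} e^{h_t}$ with $h_t$ a Gaussian AR(1), a first-order expansion of the exponential gives

$$
\sigma_t = \bar{\sigma}\, e^{h_t} \approx \bar{\sigma}\,(1 + h_t),
\qquad
h_{t+1} = \rho_h h_t + \sigma_h w_{t+1}, \quad w_{t+1} \sim N(0,1),
$$

so the approximated volatility is linear in a Gaussian process and therefore Gaussian itself.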

Key results from solution:

  • The state variables are the level of x as well as the stochastic volatilities of the innovations to g and x
  • The log price-consumption ratio (the price of the consumption claim) and the risk-free rate are affine in the state variables (schematically, see below)
  • State prices are reflected in innovations to the SDF, $m_{t+1} - E[m_{t+1}]$, which in equilibrium are linear in the innovations to g, x, and the two associated volatilities.
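
Schematically, the affine solution takes the form below, where the loadings $A_i$ and $B_i$ are pinned down by the preference and endowment parameters (the notation is illustrative, not the paper's):

$$
pc_t = A_0 + A_1 x_t + A_2 \sigma_{g,t}^2 + A_3 \sigma_{x,t}^2,
\qquad
r_{f,t} = B_0 + B_1 x_t + B_2 \sigma_{g,t}^2 + B_3 \sigma_{x,t}^2 .
$$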

State space representation

This closed form solution to the model is used to identify the coefficient matrices in a state space representation of the model. Characteristics of the state space form are

  • The model is quite large
    • 22 parameters
    • 30 states: most deal with x and the various innovation realizations
    • 3-6 measurement variables
  • The state space model is non-linear because the stochastic volatility levels multiply the innovations, so they enter the system non-linearly.
  • The measurement equation for consumption must be flexible in two ways (see the sketch after this list):
    • Allow varying frequencies of observation (annual before 1959 and monthly from 1959 to 2011)
    • Allow for potentially larger measurement errors in the monthly frequency data.
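
A hypothetical sketch of how a time-varying measurement equation can accommodate both features: assume the state vector stacks the last 12 monthly consumption growth rates, so annual observations load on the sum of all 12, while monthly observations load only on the current month and carry a larger measurement-error variance. All names and numbers below are illustrative, not the paper's.

```python
import numpy as np

def consumption_measurement(t, monthly_start, sig_e_annual, sig_e_monthly):
    """Return (loading row Z_t, measurement-error std) for the consumption observation
    in month t, or (None, None) when no observation arrives that month.
    Hypothetical setup: the state vector stacks the last 12 monthly growth rates."""
    if t < monthly_start:
        # annual regime: an observation arrives only every 12th month and equals the
        # sum of the 12 latent monthly (log) growth rates
        if (t + 1) % 12 == 0:
            return np.ones(12), sig_e_annual
        return None, None
    # monthly regime: observe the current month only, with a larger error variance
    Z = np.zeros(12)
    Z[0] = 1.0
    return Z, sig_e_monthly

# example: annual-style data for the first 120 months, monthly data afterwards
for t in [10, 11, 119, 120, 121]:
    Z, sig = consumption_measurement(t, monthly_start=120,
                                     sig_e_annual=0.001, sig_e_monthly=0.01)
    print(t, "no observation" if Z is None else (Z.sum(), sig))
```

In the filtering step, months with no consumption observation simply skip the corresponding update, and the larger error variance naturally down-weights the noisier monthly series.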

Bayesian Inference

The authors use a Gibbs sampling scheme to draw from the posterior of the parameter vector of the non-linear state space system. They split the parameters into two blocks:

  1. stochastic volatility levels | all other parameters
  2. all other parameters | stochastic volatility levels

In each step of the MCMC algorithm:

  • The stochastic volatility block is updated using a non-linear particle filter
  • Then, taking the levels of stochastic volatility as given, the remaining equations form a linear Gaussian state space, so the prediction-error decomposition within a Kalman filtering framework is used to update the parameters in this block.

A common result with particle filters is that their accuracy and stability degrade as the dimension of the system grows. Isolating the non-linearity allows the authors to apply the Kalman filter to compute the exact conditional likelihood for the majority of the parameters, which substantially reduces the error inherent in large-scale particle filters.
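
To make the two-block idea concrete, here is a heavily stripped-down Python sketch for a scalar model: conditional on a volatility path (which the paper draws with a particle filter; that block is omitted here), the system is linear and Gaussian, so the Kalman filter's prediction-error decomposition delivers the exact likelihood used to update a parameter. The model, the random-walk Metropolis step, and the implicit flat prior are illustrative and are not the paper's algorithm.

```python
import numpy as np

def kalman_loglik(y, rho, sigma_x_path, sigma_e):
    """Exact log-likelihood of  y_t = x_t + e_t,  x_t = rho * x_{t-1} + sigma_x_path[t] * w_t,
    via the prediction-error decomposition, conditional on a given volatility path
    (this is the linear Gaussian block of the sampler)."""
    x_filt, P_filt, ll = 0.0, 1.0, 0.0
    for t in range(len(y)):
        # predict with the volatility level supplied by the other block
        x_pred = rho * x_filt
        P_pred = rho**2 * P_filt + sigma_x_path[t]**2
        # prediction error and its variance
        v = y[t] - x_pred
        F = P_pred + sigma_e**2
        ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        # Kalman update
        K = P_pred / F
        x_filt = x_pred + K * v
        P_filt = (1.0 - K) * P_pred
    return ll

def parameter_block_step(y, rho, sigma_x_path, sigma_e, rng, step=0.02):
    """Block 2 (parameters | volatility path): a random-walk Metropolis step whose
    acceptance ratio uses the exact Kalman-filter likelihood."""
    rho_prop = rho + step * rng.standard_normal()
    if abs(rho_prop) < 1.0:
        log_a = (kalman_loglik(y, rho_prop, sigma_x_path, sigma_e)
                 - kalman_loglik(y, rho, sigma_x_path, sigma_e))
        if np.log(rng.uniform()) < log_a:
            rho = rho_prop
    return rho

# toy run: simulate data, pretend the volatility path came from the particle-filter block
rng = np.random.default_rng(0)
T, sigma_e, rho_true = 300, 0.05, 0.95
sigma_x_path = 0.1 * np.exp(0.3 * rng.standard_normal(T))
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho_true * x[t - 1] + sigma_x_path[t] * rng.standard_normal()
y = x + sigma_e * rng.standard_normal(T)

rho, draws = 0.5, []
for _ in range(500):
    rho = parameter_block_step(y, rho, sigma_x_path, sigma_e, rng)
    draws.append(rho)
print("posterior mean of rho:", np.mean(draws[100:]))
```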

Results

They run the MCMC algorithm on three versions of the model:

  1. Without rate of time preference shocks and using only consumption and dividend growth data (they drop the risk-free rate and market returns from the measurement equation)
  2. Without rate of time preference shocks and using consumption and dividend growth data as well as market returns and the risk-free rate
  3. With rate of time preference shocks and using consumption and dividend growth data as well as market returns and the risk-free rate

Key empirical results:

Common across all three versions:

  • Strong evidence for a persistent component in consumption growth (robust to the sample used, i.e., pre-1959, post-1959, and the full sample), reflected in a high autocorrelation coefficient in the AR(1) for x
  • Strong evidence for all three independent forms of stochastic volatility

Differences when including return data:

  • The autocorrelation of x increases from 0.97 to 0.99 (which helps capture part of the equity premium)
  • The volatility of long-run risk innovations increases (reflecting information about long-run growth uncertainty contained in prices)
  • The predictability of consumption growth drops to levels closer to those in the data

Differences when including rate of time preference shocks:

  • The model is much better able to capture movements in the risk-free rate.
    • In fact, they do a variance decomposition of market returns, the price-dividend ratio, and the risk-free rate in terms of x, the growth rate of the preference shock, and the volatility of x
    • They find that the growth rate of the preference shock explains almost no variation in market returns or the price-dividend ratio, but explains between 40% and 90% of the variation in the risk-free rate over the sample
    • This is driven by the fact that the rate of time preference shock directly moves the SDF, and movements in the SDF map directly into movements in the risk-free rate

References

Bansal, Ravi, Dana Kiku, and Amir Yaron. 2007. “Risks for the Long Run: Estimation and Inference.” http://128.197.26.34/econ/seminars/macroeconomics/paperfall08/bky_Sept2007.pdf.

Schorfheide, Frank, Dongho Song, and Amir Yaron. 2014. “Identifying Long-Run Risks: A Bayesian Mixed-Frequency Approach.” NBER Working Paper.