
Current work

Risk preferences, asset pricing, and business cycles

In this paper, I combine disappointment aversion, as employed by Routledge and Zin (Journal of Finance, 2010) and Castro and Clementi (RED, 2010), with rare disasters in the spirit of Rietz (JME, 1988), Barro (QJE, 2006), Gourio (Finance Research Letters, 2008), Gabaix (AER, 2008) and others. I find that, when the model's representative agent is endowed with an empirically plausible degree of disappointment aversion, a rare disaster model can produce moments of asset returns that match the data reasonably well, using disaster probabilities and disaster sizes much smaller than have been employed previously in the literature.
This is good news. Quantifying the disaster risk faced by any one country is inherently difficult with limited time series data. And, it is open to debate whether the disaster risk relevant to, say, US investors is well-approximated by the sizable risks found by Barro and co-authors in cross-country data. On the other hand, we have evidence [see Starmer (JEL, 2000), Camerer and Ho (JRU, 1994) or Choi et al. (AER,2007)] that individuals tend to over-weight bad or disappointing outcomes, relative to the outcomes' weights under expected utility. Recognizing aversion to disappointment means that disaster risks need not be nearly as large as suggested by the cross-country evidence for a rare disaster model to produce average equity premia and risk-free rates that match the data.
I illustrate the interaction between disaster risk and disappointment aversion both analytically and in the context of a simple Rietz-like model of asset pricing with rare disasters. I then analyze a richer model, in the spirit of Barro, with a distribution of disaster sizes, Epstein-Zin preferences, and partial default (in the event of a disaster) on the economy's `risk-free' asset. For small elasticities of intertemporal substitution, the model is able to match almost exactly the means and standard deviations of the equity return and risk-free rate, for disaster risks one-half or one-fourth the estimated sizes from Barro. For larger elasticities of intertemporal substitution, the model's fit is less satisfactory. Even so, apart from the volatility of the risk-free rate---which the model under-predicts---the results are broadly similar to those obtained by Gourio, but with disaster risks one-half or one-fourth as large.
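To fix ideas, here is a minimal sketch of the rare-disaster asset-pricing arithmetic in a bare-bones Rietz-style endowment economy with i.i.d. consumption growth and standard CRRA expected utility (so without the disappointment aversion that is the point of the paper). It is written in Python for illustration, and every parameter value is a hypothetical placeholder, not the paper's calibration.

```python
# Bare-bones Rietz-style model: i.i.d. consumption growth with a rare
# disaster, CRRA expected utility.  Parameters are hypothetical placeholders.
from math import exp

beta, gamma = 0.97, 4.0        # discount factor, coefficient of risk aversion
mu = 0.02                      # mean log consumption growth
p, b = 0.017, 0.40             # disaster probability and disaster size

# Two growth states: normal growth, and a disaster destroying a fraction b
probs  = [1.0 - p, p]
growth = [exp(mu), exp(mu) * (1.0 - b)]

# Risk-free rate: Rf = 1 / E[m], with SDF m = beta * g^(-gamma)
Em = sum(pr * beta * g ** (-gamma) for pr, g in zip(probs, growth))
Rf = 1.0 / Em

# With i.i.d. growth, the price-dividend ratio of the consumption claim is
# constant: P/D = x / (1 - x), where x = E[beta * g^(1 - gamma)], so the
# (unconditional) expected equity return is E[g] / x.
x = sum(pr * beta * g ** (1.0 - gamma) for pr, g in zip(probs, growth))
ERe = sum(pr * g for pr, g in zip(probs, growth)) / x
equity_premium = ERe - Rf
```

Even this stripped-down version shows the mechanism: the disaster state carries a large SDF realization, which pulls the risk-free rate down and the equity premium up.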

Keywords: Rare disasters, disappointment aversion, asset pricing
JEL codes: E43, E44, G12

Matlab/Octave codes: Here is the code I wrote to solve the model. With disappointment aversion, calculating the SDF involves two fixed-point problems, one of which is to find the certainty equivalent of  future lifetime utility. For that piece, I use a vectorized bisection routine that may be of interest in itself (because of the vectorization).
Function file to generate simulated moments for the Rietz-like model of Section 3, given parameter inputs.
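The vectorized bisection idea can be sketched as follows. This is an illustrative Python version, not the Matlab/Octave code linked above: it advances many independent bisection problems in lockstep, one function evaluation per iteration for the whole vector.

```python
# Vectorized bisection: solve many scalar root-finding problems at once.
# Each interval [lo[i], hi[i]] must bracket a sign change of component i.
def vectorized_bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """f maps a list of points to a list of values, component-wise.
    Returns a list of approximate roots."""
    lo, hi = list(lo), list(hi)
    flo = f(lo)
    for _ in range(max_iter):
        mid = [(a + c) / 2.0 for a, c in zip(lo, hi)]
        fmid = f(mid)
        for i in range(len(mid)):
            if flo[i] * fmid[i] <= 0.0:   # root is in the left half
                hi[i] = mid[i]
            else:                          # root is in the right half
                lo[i] = mid[i]
                flo[i] = fmid[i]
        if max(c - a for a, c in zip(lo, hi)) < tol:
            break
    return [(a + c) / 2.0 for a, c in zip(lo, hi)]

# Example: square roots of 2, 3 and 5, computed simultaneously
roots = vectorized_bisect(
    lambda xs: [v * v - c for v, c in zip(xs, [2.0, 3.0, 5.0])],
    [0.0, 0.0, 0.0], [3.0, 3.0, 3.0])
```

In Matlab/Octave the inner loop would be replaced by logical indexing over the whole vector, which is where the speed gain comes from.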

Slides: Here are Beamer slides for a brownbag talk at the Dallas Fed in December 2013: FRB Dallas slides. Here are Beamer slides for a presentation of the latest version at 2013 ESEM, Gothenburg, August 25-30: ESEM conference slides. And here are slides for a presentation of an earlier version at the 2013 CEF conference in Vancouver, BC, July 10-12, 2013: CEF conference slides.

The habit model of Campbell and Cochrane (1999) specifies a process for the `surplus ratio'---the excess of consumption over habit, relative to consumption---rather than an evolution for the habit stock. It's not immediately apparent whether their formulation can be accommodated within the Markov chain framework of Mehra and Prescott (1985). This note illustrates one way to create a Campbell and Cochrane-like model within the Mehra-Prescott framework. A consequence is that we can perform another sort of reverse-engineering exercise---we can calibrate the resulting model to match the stochastic discount factor derived in the Mehra-Prescott framework by Melino and Yang (2003). The Melino-Yang SDF, combined with Mehra and Prescott's consumption process, yields asset returns that exactly match the first and second moments of the data, as estimated by Mehra and Prescott.

A byproduct of the exercise is an equivalent (in terms of SDFs) representation of Campbell-Cochrane preferences as a state-dependent version of standard time-additively-separable, constant relative risk aversion preferences. When calibrated to exactly match the asset return data, both the utility discount factor and the coefficient of relative risk aversion vary with the Markov state. Not surprisingly, our Campbell-Cochrane preferences are equivalent to a state-dependent representation with strongly countercyclical risk aversion. Less expected is the equivalent utility discount factor---it is uniformly greater than one, and countercyclical.

In their analysis, Melino and Yang dismissed out of hand state-dependent specifications where the utility discount factor exceeds one. Our model gives one plausible rationalization for such a specification.
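The mechanics of pricing in the Mehra-Prescott Markov-chain framework, with the state-dependent discount factor and risk aversion just described, can be sketched in a few lines. This is an illustrative Python version: the two-state chain follows a standard Mehra-Prescott-style calibration, but the preference numbers are hypothetical placeholders, not the calibration that matches Melino and Yang's SDF.

```python
# Two-state Mehra-Prescott-style chain with state-dependent CRRA preferences
# (beta_i, gamma_i).  Preference values below are hypothetical placeholders.
lam = [1.054, 0.982]                  # gross consumption growth by state
P   = [[0.43, 0.57], [0.57, 0.43]]    # transition probabilities
beta_s  = [1.01, 1.03]                # state-dependent discount factors
gamma_s = [2.0, 4.0]                  # state-dependent risk aversion

# SDF for a transition i -> j: m_ij = beta_i * lam_j^(-gamma_i)
m = [[beta_s[i] * lam[j] ** (-gamma_s[i]) for j in range(2)] for i in range(2)]

# Risk-free gross return by state: Rf_i = 1 / E_i[m]
Rf = [1.0 / sum(P[i][j] * m[i][j] for j in range(2)) for i in range(2)]

# Price-dividend ratio w solves w_i = sum_j P_ij m_ij lam_j (1 + w_j),
# i.e. (I - A) w = A 1 with A_ij = P_ij m_ij lam_j; solve the 2x2 system.
A = [[P[i][j] * m[i][j] * lam[j] for j in range(2)] for i in range(2)]
rhs = [A[0][0] + A[0][1], A[1][0] + A[1][1]]
det = (1 - A[0][0]) * (1 - A[1][1]) - A[0][1] * A[1][0]
w = [((1 - A[1][1]) * rhs[0] + A[0][1] * rhs[1]) / det,
     (A[1][0] * rhs[0] + (1 - A[0][0]) * rhs[1]) / det]

# Conditional expected equity return in state i
Re = [sum(P[i][j] * lam[j] * (1 + w[j]) for j in range(2)) / w[i]
      for i in range(2)]
```

Note that a utility discount factor above one poses no problem here: what matters for finite prices is that the implied matrix A has spectral radius below one, not that each beta_i is below one.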

JEL Codes: E44, G12
Keywords: habit, asset returns, stochastic discount factor, state-dependent preferences

Matlab codes are here and here.

Risk preferences, intertemporal substitution, and business cycle dynamics

This paper examines the implications of alternative specifications of risk preferences---including preferences displaying first-order risk aversion (FORA)---together with alternative assumptions regarding individuals' elasticities of intertemporal substitution (EIS), for the behavior of a technology shock-driven business cycle model. The most general version of the model I consider also includes external habit formation and capital adjustment costs. Risk preferences matter for some first moments, because of precautionary capital accumulation, though they have little impact on model-implied average asset returns. In terms of the models' second moment predictions, the assumed EIS and the presence or absence of habits matter a great deal, while the impact of alternative risk preferences is negligible. Some curious outcomes obtain in cases where the EIS is bigger than one or habits are present, including countercyclical consumption in the former case and countercyclical hours worked in the latter case.

Despite their negligible impact on the model dynamics or asset returns, risk preferences matter a great deal for the perceived welfare cost of aggregate volatility. Under the FORA specification I use, which I argue has some empirical plausibility, costs are as high as 1.3% of lifetime consumption.

This one's highly preliminary, but finished enough to make public.

Matlab code is here, in one zip file.  I apologize that I haven't had time to organize it better or make a README file for all these programs yet. I will do that soon.

Beamer slides for a presentation at the Riksbank are here.

Real business cycle dynamics under first-order risk aversion

This paper incorporates preferences that display first-order risk aversion (FORA) into a standard real business cycle model. Although FORA preferences represent a sharp departure from the expected utility/constant relative risk aversion (EU/CRRA) preferences common in the business cycle literature, the change has only a negligible effect on the model's second moment implications. In fact, for what I argue is an empirically reasonable "ballpark" calibration of the FORA preferences, the moment implications are essentially identical to those under EU/CRRA, while the welfare cost of aggregate fluctuations in the model is substantially larger.
[Note:  This one's really been superseded by the paper immediately above, but its focus is a bit different---much more emphasis on the specification of risk preferences---and it has the virtue of being relatively complete, rather than "in progress".]

Political Economy

Majority voting: A quantitative exploration (joint with Daniel Carroll and Eric Young)

We study the tax systems that arise in a once-and-for-all majority voting equilibrium embedded within a macroeconomic model of inequality. We find that majority voting delivers (i) a small set of outcomes, (ii) zero labor income taxation, and (iii) nearly zero transfers. We find that majority voting, contrary to the literature developed in models without idiosyncratic risk, is quite powerful at restricting outcomes; however, it also delivers predictions inconsistent with observed tax systems.

Almost orthogonal outcomes under probabilistic voting: A cautionary example

Probabilistic voting is often invoked in applications where the issue space is multidimensional, with little or no justification for the form taken by voters' non-policy preferences. I illustrate by way of example the extreme fragility of probabilistic voting equilibria with respect to assumptions about the non-policy elements of voters' preferences. I also offer intuition for the fragility using the social welfare functions which also describe the equilibria.

What do majority-voting politics say about redistributive taxation of consumption and factor income? Not much.

Tax rates on labor income, capital income and consumption---and the redistributive transfers those taxes finance---differ widely across developed countries. Can majority-voting methods, applied to a calibrated growth model, explain that variation? The answer I find is yes, and then some. In this paper, I examine a simple growth model, calibrated roughly to US data, in which the political decision is over constant paths of taxes on factor income and consumption, used to finance a lump-sum transfer. I first look at outcomes under probabilistic voting, and find that equilibria are extremely sensitive to the specification of uncertainty. I then consider other ways to restrict the range of majority-rule outcomes, looking at the model's implications for the shape of the Pareto set and the uncovered set, and the existence or non-existence of a Condorcet winner. Solving the model on a discrete grid of policy choices, I find that no Condorcet winner exists and that the Pareto and uncovered sets, while small relative to the entire issue space, are large relative to the range of tax policies we see in data for a collection of 20 OECD countries. Taking that data as the issue space, I find that none of the 20 can be ruled out on efficiency grounds, and that 10 of the 20 are in the uncovered set. Those 10 encompass policies as diverse as those of the US, Norway and Austria. One can construct a Condorcet cycle including all 10 countries' tax vectors.

The key features of the model here, as compared to other models of the endogenous determination of taxes and redistribution, are that the issue space is multidimensional and, at the same time, no one voter type is sufficiently numerous to be decisive. I conclude that the sharp predictions of papers in this literature may not survive an expansion of their issue spaces or the allowance for a slightly less homogeneous electorate.
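The Condorcet-winner check over a discrete set of policy alternatives is simple to state in code. Here is a hedged Python sketch of the idea; the voter-utility example at the end is a textbook three-voter cycle, not the paper's calibrated model.

```python
# Check for a Condorcet winner among a finite set of alternatives, given
# each (equally weighted) voter's utility over the alternatives.
def condorcet_winner(utilities):
    """utilities[v][k] is voter v's utility for alternative k.
    Returns the index of the Condorcet winner, or None if none exists."""
    n_alt = len(utilities[0])

    def beats(a, b):
        # alternative a beats b if a strict majority prefers a to b
        wins = sum(1 for u in utilities if u[a] > u[b])
        return wins > len(utilities) / 2.0

    for a in range(n_alt):
        if all(beats(a, b) for b in range(n_alt) if b != a):
            return a
    return None

# Classic three-voter cycle (A>B>C, B>C>A, C>A>B): no Condorcet winner
cycle = condorcet_winner([[3, 2, 1], [1, 3, 2], [2, 1, 3]])
# A profile where alternative 0 beats both rivals pairwise
winner = condorcet_winner([[3, 2, 1], [3, 1, 2], [1, 2, 3]])
```

The uncovered set can be built on top of the same pairwise-majority relation, which is how one would check membership for the 20 OECD tax vectors on a grid.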

Here are slides based on the work, for a presentation at the Guanajuato Workshop for Young Economists, August 12-14, 2011.

Monetary Theory

Do payment systems matter: A new look (joint with Joe Haslag)

In this paper, we consider two alternative pure payments systems---the trade of goods for goods, or barter, and trade using intrinsically valueless fiat money. Here, the term payment system refers to the method of executing mutually beneficial trades, and `pure' means that each method of exchange is considered exclusively. Each payment system is examined in an economy with location-specific commodities, and households consist of vendor-shopper pairs. The household's decision problem includes a distance-related transaction cost; that is, the cost of trading with anyone from another location increases as the distance from the home location increases. We then ask, is the equilibrium set of consumption goods---and the quantity of each type---invariant to whether the vendor or the shopper pays the transaction cost? The answer is that in economies with monetary settlements, invariance fails.

Inflation measurement

Modal inflation

This is one of those "just-out-of-curiosity" projects.  A typical explanation of the usefulness of robust inflation estimators (weighted medians or trimmed means) goes something like this: "We can think of 'the' inflation rate in a given month as the central tendency of the distribution of price changes of all items in the given month.  There are lots of ways to characterize the central tendency (or location) of a distribution, the three most familiar being mean, median and mode.  The inflation rates we see commonly reported are (weighted) means, but when distributions of price changes are fat-tailed, medians or trimmed means may be more efficient estimators."

In practice we see lots of simple weighted means (here or here), and robust measures like medians or trimmed means (here or here).  What about the mode?  Imagine sorting a given month's item-level price changes into equal-sized bins centered on the values one-month inflation rates are typically reported in ( ...-0.2%, -0.1%, 0.0%, +0.1%, +0.2%...), then find the mode (possibly after weighting for expenditure).  What does that series look like, and how does it compare with our more conventional series?  Like I said, just out of curiosity.
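The binning-and-mode calculation just described can be sketched as follows. This is an illustrative Python version with made-up numbers, not an official series; the bin width of 0.1 percentage point matches the reporting convention mentioned above.

```python
# Expenditure-weighted modal price change: sort item-level price changes into
# bins of width bin_width centered on 0.0, +/-0.1%, +/-0.2%, ..., then return
# the center of the bin with the largest total weight.
def modal_inflation(price_changes, weights=None, bin_width=0.001):
    if weights is None:
        weights = [1.0] * len(price_changes)
    totals = {}
    for x, w in zip(price_changes, weights):
        # nearest bin center; round to 6 decimals to keep float keys stable
        center = round(round(x / bin_width) * bin_width, 6)
        totals[center] = totals.get(center, 0.0) + w
    return max(totals, key=totals.get)

# Hypothetical month: three items near +0.2%, one at -0.1%, one at +1.5%
mode = modal_inflation([0.0021, 0.0019, 0.002, -0.001, 0.015])
```

Running this month by month over item-level CPI or PCE price changes would produce the "modal inflation" series in question.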

Core inflation measures constructed by excluding particularly volatile items from the price index have a long history.  The most common such measures are indices excluding the prices of food and energy items.  This paper attempts to shed some statistical light on the impact of excluding certain items from the Personal Consumption Expenditures (PCE) price index.  In particular, I am interested in the trade-off between reducing short-run volatility (relative to the volatility of the headline index) and possibly distorting the measurement of inflation over longer horizons.  Some of the questions which this paper addresses are: Which items have the highest time series volatility?  Among the items with high volatility, are there meaningful patterns in the distribution of volatility across high and low frequencies?  Which items, by their exclusion, have the largest impact on longer-horizon measures of inflation?  And which, by their exclusion, contribute the most to reducing high-frequency volatility in measured inflation?  Excluding those items which answer the last question yields a PCE index which compares favorably to PCE ex food and energy along several dimensions, while excluding only half as many items by expenditure weight.
[Note:  I'm sure the methodology described in this paper is useful, but data-wise, this work was unfortunately completed just prior to the BEA's comprehensive revision of the NIPAs in 2009---a revision that made some big changes to the organization of the underlying components of personal consumption expenditures.  Seeing if one gets similar results using the revised components is on the to-do list.]


Income redistribution and technology diffusion

This is joint work with Cyril Monnet & Erwan Quintin.  We're trying to model and quantify the impact of redistribution policies on the rate of technological diffusion in a modified version of the model introduced by Greenwood and Yorukoglu (1997). Newer plants are characterized by a higher potential technology, and the rate of progress in the frontier technology determines long-run growth. Young plants need to invest in learning in order to reach their productivity potential, and this learning requires an input of skilled labor. An acceleration in the pace of progress thus causes a higher demand for skilled labor, and causes the skill premium to rise, which in turn leads more agents to invest in skills along the transition towards a new balanced growth path. Redistribution mechanisms---in particular progressive taxation of labor income---distort this process by compressing the education premium. This delays the diffusion of newer technologies, making the transition towards a higher-growth path more protracted in economies that curb spikes in income inequality.
Our goal is to quantify the importance of this effect, and show that the broader implications of redistribution during periods of faster progress mesh well with the available evidence.