Risk preferences, asset pricing, and business cycles
Abstract
In this paper, I combine disappointment aversion, as employed by Routledge and Zin (Journal of Finance, 2010) and Castro and Clementi (RED, 2010), with rare disasters in the spirit of Rietz (JME, 1988), Barro (QJE, 2006), Gourio (Finance Research Letters, 2008), Gabaix (AER, 2008) and others. I find that, when the model's representative agent is endowed with an empirically plausible degree of disappointment aversion, a rare disaster model can produce moments of asset returns that match the data reasonably well, using disaster probabilities and disaster sizes much smaller than those employed previously in the literature.
This is good news. Quantifying the disaster risk faced by any one country is inherently difficult with limited time series data. And it is open to debate whether the disaster risk relevant to, say, US investors is well approximated by the sizable risks found by Barro and coauthors in cross-country data. On the other hand, we have evidence [see Starmer (JEL, 2000), Camerer and Ho (JRU, 1994) or Choi et al. (AER, 2007)] that individuals tend to overweight bad or disappointing outcomes, relative to the outcomes' weights under expected utility. Recognizing aversion to disappointment means that disaster risks need not be nearly as large as the cross-country evidence suggests for a rare disaster model to produce average equity premia and risk-free rates that match the data.
I illustrate the interaction between disaster risk and disappointment aversion both analytically and in the context of a simple Rietz-like model of asset pricing with rare disasters. I then analyze a richer model, in the spirit of Barro, with a distribution of disaster sizes, Epstein-Zin preferences, and partial default (in the event of a disaster) on the economy's `risk-free' asset. For small elasticities of intertemporal substitution, the model is able to match almost exactly the means and standard deviations of the equity return and risk-free rate, with disaster risks one-half or one-fourth the sizes estimated by Barro. For larger elasticities of intertemporal substitution, the model's fit is less satisfactory. Even so, apart from the volatility of the risk-free rate, which the model underpredicts, the results are broadly similar to those obtained by Gourio, but with disaster risks one-half or one-fourth as large.
Keywords: Rare disasters, disappointment aversion, asset pricing
JEL codes: E43, E44, G12
Matlab/Octave codes: Here is the code I wrote to solve the model. With disappointment aversion, calculating the SDF involves two fixed-point problems, one of which is to find the certainty equivalent of future lifetime utility. For that piece, I use a vectorized bisection routine that may be of interest in itself (because of the vectorization).
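The routine itself is in Matlab/Octave at the links above. As an illustration of the vectorized-bisection idea only, here is a minimal NumPy sketch (the function name and the quadratic test problem below are my own, not taken from the paper's code): many independent scalar root-finding problems are solved at once by updating the whole array of brackets with elementwise `where` operations instead of looping over grid points.

```python
import numpy as np

def vectorized_bisection(f, lo, hi, tol=1e-10, max_iter=200):
    """Solve f(x) = 0 elementwise for an array of independent problems.

    f maps an array x to an array of residuals of the same shape;
    lo and hi must bracket a root in every component, i.e. f(lo) and
    f(hi) have opposite signs elementwise.
    """
    lo = np.asarray(lo, dtype=float).copy()
    hi = np.asarray(hi, dtype=float).copy()
    f_lo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = f(mid)
        # Where f(mid) shares f(lo)'s sign, the root lies in [mid, hi]:
        # move every affected bracket endpoint in one vectorized step.
        same_sign = np.sign(f_mid) == np.sign(f_lo)
        lo = np.where(same_sign, mid, lo)
        f_lo = np.where(same_sign, f_mid, f_lo)
        hi = np.where(same_sign, hi, mid)
        if np.max(hi - lo) < tol:
            break
    return 0.5 * (lo + hi)
```

In the certainty-equivalent application, `f` would be the fixed-point residual evaluated at every grid point simultaneously; here a simple square-root example (roots of x² − c for a vector of c's) stands in.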
Function file to generate simulated moments for the Rietz-like model of Section 3, given parameter inputs. Shell file to generate the tables and figure in Section 3.
Function file to generate moments for the Barro-like model of Section 4, given parameter inputs.
Slides: Here are Beamer slides for a brown-bag talk at the Dallas Fed in December 2013: FRB Dallas slides. Here are Beamer slides for a presentation of the latest version at the 2013 ESEM, Gothenburg, August 25-30: ESEM conference slides. And here are slides for a presentation of an earlier version at the 2013 CEF conference in Vancouver, BC, July 10-12, 2013: CEF conference slides.
Abstract
The habit model of Campbell and Cochrane (1999) specifies a process for the `surplus ratio' (the excess of consumption over habit, relative to consumption) rather than an evolution for the habit stock. It's not immediately apparent whether their formulation can be accommodated within the Markov chain framework of Mehra and Prescott (1985). This note illustrates one way to create a Campbell and Cochrane-like model within the Mehra-Prescott framework. A consequence is that we can perform another sort of reverse-engineering exercise: we can calibrate the resulting model to match the stochastic discount factor derived in the Mehra-Prescott framework by Melino and Yang (2003). The Melino-Yang SDF, combined with Mehra and Prescott's consumption process, yields asset returns that exactly match the first and second moments of the data, as estimated by Mehra and Prescott.
A byproduct of the exercise is an equivalent (in terms of SDFs) representation of Campbell-Cochrane preferences as a state-dependent version of standard time-additively-separable, constant relative risk aversion preferences. When calibrated to exactly match the asset return data, both the utility discount factor and the coefficient of relative risk aversion vary with the Markov state. Not surprisingly, our Campbell-Cochrane preferences are equivalent to a state-dependent representation with strongly countercyclical risk aversion. Less expected is the equivalent utility discount factor: it is uniformly greater than one, and countercyclical.
In their analysis, Melino and Yang dismissed out of hand state-dependent specifications in which the utility discount factor exceeds one. Our model gives one plausible rationalization for such a specification.
JEL Codes: E44, G12
Keywords: habit, asset returns, stochastic discount factor, state-dependent preferences
Abstract
This paper examines the implications of alternative specifications of risk preferences, including preferences displaying first-order risk aversion (FORA), together with alternative assumptions regarding individuals' elasticities of intertemporal substitution (EIS), for the behavior of a technology shock-driven business cycle model. The most general version of the model I consider also includes external habit formation and capital adjustment costs. Risk preferences matter for some first moments, because of precautionary capital accumulation, though they have little impact on model-implied average asset returns. In terms of the models' second-moment predictions, the assumed EIS and the presence or absence of habits matter a great deal, while the impact of alternative risk preferences is negligible. Some curious outcomes obtain in cases where the EIS is bigger than one or habits are present, including countercyclical consumption in the former case and countercyclical hours worked in the latter.
Despite their negligible impact on the model dynamics or asset returns, risk preferences matter a great deal for the perceived welfare cost of aggregate volatility. Under the FORA specification I use, which I argue has some empirical plausibility, costs are as high as 1.3% of lifetime consumption.
This one's highly preliminary, but finished enough to make public.
Matlab code is here, in one zip file. I apologize that I haven't had time to organize it better or make a README file for all these programs yet. I will do that soon.
Beamer slides for a presentation at the Riksbank are here.
Real business cycle dynamics under first-order risk aversion
Abstract
This paper incorporates preferences that display first-order risk aversion (FORA) into a standard real business cycle model. Although FORA preferences represent a sharp departure from the expected utility/constant relative risk aversion (EU/CRRA) preferences common in the business cycle literature, the change has only a negligible effect on the model's second-moment implications. In fact, for what I argue is an empirically reasonable "ballpark" calibration of the FORA preferences, the moment implications are essentially identical to those under EU/CRRA, while the welfare cost of aggregate fluctuations in the model is substantially larger. [Note: This one's really been superseded by the paper immediately above, but its focus is a bit different (much more emphasis on the specification of risk preferences) and it has the virtue of being relatively complete, rather than "in progress".]
Here is the Dallas Fed working paper version.
Political Economy

Majority voting: A quantitative exploration (joint with Daniel Carroll and Eric Young)
Abstract

Almost orthogonal outcomes under probabilistic voting: A cautionary example
Abstract
Probabilistic voting is often invoked in applications where the issue space is multidimensional, with little or no justification for the form taken by voters' nonpolicy preferences. I illustrate by way of example the extreme fragility of probabilistic voting equilibria with respect to assumptions about the nonpolicy elements of voters' preferences. I also offer intuition for the fragility using the social welfare functions which also describe the equilibria.

What do majority-voting politics say about redistributive taxation of consumption and factor income? Not much.
Tax rates on labor income, capital income and consumption, and the redistributive transfers those taxes finance, differ widely across developed countries. Can majority-voting methods, applied to a calibrated growth model, explain that variation? The answer I find is yes, and then some. In this paper, I examine a simple growth model, calibrated roughly to US data, in which the political decision is over constant paths of taxes on factor income and consumption, used to finance a lump-sum transfer. I first look at outcomes under probabilistic voting, and find that equilibria are extremely sensitive to the specification of uncertainty. I then consider other ways to restrict the range of majority-rule outcomes, looking at the model's implications for the shape of the Pareto set and the uncovered set, and the existence or nonexistence of a Condorcet winner. Solving the model on a discrete grid of policy choices, I find that no Condorcet winner exists and that the Pareto and uncovered sets, while small relative to the entire issue space, are large relative to the range of tax policies we see in data for a collection of 20 OECD countries.
Taking that data as the issue space, I find that none of the 20 can be ruled out on efficiency grounds, and that 10 of the 20 are in the uncovered set. Those 10 encompass policies as diverse as those of the US, Norway and Austria. One can construct a Condorcet cycle including all 10 countries' tax vectors.
The key feature of the model here, as compared to other models of the endogenous determination of taxes and redistribution, is that the issue space is multidimensional and, at the same time, no one voter type is sufficiently numerous to be decisive. I conclude that the sharp predictions of papers in this literature may not survive an expansion of their issue spaces or the allowance for a slightly less homogeneous electorate. Here are slides based on the work, for a presentation at the Guanajuato Workshop for Young Economists, August 12-14, 2011.
Monetary Theory

Do payment systems matter: A new look (joint with Joe Haslag)
Abstract
In this paper, we consider two alternative pure payment systems: the trade of goods for goods (barter), and trade using intrinsically valueless fiat money. Here, the term `payment system' refers to the method of executing mutually beneficial trades, and `pure' means that each method of exchange is considered exclusively. Each payment system is examined in an economy with location-specific commodities, where households consist of vendor-shopper pairs. The household's decision problem includes a distance-related transaction cost; that is, the cost of trading with anyone from another location increases as the distance from the home location increases. We then ask: is the equilibrium set of consumption goods, and the quantity of each type, invariant to whether the vendor or the shopper pays the transaction cost? The answer is that in economies with monetary settlements, invariance fails.

Inflation measurement

Modal inflation
This is one of those "just-out-of-curiosity" projects. A typical explanation of the usefulness of robust inflation estimators (weighted medians or trimmed means) goes something like this: "We can think of 'the' inflation rate in a given month as the central tendency of the distribution of price changes of all items in the given month. There are lots of ways to characterize the central tendency (or location) of a distribution, the three most familiar being the mean, median and mode. The inflation rates we see commonly reported are (weighted) means, but when distributions of price changes are fat-tailed, medians or trimmed means may be more efficient estimators."
In practice we see lots of simple weighted means (here or here), and robust measures like medians or trimmed means (here or here). What about the mode? Imagine sorting a given month's item-level price changes into equal-sized bins centered on the values at which one-month inflation rates are typically reported (..., -0.2%, -0.1%, 0.0%, +0.1%, +0.2%, ...), then finding the mode (possibly after weighting for expenditure). What does that series look like, and how does it compare with our more conventional series? Like I said, just out of curiosity.
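The binning step just described is easy to sketch. Here is a minimal Python illustration of the idea (the function, its equal-weight default, and the test data are my own, hypothetical choices): each item-level price change is rounded to the nearest 0.1-percentage-point bin center, and the center carrying the most expenditure weight is reported as the modal rate.

```python
import numpy as np

def modal_inflation(price_changes, weights=None, bin_width=0.1):
    """Weighted modal one-month inflation rate, in percent.

    Sorts item-level price changes into equal-sized bins of width
    `bin_width` centered on ..., -0.2, -0.1, 0.0, +0.1, +0.2, ...
    and returns the center of the bin with the largest
    (expenditure-weighted) mass.
    """
    price_changes = np.asarray(price_changes, dtype=float)
    if weights is None:
        weights = np.ones_like(price_changes)
    # Round each change to the nearest bin center.
    centers = np.round(price_changes / bin_width) * bin_width
    uniq, inv = np.unique(centers, return_inverse=True)
    mass = np.bincount(inv, weights=np.asarray(weights, dtype=float))
    return uniq[np.argmax(mass)]
```

With expenditure weights supplied, a single heavily weighted item can shift the mode, which is exactly the kind of behavior one would want to compare against medians and trimmed means.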
Core inflation measures constructed by excluding particularly volatile items from the price index have a long history. The most common such measures are indices excluding the prices of food and energy items. This paper attempts to shed some statistical light on the impact of excluding certain items from the Personal Consumption Expenditures (PCE) price index. In particular, I am interested in the tradeoff between reducing short-run volatility (relative to the volatility of the headline index) and possibly distorting the measurement of inflation over longer horizons. Some of the questions which this paper addresses are: Which items have the highest time-series volatility? Among the items with high volatility, are there meaningful patterns in the distribution of volatility across high and low frequencies? Which items, by their exclusion, have the largest impact on longer-horizon measures of inflation? And which, by their exclusion, contribute the most to reducing high-frequency volatility in measured inflation? Excluding those items which answer the last question yields a PCE index which compares favorably to PCE ex food and energy along several dimensions, while excluding only half as many items by expenditure weight.
[Note: I'm sure the methodology described in this paper is useful, but data-wise, this work was unfortunately completed just prior to the BEA's comprehensive revision of the NIPAs in 2009, a revision that made some big changes to the organization of the underlying components of personal consumption expenditures. Seeing whether one gets similar results using the revised components is on the to-do list.]
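To make the kind of screening discussed in the abstract concrete, here is a rough Python sketch under simplifying assumptions of my own: components are ranked by the plain standard deviation of their monthly changes, and the ex-index reweights the surviving expenditure shares to sum to one. This toy criterion does not capture the paper's distinction between high- and low-frequency volatility or its contribution-to-headline-volatility ranking; the component names and data below are hypothetical.

```python
import numpy as np

def rank_by_volatility(changes, names):
    """Rank price-index components by the sample standard deviation
    of their monthly percent changes, most volatile first.

    `changes` is a (T months x N items) array; `names` labels the
    N columns.
    """
    vols = np.std(np.asarray(changes, dtype=float), axis=0, ddof=1)
    order = np.argsort(vols)[::-1]
    return [names[i] for i in order]

def ex_index_inflation(changes, weights, excluded, names):
    """Aggregate monthly inflation after dropping `excluded` items,
    renormalizing the remaining expenditure shares to sum to one."""
    keep = np.array([n not in excluded for n in names])
    w = np.asarray(weights, dtype=float)[keep]
    w = w / w.sum()
    return np.asarray(changes, dtype=float)[:, keep] @ w
```

Running the screen, dropping the top-ranked items up to some expenditure-weight budget, and comparing the resulting series with the headline and ex-food-and-energy indices reproduces the flavor of the exercise, if not its exact statistics.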
Miscellany

Income redistribution and technology diffusion
This is joint work with Cyril Monnet and Erwan Quintin. We're trying to model and quantify the impact of redistribution policies on the rate of technological diffusion in a modified version of the model introduced by Greenwood and Yorukoglu (1997). Newer plants are characterized by a higher potential technology, and the rate of progress in the frontier technology determines long-run growth. Young plants need to invest in learning in order to reach their productivity potential, and this learning requires an input of skilled labor. An acceleration in the pace of progress thus raises the demand for skilled labor and causes the skill premium to rise, which in turn leads more agents to invest in skills along the transition toward a new balanced growth path. Redistribution mechanisms, in particular progressive taxation of labor income, distort this process by compressing the education premium. This delays the diffusion of newer technologies, making the transition toward a higher-growth path more protracted in economies that curb spikes in income inequality.
Our goal is to quantify the importance of this effect, and show that the broader implications of redistribution during periods of faster progress mesh well with the available evidence.
