Idea: Gives an explicit representation for Disappointment Aversion preferences and for some Betweenness preferences.
Abstract: One of the most well-known models of non-expected utility is Gul (1991)’s model of Disappointment Aversion. This model, however, is defined implicitly, as the solution to a functional equation; its explicit utility representation is unknown, which may limit its applicability. We show that an explicit representation can be easily constructed, using solely the components of the implicit one. We also provide a more general result: an explicit representation for preferences in the Betweenness class that also satisfy Negative Certainty Independence (Dillenberger, 2010) or its counterpart. We show how our approach gives a simple way to behaviorally identify the parameters of the representation and to study the consequences of disappointment aversion in a variety of applications.
PDF Download: Paper
Idea: We look more at the objects relevant for the task at hand, not at the object with the highest value.
Abstract: When choosing between options, such as food items presented in plain view, people tend to choose the option they spend longer looking at. The prevailing interpretation is that visual attention increases value. However, in previous studies, 'value' was coupled to a behavioural goal, since subjects had to choose the item they preferred. This makes it impossible to discern if visual attention has an effect on value, or, instead, if attention modulates the information most relevant for the goal of the decision-maker. Here we present the results of two independent studies, a perceptual and a value-based task, that allow us to decouple value from goal-relevant information using specific task-framing. Combining psychophysics with computational modelling, we show that, contrary to the current interpretation, attention does not boost value, but instead it modulates goal-relevant information. This work provides a novel and more general mechanism by which attention interacts with choice.
PDF Download: Paper on eLife
(Previous titles: Time Lotteries, and Risk Attitude towards Time Lotteries.)
Idea: Studies time lotteries -- lotteries that pay a fixed amount at a random date. Includes an experiment on this plus a discussion of which models can accommodate it.
Abstract: We study preferences over lotteries in which both the prize and the payment date are uncertain. In particular, a time lottery is one in which the prize is fixed but the date is random. With Expected Discounted Utility, individuals must be risk seeking over time lotteries (RSTL). In an incentivized experiment, however, we find that almost all subjects violate this property. Our main contributions are theoretical. We first show that within a very broad class of models, which includes many forms of non-Expected Utility and time discounting, it is impossible to accommodate even a single violation of RSTL without also violating a property we termed Stochastic Impatience, a risky counterpart of standard Impatience. We then offer two positive results. If one wishes to maintain Stochastic Impatience, violations of RSTL can be accommodated by keeping Independence within periods while relaxing it across periods. If, instead, one is willing to forego Stochastic Impatience, violations of RSTL can be accommodated with a simple generalization of Expected Discounted Utility, obtained by imposing only the behavioral postulates of Discounted Utility and Expected Utility.
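The claim that Expected Discounted Utility forces risk seeking over time lotteries follows from Jensen's inequality: the discount factor delta^t is convex in t, so E[delta^t]u(x) exceeds delta^{E[t]}u(x). A minimal numeric sketch, with illustrative parameter values that are not from the paper:

```python
# Illustrative check (made-up parameters) that Expected Discounted Utility
# implies risk seeking over time lotteries (RSTL): a lottery over payment
# dates is valued above a sure payment at the mean date, because delta^t
# is convex in t.
delta = 0.9          # per-period discount factor (assumed value)
u = lambda x: x      # linear utility, for simplicity

prize = 100.0
# Time lottery: the prize is paid at t=1 or t=3 with equal probability.
v_lottery = 0.5 * delta**1 * u(prize) + 0.5 * delta**3 * u(prize)
# Sure payment at the expected date t=2.
v_certain = delta**2 * u(prize)

print(v_lottery, v_certain)  # the time lottery is valued strictly higher
```

With delta = 0.9 the 50-50 lottery over dates 1 and 3 is worth 81.45, strictly more than 81.00 for a sure payment at the mean date 2; the experiment in the paper finds that almost all subjects choose the opposite.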
(Previous titles: Is it All Connected? A Testing Ground for Unified Theories of Behavioral Economics Phenomena, Is it All Connected? Understanding the Relationship Between Behavioral Phenomena, and Estimating the Relationship between Economic Preferences: A Testing Ground for Unified Theories of Behavior)
Idea: Studies experimentally the empirical relationship between behavioral phenomena such as risk, ambiguity, and loss aversion; certainty, present, and status quo bias; the endowment effect; updating speed; discounting; etc.
Abstract: We study the joint distribution of 11 behavioral phenomena in a group of 190 laboratory subjects and compare it to the predictions of existing models as a step in the development of a parsimonious, general model of economic choice. We find strong correlations between most measures of risk and time preference, between compound lottery and ambiguity aversion, and between loss aversion and the endowment effect. Our results support some, but not all attempts to unify behavioral economic phenomena. Overconfidence and gender are also predictive of some behavioral characteristics.
Idea: Studies axiomatically stochastic choice as the outcome of a deliberate desire to randomize.
Abstract: We study stochastic choice as the outcome of deliberate randomization. We derive a general representation of a stochastic choice function where stochasticity allows the agent to achieve from any set the maximal element according to her underlying preferences over lotteries. We show that in this model stochasticity in choice captures complementarity between elements in the set, and thus necessarily implies violations of Regularity/Monotonicity, one of the most common properties of stochastic choice. This feature separates our approach from other models, e.g., Random Utility.
PDF Download: Paper
Idea: Shows that exposure to and recall of violence have significant negative effects on short-term memory and cognitive control, with experiments in Colombia.
Abstract: Previous research has investigated the effects of violence and warfare on individuals’ well-being, mental health, and individual prosociality and risk aversion. This study establishes the short- and long-term effects of exposure to violence on short-term memory and aspects of cognitive control. Short-term memory is the ability to store information. Cognitive control is the capacity to exert inhibition, working memory, and cognitive flexibility. Both have been shown to affect positively individual well-being and societal development. We sampled Colombian civilians who were exposed either to urban violence or to warfare more than a decade earlier. We assessed exposure to violence through either the urban district-level homicide rate or self-reported measures. Before undertaking cognitive tests, a randomly selected subset of our sample was asked to recall emotions of anxiety and fear connected to experiences of violence, whereas the rest recalled joyful or emotionally neutral experiences. We found that higher exposure to violence was associated with lower short-term memory abilities and lower cognitive control in the group recalling experiences of violence, whereas it had no effect in the other group. This finding demonstrates that exposure to violence, even if a decade earlier, can hamper cognitive functions, but only among individuals actively recalling emotional states linked with such experiences. A laboratory experiment conducted in Germany aimed to separate the effect of recalling violent events from the effect of emotions of fear and anxiety. Both factors had significant negative effects on cognitive functions and appeared to be independent from each other.
(Previous Title: Objective Lotteries as Ambiguous Objects: Allais, Ellsberg, and Hedging)
Idea: Model an agent who exhibits both Allais and Ellsberg-like behavior in the setup of Anscombe and Aumann (1963). Generalizes Gilboa and Schmeidler (1989) to allow for Allais-type behavior.
Abstract: Two of the most well-known regularities observed in preferences under risk and uncertainty are ambiguity aversion and the Allais paradox. We study the behavior of an agent who can display both tendencies at the same time. We introduce a novel notion of preference for hedging that applies to both objective lotteries and uncertain acts. We show that this axiom, together with other standard ones, is equivalent to a representation in which the agent: 1) evaluates ambiguity using multiple priors, as in the model of Gilboa and Schmeidler (1989); 2) evaluates objective lotteries by distorting probabilities as in the Rank Dependent Utility model, but using the worst from a set of distortions. We show that a preference for hedging is not sufficient to guarantee an Ellsberg-like behavior if the agent violates Expected Utility for objective lotteries; we provide a novel axiom that characterizes this case, linking the distortions for objective and subjective bets.
PDF Download: Paper
(Previous title: Stochastic Choice and Hedging)
Idea: Experimental paper in which we show that the majority of subjects exhibit stochastic choice. This is in line with interpretations of stochastic choice as emerging from an explicit preference for randomization, as opposed to emerging from random utility or mistakes.
Abstract: We conduct an experiment in which subjects face the same questions repeated multiple times, with repetitions of two types: 1) following the literature, the repetitions are distant from each other; 2) in a novel treatment, the repetitions are in a row, and subjects are told that the questions will be repeated. We find that a large majority of subjects exhibit stochastic choice in both cases. We discuss the implications for models of stochastic choice.
(Previous title: A Variation on Ellsberg and Uncertain Probabilities vs. Uncertain Outcomes: An Experimental Study of Attitudes Towards Ambiguity)
Idea: Modifies Ellsberg-style experiments to allow for bets directly on the composition of the urn. The observed behavior suggests restrictions on the set of priors.
Abstract: The classical Ellsberg experiment presents individuals with a choice problem in which the probability of winning a prize is unknown (uncertain). In this paper, we study how individuals make choices between gambles in which the uncertainty is in different dimensions: the winning probability, the amount of the prize, the payment date, and many combinations thereof. While the decision-theoretic models accommodate a rich variety of behaviors, we present experimental evidence that points at systematic behavioral patterns: (i) no uncertainty is preferred to uncertainty on any single dimension and to uncertainty on multiple dimensions, and (ii) “correlated” uncertainty on multiple dimensions is preferred to uncertainty on any single dimension.
Abstract: This article presents the results of a laboratory experiment and an online multi-country experiment testing the effect of motor vehicle eco-labels on consumers. The laboratory study featured a discrete choice task and questions on comprehension, while the ten-country online experiment included measures of willingness to pay and comprehension. Labels focusing on fuel economy or running costs are better understood and influence choices about money-related eco-friendly behaviour. We suggest that this effect comes through mental accounting of fuel economy. In the absence of a cost-saving frame, we do not find a similar effect of information on CO2 emissions and eco-friendliness. Labels do not perform as well as promotional materials: by virtue of being embedded in a setting designed to capture attention, the latter are more effective. We also found that large and expensive cars tend to be undervalued once fuel economy is highlighted.
PDF Download: Paper
Idea: Studies emotional and behavioral responses to anti-smoking pictures on cigarette packaging.
Abstract: In this article we use data from a multi-country Randomized Control Trial study on the effect of anti-tobacco pictorial warnings on an individual’s emotions and behavior. By exploiting the exogenous variations of images as an instrument, we are able to identify the effect of emotional responses. We use a range of outcome variables, from cognitive (risk perception and depth of processing) to behavioural (willingness to buy and willingness to pay). Our findings suggest that the odds of buying a tobacco product can be reduced by 80% if the negative affect elicited by the images increases by one standard deviation. More importantly from a public policy perspective, not all emotions behave alike, as eliciting shame, anger, or distress proves more effective in reducing smoking than fear and disgust.
PDF Download: Paper
Abstract: Recent studies suggest psychological differences between conservatives and liberals, including that conservatives are more overconfident. We use a behavioral political economy model to show that while this is undoubtedly true for election years in the current era, there is no reason to believe that conservative ideologies are intrinsically linked to overconfidence. Indeed, it appears that in 1980 and before, conservatives and liberals were equally overconfident.
PDF Download: Paper
Idea: Novel class of preferences that satisfy the Certainty Effect (Allais Paradox) in which the decision maker has a set of utilities and considers the most pessimistic one of them. Characterized via one key axiom, Negative Certainty Independence (plus basic axioms).
Abstract: Many violations of the Independence axiom of Expected Utility can be traced to subjects’ attraction to risk-free prospects. The key axiom in this paper, Negative Certainty Independence (Dillenberger, 2010), formalizes this tendency. Our main result is a utility representation of all preferences over monetary lotteries that satisfy Negative Certainty Independence together with basic rationality postulates. Such preferences can be represented as if the agent were unsure of how to evaluate a given lottery p; instead, she has in mind a set of possible utility functions over outcomes and displays a cautious behavior: she computes the certainty equivalent of p with respect to each possible function in the set and picks the smallest one. The set of utilities is unique in a well-defined sense. We show that our representation can also be derived from a ‘cautious’ completion of an incomplete preference relation.
PDF Download: Paper
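The cautious evaluation rule described in the abstract, computing the certainty equivalent of a lottery under each utility in the set and picking the smallest, can be sketched in a few lines. The specific utilities and the lottery below are made-up illustrations, not the paper's construction:

```python
# Hedged sketch of the Cautious Expected Utility rule: the value of a
# lottery p is the smallest certainty equivalent of p across a set of
# utility functions. The set of utilities here is an assumed example.
import math

def certainty_equivalent(lottery, u, u_inv):
    """Certainty equivalent of a lottery [(prize, prob), ...] under utility u."""
    eu = sum(prob * u(x) for x, prob in lottery)
    return u_inv(eu)

# A made-up set of utilities: risk neutral, and square-root (risk averse).
utilities = [
    (lambda x: x,            lambda v: v),      # linear
    (lambda x: math.sqrt(x), lambda v: v * v),  # concave
]

p = [(100.0, 0.5), (0.0, 0.5)]  # 50-50 lottery over $100 and $0

# Cautious evaluation: the worst (smallest) certainty equivalent.
V = min(certainty_equivalent(p, u, u_inv) for u, u_inv in utilities)
print(V)  # 25.0: the sqrt utility gives CE 25, the linear one gives 50
```

The min operator is what generates the attraction to risk-free prospects: a sure amount is evaluated identically by every utility in the set, while a risky lottery is penalized by whichever utility treats it most pessimistically.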
Idea: Studies the role of overconfidence in ideology formation and voting, showing how it is linked with ideological extremism and higher turnout. It then tests the predictions in novel survey data, finding a strong support.
Abstract: This paper studies, theoretically and empirically, the role of overconfidence in political behavior. Our model of overconfidence in beliefs predicts that overconfidence leads to ideological extremeness, increased voter turnout, and increased strength of partisan identification. Moreover, the model makes many nuanced predictions about the patterns of ideology in society, and over a person’s lifetime. These predictions are tested using unique data that measure the overconfidence, and standard political characteristics, of a nationwide sample of over 3,000 adults. Our numerous predictions find strong support in these data. In particular, we document that overconfidence is a substantively and statistically important predictor of ideological extremeness and voter turnout.
Idea: Characterize choice that violate Warp because of an endogenous reference point, like in the case of the attraction effect.
Abstract: The goal of this paper is to develop, axiomatically, a revealed preference theory of reference-dependent behavior. Instead of taking the reference for an agent as exogenously given in the description of a choice problem, we suitably relax the Weak Axiom of Revealed Preference and derive the existence of reference alternatives as well as the structure of choice behavior conditioned on those alternatives.
PDF Download: Paper
(Previous Title: Hypothesis Testing and Multiple Priors)
Idea: Extends the model of non-Bayesian updating in Ortoleva (2012), the Hypothesis Testing model, to the case of ambiguity-averse agents.
Abstract: We study a model of non-Bayesian updating, based on the Hypothesis Testing model of Ortoleva (2012), for ambiguity-averse agents. Agents rank acts following the MaxMin Expected Utility model of Gilboa and Schmeidler (1989), and when they receive new information they update their set of priors as follows. If the information is such that all priors in the original set assign to it a probability above a threshold, then the agent updates every prior in the set using Bayes’ rule. Otherwise, she looks at a prior over sets of priors; she updates it using a rule similar to Bayes’ rule for second-order beliefs over sets; finally, she chooses the set of priors to which the updated prior over sets of priors assigns the highest likelihood.
PDF Download: Paper
Idea: Model an agent who prefers a smaller menu because of the cost of thinking involved in choosing from the bigger one. Defines a notion of Thinking Aversion, and characterizes this cost.
Abstract: We study the behavior of an agent who dislikes large choice sets because of the ‘cost of thinking’ involved in choosing from them. We take as a primitive a preference relation over lotteries of menus and impose novel axioms that allow us to separately identify a ‘genuine’ preference over the content of menus from the cost of choosing from them. Using this, we formally define the notion of Thinking Aversion, much in line with the definitions of risk or ambiguity aversion. We characterize such preference as the difference between an affine evaluation of the content of the menu and a function that assigns to each menu a thinking cost. We provide conditions that allow us to interpret the cost of thinking about a menu as the cost that the agent has to sustain to figure out her preferences in order to make her choice.
PDF Download: Paper
Idea: A model in which agents are Bayesian after 'normal' events, but have non-Bayesian reactions to low-probability events; behavior after zero-probability events is also modeled.
Abstract: Bayes' rule has two well-known limitations: 1) it does not regulate the reaction to zero-probability events; 2) a sizable empirical evidence documents systematic violations of it. We introduce a behavioral rule, Dynamic Coherence, and show that it is equivalent to an alternative updating rule, the Hypothesis Testing model. According to it, the agent follows Bayes' rule if she receives information to which she assigned a probability above a threshold. Otherwise, she looks at a prior over priors; updates it using Bayes' rule for second-order priors; and chooses the prior to which the updated prior over priors assigns the highest likelihood. We apply the model to construct, in an example, a refinement of Perfect Bayesian Nash Equilibrium.
PDF Download: Paper
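The updating rule described in the abstract, Bayes' rule after sufficiently likely events and a second-order re-selection of the prior after surprising ones, can be sketched as follows. All numbers, names, and the two-state setup are illustrative assumptions, not from the paper:

```python
# Hedged sketch of the Hypothesis Testing updating rule: Bayes' rule when
# the observed event was not too surprising under the current prior;
# otherwise, Bayes-update a prior over priors and adopt the prior with
# the highest posterior weight. All values below are made up.

def bayes(prior, event):
    """Condition a prior {state: prob} on an event {state: P(event|state)}."""
    post = {s: prior[s] * event[s] for s in prior}
    total = sum(post.values())
    return {s: p / total for s, p in post.items()}

def hypothesis_testing_update(current, second_order, event, threshold):
    """current: the prior in use; second_order: {name: (weight, prior)}."""
    p_event = sum(current[s] * event[s] for s in current)
    if p_event >= threshold:                      # 'normal' news: Bayes
        return bayes(current, event)
    # Surprising news: Bayes-update the prior over priors, then adopt the
    # prior with the highest posterior weight, conditioned on the event.
    weights = {name: w * sum(pr[s] * event[s] for s in pr)
               for name, (w, pr) in second_order.items()}
    best = max(weights, key=weights.get)
    return bayes(second_order[best][1], event)

# Two candidate priors over states A/B; the agent currently uses 'skeptic'.
second_order = {'skeptic': (0.7, {'A': 0.95, 'B': 0.05}),
                'open':    (0.3, {'A': 0.50, 'B': 0.50})}
event = {'A': 0.01, 'B': 0.99}   # evidence strongly favoring state B
post = hypothesis_testing_update(second_order['skeptic'][1],
                                 second_order, event, threshold=0.2)
print(post)  # the surprising news makes the agent abandon the 'skeptic' prior
```

In this run the event has probability 0.059 under the current prior, below the 0.2 threshold, so the agent switches to the 'open' prior and then conditions it on the news.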
Idea: Studies incomplete preferences under uncertainty.
Abstract: We investigate the classical Anscombe-Aumann model of decision-making under uncertainty without the completeness axiom. We distinguish between the dual traits of "indecisiveness in beliefs" and "indecisiveness in tastes." The former is captured by the Knightian Uncertainty model, while the latter by the single-prior expected multi-utility model. We characterize axiomatically the latter model. Then, we show that, under Independence and Continuity, these two models can be jointly characterized by means of a partial completeness property.
PDF Download: Paper
Idea: Characterize status-quo-dependent preferences under uncertainty. The agent keeps her status quo unless there is an option better than it for a set of priors. We also show that this implies that the agent is uncertainty averse.
Abstract: Motivated by the extensive evidence about the relevance of status quo bias both in experiments and in real markets, we study this phenomenon from a decision-theoretic perspective, focusing on the case of preferences under uncertainty. We develop an axiomatic framework that takes as a primitive the preferences of the agent for each possible status quo option, and provide a characterization according to which the agent prefers her status quo act if nothing better is feasible for a given set of possible priors. We then show that, in this framework, the very presence of a status quo induces the agent to be more uncertainty averse than she would be without a status quo option. Finally, we apply the model to a financial choice problem and show that the presence of status quo bias as modeled here might induce the presence of a risk premium even with risk neutral agents.
PDF Download: Paper
Idea: How do you allocate goods to people with the same ordinal but different cardinal preferences?
Abstract: Goods and services—public housing, medical appointments, schools—are often allocated to individuals who rank them similarly but differ in their preference intensities. We characterize optimal allocation rules when individual preferences are known and when they are not. Several insights emerge. First-best allocations may involve assigning some agents “lotteries” between high- and low-ranked goods. When preference intensities are private information, second-best allocations always involve such lotteries and, crucially, may coincide with first-best allocations. Furthermore, second-best allocations may entail disposal of services. We discuss a market-based alternative and show how it differs.
Idea: Shows that agents with small-dimensional models are more confident in their predictive abilities.
Abstract: Different agents compete to predict a variable of interest related to a set of covariates via an unknown data generating process. All agents are Bayesian, but may consider different subsets of covariates to make their prediction. After observing a common dataset, who has the highest confidence in her predictive ability? We characterize it and show that it crucially depends on the size of the dataset. With small data, typically it is an agent using a model that is `small-dimensional,' in the sense of considering fewer covariates than the true data generating process. With big data, it is instead typically `large-dimensional,' possibly using more variables than the true model. These features are reminiscent of model selection techniques used in statistics and machine learning. However, here model selection does not emerge normatively, but positively as the outcome of competition between standard Bayesian decision makers. The theory is applied to auctions of assets where bidders observe the same information but hold different priors.
PDF Download: Paper (First version: June 2018; This version: June 2019)
Idea: How to infer a choice correspondence? From stochastic choice.
Abstract: Despite being the fundamental primitive of the study of decision-making in economics, choice correspondences are not observable: even for a single menu of options, we observe at most one choice of an individual at a given point in time, as opposed to the set of all choices she deems most desirable in that menu. However, it may be possible to observe a person choose from a feasible menu at various times, repeatedly. We propose a method of inferring the choice correspondence of an individual from this sort of choice data. First, we derive our method axiomatically, assuming an ideal dataset. Next, we develop statistical techniques to implement this method for real-world situations where the sample at hand is often fairly small. As an application, we use the data of two famed choice experiments from the literature to infer the choice correspondences of the participating subjects.
PDF Download: Paper (First version: June 2020; This version: February 2021)
Idea: Use ambiguous information to test dilation of sets of priors; rejects it.
Abstract: With common models of updating under ambiguity, new information may increase the amount of relevant ambiguity: the set of priors may ‘dilate.’ We test experimentally one sharp case: agents bet on a risky urn and get information that is truthful or not based on the draw from an Ellsberg urn. The set of priors should dilate; ambiguity averse agents should lower their value of bets; ambiguity seeking ones should increase it. Instead, we find that ambiguity averse agents do not change it; ambiguity seeking ones increase it substantially. We also test bets on ambiguous urns and find sizable reactions to ambiguous information.
(Previous title: Ranges of Preferences and Randomization)
Idea: Studies how frequent preference for randomization is. Finds that subjects want to randomize for very large ranges of values.
Abstract: A growing literature has shown how people sometimes prefer to randomize between two options. We study how prevalent this behavior is in an experiment using a novel and simple method. We allow subjects to randomize between options in a series of questions in which one of the alternatives is fixed and the other varies, capturing the range of values for which subjects want to randomize. We find that most subjects choose to randomize in most questions. Crucially, they do so for ranges of values that are ‘very large’: for example, when comparing a fixed amount $x with a lottery that pays $20 or $0 with equal chances, subjects typically randomize for all xs between $5.3 and $12. Large ranges are found in other questions as well, showing how prevalent the desire to randomize is. We connect ranges to standard choices, Certainty-Bias, and non-Monotonicity.
PDF Download: Paper (This version: January 2021)
Idea: Studies the correlation between behavioral economic measures in a large-scale, representative-sample incentivized survey. Finds clear patterns.
Abstract: We study the pattern of correlations across a large number of behavioral regularities, with the goal of creating an empirical basis for more comprehensive theories of decision-making. We elicit 21 behaviors using an incentivized survey on a representative sample (n = 1,000) of the U.S. population. Our data show a clear and relatively simple structure underlying the correlations between these measures. Using principal components analysis, we reduce the 21 variables to six components corresponding to clear clusters of high correlations. We examine the relationship between these components, cognitive ability, and demographics. Common extant theories explain some of the patterns in our data, but each theory we examine is also inconsistent with some patterns as well.
PDF Download: Paper (November 2020, under review)
Idea: Studies the relation between Stochastic Impatience and models that separate time and risk preferences; it is not easy to accommodate both.
Abstract: We study how the separation of time and risk preferences relates to a behavioral property that generalizes impatience to stochastic environments: Stochastic Impatience. We show that, within a broad class of models, Stochastic Impatience holds if and only if risk aversion is “not too high” relative to the inverse elasticity of intertemporal substitution. This result has implications for many known models. For example, for those of Epstein and Zin (1989) and Hansen and Sargent (1995), Stochastic Impatience is violated for all commonly used parameters.
PDF Download: Paper (First version: October 2017; This version: April 2020)
(Previous title: Willingness-To-Pay and Willingness-To-Accept are Probably Less Correlated than You Think)
Idea: Shows that Willingness to Pay and Willingness to Accept are not correlated, using three large-scale studies on representative samples.
Abstract: A vast literature documents that willingness to pay (WTP) is less than willingness to accept (WTA) a monetary amount for an object, a phenomenon called the endowment effect. Using data from three incentivized studies with a total representative sample of 4,000 U.S. adults, we add one additional finding: WTA and WTP for a lottery are (essentially) uncorrelated. In contrast, independent measures of WTA (or WTP) are highly correlated, and relatively stable across time. Leading models of reference-dependent preferences are compatible with a zero correlation between WTA and WTP, but only for specific parameterizations and ruling out popular special cases. These models also predict a relationship between the endowment effect and loss aversion, which we do not find.
Idea: The Cautious EU model as a model of deliberate randomization.
Abstract: We axiomatize the Cautious Stochastic Choice model, a model of deliberate randomization in which hedging motives lead the agent to make an intentional stochastic choice. The model allows us to link stochastic choice to the Certainty Bias.
PDF Download: Paper (December 2018, under review)
(Previous title: Aspirations and growth: a model where the income of others acts as a reference point, and The Behavior of Others as a Reference Point: Prospect Theory, Inequality, and Growth)
Idea: Study growth when the average consumption of the society acts as a reference point for the consumers. Reference dependence is modeled using prospect theory.
Abstract: We study a model in which consumers are reference-dependent, modeled using prospect theory, and their reference point is the average behavior of the society in that period. We show that in any equilibrium of the economy, after a finite number of periods the wealth distribution will become, and remain, either one of perfect equality or one that admits a ‘missing class’ (a particular form of polarization). We then study growth rates and show that, if we look at the equilibria with the highest growth, then the society with the highest growth rate is the one that starts with perfect equality. If we look at the equilibria with the lowest growth, however, then the society with a small amount of initial inequality is the one that attains the highest growth rate, while a society with perfect equality is the one with the lowest performance. All of these growth rates are weakly higher than the growth rate of a corresponding economy without reference-dependence.
PDF Download: Paper (March 2014 - under review)
Idea: Applies the model of "Revealed (P)Reference Theory" to the theory of product differentiation. Shows that the presence of these biases might lead the economy back to an efficient equilibrium in which the monopolist extracts all the surplus.
Abstract: We apply the theoretical model of endogenous reference-dependence of Ok, Ortoleva and Riella (2011) to the theory of vertical product differentiation. We analyze the standard problem of a monopolist who offers a menu of alternatives to consumers of different types, but we allow for agents to exhibit a form of endogenous reference dependence like the attraction effect. We show that the presence of such biases might allow the monopolist to overcome some of the incentive compatibility constraints of the standard problem, leading the economy back towards the efficient equilibrium in which the monopolist extracts all the surplus. We then discuss welfare implications, showing that an increase in the fraction of customers who are subject to the attraction effect might not only increase the monopolist’s profits and total welfare, but consumer’s welfare as well.
PDF Download: Paper (September 2011)
Idea: Applies the model in "The Price of Flexibility" to a portfolio choice problem.
Abstract: This paper analyzes the investment decision of an agent who has to figure out the correct model to use in order to evaluate all available assets. This is a costly process. She can choose either to endure this cost, or to simplify her choice by looking only at a subset of the available assets. We show that such an agent tends to participate less in the market, to under-diversify her portfolio, or to diversify it naively, by investing equal amounts in a selection of assets. These predictions qualitatively match the empirical observations of the investments of households, both in general investment situations and in the choice of 401(k) plans. Moreover, in a partial equilibrium setting, if agents are not aware that others bear the cost of discovering which model to use, then even a small cost can induce a sizable effect on the equilibrium prices. This is due to a phenomenon similar to a multiplier: agents observe prices that are different than expected, and in turn modify expectations, which affect prices, which in turn affect expectations, and so on.
(Currently no version to Download)
Idea: Short note that extends the Hypothesis Testing model to the case of an infinite state space.
Abstract: We extend the Hypothesis Testing model of Ortoleva (2010) to the case in which the state space is infinite. We show that almost identical results hold in this case as well.
(Currently no version to download)
Last Update, January 2021