Dissertation Defense: “Prior Updated: Essays on Belief Updating”, Kenneth Chan

Date and Time
Location
North Hall 2212

Speaker

Kenneth Chan, PhD Candidate, University of California, Santa Barbara

Biography

Kenneth is a PhD candidate in Economics at UCSB. Prior to joining UCSB, he received a Bachelor of Social
Sciences in Economics from the National University of Singapore in 2020. His research interests lie in
behavioral and experimental economics. His current projects study how people incorporate new information
into their beliefs, contributing to our understanding of decision-making under uncertainty.
Outside of his academic pursuits, Kenneth has a strong interest in contract bridge, a partnership game built on
decision-making under uncertainty. He has represented Singapore in bridge and has played semi-professionally.
The game inspired him to pursue a PhD in economics to study human behavior and decision-making.

Event Details

Join us for Kenneth’s dissertation defense. He will present his dissertation, "Prior Updated:
Essays on Belief Updating". To access a copy of the dissertation, you must have an active UCSB NetID and
password.

Abstract and JEL Codes

This dissertation studies how people incorporate new information into their existing beliefs.

In the first chapter, I present an axiomatic characterization of the Grether (1980) model centered on the preservation of the Monotone Likelihood Ratio Property (MLRP). I also show that Bayesian updating can be characterized by the preservation of MLRP together with the martingale property. Using this representation result, I identify a class of non-Bayesian updating rules for which useful comparative statics can be obtained across different signal realizations in canonical belief updating problems. I also conduct an experiment to test the axioms characterizing Bayes’ rule in order to identify why people are non-Bayesian. At the individual level, most axioms are violated to some degree, but violations of the preservation of MLRP are relatively mild. This provides some validation for the widely used Grether (1980) model and suggests that comparative statics predictions across different signal realizations and prior beliefs in a belief updating problem are likely to hold.
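For readers unfamiliar with the model, the Grether (1980) rule is commonly written in posterior-odds form, with separate weights on the likelihood ratio and the prior odds; Bayes’ rule is the special case where both weights equal one. The sketch below uses illustrative notation (states A and B, signal s, weights c and d), not necessarily that of the dissertation:

\[
  \frac{P(A \mid s)}{P(B \mid s)}
  \;=\;
  \left(\frac{P(s \mid A)}{P(s \mid B)}\right)^{\!c}
  \left(\frac{P(A)}{P(B)}\right)^{\!d},
\]

where c = d = 1 recovers Bayes’ rule, and d < 1 corresponds to base rate neglect (underweighting the prior odds).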

The second chapter is joint work with Sebastian Brown, in which we study how people revise their wage expectations over time using the labor supplement of the Survey of Consumer Expectations, a nationally representative survey. Applying a recently developed excess belief movement test for the martingale property (Augenblick and Rabin, 2021), we find strong evidence of non-Bayesian learning at the aggregate level. Among respondents who answered the survey at least twice, the average movement in beliefs is roughly 517% of the reduction in beliefs’ uncertainty, 417% more than the Bayesian benchmark. This result is consistent with base rate neglect and overreaction to signals, and we find suggestive evidence that people exhibit base rate neglect. Our simulations show that this result is unlikely to be explained by measurement error alone. We also find patterns of asymmetric updating: individuals revise their beliefs more after good wage offers than after bad ones.
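As a rough sketch of the test’s logic, in illustrative notation and specialized to beliefs about a binary event (which need not match the paper’s exact setup): under Bayesian updating, beliefs follow a martingale, so the expected total movement of a belief stream equals the expected reduction in its uncertainty. For a belief stream \(\pi_1, \dots, \pi_T\),

\[
  m \;=\; \sum_{t=1}^{T-1} \left(\pi_{t+1} - \pi_t\right)^2,
  \qquad
  r \;=\; \pi_1\!\left(1 - \pi_1\right) \;-\; \pi_T\!\left(1 - \pi_T\right),
\]

and the Bayesian benchmark is \(\mathbb{E}[m] = \mathbb{E}[r]\). A movement figure of roughly 517% of the uncertainty reduction thus means observed belief movement is about 5.17 times what the martingale property would imply.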

The final chapter is joint work with Gary Charness, Chetan Dave, and Lucas Reddinger, in which we study how confidence in prior beliefs affects how people update. We design an experiment to test whether confidence in prior beliefs affects the degree of over-updating relative to the Bayesian benchmark. To manipulate subjects’ confidence in their prior beliefs, we adopt an experimental feature from Esponda, Oprea, and Yuksel (2023). Subjects are shown a 10-by-10 grid of white and black squares, with the proportion of white squares corresponding to the prior in the updating task. In the low-confidence treatment, the grid is flashed for 0.25 seconds, while in the high-confidence treatment it remains on the screen for 30 seconds, giving subjects enough time to count the white and black squares. Although the general pattern is one of under-updating relative to the Bayesian benchmark, we find that subjects in the low-confidence treatment under-update to a lesser extent and place less weight on their prior beliefs when updating. We also propose an incentive-compatible method to measure subjects’ confidence.


JEL Codes: C91, D01, D83, D90