Dissertation Defense: “Three Essays in Behavioral and Experimental Economics”, Jing Zhou

Date and Time
Location
Zoom

Speaker

Jing Zhou, PhD Candidate, University of California, Santa Barbara

Biography

I am a Ph.D. candidate in Economics at the University of California, Santa Barbara. My research interests are in Behavioral and Experimental Economics and Information Economics. I use theoretical and empirical methods to study the origins of "irrational" economic decision-making and belief biases, and to develop methodological tools for measuring cognitively imprecise beliefs.

In my job market paper, I design a series of lab experiments, grounded in three broad classes of economic theories, to understand why people make a seemingly suboptimal stochastic choice known as probability matching.

Abstract

This dissertation consists of three chapters that explore why individuals make seemingly suboptimal decisions in risk management, why they fail to use information in a Bayesian manner when updating their beliefs, and how novel methodological tools can advance the modeling of, and inference about, subjective beliefs and perceptions in the presence of cognitive limitations.

Chapter 1 studies the mechanisms underlying a classic behavioral puzzle in risk management known as probability matching: when choosing between binary lotteries that differ only in their probabilities, people tend to randomize across the risky options, or even match their choice frequencies to the outcome probabilities (for example, choosing the lottery that wins with probability 0.7 on roughly 70% of trials rather than choosing it every time). Why? I present an experiment designed to distinguish between three broad classes of explanations: models of Correlation-Invariant Stochastic Choice (mixing due to factors orthogonal to how outcomes are jointly determined, such as non-standard preferences or errors), models of Correlation-Sensitive Stochastic Choice (e.g., deliberate mixing due to a misperceived hedging opportunity), and Framing Effects (indecisiveness driven by frame-sensitive heuristics, e.g., a similarity heuristic that attends to dissimilar but irrelevant attributes, the outcomes, while ignoring relevant attributes, the probabilities). The experimental design takes a diagnostic approach, differentiating between the testable predictions of these classes across a series of treatments. The results suggest that a substantial proportion of mixing behavior aligns with models of Correlation-Sensitive Stochastic Choice, while the other two classes have limited explanatory power.

In Chapter 2, joint work with Menglong Guan, ChienHsun Lin, and Ravi Vora, we experimentally investigate how people value and use different statistical characteristics of a set of realized binary signals, referred to as sample features, in order to understand why individuals deviate from the Bayesian benchmark when updating beliefs. We find that subjects systematically under-infer the information contained in each sample feature. Moreover, the magnitude of under-inference varies significantly across sample features: under-inference is least severe with the Sample Proportion (the relative frequency of different outcomes in the realized signals) compared with more informative features such as the Sample Count (the absolute number of different outcomes in the realized signals). We also find that the standard measure of informativeness used in information theory does not fully explain subjects' preferences over sample features: subjects strictly prefer features that contain the Sample Proportion to those that do not, and they undervalue the usefulness of sample size. Combining preference and belief-updating behavior, we find that subjects deviate less from the Bayesian benchmark when provided with a more-preferred feature than with a less-preferred one. These results suggest that some biases in signal usage are more likely intentional deviations than the result of inattentive heuristics.

In Chapter 3, joint work with Xin Jiang, we introduce a novel elicitation method, the Dynamic Binary Method (DBM), designed to address the common difficulty individuals face in pinpointing the best point estimate of their beliefs, particularly when those beliefs are imprecise. Unlike Classical Methods (CM), which require respondents to make absolute judgments and report a point estimate of their true beliefs, DBM guides them through a series of binary relative judgments, allowing them to express interval beliefs by exiting the process at any step. To assess the empirical validity of DBM, we conduct both within-subject and between-subject experiments using a diverse range of perception tasks drawn from the previous literature, with CM as the performance benchmark in each task. We find that DBM does not perform significantly differently from CM at the aggregate level, regardless of whether the perception questions use artificial (laboratory) or real-life settings, and irrespective of the performance measure used. Notably, DBM outperforms CM when the objective truth is extreme. Furthermore, we find a negative correlation between the length of the belief intervals stated in DBM tasks and their accuracy. Additionally, the interval length stated in DBM predicts respondents' performance in CM tasks at the aggregate level, albeit not in a strictly monotonic manner. Finally, we explore methods of using DBM-collected data to predict stated point beliefs in DBM, offering insights into potential applications of the method beyond its immediate implementation.

JEL Classification: D81, D91, C91

Event Details

Join us to hear Jing’s dissertation defense. She will present her dissertation, titled “Three Essays in Behavioral and Experimental Economics.” To access a copy of the dissertation, you must have an active UCSB NetID and password.

Zoom: https://ucsb.zoom.us/j/86411696881