%0 Thesis
%A Hudja, Stanton N
%D 2020
%T Essays on Experimental Economics and Innovation
%U https://hammer.purdue.edu/articles/thesis/Essays_on_Experimental_Economics_and_Innovation/12226871
%R 10.25394/PGS.12226871.v1
%2 https://hammer.purdue.edu/ndownloader/files/22484417
%K Economics
%K Experiment
%K Continuous Time
%K Bandits
%K Innovation Contests
%K Research Tournaments
%K Voting
%K Experimental Economics
%X My dissertation consists of four chapters. In the first chapter, I use a laboratory experiment to analyze how individuals resolve an exploration versus exploitation trade-off. The experiment implements a single-agent exponential bandit model. I find that, as predicted, subjects respond to changes in the prior belief, safe action, and discount factor. However, I commonly find that subjects give up on exploration earlier than predicted. I estimate a structural model that allows for risk aversion, base rate neglect/conservatism, and probability mis-weighting. I find support for risk aversion, conservatism, and probability mis-weighting as potential factors that influence subject behavior. Risk aversion appears to contribute to the finding that subjects explore less than predicted.

In the second chapter, I use a laboratory experiment to analyze how a group of voters experiments with a new reform. The experiment implements the continuous-time collective experimentation model of Strulovici (2010). I analyze a subset of data where groups and single decision makers should eventually prefer to stop experimentation and abandon the reform. I find three results consistent with the modeled experimentation incentives: in this subset of data, groups stop experimentation earlier than single decision makers, wait longer to stop experimentation as the number of revealed winners increases, and stop experimentation earlier than the utilitarian optimum predicts. However, I also find that both groups and single decision makers stop experimentation earlier than predicted. Additional treatments show that this result is unlikely to be driven by standard explanations such as incorrect belief updating or risk aversion.

In the third chapter, I use a laboratory experiment to investigate the role of group size in an innovation contest. Subjects compete in a discrete-time innovation contest, based on Halac et al. (2017), in which they are informed, at the start of each period, of the aggregate number of innovation attempts. I compare two contests, a two-person and a four-person contest, that differ only in contest size and have the same equilibrium probability of obtaining an innovation. The four-person contest results in more innovations and induces more aggregate innovation attempts than the two-person contest. However, there is some evidence that the two-person contest induces more innovation attempts per individual than the four-person contest. Subjects' behavior is consistent with placing more weight, when updating beliefs, on their own failed innovation attempts than on their competitors' failed attempts.

In the fourth chapter, I investigate the role of performance feedback, in the form of a public leaderboard, in an innovation competition that features sequential search and a range of possible innovation qualities. I find that in the subgame perfect equilibrium of contests with a fixed ending date (i.e., a finite horizon), providing public performance feedback results in lower equilibrium effort and lower innovation quality. I conduct a controlled laboratory experiment to test the theoretical predictions and find that the experimental results largely support the theory. In addition, I investigate how individual characteristics affect competitive innovation activity. I find that risk aversion is a significant predictor of behavior both with and without leaderboard feedback and that the direction of this effect is consistent with the theoretical predictions.
%I Purdue University Graduate School