Truland Building, 400-A
April 07, 2011, 09:30 AM to 11:30 AM
My first chapter examines the empirical properties of two theoretically advanced mechanisms for eliciting subjective probabilities (Karni, 2009, Econometrica). These mechanisms ("Declarative" and "Clock") are of interest because they are incentive compatible without assuming risk neutrality or expected utility maximization. Although the two are considered theoretically equivalent, their empirical performance remains unknown, especially in the presence of naïve strategies. Considering a population with both "sophisticated" and "naïve" agents, I derive theoretical conditions under which beliefs elicited using the Clock mechanism are more accurate than those elicited using the Declarative mechanism. Then, via a laboratory study with inexperienced participants in which these conditions hold, I find empirical results that support the Clock mechanism's accuracy advantage.
My second chapter uses an experiment to identify the relative strength of the preference for appearing honest as opposed to being honest, a distinction made by Akerlof (1983). In a two-stage prediction game, subjects' preference for appearing honest is reflected in their probabilistic predictions of a fair die roll, while their preference for being honest is inferred from their self-reported die roll outcomes. I find that the vast majority (95%) of subjects were willing to incur a cost to preserve an honest appearance, but only 44% exhibited an intrinsic preference for honesty. Moreover, my type classification analysis suggests that, after establishing an honest appearance, people cheat to the greatest possible extent. These results suggest that the "incomplete cheating" behavior frequently reported in the literature can be attributed more to a preference for maintaining appearances than to an intrinsic aversion to maximum cheating.
My third chapter discusses and develops adaptive approaches for testing differences in location in two- or k-sample environments. The Wilcoxon-Mann-Whitney and Jonckheere tests have dominated nonparametric analyses in the behavioral sciences for the past seven decades. Their widespread use masks the fact that there exist simple adaptive procedures that use data-dependent statistical decision rules to select an optimal nonparametric test. My Monte Carlo simulations show that these adaptive approaches increase the chance of detecting true differences in location without inflating Type I error. I illustrate these procedures using data from Gneezy and Smorodinsky (2006). I have also developed a Stata package, available to anyone interested in taking advantage of adaptive techniques in their own research.
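To illustrate the adaptive idea in the two-sample case, the following is a minimal Python sketch (the chapter's actual implementation is a Stata package, and its decision rules may differ). A Hogg-style tail-weight selector is computed from the pooled data; the test statistic is then a Wilcoxon rank sum for light-tailed data or a Mood-type median-score sum for heavy-tailed data, with a two-sided permutation p-value. The 5%/50% tail fractions and the 2.0 cutoff are illustrative assumptions, not the chapter's calibrated rule.

```python
import random

def _tail_mean(z, frac, upper):
    # Mean of the extreme `frac` fraction of a sorted sample.
    k = max(1, round(frac * len(z)))
    part = z[-k:] if upper else z[:k]
    return sum(part) / k

def hogg_q2(pooled):
    # Tail-weight selector in the spirit of Hogg's Q2: spread of the
    # extreme 5% relative to the middle 50%. Heavy tails -> large values.
    z = sorted(pooled)
    num = _tail_mean(z, 0.05, True) - _tail_mean(z, 0.05, False)
    den = _tail_mean(z, 0.50, True) - _tail_mean(z, 0.50, False)
    return num / den

def _scores(n, kind):
    # Linear rank scores (Wilcoxon) for light tails;
    # Mood-type median scores for heavy tails.
    if kind == "wilcoxon":
        return list(range(1, n + 1))
    return [1.0 if r > (n + 1) / 2 else 0.0 for r in range(1, n + 1)]

def _stat(x, y, kind):
    # Sum of the rank scores attached to the first sample.
    pooled = x + y
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    s = _scores(len(pooled), kind)
    assigned = [0.0] * len(pooled)
    for rank, idx in enumerate(order):
        assigned[idx] = s[rank]
    return sum(assigned[:len(x)])

def adaptive_location_test(x, y, reps=2000, cutoff=2.0, seed=1):
    # Data-dependent choice of test, then a two-sided permutation
    # p-value for the chosen statistic. The cutoff is illustrative.
    kind = "median" if hogg_q2(x + y) > cutoff else "wilcoxon"
    pooled = x + y
    expected = len(x) * sum(_scores(len(pooled), kind)) / len(pooled)
    obs = abs(_stat(x, y, kind) - expected)
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        if abs(_stat(pooled[:len(x)], pooled[len(x):], kind) - expected) >= obs:
            hits += 1
    return kind, (hits + 1) / (reps + 1)  # add-one keeps p > 0
```

For two clearly separated samples, e.g. `adaptive_location_test(list(range(1, 11)), list(range(11, 21)))`, the selector picks the Wilcoxon scores and the permutation p-value is small. Because the test is chosen from the pooled (null-exchangeable) data rather than from each sample's outcome, the selection step does not inflate the Type I error of the subsequent test.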