Friday, August 29, 2014

Should we strive to be rational?

We tend to think of irrationality, like entropy, as something to be struggled against but never fully eradicated, in ourselves or in society at large. We often see pure rationality as akin to the attainment of wisdom, a yardstick by which we might judge our mistakes and our might-have-beens. In a world of inward thinking, where the struggle is against anonymous natural forces that bear us neither ill will nor good will, this is exactly correct. The purely rational decision is indeed the one that best helps us attain our goals, that best helps us succeed. We teach, and are fascinated by, systematic forms of irrationality--recency bias, false generalization, loss aversion, the sunk cost fallacy, and so on. In class, we often claim that self-knowledge is an essential first step in avoiding these pitfalls.

Yet, in the world of outward thinking, none of this is true. In many situations, the attainment of rationality acts as a positive bar to achieving our objectives. Even more remarkably, it can act as a bar to achieving mutually desired outcomes--circumstances in which we would all be better off. Irrationality, in these cases, is not genetic baggage handed down by our hominid ancestors, once adaptive and now better shed--the cognitive equivalent of the human appendix. Rather, it was, and remains, a positive adaptation to certain types of social dilemmas.

Let me illustrate this with an example known as the travelers' dilemma. The story goes like this: Two antiques dealers are flying to a convention. Each carries a suitcase containing an identical antique item. Sadly, but perhaps unsurprisingly, their luggage goes missing and cannot be found. The airline is, of course, liable for the amount of the loss up to a maximum of (say) $100. While the airline does not know the precise value of the antiques, it does know that they are worth at least $2. To discover the value, the airline simply asks each dealer to write down the value of the lost item, up to the maximum of $100. If their appraisals agree, the airline will pay each the listed amount. If they disagree, the airline will pay each the amount of the lower appraisal. In addition, and this is critical to the story, the airline rewards the "honest" dealer (i.e., the one making the lower appraisal) by paying her a bonus of $2, which is deducted from the compensation paid to the "dishonest" dealer. Thus, if one dealer writes down x and the other writes down y > x, the "x" dealer receives x + 2 while the "y" dealer gets only x - 2.
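For readers who like to see the rule spelled out, here is a small sketch of the payoff scheme in Python. The function name is my own label; the $2 bonus and $100 cap come straight from the story.

```python
def payoff(own, other, bonus=2):
    """Reimbursement for a dealer who appraises `own` dollars while the
    other dealer appraises `other` dollars (values between 2 and 100)."""
    if own == other:
        return own                 # appraisals agree: both are paid the listed amount
    if own < other:
        return own + bonus         # lower ("honest") appraisal earns the bonus
    return other - bonus           # higher ("dishonest") appraisal pays the penalty

# For example: payoff(99, 100) == 101, while payoff(100, 99) == 97.
```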

Suppose that both dealers know that the true value exceeds the maximum of $100 and both are rational and selfish, caring only about their own reimbursement. Surely, both should simply tell the truth, writing down $100, and be done with it. But the logic of rationality implies something entirely different--in our perfectly rational world, each will write down the lowest value, $2, and receive only this amount in compensation.

The argument is a version of LFRB reasoning. Suppose that dealer 1 expects dealer 2 to tell the truth. Then, by writing down $99 instead of $100, dealer 1 can gain a small advantage for herself, receiving $101 instead of $100. Anticipating this, dealer 2 might instead write down $98, attempting to outfox the clever dealer 1. But since dealer 1 is purely rational, she is infinitely clever, so she will anticipate this double-cross by dealer 2 and write down $97, and so on until the minimum is reached. Notice what has happened: "society," consisting of the two dealers, might have delivered $100 to each of its members. Instead, by pursuing optimal and rational decisions, each member ends up with only $2.
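The unraveling can be made mechanical. The snippet below is a rough sketch, not anything the dealers would literally compute: it starts from the truthful appraisal of $100 and repeatedly applies the best response of undercutting by $1 to capture the $2 bonus.

```python
def best_response(other):
    """Best appraisal against a dealer expected to write down `other` dollars:
    undercutting by $1 captures the $2 bonus; at the $2 floor there is
    nothing left to undercut."""
    return other - 1 if other > 2 else 2

appraisal = 100                        # start from the truthful appraisal
while appraisal != best_response(appraisal):
    appraisal = best_response(appraisal)
print(appraisal)                       # the chain of logic bottoms out at 2
```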

Truth-telling is, of course, an ethical matter. In many cultures, parents strive to "irrationalize" their offspring by proclaiming the virtues of this course of action--especially when there is a temptation to lie for possible gain. Irrationality of this sort is precisely what is needed to improve matters. In this situation, so long as each dealer has enough confidence in the "morality"--or, equivalently, the irrationality--of the other party, offers need not collapse to the minimum. To see this, let us reconsider the calculus of decision-making by dealer 1. Her initial conjecture was that dealer 2 would tell the truth, in which case writing down $99 is the best alternative. So long as there is sufficient chance that dealer 2 is moral/irrational and will indeed tell the truth, $99 remains the best course of action. If dealer 2 holds these same beliefs, he too will write down $99. While this is not the perfect solution, it massively dominates the world of pure rationality. Notice too that, in the end, both dealers lie (a little) in this scenario. That is, neither has to actually be irrational: sufficient weight on the possibility of moral/irrational actions suffices to break the depressing chain of logic leading to ever lower offers.
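To put a rough number on "sufficient chance," suppose dealer 1 believes dealer 2 writes down $100 with some probability and otherwise undercuts to $99. That two-type model, and the particular probability used below, are assumptions of mine for illustration, not part of the story; but under them a quick calculation shows that $99 maximizes dealer 1's expected reimbursement.

```python
def payoff(own, other, bonus=2):
    """Same payoff rule as in the earlier sketch."""
    if own == other:
        return own
    return own + bonus if own < other else other - bonus

def expected_payoff(own, p_truthful):
    """Expected reimbursement if the other dealer writes $100 with probability
    p_truthful and otherwise undercuts to $99 (an assumed, simple model)."""
    return p_truthful * payoff(own, 100) + (1 - p_truthful) * payoff(own, 99)

best = max(range(2, 101), key=lambda a: expected_payoff(a, 0.6))
print(best)   # with p_truthful = 0.6, the best appraisal is 99
```

With these assumed numbers, writing $99 beats both truth-telling and any deeper undercut, which is the sense in which enough belief in the other party's morality keeps the appraisals near the top.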

The travelers' dilemma is but one of many examples where irrationality is the "solution" to a trust problem. It is, at the very least, questionable whether teaching the tools of pure rationality serves a useful purpose in making us better citizens/leaders/managers. Much depends on whether the situations we face are primarily inward-looking or outward-looking. My contention, and indeed the whole point of the course, is that outward-looking or, equivalently, social situations dominate the domain of decisions we make.
