Friday, August 29, 2014

Can gossip save society?

Gossip is often counted as something to be avoided. Many companies ask employees to refrain from gossip. The expression "she is a gossip" is often one of derision. Yet within game theory, gossip, or at least one version of it, is thought crucial for promoting trust. The basic idea goes like this: Suppose n people engage in repeated trust games/prisoners' dilemmas. These games are such that it is not possible to write down a contract for trust enforceable in a court, so those participating must rely on continuing relationships to support trust. But if a pair of individuals transact only infrequently, little trust can be built on such a tenuous foundation.

A more solid foundation might, however, be built on the ongoing relationship of an individual with society at large rather than any single other. The basic idea is that anyone cheating on a relationship will be punished by ostracism--a transgression against one is treated as a transgression against all, with exclusion from future trust games serving as the punishment. Provided society is large, such a threat acts as a far more powerful deterrent and hence much more trust may be sustained.
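The deterrent effect of ostracism is easy to see in a stylized calculation. Here is a minimal Python sketch with entirely hypothetical numbers (the discount factor, payoffs, and meeting rates are my own assumptions, not from any particular model): cheating is deterred when the discounted value of all future trust games exceeds the one-shot gain from cheating.

```python
# Hypothetical parameters: per-period discount factor, per-game payoff
# from cooperating, and the one-shot temptation payoff from cheating.
DELTA, COOP, TEMPT = 0.9, 1.0, 5.0

def cooperation_sustainable(games_per_period):
    # Value of staying in good standing forever versus cheating once
    # and being ostracized from every future trust game.
    future_value = DELTA / (1 - DELTA) * COOP * games_per_period
    return future_value >= TEMPT

# Meeting a single partner only rarely (0.2 games per period) cannot
# deter cheating, while playing against society at large (5 games per
# period) can.
print(cooperation_sustainable(0.2), cooperation_sustainable(5))
```

The point of the sketch is only the comparison: the same temptation that overwhelms a tenuous bilateral relationship is easily deterred once the whole of society stands behind the punishment.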

Implementing this, of course, requires that an aggrieved party communicate her case to the rest of society. After all, if only Ann knows that Bob cheated her, society will be in no position to punish Bob. Clearly, Bob has little interest in telling the world he is a fink or cad (to use two delightfully archaic terms), so the job falls to Ann, who might be thought to relish the prospect of dragging Bob's name through the mud.

There are, however, several difficulties with this arrangement. The first is that Ann might lack the means of communicating to the rest of society all by herself. She may need others to propagate the message for her. But what social scientists antiseptically call "propagation" is, in lay terms, nothing more than gossip:

Did you hear what Bob did to Ann? He should be ashamed of himself. I'll never think of him the same way again...
And so society won't think of Bob in the same way. According to the logic of cooperation, he is now an outcast, "dead" to the society in which he once happily played trust games.

I previously wrote about how morality might help us solve certain social dilemmas, but here it clearly does not. Suppose that Ann's best (and only) friend, who, in a remarkable coincidence, happens to be named Carole, was brought up to believe that "a lady does not gossip." Now, Bob's misdeeds sadly go unpunished since the communication trail ends with Carole, who never speaks ill of others based on second-hand reports. But Carole's moral stance now redounds to the detriment of society. Bob, anticipating that there are enough Caroles in the population, now feels he can get away with his foul deeds. Strictly speaking, Carole is being irrational--her unwillingness to "spread the word" about Bob leaves her (and society) worse off.

This is all a bit unfair to Carole for, in this world, we assumed that individuals could only destroy the reputation of those who actually deserved it. The possibility of a "vicious rumor," a totally unfounded attack on someone, simply cannot arise. Indeed, even more weirdly, we don't really have to assume this. Citizens in game theory land have nothing to gain (and a little to lose, see below) from such actions and so, even if we allowed it, "smear campaigns" would never arise. Thus, Carole would have nothing to fear from rumor mongering, and her whole ladylike morality comes off as mere prudishness.

Yet, as a devoted follower of reality TV, I can readily report many incidents where false allegations were made, even against such kind and decent folks as Snooki from Jersey Shore. Worse yet, others with fewer scruples than Carole happily propagated them. Clearly, something important about human nature is missing from our little game theory world when it comes to rumors.

A second problem with this social glue, pointed out by Avinash Dixit, is that, even if there are no Caroles in the population, so long as news travels sufficiently slowly, a "gossip equilibrium" might still not work. More precisely, he suggested that news is likely to travel more slowly in larger societies than in smaller ones and showed that a tipping point arises: despite the gains from globalization, well-being can fall suddenly once "society," or rather the community that individuals count as constituting society, grows too large.

Growing up in a small town, I was all too aware that everyone knew everyone else's business. I longed for the greater privacy (and anonymity) of city life. Rather than despairing at this, Dixit suggests that I had it exactly backwards--those despised small-town busybodies are, in fact, the social "glue" necessary for cooperation.

Game theorists Nageeb Ali and David Miller point out an even more subtle problem: perhaps Ann herself will choose to stay mum about Bob. They study a world where no one is squeamish about passing gossip and where news, once released, spreads quickly. (Like the rest of this literature, they assume that "vicious rumors," unjustified smears on another's character, are impossible; justified smears, however, are voluntary.) Their point is that, by throwing Bob under the bus, Ann has, in effect, made society smaller---by exactly one member, Bob. Since cooperation is borne on the back of the threat of ostracism, with the loss of Bob, this threat has lost a little of its punch. Hence, by ratting out Bob, Ann has reduced, by a tiny bit, the amount of cooperation possible. Since this leaves her worse off, she remains silent. Bob, cunning chap that he is, anticipates that Ann will bear her victimhood stoically and silently and hence has nothing to fear from cheating. Taken to its logical end, Ali and Miller show that this implies the social glue of ostracism is completely worthless: the only sustainable trust is that from a bilateral relationship, regardless of society's size. Paradoxically, allowing victims the apparently worthless option of remaining silent completely destroys the value of society in enforcing trust.

It's a striking result, but I cannot help thinking that it says more about the power of logic than it does about the nature of human behavior. Certainly, in a large society, the effect of "losing" Bob is very small. If, for instance, we granted that Ann gained even an ounce of happiness from unburdening herself of the secret as to what Bob did, then, once society is large enough, this will be enough to overcome her reticence and hence permit society to maintain trust. Personally, I think an ounce of happiness is a vast understatement of the joy that many of us get in ensuring that Bob gets his comeuppance. Here again, irrationality may save us--provided that individuals have an irrational "taste" for justice, the problem of Ann's potential reticence quickly becomes a non-problem.

Is Game Theory Useless?

It is often said, sometimes citing examples such as the travelers' dilemma, that game theory is useless precisely because its analysis is predicated on pure rationality and "real people" simply do not act this way. Another version of this critique runs that game theory assumes that people are selfish and therefore its analysis is likewise doomed to misprediction. While it is certainly true that the "first cut" at many problems makes both assumptions, it would be a very poor analysis indeed were this also the "last cut." As we saw above, the tools of game theory are easily adapted to situations where irrationality is part of the environment. Moreover, unlike other social sciences preoccupied with irrationality, game theory is capable of delineating between situations where irrationality will alter outcomes and situations where it will not. Roughly speaking, situations where long chains of foxing and outfoxing are required to arrive at a solution are easily altered by irrationality. Situations requiring only short chains--or no chains at all in some cases--are more robust.

As to the assumption of selfish behavior, it too might be a first cut, but game theory is, in fact, agnostic about the source of the preferences producing payoffs. The same tools used to analyze situations involving choices among selfish individuals may be used for choices involving sainted individuals who care deeply about others. The comparison between the two merely represents a change in the payoffs to each party from a given combination of choices.

For instance, consider a prisoners' dilemma type situation where individuals might cooperate (C) or defect (D). With purely selfish preferences, we might model payoffs as CC = {3,3}, DD = {1,1}, DC ={4,0} and CD = {0,4}. Dominance would tell us that, in a one-off setting, both sides would defect. By contrast, this same game played by individuals who care only about societal wealth and not its distribution, would produce payoffs of CC = {6,6}, DD = {2,2}, CD = DC = {4, 4}. This game too has a dominant strategy, to always cooperate and so, in a one-off setting, both sides would cooperate and the "dilemma" vanishes. Combinations of these preferences are similarly modeled.
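To make the dominance argument concrete, here is a small Python sketch (my own illustration, using the payoff numbers above) that checks which action, if any, is a best response to everything the opponent might do:

```python
def best_response(payoffs, opponent_action):
    """Return my action ('C' or 'D') maximizing my payoff given the opponent's."""
    return max(['C', 'D'], key=lambda a: payoffs[(a, opponent_action)][0])

# Payoffs are (my payoff, opponent's payoff), indexed by (my action, theirs).
selfish = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
           ('D', 'C'): (4, 0), ('C', 'D'): (0, 4)}

# Society-minded players value total wealth: each cell becomes the sum,
# awarded to both players.
social = {k: (sum(v), sum(v)) for k, v in selfish.items()}

for name, p in [('selfish', selfish), ('social', social)]:
    # An action is dominant if it is the best response to everything.
    dominant = [a for a in 'CD'
                if all(best_response(p, b) == a for b in 'CD')]
    print(name, 'dominant strategy:', dominant)
```

With the selfish payoffs the only dominant strategy is D; with the society-minded payoffs it is C, so the "dilemma" indeed vanishes.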

To be fair, some versions of non-selfishness entail more than a simple payoff adjustment, depending on whether these preferences derive from outcomes or process. When social preferences derive from outcomes, i.e. I care about society's distribution of income but not about the path by which this distribution was reached, then it truly is merely a matter of adjusting payoffs. When preferences derive from process, i.e. I care about opportunities available to individuals but not necessarily the final outcomes produced by which opportunities are chosen, the situation is far more complex but still analyzable using game theoretic tools.

This difference is, of course, vitally important to the structure of just institutions. Indeed, a common expression of this difference concerns whether justice hinges on equality of opportunity or equality of outcome with appropriately different institutions and legal remedies depending on which preferences society holds.

Branches of game theory, notably implementation theory and mechanism design, take the view that institutions themselves---divided government, civil versus common law, federalism---can be understood as "games" devised by societies as a means of achieving preference goals, including, of course, social preferences.

It would be entirely fair to deride game theory for its absurd assumptions of pure rationality and pure selfishness were those, in fact, the only assumptions by which this mode of analysis might be employed. Fortunately, they are not. Unfortunately, this caricature of game theory still persists widely--game theorists may be brilliant at analyzing interactive decisions, but seem to be lousy at marketing their craft to the wider world. Sophisticated firms, such as Google, Amazon, eBay, Yahoo, and many others, recognize the speciousness of these arguments and employ game theory (and game theorists) to good effect.

So when next you hear someone dismiss game theory with these facile arguments, do society (and yourself) a favor and rebut them.

Should we strive to be rational?

We tend to think of irrationality, like entropy, as something to be struggled against, but whose ultimate eradication, in ourselves and society at large, is impossible. We often see pure rationality as akin to the attainment of wisdom, a yardstick by which we might judge our mistakes and might-have-beens. In a world of inward thinking, where the struggle is against anonymous natural forces that bear us neither ill will nor good, this is exactly correct. The purely rational decision is indeed the one that best helps us attain our goals, that best helps us to succeed. We teach, and are fascinated by, systematic forms of irrationality--recency bias, false generalizations, loss aversion, the sunk cost fallacy, and so on. In class, we often claim that self-knowledge is an essential first step in avoiding these pitfalls.

Yet, in the world of outside thinking, none of this is true. In many situations, the attainment of rationality acts as a positive bar to achieving our objectives. Indeed, even more remarkably, it can act as a bar to achieving mutually desired outcomes--circumstances where we would all be better off. Irrationality, in these cases, is not some genetic baggage handed down by our hominid ancestors, once adaptive and now better off being shed--the cognitive equivalent of the human appendix. Rather, it was and remains a positive adaptation to certain types of social dilemmas.

Let me illustrate this with an example known as the travelers' dilemma. The story goes like this: Two antiques dealers are flying to a convention. Each carries a suitcase containing an identical antique item. Sadly, but perhaps unsurprisingly, their luggage goes missing and cannot be found. The airline is, of course, liable for the amount of the loss up to a maximum of (say) $100. While the airline does not know the precise value of the antiques, it does know that they are worth at least $2. To discover the value, the airline merely asks the antique dealers to write down the value of the lost item, up to the maximum of $100. If their appraisals agree, the airline will pay each the listed amount. If they disagree, the airline will pay each the amount of the lower appraisal. In addition, and this is critical to the story, the airline rewards the "honest" dealer (i.e. the one making the lower appraisal) by paying her a bonus of $2, which is deducted from the compensation paid to the "dishonest" dealer. Thus, if one dealer writes down x and the other y > x, the "x" dealer receives x + 2 while the "y" dealer gets x - 2.

Suppose that both dealers know that the true value exceeds the maximum of $100 and both are rational and selfish, caring only about their own reimbursement. Surely, both should simply tell the truth, writing down $100, and be done with it. But the logic of rationality implies something entirely different--in our perfectly rational world, each will write down the lowest value, $2, and receive only this amount in compensation.

The argument is a version of LFRB reasoning. Suppose that dealer 1 expects dealer 2 to tell the truth. Then, by writing down $99 instead of $100, dealer 1 can gain a small advantage, receiving $101 instead of $100. Anticipating this, dealer 2 might instead write down $98, attempting to outfox the clever dealer 1. But, since dealer 1 is purely rational, she is infinitely clever, so she will anticipate this double-cross by 2 and write down $97, and so on until the minimum is reached. Notice what has happened: "society," consisting of the two dealers, might have gained $100 for each of its members. Instead, by pursuing optimal and rational decisions, each member ends up with only $2.
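The unraveling can be traced mechanically. The sketch below (my own illustration of the argument, using the $2/$100 parameters from the story) repeatedly replaces a dealer's claim with the best response to it, starting from a truthful $100:

```python
LOW, HIGH, BONUS = 2, 100, 2

def payoff(mine, theirs):
    """Airline reimbursement to me under the rules described above."""
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low + BONUS if mine == low else low - BONUS

def best_response(theirs):
    # The claim that maximizes my reimbursement against a given claim.
    return max(range(LOW, HIGH + 1), key=lambda m: payoff(m, theirs))

# Iterate best responses from truth-telling until a fixed point.
claim = HIGH
while best_response(claim) != claim:
    claim = best_response(claim)
print('Unraveled claim:', claim)  # the process stops only at the $2 floor
```

Each step undercuts the previous claim by exactly one dollar, so the fixed point is the $2 minimum: the depressing conclusion of the purely rational analysis.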

Truth-telling is, of course, an ethical matter. In many cultures, parents strive to "irrationalize" their offspring by proclaiming the virtues of this course of action--especially when there is a temptation to lie for possible gain. Irrationality of this sort is precisely what is needed to improve matters. In this situation, so long as each dealer has enough confidence in the "morality"--or, equivalently, the irrationality--of the other party, offers need not collapse to the minimum. To see this, let us reconsider the calculus of decision making by dealer 1. Her initial conjecture was that dealer 2 would tell the truth, in which case writing down $99 is the best alternative. So long as there is sufficient chance that dealer 2 is moral/irrational and will indeed tell the truth, then $99 is the best course of action. So long as these same beliefs are held by dealer 2, he too will write down $99. While this is not the perfect solution, it massively dominates the world of pure rationality. Notice too that, in the end, both dealers lie (a little) in this scenario. That is, neither has to actually be irrational; sufficient weight on the possibility of moral/irrational actions suffices to break the depressing chain of logic leading to ever lower offers.
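A quick back-of-envelope check makes "sufficient chance" precise under one simple assumption of mine (not from the original analysis): dealer 2 writes $100 with probability p and $99 otherwise. Writing $99 then yields $101 against a truth-teller and $99 against a fellow $99-writer, while undercutting to $98 yields $100 either way, so $99 wins exactly when p > 1/2:

```python
BONUS = 2

def payoff(mine, theirs):
    # Reimbursement rule from the travelers' dilemma story.
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low + BONUS if mine == low else low - BONUS

def expected(mine, p):
    # Assumed beliefs: the other dealer writes 100 w.p. p, else 99.
    return p * payoff(mine, 100) + (1 - p) * payoff(mine, 99)

# Below the p = 1/2 threshold undercutting pays; above it, $99 is best.
for p in (0.4, 0.6):
    print(p, expected(99, p) > expected(98, p))
```

The exact threshold depends on the beliefs one assumes, but the qualitative point survives: a modest dose of believed morality halts the unraveling.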

The travelers' dilemma is but one of many examples where irrationality is the "solution" to a trust problem. It is, at the very least, somewhat questionable as to whether teaching the tools of pure rationality serves a useful purpose in making us better citizens/leaders/managers. Much depends on whether the situations we face are primarily inward looking or outward looking. My contention, and indeed the whole point of the course, is to suggest that outward looking or, equivalently, social situations dominate the domain of decisions we make.


Thursday, August 28, 2014

Class #2 Highlights

The theme of class #2 was look forward, reason back (LFRB), a fundamental idea in game theory. The point of LFRB is to look ahead to the end of the game, anticipate what future moves will be, and then respond accordingly. Anticipation, as we discussed, requires two ingredients:

  1. What are the possibilities? What strategies are possible in reaction to my move?
  2. What are the preferences? What do the other players value? Do they care about money, reputation, ego, social justice, or something else entirely?
The combination of what is possible and how these possibilities are valued allows one to anticipate the future (almost like looking into a crystal ball) and react (or pro-act) accordingly.

To get a sense of how this works, we studied two common dilemmas: the trust dilemma and the reputation dilemma. The trust dilemma describes poverty and underinvestment in many places. Here, we considered a situation between an investor and an entrepreneur. The entrepreneur creates a high expected return from the capital investment, but legal institutions are such that contracts cannot be enforced. Faced with a one-off situation, we saw that, no matter how profitable the opportunity, the investor would not commit capital since it would anticipate being expropriated (cheated). Solving this dilemma has preoccupied academics, NGOs, and, of course, banks and other investors, for many years. A common solution is repeated interaction, even across generations. The keiretsu system of "family" investment in Asia arose in large part as a response to this LFRB dilemma. Grameen banks, which use community sanctions to enforce contracts, represent a more recent response.
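The investor's look-forward-reason-back calculation in the one-off case can be sketched in a few lines (the payoff numbers here are hypothetical, chosen only to illustrate the structure):

```python
# Payoffs (investor, entrepreneur) at each terminal node of the game.
NO_INVEST = (0, 0)       # investor keeps her capital out
SHARE = (2, 2)           # entrepreneur honors the (unenforceable) deal
EXPROPRIATE = (-1, 4)    # entrepreneur keeps everything

# LFRB step 1: what does a selfish entrepreneur do with the capital?
entrepreneur_choice = SHARE if SHARE[1] > EXPROPRIATE[1] else EXPROPRIATE

# LFRB step 2: anticipating that, does the investor commit capital?
investor_invests = entrepreneur_choice[0] > NO_INVEST[0]
print(entrepreneur_choice, investor_invests)
```

Looking forward, the investor foresees expropriation and stays out, so the mutually profitable project dies--exactly the outcome repeated interaction and community sanctions are meant to prevent.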

Apart from the developing world, versions of this dilemma occur all the time within firms. For instance, one needs help from an expert on another team for a given project. But no formal contract for repayment of this service exists within the firm. Depending on the nature of the relationship with the other team or the expert, such synergies often fail to be realized, as the expert, or his team, fears that the project manager will take all the credit for solving the problem without "repaying" the expert.

If you have other examples of the trust dilemma, please share them either in the comment section or by writing me directly.

A more subtle LFRB situation is the reputation dilemma. In this situation, a "long-term" player confronts a succession of n short-term (one-shot) players. The short term player can choose to challenge the long-term player or not and the long-term player must decide whether to fight, which is costly to both, or accommodate. Early in the game, reputation is obviously important, so the long-term player would seem to want to fight (or at least have that reputation). Near the end, however, both players realize that reputation is no longer valuable, so accommodation will be anticipated correctly. Here, we see the power of LFRB--there is no credible way for the long-run player to gain a fighting reputation if she is rational. In effect, the resolve of the player unravels. If the long-run player won't fight the last player, then the penultimate short-run player need not fear either, since there is no reason to fight to maintain a worthless reputation. But now the same is true of the short run player third from the end, and so on...all the way to the beginning.
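The unraveling logic can be written out as a tiny sketch (the payoffs are assumptions of mine; the point is only that, once backward induction fixes future play, every round reduces to the same one-shot comparison):

```python
# Hypothetical one-shot payoffs for the long-run player when challenged.
FIGHT_COST = -1   # fighting is costly
ACCOMMODATE = 0   # accommodating costs nothing today

def long_run_action(rounds_left):
    # In the final round there is no future reputation to protect, so a
    # rational player accommodates.
    if rounds_left == 1:
        return 'accommodate'
    # Since future play is pinned down by backward induction and does not
    # depend on today's action, every earlier round is the same one-shot
    # comparison--so the answer never changes.
    return 'fight' if FIGHT_COST > ACCOMMODATE else 'accommodate'

# The resolve "unravels" from the last round back to the first.
print([long_run_action(r) for r in range(5, 0, -1)])
```

Every entry is "accommodate," which is why every short-run player challenges: rationality itself destroys the reputation.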

How can the long-run player solve this situation? Paradoxically, it is her very rationality that is the cause of the problem. If the long-run player were crazy, reputation could easily be maintained, and challenges deterred. The precise way in which this happens is the subject of our next class.

Tuesday, August 26, 2014

Class #2 Prep

In addition to the assigned readings, you have two key missions for class #2:

1. Find a team to join and decide on a nickname in advance of class!

2. Read through the NBA free agent game and start planning your strategy as incumbent or player. The role of the rival firm will be played by my assistant, Sibo, in all cases.

Welcome to Game Theory - Fall 2014

Welcome to Game Theory. If you are open to the possibility, this course can fundamentally change the way you look at the world--or at least that is my hope. Why do I think this? The main reason is that game theory fosters what I call "outside thinking" or, if you like, empathy. Game Theory is ultimately about seeing the world through others' eyes--walking a mile in their moccasins--and then adjusting your choices in light of theirs.

Sometimes, situations can be adversarial and the insights gained from empathy are used to frustrate the others' plans, to pre-empt or forestall their best options, or to cause confusion so that they make the wrong choices. When we think of games, such as chess, checkers, backgammon, poker, and so on, we often think of "zero sum" situations where, for one side to win, the other must lose. Empathy in such situations is often called "outsmarting" the other player.

Yet, in many business situations, the "game" may be adversarial, but need not be. Consider two firms competing in price for extremely price-sensitive consumers. At first blush, the game seems purely adversarial---for one firm to win the business, the other must lose. And in the short run this is indeed the case. But taking a longer view, one sees possibilities for cooperation. It is in neither firm's interest to enter into a price war. Both desire to avoid such an outcome and are willing to sacrifice something to achieve this, provided the sacrifices are perceived as fair and are not too costly compared to pursuing purely selfish ends. Game Theory is about these situations as well---how can firms bridge their differences and live in harmony?

Apart from empathy, the key learning from the first class was:

Look forward, reason back
This principle underpins much of Game Theory. Roughly, it means that, in making a plan, one needs to look ahead to the likely outcomes, accounting for the reaction of the other players in the game. In chess, it is not enough to simply spot a winning position several moves hence. One also needs to account for the likely reaction of the other player. If she can block the intended advance, generate a more potent threat, or simply capture the piece required to complete the position, then the original plan becomes worthless. Only plans that anticipate reactions are worthwhile.

But how can one anticipate future reactions? The course is Game Theory and not Mind Reading. This is all true, but here we benefit from the fundamental principle of choice imposed by economics:

Individuals will choose whatever action they perceive to be in their best interests.
In a way, this hardly needs to be enunciated as a principle for it seems self-evident. Yet it is important to dispose of some myths about this premise. First, the principle does not say that individuals will pursue whatever action nets them the most money. If such an action is immoral, unethical, or perhaps harms others, an individual may well elect not to pursue it, yet still abide by the tenet above. The tenet takes no position as to the factors that might reasonably constitute a person's "interest." Money might be part of this, but so too might status, or conformity to some set of behaviors deemed moral by the individual in question. It might also account for the interests of others---family, friends, pets, or even strangers.

Look forward, reason back, together with the behavioral tenet, suggests that if we can understand what others consider "their interests," we can anticipate their choices and thereby conform our own choices to meet a particular end. We need not be mind readers, but we do need to be empathetic to anticipate others' choices.

Our hiring game highlighted some of the differences between inside and outside thinking. Inside thinking suggests that, if we see a potential employee with a dazzling resume but a middling interview, we interpret these past successes as signals of quality and therefore ignore our own perceptions. Or, we might go the other way and simply discount past successes altogether, as Somit apparently did during our game.

Outside thinking, however, enables us to interpret the signal content of past choices, to interpret how much weight to give to the resume versus the interview. Early on in the process, hiring choices indeed represent a signal about the interview. With a stream of successes, however, hiring decisions convey no information, since the same choice, hire, is made regardless of the interview result. Awareness of this possibility, what is termed an information cascade, can only be had through empathy--understanding why others made the choices they did.
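A simple simulation conveys how a cascade starts. The model below is a standard textbook-style sketch, not anything from our class game: each manager receives a private signal that is correct with probability Q, observes all earlier decisions, and follows the inferred majority of signals; once the inferred count reaches two in either direction, decisions stop revealing anything.

```python
import random

Q = 0.7  # chance a private signal matches the candidate's true quality

def run_cascade(n_managers, good_candidate, seed=0):
    random.seed(seed)
    decisions, net = [], 0  # net = inferred favorable minus unfavorable signals
    for _ in range(n_managers):
        # Signal is favorable (+1) with probability Q for a good candidate,
        # with probability 1 - Q for a bad one.
        signal = 1 if (random.random() < Q) == good_candidate else -1
        if abs(net) >= 2:
            # Cascade: earlier inferred signals swamp any single private
            # signal, so the manager ignores her own--and reveals nothing.
            decision = net > 0
        else:
            # Follow the majority of inferred signals; own signal breaks ties.
            decision = (net + signal > 0) or (net + signal == 0 and signal > 0)
            net += signal  # only informative decisions update the public count
        decisions.append(decision)
    return decisions

print(run_cascade(10, good_candidate=True))
```

Once two decisions in a row point the same way, all later hiring choices merely copy them--which is exactly why a long stream of hires conveys no further information about interview quality.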

The rest of the course explores these ideas in two ways. First, given the rules of the game, how can we learn to empathize and thereby predict the actions of others. Second, how can we change the rules of the game to produce a better outcome for ourselves and, possibly, others we care about.
 

Thursday, May 17, 2012

How to Win at Battleship

Battleship 1

I used to play Battleship with my son, but he doesn't want to play with me any more. He's often wondered how I can consistently win the game. He assumes (and he's right) that, since I'm a game theorist, I've probably figured out some sort of mind-reading trick to be able to intuit where his ships are located. Apart from mind reading, I also rely on an optimal search strategy. Anyway, here's an amusing piece in Slate about a Microsoft employee who has taken this idea to its logical extreme.

http://www.slate.com/blogs/browbeat/2012/05/16/_battleship_how_to_win_the_classic_board_game_every_time.html