Sunday, September 7, 2014

Class #3 Highlights

In this class, we created an algorithm to implement LFRB (look forward, reason back) reasoning. Essentially, we begin at the final branches of the game, one for each possible history of play. We then trim off all branches save for the best one. We then substitute these optimized payoffs into the immediately preceding stage of the game and repeat.
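To make the recipe concrete, here is a minimal sketch in Python (the encoding is mine, not anything from class): a decision node is the mover plus a list of subtrees, and a leaf is a payoff profile.

```python
def lfrb(node):
    """Trim each subgame down to the branch that is best for the
    player moving there, starting from the final branches."""
    if isinstance(node, dict):               # leaf: a payoff profile
        return node
    player, children = node
    results = [lfrb(child) for child in children]  # solve later stages first
    # keep only the best branch for the mover; assumes no ties
    return max(results, key=lambda payoffs: payoffs[player])

# Example: Anna ("A") moves first, then Bob ("B") responds.
game = ("A", [
    ("B", [{"A": 3, "B": 1}, {"A": 0, "B": 2}]),  # subgame after A's first option
    ("B", [{"A": 2, "B": 0}, {"A": 1, "B": 3}]),  # subgame after A's second option
])
print(lfrb(game))   # -> {'A': 1, 'B': 3}: a unique prediction, given no ties
```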

We learned that, so long as the game is UGOIGO and there are no ties in choices anywhere on the tree, this procedure will produce a unique prediction. Note, however, that this prediction assumes (a) you have correctly specified the payoffs; (b) both individuals are sophisticated enough to look far enough forward; and (c) this mutual sophistication is common knowledge. Part (c) deserves a bit more comment: Even if both sides truly are super-rational, if each has some doubt over the rationality of the other party, then this doubt will be incorporated into choices. And this can have an echo effect leading to choices far from the original prediction.

To see this, consider a game being played by Anna and Bob, both of whom are fully rational. Anna, however, suspects that Bob might not be rational and so builds this into her strategy. Bob suspects that Anna is skeptical, so Bob factors Anna's skeptical response into his own response. But now Anna, being quite sophisticated herself, will anticipate that "rational" versions of Bob will react strategically to her skepticism, and so will again amend her response, and so on ad infinitum.

To see this reasoning in action, consider the guess 2/3 of the average game you may have played earlier in your MBA career. In this game, individuals choose numbers between zero and 100. The person guessing closest to 2/3 of the average of all choices wins the prize. It may easily be shown that choosing 0 is the unique equilibrium. Now let's add the possibility of doubt: Suppose that Anna thinks that, with some small probability, the others won't get the game, and so will choose at random, producing 50 on average. In response, she will no longer wish to guess zero, but an amount just a bit higher than zero, to win in the event others screwed up the game. But now, all other sophisticated individuals will anticipate moves like Anna's and so will increase their choices above zero by even more, and so on. The point is that the original conclusion falls apart once doubts about others' rationality are permitted.
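To put rough numbers on the echo effect, here is a small sketch (assumptions mine: a fraction p of players guess at random, averaging 50, everyone else best-responds, and any one guesser's effect on the average is ignored, as with many players):

```python
def best_reply_path(p, rounds=200, start=50.0):
    """Iterate the best reply x -> (2/3) * (50p + (1 - p)x), where p is
    the assumed share of random guessers and x the sophisticates' guess."""
    x = start
    for _ in range(rounds):
        x = (2 / 3) * (p * 50 + (1 - p) * x)
    return x

print(best_reply_path(p=0.0))    # -> ~0: full rationality unravels to zero
print(best_reply_path(p=0.05))   # -> ~4.55: a little doubt lifts the guess
# The fixed point solves x = (2/3)(50p + (1-p)x), i.e. x = 100p / (1 + 2p),
# which is strictly above zero whenever p > 0.
```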

Scenario Planning

Most strategy groups in large organizations engage in a scenario planning process. Often, this amounts to delineating the strategies available to rivals and then placing probabilities on each path. This is fine save for the last step. To a game theorist, the last step is quintessential outward thinking and requires one to create a mental model of the rival--to look at the world from their point of view--and derive optimal responses given this mental model. Of course, different mental models might produce different optimizing choices. Thus, placing weights on strategies is essentially placing weights on the differing mental models producing those choices. This is more art than science, but it has the virtue of making clear the assumptions driving the percentages placed on choices. Often, this will change the initial "gut feel" weights.
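As a toy illustration of this idea--every model name, weight, and payoff below is invented--scenario weights can be read as weights on mental models, each of which pins down one optimizing rival response:

```python
models = {                       # mental model -> rival's optimal response
    "cost-cutter":   "hold price",
    "share-grabber": "cut price",
}
model_weights = {"cost-cutter": 0.7, "share-grabber": 0.3}

payoff = {                       # (our move, rival's response) -> our payoff
    ("expand", "hold price"): 8, ("expand", "cut price"): 2,
    ("hold",   "hold price"): 5, ("hold",   "cut price"): 4,
}

for move in ("expand", "hold"):
    # expected payoff of each of our moves, averaging over mental models
    ev = sum(w * payoff[(move, models[m])] for m, w in model_weights.items())
    print(move, ev)   # expand: 0.7*8 + 0.3*2 = 6.2 ; hold: 0.7*5 + 0.3*4 = 4.7
```

Shifting the weights between the two models is what "placing probabilities on paths" really means here, and seeing that makes the assumptions behind the percentages explicit.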

The Coors caselet offers some flavor for how to use LFRB in scenario planning, even absent decisive information about payoffs. One of the nice things about our recipe for LFRB is that only ordinal information is required, i.e., it suffices to know that outcome A is preferred to outcome B, but not by how much. This vastly reduces the data load of the analysis.
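To see the point, take the lfrb() sketch from above and replace the cardinal payoffs with pure ranks (1 = worst for that player, 4 = best); the trimming keeps the same branch:

```python
# Only rank order matters: each payoff in the earlier example is
# replaced by its rank, and the same outcome survives the trimming.
game_ordinal = ("A", [
    ("B", [{"A": 4, "B": 2}, {"A": 1, "B": 3}]),
    ("B", [{"A": 3, "B": 1}, {"A": 2, "B": 4}]),
])
print(lfrb(game_ordinal))   # -> {'A': 2, 'B': 4}: same branch as before
```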
