Rumours by Fleetwood Mac is, in my opinion, one of the best rock albums ever released. The sales charts bear out this appraisal: Rumours, like Dark Side of the Moon, set records for album sales that may never be broken (especially with the advent of digital music). But what has this got to do with game theory?
One song on Rumours, which became Bill Clinton's campaign theme song many years later, is "Don't Stop," with its refrain "don't stop thinkin' about tomorrow." The song offers the upbeat message that the future always holds something better than the past, and so one ought to concentrate on its possibility rather than dwelling on the shortcomings of the day before. Perhaps useful advice for a heartbroken teenager, but hardly the stuff of deep insight. (Indeed, almost bitter advice for someone like me, whose joints and musculature are progressively being turned to Swiss cheese by an autoimmune illness.) But anyway, back to our story.
The broader point of the song is that the actions of the present ought properly to be considered in light of the future. In other words, in choosing how to act, one ought never to stop "thinkin' about tomorrow," since it is the consequences of tomorrow that determine the costs and benefits of today's actions.
And, indeed, no better or deeper point can be made about the most important insight in all of game theory: by harnessing the future, the present may be tamed.
Rewind to our one-off prisoner's dilemma situation. This situation seems utterly hopeless and, in a vacuum outside time, it is hopeless. Neither the timing of moves nor the sophistication of the opponents makes any difference; the inexorable conclusion is that both parties are doomed to defect and thereby receive the lower rather than the higher payoff. But if we place this situation back in time, where there is a future, then a more palatable (and sensible) conclusion obtains. So long as both parties don't stop thinkin' about tomorrow, and so long as tomorrow is important enough, cooperation is possible. Indeed, this simple insight is at the heart of the vast majority of "contracting" occurring in the world.
While the word contract brings to mind the formality of legal documents, the underlying idea is not so formal. A contract is any agreement willingly undertaken by two parties. While we may think of such informal contracts as arm's length "handshake agreements," they need not be: spouses deciding on a rotation of chores or child care is no less a contract for its informality. Roughly speaking, anything that trades off a present benefit for one in either another form (money, cooking, etc.) or another time (tomorrow, a year from now) is a contract.
What game theory says is that, even if such contracts hold no water in any court of law, they still might be fulfilled so long as they are "self-enforcing," which is a fancy way of saying that both sides find it in their interests to execute on the contract rather than renege. Much of game theory is casting about for circumstances in which contracts are self-enforcing.
The key, in many instances, is that both sides don't stop thinkin' about tomorrow, which disciplines their behavior today. While defecting on a relational contract might be a fine idea today, when it is your turn to give up value or incur cost, such behavior looks far less attractive in light of what is given up over the many tomorrows in which no one is willing to engage in future relational contracts.
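To make the discipline of tomorrow concrete, here is a minimal sketch in Python of the classic calculation, with standard prisoner's dilemma payoffs assumed purely for illustration: a relational contract is self-enforcing under a grim-trigger arrangement exactly when the weight placed on tomorrow is high enough.

```python
# A minimal sketch: when is a relational contract self-enforcing?
# Illustrative per-period payoffs (assumed, not from any real contract):
# R = reward from mutual cooperation, T = temptation from defecting,
# P = punishment payoff once the relationship breaks down (T > R > P).

def cooperation_is_self_enforcing(T, R, P, delta):
    """Grim trigger: cooperate until the other side defects, then defect
    forever. Honoring the contract pays R every period; reneging pays T
    once, then P in every period thereafter. delta discounts tomorrow."""
    value_of_honoring = R / (1 - delta)
    value_of_reneging = T + delta * P / (1 - delta)
    return value_of_honoring >= value_of_reneging

# With T=4, R=3, P=2, cooperation holds iff delta >= (T-R)/(T-P) = 0.5,
# i.e. iff tomorrow matters at least half as much as today.
for delta in (0.25, 0.5, 0.75):
    print(delta, cooperation_is_self_enforcing(4, 3, 2, delta))
```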
Curiously, social psychologists independently discovered this idea, albeit without the help of game theory. They talk of "equity theory," the idea that we each keep a mental account of favors granted and favors received for each acquaintance. According to this theory, when the accounts fall too far out of balance, relationships are "liquidated"--in effect, declaring bankruptcy on the friendship.
The point, though, is really the same. If it is better not to honor a relational contract than to do so, such agreements cease to be self-enforcing and breach becomes inevitable. Where psychologists would differ concerns favors never asked for in the first place. For instance, Ann is sick and so Bob makes her pots and pots of chicken soup, which Ann abhors and has never requested. To an economist, Bob's offering places Ann under no particular obligation to repay, whereas a psychologist (and certainly my grandmother) would see this as an odious debt Ann has accrued from Bob's care and attention. Under the psychologists' theory, Ann and Bob's relationship will likely founder over the unrequited chicken soup, whereas an economist might see it as tangential, indeed irrelevant, to Ann and Bob's other dealings with one another.
Who do you think is correct? Our generic economist or my grandmother?
Sunday, September 28, 2014
The IAWLP in auctions
The OPEC auctions are curious in a certain way. While they are conducted as standard English auctions, they differ in that no one goes home without a prize (i.e., a country). Thus, even the loser of these auctions wins something. How does this change bidding?
We know from the IAWLP that the best strategy in a private values Vickrey/English setting is simply to bid up to your value. But for a variety of reasons, our OPEC setting is not this setting at all. For one thing, values aren't really private--your estimate of the value of the auction is actually useful information for others seeking to determine that value. For another, this is not a one-off auction: losing a given auction still leaves a number of other countries up for auction. Finally, there are only finitely many opportunities, so we can apply LFRB to anticipate how things might proceed. All of this makes the usual strategy of bidding "truthfully," whatever that means, a dead letter.
Let's start at the end, our usual game theory zen strategy. If there is only one auction left and only two bidders, how should you bid? Clearly, what matters is not how much you value the UAE, but rather how much more you value it than Nigeria. This, of course, will depend on what you gleaned from earlier winning auction bids and bidders. The higher the expected price over the course of the game, the larger the gap in value between the production capacity of the UAE versus Nigeria. At the same time, achieving this price level might require some curtailment of production by the UAE relative to Nigeria that lessens this gap. None of this invalidates the usual strategy of bidding up to the point where you are indifferent between winning and losing, though it does affect where this indifference value lies.
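Here is a minimal sketch of that indifference logic; the values below are made-up numbers for illustration, not estimates from the actual game.

```python
# A minimal sketch: bidding to indifference when even the "loser" gets
# a prize. All numbers are illustrative assumptions.

def max_bid(value_of_winning, value_of_losing):
    """In an English auction with a consolation prize, stay in until the
    price reaches the point where winning at that price and losing are
    equally good: bid up to the *gap* between the two values."""
    return value_of_winning - value_of_losing

v_uae = 100      # assumed value of winning the UAE, given expected prices
v_nigeria = 60   # assumed value of "losing," i.e. ending up with Nigeria

print(max_bid(v_uae, v_nigeria))  # stay in up to a price of 40
```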
So who wins this auction? Obviously, whoever thinks the UAE is worth more--whichever team is more confident that the price will be high and that the UAE can maintain high production while maintaining this high price will get the item. Otherwise, it will be a toss-up--the value will be the same to both players.
Now let's work our way back. The closer to the beginning of the auctions, the more moving parts are in play. Early bidders need to worry both about the vagaries of valuations relative to the "numeraire" item, Nigeria, and about ever-declining future competition. Both of these factors make earlier bids riskier. But there is another wild card in the mix--leadership. Unlike beanie babies or even oil tracts, the items for bid here have values that are determined enormously by leadership activities. If you win Saudi, will you be able to convince others to refrain from producing and hence benefit? How much production will you yourself have to curtail to make these agreements work? In short, value is socially constructed by the winning bidder. In that sense, such auctions have neither purely private nor purely common values; rather, the value of each country is interdependent--the winner of each country potentially determines the value of all.
In that respect, valuing a country in OPEC has, as its closest analog, valuing a company in a takeover. While the acquired company has a set of assets and IP which might be utilized, exactly how they are utilized, and whether this is effective, has everything to do with the acquirer. For example, the company Delicious was bought by Yahoo some years ago. Delicious was a very cool company at the forefront of crowd-sourced content and well ahead of its competitors. The synergies with Yahoo, a content curation company above all else, were obvious and important. Observers raved over the acquisition. But Yahoo treated Delicious like it did all its other properties. Rather than integrating its technology to make for better curation, as many anticipated, Delicious was left to fend for itself as an independent revenue-generating property, something it was never especially good at. As a result, it languished.
While I mention Delicious to make a point, it is far from an isolated incident. When valuing acquisitions, leadership in putting the asset to good use is central to the valuation. Saudi in the hands of a poor leader is not a good bet at almost any (reasonable) price; in the hands of a master strategist, it is a bargain at almost any price. The game theory lesson (and it is a tough one to put into practice until you're near the top of the company) is that leadership plays a huge role in dictating the value of an acquisition, no matter what the cash flow or valuation multiple says.
Thursday, September 25, 2014
A/B Test
I am up in Seattle hanging out with the folks at Amazon today and talking about A/B tests, experiments that websites conduct in order to improve the user experience (and profitability too, sometimes). Coming home tonight, I hit upon the idea for a great Hollywood screenplay.
(Aside: I don't write screenplays, but, if any game theory alums do, I'd be happy to collaborate on this idea.)
Twins A and B (girls) have just come off bad breakups. They live in NYC and Boston but are otherwise alike in every way. They talk on the phone, as they do every night, feeling bad about their miserable love lives and determined to fix it. Twin A suggests they visit a popular dating website she read about. Immediately after the call, they each turn on their iPads and bring up the website. It turns out, however, that the site is conducting an A/B test on its matching algorithm just as they query it.
(Sidenote: as each twin calls up the website, a clock ticks to the 1/1000th of a second as they query it. Twin A lands on an odd-numbered time while twin B lands on an even-numbered time, since she started a tiny bit later than her twin. This produces the A/B test, which is keyed to the 1/1000th-of-a-second time at which a session starts.)
(Back scene: techies at some Silicon Valley startup. Techie 1 talks about how the website has succeeded by having likes attract. Techie 2 tells a story about how, with his girlfriend, opposites attract. What if that strategy actually produces better matches? Techie 1 says that data talks and bullshit walks, so only an A/B test can settle things for sure. He proposes that, on Sunday night, they run such a test on the east coast and then track all the matches that result to see how love really works. Techie 2, confident that opposites attract, agrees and bets $100 that he's right. Techie 1 shakes hands on it.)
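For the technically curious, here is a minimal sketch of the bucketing device the sidenote describes; the millisecond-parity rule is the screenplay's conceit, and the function below is mine, not any real site's API.

```python
# A minimal sketch of the screenplay's A/B assignment device: bucket
# each session by the parity of its start time in milliseconds. (Real
# systems typically hash a stable user ID instead.)
import time

def assign_bucket(session_start_ms: int) -> str:
    """Odd millisecond -> variant A (likes attract);
    even millisecond -> variant B (opposites attract)."""
    return "A" if session_start_ms % 2 == 1 else "B"

now_ms = int(time.time() * 1000)
print(assign_bucket(now_ms))
```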
Back to our twins. Twin A is matched with someone just like her. He's outdoorsy and easygoing, ruggedly handsome, and with a steady, albeit boring, job. Twin B is matched with her opposite. She likes the outdoors while he is an urban creature favoring clubs, bars, jazz, etc. She is an all-American, clean-cut girl while he is rather grungy. After the first date, twin A has found her perfect match while twin B is appalled, yet somehow fascinated. Both go on subsequent dates and eventually fall in love.
The rest of the story traces out the arc of their lives. Since this is Hollywood, both have to run into terrible problems. If this is a PG film, then both will turn out to be with the right guys in the end. If rated R, then the man who seemed so alike will turn out to be a different, controlling person altogether, and violence will ensue, terrifying and possibly hurting twin A. The man who seemed to be twin B's opposite will turn out to be alike where it matters, in his heart, so the outside bits don't count for much. He will also be the one to rescue twin A from her fate.
Or we can go 1970s R. In this case, both guys turn out to be Mr. Wrong and wreck the twins' lives. Divorced, alone, and with children to take care of, the twins call each other in the ending scene to commiserate over their fate.
The drawback to this idea is that it is a bit like Sliding Doors, a film from a few years ago, but different enough, I suspect, to be interesting.
No idea what this has to do with game theory, but it seemed interesting to me.
Thursday, September 18, 2014
The Paradox of Commitment
Commitment, in the form of renouncing or eliminating certain strategic possibilities ahead of time to gain an advantage, is one of the fundamental insights of game theory. Unlike decision problems, where more options can never leave one worse off, in interactive problems (games), where reactions come not from nature but from the exercise of free will on the part of others, more options, capabilities, capacity, client reach, and so on are not always better. Thus, game theory offers numerous paradoxical situations where discarding or destroying assets, making certain strategies impossible to undertake, and other such seemingly destructive behavior is, in fact, correct strategy.
One never, of course, does such things for one's own benefit. Rather, they are done to influence others playing the game to alter their course of action or reaction to a more favorable path. As such, these negative actions, the dismantling of a plant or the entering into of a binding contract removing a strategic possibility, must be done publicly. It must be observed and understood by others playing the game to have any effect.
One of the most famous illustrations of the folly of committing in private appears in the movie Dr. Strangelove where, in the climactic scene, it is revealed that the Russians have built a doomsday machine set to go off in the event of nuclear attack. Moreover, to complete the commitment, the Soviets have added a self-destruct mechanism to the device: it also goes off if tampered with or turned off. Since the machine will destroy the Earth, it ought properly to dissuade all countries from engaging in nuclear combat.
But there's a problem--the US is only made aware of the machine after non-recallable bombers have been launched to deliver a devastating nuclear attack at the behest of a berserk air force commander. Why did the Soviets not tell the US and the world about the machine, asks Dr. Strangelove of the Soviet ambassador?

"The premier likes surprises," comes the pitiful answer. And so unobserved commitment is, in effect, no commitment at all.
While paradoxical initially, the idea that fewer options can improve one's strategic position is intuitive once grasped and was understood at an intuitive level long before the invention of game theory.
But I want to talk about a less well-known paradox: if such commitment strategies succeed by altering others' play in a direction more favorable to the committing party, why would these others choose to observe the commitment in the first place? Shouldn't they commit not to observe, in an effort to frustrate the commitment efforts of others? It turns out that this second level of commitment is unnecessary; at least in the formal argument, all that is needed is a small cost of observing the choices made by the committing party for the value of commitment to be nullified.
For example, two players are playing a WEGO game where they choose between two strategies, S and C (labels will become clear shortly). The equilibrium in this game turns out to be (C, C), but player 1 would gain an advantage if she could commit to strategy S, which would provoke S in response, and raise her payoff. Thus, strategy C can be thought of as the Cournot strategy while S represents the Stackelberg strategy in terms of archetypal quantity setting games. Suppose further that, if 1 could be sure that 2 played S, she would prefer to play C in response, so (S, S) is not an equilibrium in the WEGO game.
The usual way player 1 might go about achieving the necessary commitment is by moving first and choosing S. Player 2 moves second, chooses S in response, and lo and behold, IGOUGO beats WEGO as per our theorem. Player 2 is perhaps worse off, but player 1 has improved her lot by committing to S.
But now let us slightly complicate the game by paying attention not just to the transmission of information but also its receipt. After player 1 moves, player 2 can choose to pay an observation cost, c, to observe 1's choice perfectly. This cost is very small but, if not paid, nothing about 1's choice is revealed to 2. After deciding on whether to observe or not, player 2 then chooses between C and S and payoffs are determined.
Looking forward and reasoning back, consider the situation where 2 chooses to observe 1's move. In that case, he chooses S if player 1 chose S and C if player 1 chose C. So far, so good. If he does not observe, then he must guess what player 1 might have chosen. If he's sufficiently confident that player 1 has chosen S, then S is again the best choice. Otherwise, C is best.
So should he observe or not? If commitment is successful, then player 2 will anticipate that player 1 has chosen S. Knowing this, there is no point in observing since, in equilibrium, player 2 will choose the same action, S, regardless of whether he looks or not. Thus, the value of information is zero while the cost of gathering and interpreting it is not, so, behaving optimally, player 2 never observes and thereby economizes (a little) by avoiding the cost c.
But then what should player 1 do? Anticipating that player 2 won't bother to observe her action, there is now no point in playing S since C was always the better choice. Thus, player 1 will choose C and it is now clear that the whole commitment posture was, in fact, mere stagecraft by player 1.
Of course, player 2 is no fool and will anticipate player 1's deception; therefore, if the equilibrium involves no observation, player 1 must have chosen C, and hence player 2 chooses C. Since we know that player 2 never pays the (wasteful) observation cost in equilibrium, the only equilibrium is (C, C), precisely as it was in the WEGO game. In other words, so long as there is any friction to observing player 1's choice, i.e. to receiving the information, first-mover commitment is impossible.
The issue would seem to be the conflict between players 1 and 2, where the latter has every incentive to frustrate the commitment efforts of the former since, if those efforts succeed, 2 is worse off. But consider this game: suppose that (S, S) yields each player a payoff of 3, while (C, C) yields each player only 2. If player 2 chooses C in response to player 1's choice of S, both players earn zero, while if the reverse occurs, player 2 choosing S and player 1 choosing C, player 1 earns 4 while player 2 earns only 1. This fits our game above: C is a dominant strategy for player 1, while 2 prefers to match whatever 1 does.
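For concreteness, here is a minimal sketch in Python, using exactly the payoffs just described, that verifies the claims: C is dominant for player 1, player 2 wants to match, (C, C) is the unique pure equilibrium of the WEGO game, and committing to S first would earn player 1 a payoff of 3 rather than 2.

```python
# A minimal sketch checking the game described above.
# payoffs[(a1, a2)] = (player 1's payoff, player 2's payoff)
payoffs = {("S", "S"): (3, 3), ("C", "C"): (2, 2),
           ("S", "C"): (0, 0), ("C", "S"): (4, 1)}
actions = ("S", "C")

# C strictly dominates S for player 1.
assert all(payoffs[("C", a2)][0] > payoffs[("S", a2)][0] for a2 in actions)

# Player 2 prefers to match whatever player 1 does.
best_reply_2 = {a1: max(actions, key=lambda a2: payoffs[(a1, a2)][1])
                for a1 in actions}
assert best_reply_2 == {"S": "S", "C": "C"}

# Pure equilibria of the simultaneous (WEGO) game: only (C, C).
equilibria = [(a1, a2) for a1 in actions for a2 in actions
              if payoffs[(a1, a2)][0] == max(payoffs[(b, a2)][0] for b in actions)
              and a2 == best_reply_2[a1]]
print(equilibria)  # [('C', 'C')]

# If 1 could commit first (IGOUGO), she would pick the action that
# maximizes her payoff given 2's best reply: S, earning 3 > 2.
print(max(actions, key=lambda a1: payoffs[(a1, best_reply_2[a1])][0]))
```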
This game has some of the flavor of a prisoner's dilemma. It is a one-sided trust game. By playing the WEGO game, both players lose out relative to the socially optimal (S, S), yet (S, S) is unsustainable because 1 will wish to cheat on any deal by selecting C. The one-sidedness arises from the fact that, while player 1 can never be trusted to play S on her own initiative, player 2 can be trusted so long as he is confident about 1's choice of S.
Player 1 seeks to overcome her character flaw by moving first and committing to choose S, anticipating that 2 will follow suit. Surely now 2 will bother to observe if the cost is sufficiently low? Unfortunately, he will not. Under the virtuous putative (S, S) equilibrium, player 2 still has no reason to pay to observe player 1's anticipated first move since, again, 2 will choose the same action, S, regardless. Knowing this, 1 cannot resist the temptation to cheat, and again we are back to (C, C) for the same reasons as above. Here the problem is that, to overcome temptation, 1 must be held to account by an observant player 2. But 2 sees no point in paying a cost merely to confirm what he already knows, so observation is impossible.
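To see the unraveling mechanically, here is a minimal sketch of the two-stage observation game; it reuses the payoffs above, and the enumeration of player 2's policies is my own illustrative device, not a general solver.

```python
# A minimal sketch of the observation-cost game. Player 2 either pays c
# to observe 1's move (and then best-responds to it) or plays a fixed
# fallback action based on a belief that must be correct in equilibrium.
payoffs = {("S", "S"): (3, 3), ("C", "C"): (2, 2),
           ("S", "C"): (0, 0), ("C", "S"): (4, 1)}
actions = ("S", "C")
best_reply_2 = {"S": "S", "C": "C"}   # player 2 matches player 1
c = 0.01                              # any strictly positive cost works

def outcome(a1, observe, fallback):
    a2 = best_reply_2[a1] if observe else fallback
    u1, u2 = payoffs[(a1, a2)]
    return u1, u2 - (c if observe else 0)

policies = [(obs, fb) for obs in (False, True) for fb in actions]
equilibria = []
for a1 in actions:
    for pol in policies:
        # 2's policy must be a best response to 1's actual choice...
        if outcome(a1, *pol)[1] < max(outcome(a1, *p)[1] for p in policies):
            continue
        # ...and 1's choice must be a best response to 2's policy.
        if outcome(a1, *pol)[0] == max(outcome(b, *pol)[0] for b in actions):
            equilibria.append((a1, pol))

print(equilibria)  # [('C', (False, 'C'))]: no observation, (C, C) play
```

The commitment to S never survives: whenever 2 expects S, not looking saves c, and once 2 stops looking, 1 reverts to C.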
What is needed is a sort of double commitment--2 must first commit to observe, perhaps by prepaying the observation cost or by some other device. Only then can 1 commit to play S, and things play out nicely.
While the argument is paradoxical yet logically airtight, it seems quite silly to conclude that effective commitment is impossible. After all, people and firms do commit in various ways, their choices are observed, and these commitments have their intended effect. So what gives?
One answer is that, in reality-land, strategies are not so stark as simply S or C. There are many versions of S and likewise of C and the particular version chosen might depend on myriad environmental factors not directly observable to player 2. Now the information may be valuable enough that observation is optimal.
Seeds of doubt about the other's rationality can also fix the commitment problem. Ironically, this cure involves envisaging the possibility of a pathologically evil version of player 1. These evil types always act opportunistically by choosing C. Now there is a reason for 2 to look, since he cannot be certain of 1's virtue.
A third possibility is that observing is simply unavoidable or that observation costs are negative. Curiosity is a trait common to many animals including humans. We derive joy from learning new things even if there is no direct economic value associated with this learning. Thus, individuals pay good money for intro texts on literary theory even though, for most of us, learning about Derrida's theories of literary deconstruction is of dubious economic value. Obviously, if the cost c were negative, i.e. a benefit, the problem vanishes and commitment is restored.
So if the theory is so implausible, then why bother bringing it up? One answer is to point out some countermeasures to commitment strategies. After all, if player 1 can "change the game" by committing to a strategy first, why can't player 2 change the game by committing to be on a boat in Tahoe and hence out of touch with what 1 is up to? A better answer is that it highlights the fact that commitment is a two-way street. Effective commitment requires not just that player 1 transmit the commitment information but that player 2 receive (and correctly decode) this information. Game theorists and others have spent endless hours thinking up different strategies for creating transmittable information, but precious little time thinking about its receipt. My own view is that this is a mistake since deafness on the part of other players destroys the value of commitment just as effectively as muteness on the part of the committing party.
Returning to Strangelove, it's not enough that the Soviet premier transmit the information about the doomsday device ahead of time; for commitment to be effective, such information must be heard and believed. This suggests the following alternative problem--even if the premier had disclosed the existence of the doomsday machine, would the US have believed it? If not, Slim Pickens might still be waving his cowboy hat while sitting atop a nuclear bomb plummeting down to end all life. Yee-hah!
Secrets and Lies
There are many instances where players in a game can control the timing and transparency of their moves. Indeed, a fundamental question firms face when making key strategic decisions is whether to keep them secret or to reveal them. Geographic decisions like plant openings or closings, strategic alliances with foreign partners, or overseas initiatives are often revealed, sometimes with great fanfare. Other decisions, such as the details of a product design or a merger target, are closely guarded secrets. Firms also tell lies (or at least exaggerations) at times, for instance about the schedule for a software release, or in feigned plans to acquire a target in order to jack up its price to a competitor. What can game theory tell us about secrets, lies, and timing?
One way of thinking about secrets is to imagine a game whose timing is fixed but where disclosure is at the discretion of the participants. For instance, firm 1 moves first, followed by firm 2. When firm 1 moves, it can choose to (truthfully) disclose its action to the world or not. This amounts to the choice between a WEGO game and an IGOUGO game from the perspective of firm 1. The key question, then, is when firm 1 should disclose its strategy. For a large class of situations, game theory offers a sharp answer to this question:

Disclosure is always better than secrecy.
On to the specifics: suppose that both firms are playing a game with finite strategies and payoffs and complete information; that is, both firms know precisely the game that they are playing. Let x denote firm 1's choice and y denote firm 2's. Let y(x) denote how firm 2 would respond if it thought firm 1 were choosing strategy x, and let x(y) be similarly defined for firm 1. A pure strategy equilibrium is a pair (x', y') where y' = y(x') and x' = x(y'). Let P(x, y) be firm 1's payoff when x and y are selected. Thus, in the above equilibrium, firm 1 earns P(x', y') or, equivalently, P(x', y(x')).
Now consider an IGOUGO situation. Here, firm 1 can choose from all possible x. Using look forward, reason back (LFRB), firm 1 anticipates that 2 will play y(x) when 1 plays x. Thus, 1's payoffs in this situation are P(x, y(x)). Notice that, by simply choosing x = x', 1 earns exactly the same payoff as in the WEGO game. OTOH, firm 1 typically will have some other choice x* that produces even higher payoffs P(x*,y(x*)). Thus, IGOUGO is generically better than WEGO and certainly never worse.
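A minimal sketch with a concrete example shows the gap between the WEGO payoff P(x', y(x')) and the IGOUGO payoff. The textbook linear duopoly below, with demand and cost numbers assumed purely for illustration, is my stand-in for the abstract game.

```python
# A minimal sketch: WEGO (simultaneous play) vs IGOUGO (first mover) in
# a linear quantity-setting duopoly. Inverse demand p = 12 - x - y with
# zero costs; all numbers are illustrative assumptions.
def profit(own, other):                  # P(x, y), same for both firms
    return own * max(12 - own - other, 0)

def best_reply(other):                   # y(x) = (12 - x) / 2, clipped
    return max((12 - other) / 2, 0)

# WEGO equilibrium (x', y'): a fixed point of the two best replies.
x = y = 0.0
for _ in range(200):
    x, y = best_reply(y), best_reply(x)
print(x, y, profit(x, y))                # Cournot: x' = y' = 4, P = 16

# IGOUGO: firm 1 commits to the x maximizing P(x, y(x)), via LFRB.
grid = [q / 100 for q in range(1201)]
x_star = max(grid, key=lambda q: profit(q, best_reply(q)))
print(x_star, best_reply(x_star), profit(x_star, best_reply(x_star)))
# Stackelberg: x* = 6, y(x*) = 3, P = 18 > 16
```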
But this raises the question of secrets and lies. Why doesn't firm 1 simply try to persuade firm 2 that it will play x* and thereby induce y* = y(x*), thus replicating the ideal setting of the IGOUGO game? One reason is that, generically, x(y*), i.e. 1's best response to 2's playing y*, is not x*. One might view this as an opportunity, since it means that available to firm 1 is some strategy x** = x(y*) with the property P(x**, y*) > P(x*, y*). That is, if 1 can persuade 2 to play y*, it can do even better than playing x*, the IGOUGO strategy.
The problem, of course, is that firm 2 will only play y* if it is convinced 1 will play x*. Since 1 will never play x* if this persuasion is successful, 1 can never convince 2 of its intentions to play x*. As a result, the whole situation unravels to the original WEGO outcome of (x', y'), which is worse for firm 1 than IGOUGO. Put differently, firm 1's promises to play x* are simply never credible.
This story about the unraveling of non-credible pronouncements, however, has a profound and undesirable implication. It implies that, in game theory land, it is impossible to trick or deceive the other party (at least in our class of games). Deceptions will always be seen through and hence rendered ineffective. But we see all sorts of attempts at deception in business and in life; surely people would not spend so much time and effort constructing deceptions if they never worked. Here again, full rationality is the culprit. Our game theory land firms are hard-headed realists. They never naively take the other's words at face value. Instead, they view them through the cynical lens that promises not in 1's self-interest will never be honored.
Life is sometimes like this, but thankfully not always. People do keep their word even when keeping it is not in their self-interest. Moreover, others believe these "honey words," acting as though they are true, even if they might not prove credible when put to the test. Indeed, we teach our children precisely this sort of behavior--to honor promises made, even when we don't want to. As usual, game theory can accommodate this simply by amending the game to allow a fraction of firm 2s to be naive, believing firm 1's promises, and a fraction of "honorable" firm 1s who keep promises made, even when better off not doing so. Once we admit this possibility, bluffing becomes an acceptable, and occasionally profitable, strategy. A deceitful and selfish firm 1 may well spend time trying to convince 2 that it will play x*, and that therefore 2 should play y*, since there is a chance that 2 will act upon this.
This does, however, muddy our result a bit. We now need to adjust it to:

When rivals are sophisticated, disclosure is always best.

With enough naifs in the firm 2 population, firm 1 may well prefer to take its chances on trickery and deceit rather than committing to the ex post unappealing action x*.
Corporate Culture
What does culture, corporate culture no less, have to do with game theory? At first blush the two worlds could not seem further apart. Game theory, with its bizarre concern for turning every single social interaction, from exchanged smiles in the morning to (at least in Mad Men world) exchanged gropings after hours, into a series of tables, trees, and math, seems a faintly ridiculous place to look for insights about something as amorphous, fluid, and nuanced as culture. Yet, perhaps in spite of itself, game theory has something (maybe a lot) to say about this topic.
Ask an economist about culture in the workplace and he or she (mostly he) will respond with stories about the importance of getting incentives right. Such a response seems faintly insulting, as culture is obviously much more than a series of carrots and sticks used to induce corporate donkeys to tread some dreary path carrying their pack.
Yet this is correct, to an extent, for cultures, however noble, meaningful, or even godly, founder quickly in the face of bad pecuniary incentives. Happily for us (though not necessarily for the individuals so described), tens of thousands of individuals in the US during the first half of the 19th century became unwitting test subjects for examining this hypothesis. At that time, as apparently at all times, people were sure that civilization was going to hell in a handbasket: that godliness, respect for others, politeness, manners, and the work ethic were all pale shadows of their former selves. In short, many were convinced that civilized society was breaking down or in crisis. They were absolutely convinced that their sons and daughters, high on the freedom (or perhaps the stronger spirits) of America's rapidly advancing frontiers, were taking a giant step backwards in terms of civilized society.
(A sidenote: These sons and daughters were, in all likelihood, drunk a great deal of the time. The combination of perishable grains, long winters, bad roads, and knowledge of distilling proved a potent "cocktail" for individuals living in such wild frontier places as Ohio, western New York or western Pennsylvania, far away from the Land that Time Forgot, which lies to the east in the Keystone state. Having a grog--or several--before breakfast, at meals, and at breaks was considered perfectly normal. Rather than having a coffee break, workers would stop for "dram breaks" several times a day. This no doubt contributed to the violence, especially domestic violence, of frontier life, as well as the prevalence of workplace accidents, and possibly, to some degree, the remarkable fecundity of the population, which grew at 4% per annum without any considerable influx of immigrants.)
Anyway, back to our story. Rather than merely bemoaning the sad state of civilization, many people formed utopias--societies set apart and usually dedicated to some prescription or other for the well-led life or for a right and just society. Often these prescriptions were religious, as there was a tremendous religious revival occurring at the time, mainly consisting of the formation of new Christian sects. Sometimes the prescriptions were the product of reasoning and "science" by (mainly pseudo) intellectuals. Many of these utopias saw money and, more broadly, property as the root of the problem and banished it from their communities. All property was joint. All production was to be shared. Individuals were to take as they needed and work according to their ability, in whatever field they thought best. There were also stringent social rules: tight control over sex, drinking, profane language, and other behaviors deemed societal ills. Incentives, so far as any existed, relied purely on social and godly rewards and punishments. Such societies would have little use for our typical economist above, except possibly as a strong back to contribute to crop growing.
Overall, the results of these utopias were...awful. Most fell apart within five years of their founding, often ending in bankruptcy, numerous legal battles, and sharp acrimony. Utopias frequently ended up starving, since most members preferred to write manifestos about the great deeds of their utopia rather than engage in (still labor intensive) food production, house building, animal husbandry, or any other task likely to produce the necessities of life. Laziness in general was a constant problem, as many people would happily do nothing whatever whenever possible. People ate too much of what little food was produced. And worst of all, people bickered constantly about what (or in some cases who) was theirs. The absence of currency or formal property rights did not mean that individuals gave up the pursuit of "stuff." Quite the contrary: individuals spent a great deal of time scheming and conniving to acquire squatting rights to more and better shelter, food, furnishings, and so on.
There were, however, some successes. In New England, one utopia was, in fact if not in name, nothing more than a company mill town run entirely on the labor of young women. Their parents entrusted them to the manager/mayor/chief priest of the utopia (an older and richer man, of course) to keep them out of trouble and earn some money for their families until they married. To be sure, these women worked hard--very hard--and were given food and board in the utopia in exchange for their labor. They also had to adhere to rules on pain of being "sent home." The most important rule was that, other than the manager/mayor/priest, there were to be no men in the community, either as members or visitors. A cynic might see this "utopia" as little more than a scheme to take advantage of cheap, unskilled labor under a mere facade of societal improvement. Nonetheless, it clearly met some sort of need for New England families to make some money without having to worry about protecting their daughters' "virtue."
Another notable success were the Shakers. They were a more traditional religious utopia founded by a woman who believed herself to be the second incarnation of Jesus Christ. Her charisma, combined with a strong practical streak---well spelled out rules and the ultimate sanction, banishment and damnation by eternal hellfire---caused the place to run pretty effectively. Ultimately, it was undone by a central rule--celibacy--no individual, under any circumstance, could reproduce. Punishment for doing so, or even performing certain activities that might, if the stars were aligned, lead to reproduction, was exile from the community, stringently enforced. The Shakers lasted surprisingly long given this stricture. They also left us with a nice style of furniture.
But back to the main plot---ask an economist about culture and hear about incentives. Well, it seems not to be entirely nonsense. Incentives do matter a great deal to culture, despite the claims of many intellectuals even today. .
Ask a specialist in organizational behavior (usually trained as a psychologist) about culture and you'll receive a much different answer. She will talk about the importance of empathy, transparency, safety in the frank exchange of feelings, trust, and other such behaviors. This is not to say that these people do not believe in incentives, they do, but tend to call them by different names than economists. To give but one example, most social psychologists believe in a theory called "relational equity" and argue that good cultures are marked by relative balance in relational equity accounts of the key actors. According to this theory, each side keeps an account of the good deeds done for the other (and presumably offsets these debits with credits for bad deeds though this is not much talked about). A relational equity account is balanced when goodness, measured somehow, is approximately equal. Things break down when inequalities persist or grow. It seems intuitive that I might stop being such close friends with someone to whom I grant a string of gifts and favors while receiving nothing back in return. But psychologists think things run the other way as well. I may wish to divorce a friend who is "too nice," someone who endlessly does me good turns at such a rate that I cannot possibly keep up in reciprocation. Thus, both not nice enough and too nice present problems. At any rate, some version of incentives runs through much of the literature on leadership and culture though the incentives tend to be of a more amorphous and personal character rather than the rough and ready dollar variety that economists like.
So where does this leave game theory? One central insight of game theory is that the same set of external and internal incentives can produce wildly different cultures depending on beliefs about the choices (and feelings) of others. Put differently, the same game (i.e. set of actions, payoffs, preferences, and information) can give rise to multiple cultures/equilibria depending on beliefs. Moreover, many things influence these beliefs. Past history, the shared set of company values, the types of individuals recruited, social norms, and so on are all influencers. In game theory, these features are sometimes referred to by the shorthand of focality, circumstances that make one set of beliefs about the actions others will take more likely than some other set of beliefs. While the list of focality influencers offered above are all concepts our OB specialist would readily recognize and endorse, our economist can readily support the idea of changing the game, or even just changing how the game is presented.
Let's make this concrete: Consider the archetypal game Stag Hunt: Two parties (or perhaps many parties) must choose between an ambitious but risky action that requires them to work together to pull off or a safe, but lower payoff action that does not. Since innovation lies at the heart of the long-term success of nearly any company, most leaders would probably wish to encourage their employees and business units to choose ambitious actions at least some of the time. One school of managerial thought suggests that we treat employees as "owner/operators" and hold them closely accountable for their financial performance, typically measured at quarterly time intervals. Thus, if an employee or group/division/branch, etc. does well---meets its numbers---it is rewarded, if it "falls down" it suffers some sanction, and if it does something really great, perhaps an extra award might accrue. Trouble arises in determining how much this extra award might be. While firms have a good sense of the value of normal operations and can reward accordingly, the value of an innovation is rarely immediately apparent and, consequently, a firm might, with good reason, be hesitant to reward it lavishly. Returning to our stag hunt, this implies that gap between the payoff from meeting the numbers versus missing is likely larger than the gap between the payoff from successful innovation versus meeting the numbers, at least in the short run.
But note what our firm, following "best practices" has done---they've unwittingly made it very risky to undertake that ambitious project. If the project requires cooperation across several managers, then the firm has, in effect, given veto power to their most risk averse manager. Not exactly a prescription for innovation.
It needn't be this way and, in fact, probably was not this way back in the firm's formative years. At that time, the firm was small, everyone knew one another closely, working together nearly all the time in the firm's startup phase, and, moreover, ambitious projects were undertaken and were successful, perhaps those projects are the reason the firm is now big and faces this problem in the first place. One could simply write off the two situations as a difference in the personalities of the managers and leave it at that, arguing that those early manager/founders were extraordinary and so they pulled the ambitious projects off whereas the current crop are not made from the same stern stuff. This might be true, but is hardly useful to a firm that wishes to be innovative.
Finally, we are to the heart of the matter. There seems nothing wrong with the incentives of the firm as success is rewarded to differing degrees and failure punished. There may be something wrong elsewhere in the culture, the wrong people, the wrong way of fostering interaction on intergroup projects, and so on, but we have little guide where to look. Treating the cultural situation as a game, however, offers some prospects. From this analysis, we might discover that our incentive system is set up to make safe play "risk dominant" and endeavor to fix this. Curiously, one fix would be to make the incentives less high-powered, to make it okay to fail, perhaps even rewarding failure . We might discover that managers across groups are simply too busy, too productive, to get to know each other so as to form sufficient confidence that promises made by the other will be carried through. This suggests a different set of fixes including retreats, high performer training, or even something as simple as social events for managers.
The point is that, by thinking of key challenges in terms of a game, we gain deeper insights into the pathology of the problem and hence a much better idea of which of the many solutions on offer-- monetary, social, coaching, trust, transparency, and so on--to choose. In short, game theory provides a lens through which to understand our culture better.
Ask an economist about culture in the workplace and he or she (mostly he) will respond with stories about the importance of getting incentives right. Such a response seems faintly insulting, as culture is obviously much more than a series of carrots and sticks used to induce corporate donkeys to tread some dreary path carrying their packs.
Yet this is correct, to an extent, for cultures, however noble, meaningful, or even godly, founder quickly in the face of bad pecuniary incentives. Happily for us (though not necessarily for the individuals so described), tens of thousands of individuals in the US during the first half of the 19th century became unwitting test subjects for examining this hypothesis. At that time, as apparently at all times, people were sure that civilization was going to hell in a handbasket, that godliness, respect for others, politeness, manners, and the work ethic were all pale shadows of their former selves. In short, many were convinced that civilized society was breaking down. They were absolutely convinced that their sons and daughters, high on the freedom (or perhaps stronger spirits) of America's rapidly advancing frontiers, were in the process of taking a giant step backwards in terms of civilized society.
(A sidenote: These sons and daughters were, in all likelihood, drunk a great deal of the time. The combination of perishable grains, long winters, bad roads, and knowledge of distilling proved a potent "cocktail" for individuals living in such wild frontier places as Ohio, western New York or western Pennsylvania, far away from the Land that Time Forgot, which lies to the east in the Keystone state. Having a grog--or several--before breakfast, at meals, and at breaks was considered perfectly normal. Rather than having a coffee break, workers would stop for "dram breaks" several times a day. This no doubt contributed to the violence, especially domestic violence, of frontier life, as well as the prevalence of workplace accidents, and possibly, to some degree, the remarkable fecundity of the population, which grew at 4% per annum without any considerable influx of immigrants.)
Anyway, back to our story. Rather than merely bemoaning the sad state of civilization, many people formed utopias--societies set apart and usually dedicated to some prescription or other for the well-led life or for a right and just society. Often these prescriptions were religious, as there was a tremendous religious revival occurring at the time, mainly consisting of the formation of new Christian sects. Sometimes the prescriptions were the product of reasoning and "science" by (mainly pseudo) intellectuals. Many of these utopias saw money and, more broadly, property as the root of the problem and banished both from their communities. All property was joint. All production was to be shared. Individuals were to take as they needed and to work according to their ability, in whatever field they thought best. There were also stringent social rules: tight control over sex, drinking, profane language, and other behaviors deemed societal ills. Incentives, so far as any existed, relied purely on social and godly rewards and punishments. Such societies would have little use for our typical economist above, except possibly as a strong back to contribute to crop growing.
Overall, the results of these utopias were...awful. Most fell apart within five years of their founding, often ending in bankruptcy, numerous legal battles, and sharp acrimony. Utopias frequently ended up starving since most members preferred to write manifestos about the great deeds of their utopia rather than engage in (still labor-intensive) food production, house building, animal husbandry, or any other task likely to produce the necessities of life. Laziness in general was a constant problem, as many people would happily do nothing whatever whenever possible. People ate too much of what little food was produced. And worst of all, people bickered constantly about what (or in some cases who) was theirs. The absence of currency or formal property rights did not mean that individuals gave up the pursuit of "stuff." Quite the contrary: individuals spent a great deal of time scheming and conniving to acquire squatting rights to more and better shelter, food, furnishings, and so on.
There were, however, some successes. In New England, one utopia was, in fact if not in name, nothing more than a company mill town run entirely on the labor of young women. Their parents entrusted them to the manager/mayor/chief priest of the utopia (an older and richer man, of course) to keep them out of trouble and earn some money for their families until they married. To be precise, these women worked hard--very hard--and were given food and board in the utopia in exchange for their labor. They also had to adhere to rules on pain of being "sent home." The most important rule was that, other than the manager/mayor/priest, there were to be no men in the community, either as members or visitors. A cynic might see this "utopia" as little more than a scheme to take advantage of cheap, unskilled labor under a mere facade of societal improvement. Nonetheless, it clearly met some sort of need for New England families to make some money and not have to worry about protecting their daughters' "virtue."
Another notable success was the Shakers, a more traditional religious utopia founded by a woman who believed herself to be the second incarnation of Jesus Christ. Her charisma, combined with a strong practical streak---well-spelled-out rules and the ultimate sanction, banishment and damnation by eternal hellfire---caused the place to run pretty effectively. Ultimately, it was undone by a central rule: celibacy. No individual, under any circumstance, could reproduce. Punishment for doing so, or even for performing certain activities that might, if the stars were aligned, lead to reproduction, was exile from the community, stringently enforced. The Shakers lasted surprisingly long given this stricture. They also left us with a nice style of furniture.
But back to the main plot---ask an economist about culture and hear about incentives. Well, it seems not to be entirely nonsense. Incentives do matter a great deal to culture, despite the claims of many intellectuals even today.
Ask a specialist in organizational behavior (usually trained as a psychologist) about culture and you'll receive a much different answer. She will talk about the importance of empathy, transparency, safety in the frank exchange of feelings, trust, and other such behaviors. This is not to say that these people do not believe in incentives; they do, but they tend to call them by different names than economists do. To give but one example, most social psychologists believe in a theory called "relational equity" and argue that good cultures are marked by relative balance in the relational equity accounts of the key actors. According to this theory, each side keeps an account of the good deeds done for the other (and presumably offsets these credits with debits for bad deeds, though this is not much talked about). A relational equity account is balanced when goodness, measured somehow, is approximately equal on both sides. Things break down when inequalities persist or grow. It seems intuitive that I might stop being such close friends with someone to whom I grant a string of gifts and favors while receiving nothing back in return. But psychologists think things run the other way as well. I may wish to divorce a friend who is "too nice," someone who endlessly does me good turns at such a rate that I cannot possibly keep up in reciprocation. Thus, both not nice enough and too nice present problems. At any rate, some version of incentives runs through much of the literature on leadership and culture, though the incentives tend to be of a more amorphous and personal character than the rough and ready dollar variety that economists like.
So where does this leave game theory? One central insight of game theory is that the same set of external and internal incentives can produce wildly different cultures depending on beliefs about the choices (and feelings) of others. Put differently, the same game (i.e. the same set of actions, payoffs, preferences, and information) can give rise to multiple cultures/equilibria depending on beliefs. Moreover, many things influence these beliefs: past history, the shared set of company values, the types of individuals recruited, social norms, and so on. In game theory, these features are sometimes referred to by the shorthand of focality: circumstances that make one set of beliefs about the actions others will take more likely than some other set of beliefs. While the focality influencers listed above are all concepts our OB specialist would readily recognize and endorse, our economist can readily support the idea of changing the game, or even just changing how the game is presented.
Let's make this concrete. Consider the archetypal game Stag Hunt: two parties (or perhaps many parties) must choose between an ambitious but risky action that requires them to work together to pull off, or a safe but lower-payoff action that does not. Since innovation lies at the heart of the long-term success of nearly any company, most leaders would probably wish to encourage their employees and business units to choose ambitious actions at least some of the time. One school of managerial thought suggests that we treat employees as "owner/operators" and hold them closely accountable for their financial performance, typically measured at quarterly intervals. Thus, if an employee or group/division/branch, etc. does well---meets its numbers---it is rewarded; if it "falls down," it suffers some sanction; and if it does something really great, perhaps an extra award might accrue. Trouble arises in determining how much this extra award might be. While firms have a good sense of the value of normal operations and can reward accordingly, the value of an innovation is rarely immediately apparent and, consequently, a firm might, with good reason, be hesitant to reward it lavishly. Returning to our stag hunt, this implies that the gap between the payoff from meeting the numbers versus missing is likely larger than the gap between the payoff from successful innovation versus meeting the numbers, at least in the short run.
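To see the payoff structure at work, here is a minimal sketch with invented numbers chosen to match the story: joint innovation pays more than meeting the numbers, but by less than the cost of missing them. All payoffs are assumptions for illustration, not data from any real firm.

```python
# Stag hunt with illustrative payoffs: payoffs[(my_move, their_move)] = my payoff.
payoffs = {
    ("innovate", "innovate"): 10,  # joint innovation succeeds
    ("innovate", "safe"):      0,  # I stuck my neck out alone and missed my numbers
    ("safe",     "innovate"):  7,  # I met my numbers while the other flailed
    ("safe",     "safe"):      7,  # business as usual
}

def best_response(their_move):
    """My payoff-maximizing move given a belief about the other's move."""
    return max(["innovate", "safe"], key=lambda m: payoffs[(m, their_move)])

# Both (innovate, innovate) and (safe, safe) are equilibria...
assert best_response("innovate") == "innovate"
assert best_response("safe") == "safe"

# ...but "safe" is risk dominant: against a 50/50 belief about the other
# manager, safety has the higher expected payoff (7 vs. 5).
def expected(move, p=0.5):
    return p * payoffs[(move, "innovate")] + (1 - p) * payoffs[(move, "safe")]

assert expected("safe") > expected("innovate")
```

Note how the asymmetry described above does the damage: the penalty for missing the numbers (7 versus 0) dwarfs the innovation premium (10 versus 7), so any real doubt about one's counterpart makes safety the better bet.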
But note what our firm, following "best practices," has done---it has unwittingly made it very risky to undertake that ambitious project. If the project requires cooperation across several managers, then the firm has, in effect, given veto power to its most risk-averse manager. Not exactly a prescription for innovation.
It needn't be this way and, in fact, probably was not this way back in the firm's formative years. At that time, the firm was small and everyone knew one another closely, working together nearly all the time in the firm's startup phase. Moreover, ambitious projects were undertaken and were successful; perhaps those projects are the reason the firm is now big and faces this problem in the first place. One could simply write off the two situations as a difference in the personalities of the managers and leave it at that, arguing that those early manager/founders were extraordinary and so pulled the ambitious projects off, whereas the current crop are not made of the same stern stuff. This might be true, but it is hardly useful to a firm that wishes to be innovative.
Finally, we are at the heart of the matter. There seems nothing wrong with the incentives of the firm, as success is rewarded to differing degrees and failure punished. There may be something wrong elsewhere in the culture---the wrong people, the wrong way of fostering interaction on intergroup projects, and so on---but we have little guide as to where to look. Treating the cultural situation as a game, however, offers some prospects. From this analysis, we might discover that our incentive system is set up to make safe play "risk dominant" and endeavor to fix this. Curiously, one fix would be to make the incentives less high-powered, to make it okay to fail, perhaps even rewarding failure. We might discover that managers across groups are simply too busy, too productive, to get to know each other well enough to form sufficient confidence that promises made by the other will be carried through. This suggests a different set of fixes, including retreats, high-performer training, or even something as simple as social events for managers.
The point is that, by thinking of key challenges in terms of a game, we gain deeper insights into the pathology of the problem and hence a much better idea of which of the many solutions on offer--monetary, social, coaching, trust, transparency, and so on--to choose. In short, game theory provides a lens through which to understand our culture better.
Tuesday, September 9, 2014
Class #4 Highlights
In class #4, we studied the game-theoretic value of an option. Unlike in finance, where an option's value accrues only to the extent that it is exercised, in game theory/outward thinking the value of an option depends on its strategic effect, i.e. the degree to which it changes other players' moves. Success means the option is not exercised, since its whole point was to dissuade rivals from choosing moves that would trigger it.
We saw this most clearly demonstrated in the NBA game. Absent the option, competition gave the bulk of the surplus created to the player, who enjoyed a $9 million salary. By contrast, the presence of a right-of-first-refusal option (often called a meet-or-buy or MFN option in other settings) effectively foreclosed competition, allowing the incumbent team to capture the surplus.
The keys to this option were: (a) the incumbent team offered the greatest surplus, and (b) some transaction-cost frictions existed for any other team engaging in competition. If instead the rival team and player produced $11 million in surplus, the restricted free agency clause would do nothing to curb competition, since the rival team could always offer $10 million + $1 and foreclose the incumbent, albeit at considerable expense.
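A stylized sketch of this logic, with surplus figures hard-coded as assumptions roughly matching the class numbers: the player is worth $10 million to the incumbent team and $9 million to the best rival.

```python
# Right-of-first-refusal (ROFR) logic, stylized.
INCUMBENT_VALUE = 10_000_000   # surplus the player creates with the incumbent
RIVAL_VALUE = 9_000_000        # surplus the player creates with the best rival
BIDDING_COST = 100_000         # any positive transaction friction will do

def player_salary(rofr: bool) -> int:
    if not rofr:
        # Open competition: the incumbent wins by just outbidding the rival's
        # maximum willingness to pay, so the player pockets roughly $9M.
        return RIVAL_VALUE
    # With ROFR, any rival bid up to INCUMBENT_VALUE simply gets matched,
    # leaving the rival with nothing but its bidding cost. Anticipating this,
    # the rival never bids, and the incumbent pays only a reservation salary
    # (normalized here to zero).
    return 0

print(player_salary(rofr=False))  # 9000000: competition hands the surplus to the player
print(player_salary(rofr=True))   # 0: the option forecloses competition
```

Flip the assumption so that RIVAL_VALUE exceeds INCUMBENT_VALUE and the clause loses its bite: the rival can bid just above $10 million, the incumbent cannot profitably match, and competition returns.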
The second key highlight of the class was an analysis of WEGO games, i.e. games where moves are either simultaneous or sequential but unobserved. In these games, one needs to guess the likelihood of various moves on the part of the rival. Rather than simply guessing moves, EMPATHY is needed to understand the rival's motives and, with this understanding, the implications for moves. A weak version of this idea is rationalizability. A move can be rationalized if there are some beliefs the rival might hold that make this move the rival's best option. We can discard the possibility that the rival will choose moves that are not rationalizable.
In the prisoner's dilemma, rationalizability sufficed to predict the (defect, defect) outcome, since cooperation is not a best response to any beliefs. In the Battle of the Sexes, however, rationalizability told us nothing, since all moves are rationalizable.
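A short sketch of the rationalizability test, checking best responses against pure beliefs only (which suffices in these 2x2 examples); the payoff numbers are standard textbook values, not the ones used in class.

```python
# payoff[(my_move, rival_move)] = my payoff.
def rationalizable(my_payoff, my_moves, rival_moves):
    """My moves that are a best response to at least one belief about the rival."""
    return {max(my_moves, key=lambda m: my_payoff[(m, belief)])
            for belief in rival_moves}

# Prisoner's dilemma: cooperate is never a best response, so only D survives.
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(rationalizable(pd, ["C", "D"], ["C", "D"]))  # {'D'}

# Battle of the Sexes (row player's payoffs): each move is a best response
# to the matching belief, so rationalizability rules nothing out.
row = {("F", "F"): 2, ("F", "O"): 0, ("O", "F"): 0, ("O", "O"): 1}
print(rationalizable(row, ["F", "O"], ["F", "O"]))  # {'F', 'O'}
```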
A much stronger solution concept for WEGO games is Nash equilibrium, which says that beliefs are correct and both parties act on their beliefs. In the BoS game, coordination on either outcome is a Nash equilibrium; non-coordination is not. Note that Nash equilibrium is appropriate when sophisticated or experienced players participate in a game. Otherwise, more limited notions of sophistication better capture the situation.
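Continuing the sketch above, a brute-force check confirms that only the coordinated outcomes of BoS are Nash equilibria; the column player's payoffs are again invented for illustration.

```python
# col[(row_move, col_move)] = column player's payoff; `row` is as defined above.
col = {("F", "F"): 1, ("F", "O"): 0, ("O", "F"): 0, ("O", "O"): 2}

def pure_nash(row_payoff, col_payoff, moves):
    """Outcomes where each side's move is a best response to the other's actual move."""
    return [(r, c) for r in moves for c in moves
            if row_payoff[(r, c)] == max(row_payoff[(r2, c)] for r2 in moves)
            and col_payoff[(r, c)] == max(col_payoff[(r, c2)] for c2 in moves)]

print(pure_nash(row, col, ["F", "O"]))  # [('F', 'F'), ('O', 'O')]: coordination only
```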
In general, our technique for solving WEGO games is: empathize, then optimize, i.e. make predictions about rival beliefs (and the implied actions) and then choose your best action given these predictions.
Sunday, September 7, 2014
Class #3 Highlights
In this class, we created an algorithm to implement LFRB reasoning. Essentially, we begin at the final branch of the game for each possible history of play. We then trim off all branches save for the best one. We then substitute these optimized payoffs into the immediately preceding stage of the game and repeat.
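As a sketch, here is that trim-and-substitute procedure run on a toy game tree; the tree encoding and payoffs are assumptions for illustration only.

```python
# A leaf is a payoff tuple (player 0's payoff, player 1's payoff); an
# internal node is (mover, {move_name: subtree}).
def lfrb(node):
    """Solve a game tree by backward induction: returns (payoffs, path of play)."""
    if not isinstance(node[1], dict):
        return node, []                      # leaf: payoffs, nothing left to play
    mover, branches = node
    solved = {move: lfrb(sub) for move, sub in branches.items()}
    # Trim all branches save the best one for whoever moves here.
    best = max(solved, key=lambda move: solved[move][0][mover])
    payoffs, continuation = solved[best]
    return payoffs, [best] + continuation

# Toy entry game: player 0 enters or stays out; upon entry, player 1
# fights or accommodates.
tree = (0, {
    "out":   (0, 5),
    "enter": (1, {"fight": (-1, 1), "accommodate": (2, 2)}),
})
print(lfrb(tree))  # ((2, 2), ['enter', 'accommodate'])
```

The no-ties caveat discussed next applies here too: with a tie, `max` would silently pick one branch and the prediction would no longer be unique.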
We learned that, so long as the game is UGOIGO and there are no ties in choices anywhere on the tree, this procedure will produce a unique prediction. Note, however, that this prediction assumes (a) you have correctly specified the payoffs; (b) both individuals are sophisticated enough to look far enough forward; and (c) this mutual sophistication is common knowledge. Part (c) deserves a bit more comment: even if both sides truly are super-rational, if each has some doubt over the rationality of the other party, then this doubt will be incorporated into choices. And this can have an echo effect, leading to choices far from the original prediction.
To see this, consider a game being played by Anna and Bob, both of whom are fully rational. Anna, however, suspects that Bob might not be rational and so builds this into her strategy. Bob suspects that Anna is skeptical, so he factors Anna's skeptical response into his own response. But now Anna, being quite sophisticated herself, will anticipate that "rational" versions of Bob will react strategically to her skepticism, and so will again amend her response, and so on ad infinitum.
To see this reasoning in action, consider the guess-2/3-of-the-average game you may have played earlier in your MBA career. In this game, individuals choose numbers between zero and 100. The person guessing closest to 2/3 of the average of all choices wins the prize. It may easily be shown that choosing 0 is the unique equilibrium. Now let's add the possibility of doubt: suppose that Anna thinks that, with some small probability, the others won't get the game and so will choose at random, producing 50 on average. In response, she will no longer wish to bid zero, but an amount just a bit higher than zero, to win in the event others screwed up the game. But now, all other sophisticated individuals will anticipate moves like Anna's and so will increase their choices above zero by even more, and so on. The point is that the original conclusion falls apart once doubts about others' rationality are permitted.
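The unraveling to zero, and the way a little doubt arrests it, fits in a few lines; the 5% doubt figure is an arbitrary assumption.

```python
# Iterated best response in the guess-2/3-of-the-average game: if I expect
# the average guess to be g, my best guess is roughly (2/3) * g, so
# sophisticated guesses unravel toward the equilibrium at 0.
g = 50.0
for level in range(6):
    print(f"level {level} guess: {g:.2f}")
    g = (2 / 3) * g

# Now add doubt: with probability eps, others "don't get the game" and
# guess at random, averaging 50. The best response is then strictly positive.
eps = 0.05
print((2 / 3) * (eps * 50 + (1 - eps) * 0))  # ~1.67, not 0
```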
Scenario Planning
Most strategy groups in large organizations engage in a scenario planning process. Often, this amounts to delineating the strategies available to rivals and then placing probabilities on each path. This is fine save for the last step. To a game theorist, the last step is quintessential outward thinking and requires one to create a mental model of the rival--to look at the world from their point of view--and derive optimal responses given this mental model. Of course, different mental models might produce different optimizing choices. Thus, placing weights on strategies is essentially placing weights on the differing mental models producing those choices. This is more art than science, but it has the virtue of making clear the assumptions driving the percentages placed on choices. Often, this will change the initial "gut feel" weights.
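To see the weights-on-mental-models point concretely, here is a tiny sketch; the rival models, weights, and moves are all invented for illustration.

```python
# Each mental model of the rival implies an optimal move; the weight on a
# move is the total weight on the models that produce it.
mental_models = {
    # model of the rival: (weight, the rival's optimal move under that model)
    "maximizes market share": (0.5, "cut price"),
    "maximizes margin":       (0.3, "hold price"),
    "repeats last year":      (0.2, "hold price"),
}

move_probability = {}
for weight, move in mental_models.values():
    move_probability[move] = move_probability.get(move, 0.0) + weight

print(move_probability)  # {'cut price': 0.5, 'hold price': 0.5}
```

Making the models explicit is what forces the assumptions into the open: change your confidence in a model and the "gut feel" percentages on moves change with it.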
The Coors caselet offers some flavor for how to use LFRB in scenario planning, even absent decisive information about payoffs. One of the nice things about our recipe for LFRB is that only ordinal information is required, i.e. it suffices merely to know that outcome A is preferred to outcome B, but not by how much A is preferred to B. This vastly reduces the data load of the analysis.