Biology student Derrick Grunwald told me the following tale about the game theory of bacteria. It turns out that certain bacteria come in two varieties, altruistic and selfish. The classification relates to how they handle a certain molecule that they emit. The selfish types emit the molecule and then immediately claim it for themselves using a receptor in another part of the creature. The altruists emit the molecule to the colony and receive the average emissions available, sort of like a public good. Apparently, this process of emitting and receiving creates fitness for the bacteria. Derrick tells me that there is an ideal amount of the molecule to receive, m*. A bacterium exposed to too much is unhealthy, as is one exposed to too little.
The puzzle to biologists is how there can be altruistic bacteria. While other-regarding or even eugenic preferences are possible in higher primates, it seems a stretch to consider such motives in bacteria.
So how can we resolve this puzzle using game theory, and what does this tell us about the nature of these bacterial colonies? First off, why are there altruists--call them volunteers--in the first place? What possible benefit is there from volunteering? When a bacterium emits the molecule, it doesn't get the amount exactly right. Sometimes it emits too much, sometimes too little. On average, it's the correct amount, but individually it is not. Thus, the volunteer bacteria are engaging in a bit of risk pooling. By emitting to the colony, all of these errors average out, and each individual absorbs just about the right amount of the molecule thanks to the magic of the law of large numbers. A colony of selfish bacteria is choosing not to insure. This is obviously less fit than insuring, but it does have some advantages in reliability.
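To make the risk-pooling point concrete, here is a minimal simulation sketch under my own stylized assumptions (the numbers and the sharing rule are invented for illustration, not taken from the biology): every bacterium tries to emit m* but misses by a random error; a selfish type absorbs exactly its own noisy emission, while a volunteer absorbs the colony-wide average.

import numpy as np

rng = np.random.default_rng(0)
M_STAR = 10.0     # ideal amount of the molecule (hypothetical units)
SIGMA = 3.0       # standard deviation of each bacterium's emission error
N = 10_000        # colony size

emissions = M_STAR + SIGMA * rng.standard_normal(N)

selfish_intake = emissions            # a selfish type absorbs exactly its own emission
volunteer_intake = emissions.mean()   # a volunteer absorbs the pooled colony average

print(f"selfish:   average distance from m* = {np.abs(selfish_intake - M_STAR).mean():.2f}")
print(f"volunteer: distance from m*         = {abs(volunteer_intake - M_STAR):.4f}")

The volunteers' intake hugs m* because the individual errors wash out in the colony average--that is all the law of large numbers is doing here--while each selfish bacterium is stuck with its own error.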
Let us study "equilibria" of this game. Suppose that a colony consists entirely of volunteers and is invaded by a small number of selfish types. The volunteers will still absorb approximately m* of the molecule, while the selfish will absorb 2m*--way too much. Thus, the selfish are "killed with kindness" by the volunteers. Hence, an all-volunteer colony constitutes an equilibrium.
What about a colony of all selfish types? While less fit than the volunteers, this colony is also immune to invasion. If a small number of volunteers show up, they hardly affect the absorption of the selfish, which is still approximately m* on average, but the volunteers themselves get almost none of the molecule. Thus, they too die out. So an all-selfish colony is also an equilibrium, despite its inferiority to the insurance scheme worked out by the volunteers.
What about mixed colonies? This is possible, but unstable. If the colony consists of a fraction f of volunteer types, a co-existing equilibrium can arise. In this situation, selfish types are systematically exposed to too much of the molecule since they absorb some of the production of the volunteers. Volunteers systematically have too little of the molecule since the selfish types are "stealing" some. And if the fitness of the two types is approximately equal, they can coexist.
The instability arises from the following problem. If the co-existing colony is invaded by selfish types, this improves the fitness of the selfish, since the emissions of the volunteers are now more diluted by the additional selfish types, but it reduces the fitness of the volunteers, since there are now more selfish types "stealing" the molecule. Hence, such a perturbation would cause the colony to eventually drift to an all-selfish society. By contrast, if the colony is invaded by volunteers, just the opposite occurs--volunteers become more fit while the selfish become less fit. Only if invasions of various types occur often enough to bring the population back to the equilibrium fraction f will the colony continue to coexist. This is unlikely if invasions occur randomly, even if both types of invasions are equally likely.
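Extending the same sketch, we can trace the whole story by sweeping the volunteer fraction f. The sharing rule below is again my own stylized assumption: volunteers pay their emissions into a common pool that is split equally across the entire colony, selfish types additionally reabsorb their own emission, and fitness declines with the squared distance between intake and m*.

import numpy as np

rng = np.random.default_rng(1)
M_STAR, SIGMA, N, TRIALS = 10.0, 3.0, 1000, 200

def mean_fitness(f):
    """Average fitness of volunteers and selfish types in a colony with volunteer fraction f."""
    n_vol = int(f * N)
    fit_vol, fit_sel = [], []
    for _ in range(TRIALS):
        emissions = M_STAR + SIGMA * rng.standard_normal(N)
        pool_share = emissions[:n_vol].sum() / N       # volunteers' output, split over everyone
        intake_vol = np.full(n_vol, pool_share)        # volunteers live off the pool alone
        intake_sel = emissions[n_vol:] + pool_share    # selfish keep their own emission too
        fit_vol.append(-np.mean((intake_vol - M_STAR) ** 2))
        fit_sel.append(-np.mean((intake_sel - M_STAR) ** 2))
    return np.mean(fit_vol), np.mean(fit_sel)

for f in (0.05, 0.25, 0.50, 0.75, 0.95):
    fv, fs = mean_fitness(f)
    print(f"volunteer fraction f = {f:.2f}: volunteer fitness = {fv:7.1f}, selfish fitness = {fs:7.1f}")

With these assumptions the simulation reproduces the discussion above: when volunteers are rare the selfish are fitter, when volunteers are common the volunteers are fitter, and the equal-fitness fraction in between is a knife edge--any drift in f is self-reinforcing.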
Thus, using game theory, the puzzle of altruistic bacteria may be understood as purely selfish behavior.
Wednesday, October 29, 2014
If a tree falls in the forest...
There's an old conundrum: when a tree falls in the forest and no one is there to hear it, does it make a sound? Millions of creators of social media must constantly ask themselves the same question. How many thousands of tweets, Facebook posts, YouTube videos, and blog entries pass through the ether, unheard, unread, and unknown?
In the words of Thomas Gray (Elegy Written in a Country Churchyard):
Full many a flower is born to blush unseen, / And waste its sweetness on the desert air.
And so it is with social media. How many Ansel Adamses or Claude Monets are bound to rest, undisturbed and undiscovered, amidst the detritus of the communication explosion?
At Berkeley-Haas, the Dean's suite often coaxes a reluctant professoriate to embrace the age of social media, to interact with our students outside the classroom in these social spaces. We are advised that the millennials we teach are especially receptive to such bite-sized portions of wisdom--that, in fact, they prefer them to the more traditional long form of the lecture hall or the textbook. We are advised to turn our classes upside down, to engage in all possible ways in pursuit of the ever-elusive mindshare.
What we are not offered, however, is evidence. Does any of this flailing outside the classroom matter? Do the students even want it?
I conducted an A/B test to measure this. Before each of my last two Wednesday classes, I wrote a blog entry. I viewed the entries as similarly interesting to my class. If anything, the treatment entry is more interesting. The key treatment was announcing the presence of the entry in one case, and saying nothing in the other.
Here are the results of the experiment:
Blog entry with no announcement: +1, 0 comments, 14 views.
Blog entry with announcement: +1, 0 comments, 14 views.
It takes no statistics to see that awareness that a blog entry has been written makes no difference whatsoever.
What should we make of this experiment? My take is the following: all the hype about social is a bunch of hooey. Individuals want well-produced, solid entertainment. There may, at one point, have been novelty value in the power of individuals to create content, but that point has long passed. What millennials want is the same thing all previous generations wanted: solid amusement for their out-of-class hours. So far as I know, this is the first experiment to test the desire of MBA millennials to read the random thoughts of their blogging social professors. Regardless, it's a finding worthy of wider circulation. Put simply, calls to embrace social as an important information stream are nonsense.
It's a sad conclusion, and it won't stop me from writing since I derive joy from the process myself, but it suggests a refocus in pedagogy away from the "flavor of the month" and back toward the heart of the matter, which is providing great experiences for students in the classroom.
A lovely blog/YouTube/Facebook/Twitter stream is all well and good, but it should be seen for what it is: entirely peripheral and largely wasted motion, at least so far as my sample is representative.
Tuesday, October 28, 2014
Pregnant MBAs
When most people think of game theory, they think of settings in which two or more individuals compete in some game. Chess is the quintessential example. Yet the thinking that underlies formulating a good plan when playing against others is just as important (and useful) when thinking about your own future path.
An episode of Seinfeld captures this idea beautifully when "morning Jerry" curses aloud the bad choices made by "evening Jerry." Evening Jerry stays out too late and indulges too much, making life difficult for morning Jerry, who has to pay the piper for this excess. He then muses that morning Jerry does have a punishment available to use on evening Jerry: were morning Jerry to lose his job, evening Jerry would be without the funds or friends to pursue his wild lifestyle. Not mentioned is that this punishment is also costly to afternoon Jerry, who is likewise in no position to pay the bill at Monk's Cafe for meals with Elaine and George.
While the Seinfeld routine is meant to be funny, for many individuals the problem of the activities of their many selves is no laughing matter. Anyone struggling with their weight curses their night self or their stressed-out self for lack of willpower. That giant piece of cake that stressed-out self saw as deliverance means a week of salads and many extra hours at the gym for the other selves.
I bring this up because we all suffer from the evening Jerry problem, but for MBAs, it's one of the most serious problems they'll ever face. For most MBAs, the two years spent getting this degree represent the final time in life when complete attention can be paid to learning and to single-mindedly building human and social capital. The constraints of work and family offer vastly less time for such activities in the future. While there may be occasional breaks for internal training or executive education, such breaks are rare. Moreover, for busy managers, even these times may not constitute a break. Work does not stop simply because you are absent. Crises still need to be dealt with and deadlines met.
Moreover, the gains made during this time can profoundly affect an MBA's career. They can be the difference between the C-suite and merely marking time in middle management. They can be the difference between the success and failure of a startup. They can be the difference between getting a position offering a desired work-life balance and accepting a position that does not. They can be all the difference in life.
Yet, inward thinking would have us see the choices we make quite narrowly. Will an extra weekend in Tahoe be the difference between an A- and a B+? Will it be the difference between making or missing a class? Will it be the difference between actively participating in a case discussion or not? Will it be the difference between seeing or missing some outside speaker in an industry of interest? Viewed in this light, such choices are of decidedly small caliber. An extra day in Tahoe can hardly be the difference in anything of consequence.
Wilhelm Steinitz, the great chess champion, averred that success in chess is a consequence of the accumulation of small advantages. Viewed narrowly--inwardly, one might say--the differences Steinitz had in mind seem trivial. To most chess players, the board, and one's chances of winning, are not appreciably different with a slightly better versus a slightly worse pawn formation. Yet a grandmaster does not see things this way at all. Such individuals are consummate outward thinkers, constantly planning for the endgame, for the win, which will only be determined much later. From that perspective, such trivial things are of the utmost importance. And indeed they are, as, more often than not, they mark the difference between victory and defeat in championship play.
Outward thinking also allows one to take a longer-term view on the time spent while pursuing an MBA. The apparent difference between the talent at the middle and the top tier of many organizations is often remarkably small. The CFO does not have appreciably more IQ points than someone lower down. She does not work vastly more hours or have vastly better social capital. Rather, her rise commonly represents an accumulation of small advantages--a win early in her career that distinguished her as high potential, a management style that avoided or defused a conflict that might have derailed her progress, an unexpected opportunity because a head at some other firm remembered her as standing out. In the end, it is clear that she is CFO material while the middle manager is not, but it didn't start out that way.
We might be tempted to chalk all this up to luck. She got the breaks while her sister, unhappily struggling as a middle manager somewhere, didn't. Admittedly, luck plays a role, but chance, somehow, seems to mysteriously favor some more than others. Consider the situation of poker or blackjack players. To win, one needs to have a good hand, i.e. one needs to be lucky. Yet some people consistently win at these games and others consistently lose. Luck evens out over many hands, or over many events in the course of a lifetime, yet there are genuine poker all-stars.
Much like evening Jerry, MBA life is filled with temptations, filled with vast stress-easing slices of chocolate cake. We might think, "What's the harm? We all work hard. I deserve this." But unlike the dieter, who can learn from his mistake and say no to the cake the next time around, an MBA gets only one chance to get it right. Mess it up this time and there is no tomorrow to make amends. There are no do-overs. The MBA equivalent of chocolate cake today may not mean a week of salads and workouts, but a lifetime of them.
So the lesson here is our usual one: look forward, reason back. How will our future selves perceive the choices we are making today? Will they be happy? If not, then it's time to make different choices.
So how does any of this relate to the title of this post, Pregnant MBAs? When a woman learns she is pregnant, she will often alter her lifestyle, sometimes radically. After all, she's now responsible for her unborn baby as well as herself, so she eats better, stops smoking and drinking, exercises more, and so on. She wants her baby to have its best chance in the world. In a sense, every MBA is pregnant. You are also responsible for two---your present self and your future self. Outward thinking is nothing more than this awareness, so obvious to a pregnant woman, but submerged beneath the surface at all other times. Give your future self his or her best chance in the world.
Friday, October 24, 2014
(Not) Solving Social Dilemmas
The secret ingredient to solving social dilemmas is punishment. Without the ability to punish transgressors, good behavior must rely entirely on individual self-control in abstaining from the temptation to cheat and obtain a reward. Some individuals are up to this challenge. Their consciences chasten them so strongly for engaging in dishonorable behavior that temptation is not a problem. But for most, it is. Indeed, a fundamental human weakness is to go for today's reward over tomorrow's, a failing which destroys most attempts at dieting, let alone attempts to solve social dilemmas.
Sufficient punishment fixes this problem by adding the force of extrinsic incentives to intrinsic ones. Given sufficient reinforcement, even those with more "pragmatic" consciences can be persuaded to do the right thing. Yet, even when such punishments are available, a literal-minded view of game theory suggests that they must always be available. Individuals must never perceive themselves as getting off scot-free. The usual version of this argument uses look forward, reason back (LFRB) reasoning. Suppose the game will end in period N. Then clearly bad behavior in that period is unpunishable, and hence everyone will be bad. But this destroys punishment for bad behavior in the penultimate period, and so on. The pure logic of the situation implies that any game known to end at a fixed time, however far into the future, will see cooperation remorselessly break down right from the start.
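To see the unraveling in miniature, here is a toy backward-induction sketch for a finitely repeated prisoner's dilemma (the payoff numbers are purely illustrative). The crucial feature is that the continuation value of the game is the same no matter what a player does today, so the one-shot dominant action wins in every period.

# Stage-game payoffs for a prisoner's dilemma: T > R > P > S (illustrative numbers).
T, R, P, S = 5, 3, 1, 0

def subgame_perfect_play(n_periods):
    """Backward induction in an n-period repeated prisoner's dilemma.
    The last period has no future, so its play cannot depend on history; by induction,
    neither can any earlier period's, so today's choice affects only today's payoff."""
    plan = []
    continuation = 0.0                      # value of the remaining game, history-independent
    for _ in range(n_periods):
        defect_value = P + continuation     # the opponent defects in equilibrium
        cooperate_value = S + continuation  # cooperating today buys no future reward
        plan.append("defect" if defect_value > cooperate_value else "cooperate")
        continuation += P
    return list(reversed(plan))

print(subgame_perfect_play(8))   # defection in all 8 periods

Because the same continuation value appears on both sides of the comparison, the threat of future punishment has no bite once the end date is known, and the model predicts defection from the very first period.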
But, while logically sound, this is a very silly prediction when the number of periods is long. Despite the logic of LFRB, bad behavior will be punished and good behavior rewarded with reciprocity in the early periods of the game, cooperation only breaking down when the endgame is sufficiently close. Here again we see the friendly face of irrationality. Suppose there is some chance that others playing the game don't "get it." They, instead, think they are playing the infinitely repeated version of the game, whereby cooperation can be sustained by threats of punishment, and play accordingly.
What is a rational soul to do? The answer is, when in Rome, do as the Romans do. In other words, pretending that you too don't get the game is a perfectly logical and sensible response, at least until some point deep in the game where the knowledge that punishment won't work becomes too hard to ignore. As with some of our other examples, only a seed of doubt about the rationality of others is needed, so long as the endgame is sufficiently far off in the future. Such a seed of doubt seems to me eminently more reasonable than the maintained assumption in the standard model, that everyone understands the game perfectly.
But what if we're playing a short game consisting of, say, only 8 periods? Now the logic of LFRB has real force, and we would seem to be genuinely doomed. Running such a game in the lab reveals that things are not quite as bad as all that, but they are definitely pretty bad.
Or maybe not. Let's change the game a little. Suppose, at the end of each period, each player has a chance to punish one or more of the others. Punishment involves destroying some of their payoffs. But this punishment is not free; it costs the punishing individual something as well. This would seem of immense help, since the whole problem was that we lacked an ability to punish in the last period, and this wrecked the incentives in all the earlier periods. Now we no longer have this problem. If someone misbehaves in the last period, we punish them via value destruction and all is once again well in the world. But such a plan only works if the punishment is credible--if we can count on the punishment to be delivered should an individual try to test societal resolve. There are two problems with credibility. First, there is a social dilemma in who delivers the punishment. While we all might agree that cheaters should be punished, each of us would rather that someone else deliver the punishment and hence take the hit to their own payoffs. But even if we sorted this out by agreeing on whose job it was to punish, we might still be in trouble. In the last period of the game, who would actually carry through with the punishment?
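To fix ideas, here is a minimal sketch of the kind of payoff structure involved, in the spirit of the public-goods-with-punishment experiments discussed below. The particular numbers--the endowment, the return rate, and a one-to-three punishment technology--are illustrative choices of mine, not the parameters of any specific study.

ENDOWMENT = 20    # tokens each player starts the period with
MPCR = 0.4        # each token in the public pot returns 0.4 tokens to every group member
PUNISH_COST = 1   # cost to the punisher per punishment point assigned
PUNISH_HARM = 3   # payoff destroyed per punishment point received

def period_payoffs(contributions, punishment):
    """contributions[i]: tokens player i contributes to the public pot.
    punishment[i][j]: punishment points player i assigns to player j."""
    n = len(contributions)
    pot_return = MPCR * sum(contributions)
    payoffs = []
    for i in range(n):
        pay = ENDOWMENT - contributions[i] + pot_return
        pay -= PUNISH_COST * sum(punishment[i])                                 # cost of punishing others
        pay -= PUNISH_HARM * sum(punishment[j][i] for j in range(n) if j != i)  # harm from being punished
        payoffs.append(pay)
    return payoffs

# Three cooperators and one free rider; each cooperator spends 3 points punishing the free rider.
contributions = [20, 20, 20, 0]
punishment = [[0, 0, 0, 3], [0, 0, 0, 3], [0, 0, 0, 3], [0, 0, 0, 0]]
print(period_payoffs(contributions, punishment))   # [21.0, 21.0, 21.0, 17.0]

With enough punishment points aimed at him, the free rider ends up below the cooperators. The catch, as the next paragraph explains, is that the points the cooperators spend are a pure loss to themselves, which is exactly what makes the threat hard to believe in the final period.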
This is a version of our earlier problem. In the last period, there is no future reward for today's good behavior nor punishment for bad. Since the whole point of punishing is to obtain future good behavior, what is the point of punishing anyone in the last period of the game? Worse yet, not only is punishment pointless, but it is also costly. So why would anyone believe that misbehavior will be punished in the last period? By the same logic, there is no point in punishing in the penultimate period either and again the endgame casts its long shadow back to the first period. Irrationality seems to work less well as a way out of this particular logical bind. The endgame is simply too near.
Yet, remarkably, such schemes seem to work, at least in Western cultures. Economists Ernst Fehr, Simon Gaechter, and a number of other co-authors performed versions of this experiment in labs in the US and Europe. They found that such a scheme proved excellent at producing cooperation. Some individuals tested societal resolve early in the game and invariably received bloody noses for this insolence.
But why does it work? The answer, in this case, seems to be morality. While conscience is weak in resisting temptation, it is rather stronger when we have a chance to unsheathe the sword of justice to be used on others. That is, even though there was a personal cost to punishment, it seemed to be more than offset by a personal benefit in administering justice.
[A curious sidenote to this story: in Middle Eastern, Greek, and North African cultures, the sword of justice operated in the other direction---those who did not cheat were punished. That is, individuals cooperating in a prisoner's dilemma were hit with penalties from those who did cheat. This soon produced conformity in a failure to solve social dilemmas.]
This solution is all well and good when the punishment can be surgically directed at transgressors, but suppose instead that there is collateral damage--to punish the guilty, some innocents must be punished as well. In work in progress with former Berkeley undergrad Seung-Keun Martinez, we investigate this possibility. We first consider the opposite extreme where, to punish one person, you must punish everyone. One might think that such a scheme would be either marginally useful or utterly useless. In fact, it is neither. Compared to a world in which no punishment is allowed, punishment of this sort actually reduces cooperation. This is not simply from the value destroyed by punishment (though there is some of that), but rather from a reaction to the possibility of such punishment. When individuals fear suffering punishment when doing the "right thing," they seem to become despondent and to cooperate less than those under no such threat.
We are now in the process of investigating intermediate levels of "collateral damage" from punishment to determine where the "tipping point" might lie.
Tuesday, October 21, 2014
The Costs of Coordination
Large organizations face a fundamental dilemma. They want their employees to coordinate on doing the right thing--on the firm's strategy--but they also want them to coordinate with one another. The two may differ in their importance depending on the organization. For some, coordination with one another may be of little consequence while coordination with the firm's strategy may be crucial--think of creative industries where artists labor alone to achieve some vision consistent with the roadmap of the company. For others, employees working in tandem is the important thing--think of iPhone production lines in Shenzhen.
For game theorists and economists, the first trouble that comes to mind is what are known as agency problems: employees might not have the right incentives to act in the firm's interest, and so their actions diverge from it and things go wrong. Let us set aside these possibilities and imagine that somehow these issues have been solved. Employees have nothing more than the company's interest at heart. This would suggest that all of our problems are solved and all is well with the world. Or are they? The world is full of miscommunication. While I try my best to articulate various ideas as clearly and carefully as I can, the sad fact is that I am doomed to failure, as is most anyone else caring to do likewise.
For CEOs and other leaders, communicating their vision effectively is central to their quality of leadership.
What can game theory tell us about the problem of imperfect communication of a leader's vision on the firm's prospects, even when incentives are well-aligned? Unlike our sad stories about understanding persuasion, things are a bit more fruitful here.
As usual, we make a simple model to describe the complex world of a leader seeking to impart her vision. To be precise, suppose that the leader's vision can be thought of as a normally distributed random variable with mean 0 and known variance, equal to 1 say. The idea here is that, under average conditions, a leader has a standard, long-run vision, which we normalize at zero. However, the business climate changes, which requires some alteration of this vision. New rivals emerge, acquisitions are made, employees innovate into new business areas. Our normal distribution represents these alterations.
Employees are perfectly aware of the long-run vision. It's part of the firm's core DNA. They also know that it changes with conditions, but don't know precisely how it changes. Indeed, understanding how to translate changes in the business landscape into vision is a large part of the leader's value.
But knowing the right vision is only half the battle. Our leader must also articulate it. So our CEO makes a speech, or composes a set of leadership principles, or does any number of things to express the current vision. All of this is transmitted to employees. Employees, however, only imperfectly understand their leader's wishes. Instead of learning the vision exactly, each gets a signal of the vision, which equals the truth plus a standard normal error term.
If this sounds like a statistics problem so far, that's because it is. Indeed, we'll tear a page out of our notebook on regressions to assess, in a moment, what the leader would like the employees to do. Meanwhile, note that employees know that they understand the vision only imperfectly, so they form posteriors about the true vision. These consist of placing some weight on the prior--the long-run vision--with the rest going to the signal. The weight on the signal (optimally) is the ratio of the variance of the vision to the sum of the variances of the vision and the error term, i.e. a version of the signal-to-noise ratio. In our example, the weight is 50-50.
Employees then choose an action to undertake. We will assume there are many employees and that actions are chosen simultaneously. Of course, this is not really the case, but it simulates the idea that it is a big organization and the actions of others are not readily observed.
How are these actions chosen? Suppose that each employee's payoff depends on matching the strategy, via a quadratic loss function with 50% weight (importance), and on matching each other, via another quadratic loss function with the complementary weight, also 50%. To be precise, each employee wishes to match the average action of the others. Employees seek to maximize their payoffs.
Now, this would seem the most ordinary of coordination problems--everyone has the same goal and, on average, gets the same signal. Better yet, the signal is unbiased and equal to the truth. But let's see how things play out.
Before proceeding, let's simplify things still further. Suppose that, from the perspective of the company as a whole, mismatched actions are of no consequence. What matters is simply the coordination of action with strategy. To reconcile this with the individual incentives above, suppose that the payoffs from miscoordination among employees are normalized so that deviations among employees in terms of coordination add up to zero when summed over the entire firm. (It's not hard to do this, but the exact math is beyond the scope of a blog.)
So what would the firm desire of its employees? The answer is simple and is, in fact, equivalent to a regression. The firm wishes to minimize the sum of squared errors between an employee's action and the true vision. The only piece of data an employee has is her signal. Simple statistics will confirm that the "regression coefficient" on this piece of information is equal to 1/2, i.e. the signal-to-noise statistic from above. That is, under ideal circumstances, each employee will merely do her best to conform her action to the expected value of the vision conditional on her signal.
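A quick Monte Carlo check of that claim under the distributional assumptions above (vision and noise both standard normal): regressing the true vision on the signal recovers a coefficient of about one-half.

import numpy as np

rng = np.random.default_rng(42)
n = 200_000
vision = rng.standard_normal(n)            # the leader's (altered) vision, N(0, 1)
signal = vision + rng.standard_normal(n)   # an employee's noisy reading of it, error also N(0, 1)

# OLS slope of vision on signal: cov(vision, signal) / var(signal) = 1 / (1 + 1) = 1/2
slope = np.polyfit(signal, vision, 1)[0]
print(f"estimated coefficient on the signal: {slope:.3f}")   # roughly 0.5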
So far, so good, but what will employees actually do? Also from basic statistics, it is apparent that the best choice for an employee is to select an action that places half the weight on the expected vision conditional on her signal, s, and the other half on the expectation of the other employees' actions. The expected vision, as we saw above, is nothing more than half the signal. The latter is more complicated, but becomes much simpler if we assume the firm is large--indeed, so large that we can guess the average action using the law of large numbers, which tells us that the average of others' signals converges to the underlying vision, whose expected value, as we saw, is just half of one's own signal.
Finally, let us suppose, in equilibrium, an individual chooses an action equal to w times her own signal, where w is a mnemonic for weight. Thus, the equilibrium equation becomes:
w s = 0.5 x (s/2) + 0.5 x w (s/2)
where x denotes the "times" symbol. The left-hand side is the equilibrium action while the right-hand side is the weighted average of the expected value of the vision conditional on the signal and the expected average of others' conditional on the signal. Since all employees are alike, we suppose they all play the same strategy.
Solving this equation yields the equilibrium weight:
w* = 1/3
that is, individuals place a weight equal to one-third on their signal, with the remaining two-thirds of the weight going to the long-run vision (or, more formally, their prior mean) of zero.
The result, then, is shockingly bad. The weak communication skills of the leader, combined with the general noise of the business environment, mean that, optimally, an employee should place weight of only 1/2 on her signal. But the strategic interaction of employees trying to coordinate with one another creates an "echo chamber" in which employees place even less weight, only one-third, on their signals. As a consequence, the company suffers.
Intuitively, since employees seek to coordinate with one another, their initial conservatism--placing weight one-half on the long-run vision--creates a focal point for coordinating actions. Thus, a best response when all other employees are placing between one-third and one-half weight on their signals is to place a bit less weight on one's own signal. This creates a kind of "race to the bottom" which only ends when everyone places one-third weight on the signal. In short, coordination produces conservatism in the firm. Put differently, by encouraging coordination among employees, an organization builds in a type of cultural conservatism that makes it resistant to change. This does not mean that coordination, or incentives for coordination, is per se bad, only that the inertial incentives created thereby are not much appreciated--or even understood.
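The race to the bottom can be seen directly by iterating the best response: if everyone else puts weight w on their signal, the best reply is to put weight 0.5 x f + 0.5 x f x w on one's own, where f = 1/2 is the signal-to-noise weight from above. A small sketch of that fixed-point iteration:

def best_response(w_others, f=0.5, r=0.5):
    """Best-reply weight on one's own signal when all other employees use weight w_others.
    f is the signal-to-noise weight (1/2 in the example); r is the weight on matching the vision."""
    return r * f + (1 - r) * f * w_others

w = 0.5                       # start from the statistically optimal weight
for _ in range(12):
    w = best_response(w)      # each round of second-guessing shaves off a little more weight
print(f"equilibrium weight on the signal: {w:.4f}")   # converges to 1/3, well below the optimal 1/2

Each round of anticipating what the others will do shaves a bit more weight off the signal, and the process settles only at one-third--the echo chamber in numbers.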
Is this something real, or merely the fanciful calculations of a game theorist? My own experience suggests that the little model captures something that is true of the world. While I was working as a research scientist at Yahoo, four CEOs came and went. Each had a markedly different vision. Yet the needs of coordination at Yahoo were such that, despite changes in CEO, new articulations of vision, and so on, the organization was surprisingly little affected. Employees, based on the need to work in harness with others, willfully discounted the new vision of an incoming CEO. The model seems to capture some, but certainly not all, of these forces. For instance, part of the reason for discounting, absent in the model, was that employees grew skeptical of the likely tenure of any CEO.
For the record, if we let R denote the weight on matching vision and f the signal-noise statistic from above, the general formula is:
w* = R f/(1 - (1 - R) f)
whereas the optimal weight is f. It may be easily verified that w* < f. Also, from this equation, it is clear, and intuitive, that the problem is lessened the smaller the importance of coordinating with other employees (i.e. the larger is R) and the more articulate the leader (i.e. the larger is f).
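For completeness, a small sketch evaluating the closed form over a grid, confirming that the equilibrium weight always falls short of the optimal f and that the shortfall eases as R or f grows:

def w_star(R, f):
    """Equilibrium weight on the signal: w* = R f / (1 - (1 - R) f)."""
    return R * f / (1 - (1 - R) * f)

for R in (0.25, 0.50, 0.75):
    for f in (0.25, 0.50, 0.75):
        print(f"R = {R:.2f}, f = {f:.2f}: equilibrium w* = {w_star(R, f):.3f} vs optimal weight f = {f:.2f}")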
Some academic housekeeping: the model described above is a version of one due to Morris and Shin, American Economic Review, 2003. They give a rather different interpretation to the model, though. Moreover, their version of the model assumes that individuals have no prior beliefs. Under this (peculiar) assumption, the gap between the equilibrium weight on signals and the statistically optimal weight disappears, but its absence is purely an artifact of their setup. The observations in this blog piece come from theory and experiments I'm currently writing about in a working paper with Don Dale at Muhlenberg College.
Thursday, October 16, 2014
Inspiring Words
Leadership is, to a great extent, the gentle art of persuasion. Leaders inspire others to follow them, to work for them, sometimes even to give up their own lives for them. How do they do it? Partially by example to be sure, but even here persuasion has a role to play. When we say that Jeff Bezos lives the leadership principles articulated and promulgated at Amazon, it makes the valid point that individuals credit others for how they behave, but conveniently ignores the fact that it was Bezos who articulated and promulgated the principles in the first place.
One of the most striking examples of leadership purely by the powers of persuasion was the rise of Barack Obama. Obama, for all his subsequent faults, was matchless in using words to inspire many thousands of young people who had never even voted previously to give him money, work for his organization, and persuade others to vote for him. Even more remarkably, he duplicated the feat four years later after spending much of that period, by most accounts, not leading by example. This no doubt overstates the power of his oratory, for he had a remarkably savvy organization helping him to vacuum up all that money and effort, but others less gifted have had equally efficient teams, yet achieved nothing like Obama's success.
What can game theory tell us about the power of leaders to persuade? The answer, if we are to be entirely honest, is surprisingly little. A large part of the problem is that communication in the world of game theory is almost entirely informational, but persuasion, while it will certainly draw upon and convey some information, taps into something much deeper and less purely transactional than what one might learn from hearing the local weather forecast. This is not to say that information-centric communication, or persuasion, is uninteresting, rather that it somewhat misses the boat if we truly wish to understand why some individuals are hailed as visionaries while others, offering the same facts and conclusions, are not.
To get a flavor of the bloodless world of communication viewed through the lens of game theory, consider the following problem. A political leader wants to convince a supporter/follower to perform a certain action. The right action depends on something called a state variable, which you might think of as a shorthand description for a set of factors--political, economic, cultural, etc.--that influence what a reasonable person would conclude is the correct action. To keep things simple, suppose that one state, which we will call low, represents a low political threat environment. Some action is needed to secure victory, but not too much. The other state, which we will call high threat, requires frenzied activity such as massive calling campaigns to get out the vote, and so on.
The follower wants to do the "right thing" for the leader, but knows little about political threats, and so on.
Knowing nothing about the state, our follower will elect some intermediate range of activity, imperfect for either state but somewhat helpful in both.
Now for the rub or, as we in the profession write, the "central tension of the model." The leader too wants the follower to do the right thing, but prefers that she do more political activity in either state. The degree of difference in views about how much activity to perform in each state represents the conflict between the two. Our leader's job, then, is to inspire his followers to do more than they otherwise would, but this will prove difficult since, in game theory land, the only trump card the leader holds is his knowledge of the state.
So, to inspire his supporters, our leader comes to town and makes a speech attempting to rally them to, in the leader's eyes, the right amount of activity. How does this speech go? What should our leader say? The answer, it turns out, depends on several factors, none of which feel (to me at least) very much like leadership.
Scenario #1: Free Speech
Suppose that our leader is free to say whatever he likes. He can lie about the state, exaggerating the political threat when it is, in fact, low or do the reverse, reassuring followers that there is little to worry about. Or something in between, saying that he's not sure. Or our leader can stonewall, give his standard stump speech, shake hands and kiss babies, Purell his hands and lips afterward, and go home.
So what does he do? To answer this question, we need to make certain assumptions about what, exactly, the followers know. Suppose they know that the leader indeed knows the state and, importantly, they also know that the leader wants them to do more of the activity in each state than they themselves prefer. In the happy scenario, the leader only wants the followers to do a little more in each state, so he informs them about the state truthfully and then harangues them to "exceed themselves" or to "go beyond" or something like that. He gets his applause and leaves, satisfied at a good night's work.
Game theory, however, offers the exceptionally dreary conclusion that, no matter how powerful the words of inspiration, no matter that our leader is a Shakespeare or a Churchill, the followers do precisely what they had initially planned to do in each state. They are grateful for the information, but they can hardly be said to be inspired. Ironically, this situation is, in fact, the best our leader can hope for.
Let's rerun the speech but now imagine that the leader's vaunting ambitions create a vast gulf between his preferred activity level in each state and the followers' own. So our leader steps up to the microphone before the hushed crowd and proceeds to speak of crisis--the threat level is high, the stakes are huge, and it's all up to you, the supporters, to make the difference. This address, Shakespearean in its majestic, soaring phrases, sends chills down the spines of the audience. The crowd roars. They will do it. They will rise to the challenge. They will be the difference-makers. No activity is too much. Our leader, drenched in sweat from the effort, steps down from the lectern and is congratulated for his remarkably moving address. The lights in the auditorium go down, and everyone goes home.
When his supporters get up the next morning, they do...exactly what they would have done if the leader had never shown up in the first place. In game theory land, people are cynics. While the audience may have been moved in the moment, on reflection they realize that the leader makes this same speech everywhere, to all his followers, whether the state is high or low. The talk, for all its pageantry, rings hollow--full of sound and fury, but signifying nothing. Why such an uncharitable view of the leader? The answer is that his own aspirations get in the way. Since he wants a high level of action regardless of the state, the speech lacks all credibility and, since those living in game theory land are not simpletons nor dupes, it is roundly and universally disbelieved.
As a logical analysis, the above is impeccable. As a description of leadership and persuasion, it seems to mis the boat completely. But, sadly, this analysis, or something similar, quite genuinely represents the state of the art, the research frontier, if you will. Is it fixable? Yes, in a way. We can add fools who believe everything the leader says into the mix. We can add some sort of inspiration variable that magically changes tastes so that followers work harder. But none of it really gets to the heart of what makes some leaders persuasive and others not. Indeed, we learn nothing if we simply assume that leader A can change minds and leader B cannot. The whole point of using our tools is to get at the deeper, and ultimately more interesting and important question as to why some leaders are persuasive.
Scenario #2: Factual Speech
Perhaps we've accorded too much freedom to our leader. After all, exagerrations, dissembling, misrepresenting, or any of the myriad of polite words we have for lying can get a politician into terrible trouble. Claiming that the world is hanging by a thread when, in fact, every poll shows that you're 20 points ahead, catches up to most leader's eventually. So let's return to our setting, precisely as before, but with the added restriction that our leader cannot simply make up things that are not true. In academic terms, this moves us out of the world of "cheap talk" and into the world of "persuasion" proper. This nomenclature, by the way, has a lot of problems. First, talk is no less cheap in the sense of being costless to the leader when we add the no lying restriction. Second, why the heck is it "persuasion" when we restrict someone from lying outright. Lies can be an important tool in the arsenal of a persuasive individual. Indeed, criminals engaging in confidence schemes are the ultimate persuaders, but would be entirely crippled were they bound by the no lying restriction. But I digress.
So let's rewind once again and send our leader back to the lectern, but with the following restriction--the heart of his speech can be either the truth or a stonewall, where he says nothing whatever about the state. One might imagine that this changes little. After all, when conflict was low, our leader did not wish to lie even when he could, so the restriction matters not a whit. When conflict was high, our leader wanted to lie about the state, but no one believed him anyway, so the effect is identical to stonewalling. Indeed, our leader in scenario #1 would have been quite happy to make the stonewalling speech instead of what I laid out.
In the case where conflict is low, the above supposition is exactly correct. Our leader steps to the lectern and offers a fact-laden speech truthfully revealing the state. But in the second case, this is wrong. Indeed, remarkably and perhaps absurdly, game theory offers the startling prediction that, no matter how bad the conflict between leader and follower, the leader always makes the truthful speech!
Why in blazes would our leader do that? Let's start with the situation where the threat is high. Here, the leader can do no better than to report the truth. He'd like more effort from his followers to be sure, but there is simply no way to motivate them to work any harder than by revealing the high state. What about the low state? Surely our leader will stonewall here? He might, but it will do no good since, knowing that the leader would have announced high were the state indeed high, our followers treat the stonewall speech as, in effect, a report that the state is low. And they act accordingly. That being the case, the leader might as well report honestly and at least gain the credit, however small, for being straight with followers.
Now, one may suspect that this logic takes too much advantage of the fact that there are exactly two states. What if there were three, or twenty, or a thousand. It turns out that none of it matters because of something called unravelling. Here's the argument: Suppose that there are twenty states in which the leader stonewalls while revealing in the rest. Then, in the highest of these 20 states, he'd be better off revealing than stonewalling since, by stonewalling, followers assume that the average state is lower than the highest state. Repeat this argument ad nauseum to obtain the truth-telling result. In my own work on the topic, I showed how this argument could be extended to virtually any configuration of preferences between leader and follower.
The problem is that the conclusion seems completely absurd. Irrespective of the conflict between leader and follower, the leader will always tell the truth sounds very much unlike the world in which I live. Again, this problem is fixable, but the main fix is even more bizarre than the result. It turns out that the key to credible stonewalling is...drumroll please...stupidity!! Or, more precisely the possibility of stupidity. The idea here is that, if the leader might possibly not know the state then stonewalling becomes believable. But this hardly seems like a satisfying resolution.
So Where Does This Leave Us?
This post, I'm afraid, is a downer. Game theory does lots of things well, but leadership, sadly, is not one of them. This has not stopped game theorists from trying, and perhaps making some headway. There is a clever paper by Dewan and Myatt looking at leadership through communications. In their model, one of the tradeoffs is between stupidity and incomprehensibility. They ask the following question (that only a game theorist would ask) about leaders: Is it better to be smart but incomprehensible or stupid but clear? The answer seems to be that it depends on the importance of doing the right thing versus doing the same thing. But, like all work in the field, the idea that leaders, with their words, could spark passion and devotion, is entirely missing.
Sometimes I despair about my love for game theory in a place devoted to, somehow, creating innovative leaders. I can, however, take some solace that we are no better at articulating how, exactly, that transformation takes place than we are in understanding leadership through game theory.
One of the most striking examples of leadership purely by the powers of persuasion was the rise of Barack Obama. Obama, for all his subsequent faults, was matchless in using words to inspire many thousands of young people who had never even voted previously to give him money, work for his organization, and persuade others to vote for him. Even more remarkably, he duplicated the feat four years later after, by most accounts, spending much of that period not leading by example. This no doubt overstates the power of his oratory, for he had a remarkably savvy organization helping him to vacuum up all that money and effort, but others less gifted have had equally efficient teams, yet achieved nothing like Obama's success.
What can game theory tell us about the power of leaders to persuade? The answer, if we are to be entirely honest, is surprisingly little. A large part of the problem is that communication in the world of game theory is almost entirely informational, but persuasion, while it will certainly draw upon and convey some information, taps into something much deeper and less purely transactional than what one might learn from hearing the local weather forecast. This is not to say that information-centric communication, or persuasion, is uninteresting; rather, it somewhat misses the boat if we truly wish to understand why some individuals are hailed as visionaries while others, offering the same facts and conclusions, are not.
To get the flavor for the bloodless world of communications viewed through the lens of game theory, consider the following problem. A political leader wants to convince a supporter/follower to perform a certain action. The right action depends on something called a state variable, which you might think of as a shorthand description for a set of factors, political, economic, cultural, etc., that influence what a reasonable person would conclude is the correct action. To keep things simple, suppose that one state, which we will call low, represents a low political threat environment. Some action is needed to secure victory, but not too much. The other state, which we will call high threat, requires frenzied activity such as massive calling campaigns to get out the vote, and so on.
The follower wants to do the "right thing" for the leader, but knows little about political threats, and so on.
Knowing nothing about the state, our follower will elect some intermediate range of activity, imperfect for either state but somewhat helpful in both.
Now for the rub or, as we in the profession write, the "central tension of the model." The leader too wants the follower to do the right thing, but prefers that she do more political activity in either state. The degree of difference in views about how much activity to perform in each state represents the conflict between the two. Our leader's job, then, is to inspire his followers to do more than they otherwise would, but this will prove difficult since, in game theory land, the only trump card the leader holds is his knowledge of the state.
So, to inspire his supporters, our leader comes to town and makes a speech attempting to rally them to, in the leader's eyes, the right amount of activity. How does this speech go? What should our leader say? The answer, it turns out, depends on several factors, none of which feel (to me at least) very much like leadership.
Scenario #1: Free Speech
Suppose that our leader is free to say whatever he likes. He can lie about the state, exaggerating the political threat when it is, in fact, low or do the reverse, reassuring followers that there is little to worry about. Or something in between, saying that he's not sure. Or our leader can stonewall, give his standard stump speech, shake hands and kiss babies, Purell his hands and lips afterward, and go home.
So what does he do? To answer this question, we need to make certain assumptions about what, exactly, the followers know. Suppose they know that the leader indeed knows the state and, importantly, they also know that the leader wants them to do more of the activity in each state than they themselves prefer. In the happy scenario, the leader only wants the followers to do a little more in each state, so he informs them about the state truthfully and then harangues them to "exceed themselves" or to "go beyond" or something like that. He gets his applause and leaves, satisfied at a good night's work.
Game theory, however, offers the exceptionally dreary conclusion that, no matter how powerful the words of inspiration, no matter that our leader is a Shakespeare or a Churchill, the followers do precisely what they had initially planned to do in each state. They are grateful for the information, but they can hardly be said to be inspired. Ironically, this situation is, in fact, the best our leader can hope for.
Let's rerun the speech but now imagine that the leader's vaulting ambitions create a vast gulf between his preferred activity level in each state and their own. So our leader steps up to the microphone before the hushed crowd and proceeds to speak of crisis--the threat level is high, the stakes are huge, and it's all up to you, the supporters, to make the difference. This address, Shakespearean in its majestic, soaring phrases, sends chills down the spines of the audience. The crowd roars. They will do it. They will rise to the challenge. They will be the difference-makers. No activity is too much. Our leader, drenched in sweat from the effort, steps down from the lectern and is congratulated for his remarkably moving address. The lights in the auditorium go down, and everyone goes home.
When his supporters get up the next morning, they do...exactly what they would have done if the leader had never shown up in the first place. In game theory land, people are cynics. While the audience may have been moved in the moment, on reflection they realize that the leader makes this same speech everywhere, to all his followers, whether the state is high or low. The talk, for all its pageantry, rings hollow--full of sound and fury, but signifying nothing. Why such an uncharitable view of the leader? The answer is that his own aspirations get in the way. Since he wants a high level of action regardless of the state, the speech lacks all credibility and, since those living in game theory land are neither simpletons nor dupes, it is roundly and universally disbelieved.
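To see the cynics' logic in miniature, here is a minimal sketch in Python with invented numbers (two states, a uniform prior, and effort levels of my own choosing): if the leader gives the same rousing speech in both states, hearing it leaves the followers' beliefs, and hence their chosen effort, exactly where they started.

# A minimal "babbling" check: an uninformative speech leaves beliefs unchanged.
# All numbers are illustrative assumptions.
prior = {"low": 0.5, "high": 0.5}     # followers' prior over the state
effort = {"low": 1.0, "high": 3.0}    # effort the followers themselves prefer in each state

def best_effort(beliefs):
    # Followers pick the effort that is best on average given their beliefs.
    return sum(p * effort[s] for s, p in beliefs.items())

# The conflicted leader makes the same crisis speech whatever the state,
# so Bayes' rule leaves the posterior equal to the prior.
posterior_after_speech = prior

print(best_effort(prior))                   # planned effort with no speech: 2.0
print(best_effort(posterior_after_speech))  # effort after the speech: still 2.0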
As a logical analysis, the above is impeccable. As a description of leadership and persuasion, it seems to miss the boat completely. But, sadly, this analysis, or something similar, quite genuinely represents the state of the art, the research frontier, if you will. Is it fixable? Yes, in a way. We can add fools who believe everything the leader says into the mix. We can add some sort of inspiration variable that magically changes tastes so that followers work harder. But none of it really gets to the heart of what makes some leaders persuasive and others not. Indeed, we learn nothing if we simply assume that leader A can change minds and leader B cannot. The whole point of using our tools is to get at the deeper, and ultimately more interesting and important, question of why some leaders are persuasive.
Scenario #2: Factual Speech
Perhaps we've accorded too much freedom to our leader. After all, exaggerations, dissembling, misrepresenting, or any of the myriad of polite words we have for lying can get a politician into terrible trouble. Claiming that the world is hanging by a thread when, in fact, every poll shows that you're 20 points ahead catches up to most leaders eventually. So let's return to our setting, precisely as before, but with the added restriction that our leader cannot simply make up things that are not true. In academic terms, this moves us out of the world of "cheap talk" and into the world of "persuasion" proper. This nomenclature, by the way, has a lot of problems. First, talk is no less cheap in the sense of being costless to the leader when we add the no-lying restriction. Second, why the heck is it "persuasion" when we restrict someone from lying outright? Lies can be an important tool in the arsenal of a persuasive individual. Indeed, criminals engaging in confidence schemes are the ultimate persuaders, but would be entirely crippled were they bound by the no-lying restriction. But I digress.
So let's rewind once again and send our leader back to the lectern, but with the following restriction--the heart of his speech can be either the truth or a stonewall, where he says nothing whatever about the state. One might imagine that this changes little. After all, when conflict was low, our leader did not wish to lie even when he could, so the restriction matters not a whit. When conflict was high, our leader wanted to lie about the state, but no one believed him anyway, so the effect is identical to stonewalling. Indeed, our leader in scenario #1 would have been quite happy to make the stonewalling speech instead of what I laid out.
In the case where conflict is low, the above supposition is exactly correct. Our leader steps to the lectern and offers a fact-laden speech truthfully revealing the state. But in the second case, this is wrong. Indeed, remarkably and perhaps absurdly, game theory offers the startling prediction that, no matter how bad the conflict between leader and follower, the leader always makes the truthful speech!
Why in blazes would our leader do that? Let's start with the situation where the threat is high. Here, the leader can do no better than to report the truth. He'd like more effort from his followers to be sure, but there is simply no way to motivate them to work any harder than by revealing the high state. What about the low state? Surely our leader will stonewall here? He might, but it will do no good since, knowing that the leader would have announced high were the state indeed high, our followers treat the stonewall speech as, in effect, a report that the state is low. And they act accordingly. That being the case, the leader might as well report honestly and at least gain the credit, however small, for being straight with followers.
Now, one may suspect that this logic takes too much advantage of the fact that there are exactly two states. What if there were three, or twenty, or a thousand? It turns out that none of it matters because of something called unravelling. Here's the argument: Suppose that there are twenty states in which the leader stonewalls while revealing in the rest. Then, in the highest of these 20 states, he'd be better off revealing than stonewalling since, by stonewalling, followers assume that the average state is lower than the highest state. Repeat this argument ad nauseam to obtain the truth-telling result. In my own work on the topic, I showed how this argument could be extended to virtually any configuration of preferences between leader and follower.
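For the curious, here is a small Python sketch of the unravelling argument under illustrative assumptions: twenty equally likely states, a leader who always prefers more effort, and followers who read a stonewall as the average of the states in which stonewalling is supposed to occur.

# Unravelling: start from "stonewall in every state" and let any state above
# the stonewall's implied average deviate to full revelation. Numbers are illustrative.
states = list(range(1, 21))
stonewall = set(states)

while stonewall:
    implied_action = sum(stonewall) / len(stonewall)   # followers' read of a stonewall
    deviators = {s for s in stonewall if s > implied_action}
    if not deviators:
        break
    stonewall -= deviators                             # those states reveal instead

print(sorted(stonewall))   # [1]: stonewalling survives only in the lowest state,
                           # which is the same as revealing it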
The problem is that the conclusion seems completely absurd. The claim that, irrespective of the conflict between leader and follower, the leader always tells the truth sounds very much unlike the world in which I live. Again, this problem is fixable, but the main fix is even more bizarre than the result. It turns out that the key to credible stonewalling is...drumroll please...stupidity!! Or, more precisely, the possibility of stupidity. The idea here is that, if the leader might possibly not know the state, then stonewalling becomes believable. But this hardly seems like a satisfying resolution.
So Where Does This Leave Us?
This post, I'm afraid, is a downer. Game theory does lots of things well, but leadership, sadly, is not one of them. This has not stopped game theorists from trying, and perhaps making some headway. There is a clever paper by Dewan and Myatt looking at leadership through communications. In their model, one of the tradeoffs is between stupidity and incomprehensibility. They ask the following question (that only a game theorist would ask) about leaders: Is it better to be smart but incomprehensible or stupid but clear? The answer seems to be that it depends on the importance of doing the right thing versus doing the same thing. But, like all work in the field, the idea that leaders, with their words, could spark passion and devotion, is entirely missing.
Sometimes I despair about my love for game theory in a place devoted to, somehow, creating innovative leaders. I can, however, take some solace that we are no better at articulating how, exactly, that transformation takes place than we are in understanding leadership through game theory.
Thursday, October 9, 2014
The Limits of Experimentation and Big Data
Experimentation represents a critical tool for business decision making, especially in online environments. It is said that Google runs about a quarter of a million experiments on its users every day. Sophisticated online firms will have a built-in toolset allowing managers to quickly and easily code up the experiment they wish to run (within certain limits), and even spit out standard analyses of the results. As a result of these experiments, firms continually innovate, mostly in small ways, to increase engagement, conversion rates, items purchased, and so on.
The basic idea is simple. Firms with millions of visitors each day will choose a small percentage of these to be (unwitting) subjects for the experiment they wish to run. These individuals might be picked at random from all individuals or, more likely, they will be selected on the basis of some predetermined characteristics of individuals who are hypothesized to respond to the experimental treatment. In the simplest design, half of the selected subjects, chosen at random (or chosen at random within each stratum), will receive the control, the experimental baseline, which will, most often, be simply business as usual. The other half will receive the treatment, which could be an alteration of the look and feel of the site, but might also be things like exposure to promotions, changed shipping terms, special offers of after-sales service, or a host of other possibilities. Online sites rarely experiment with price, at least in this treatment-control way, owing to the bad publicity suffered by Amazon in the late 90s and late 00s from such experiments.
Following this, the data is analyzed by comparing various metrics under treatment and control. These might be things like the duration of engagement during the session in which the treatment occurred or the frequency or amount of sales during a session, which are fairly easy to measure. They might also be things like long-term loyalty or other time-series aspects of consumer behavior that are a bit more delicate. The crudest tests are nothing more than two-sample t-tests with equal variances, but the analysis can be far more sophisticated, involving complicated regression structures containing many additional correlates besides the experimental treatment.
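As a concrete illustration of the crudest version of this analysis, here is a short Python sketch running a two-sample t-test on simulated per-session revenue; the sample sizes, the means, and the assumed lift are all invented for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-session revenue: control is business as usual,
# treatment is the altered experience with an assumed small lift.
control = rng.normal(loc=10.0, scale=4.0, size=50_000)
treatment = rng.normal(loc=10.5, scale=4.0, size=50_000)

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=True)
print(f"lift = {treatment.mean() - control.mean():.2f}, t = {t_stat:.1f}, p = {p_value:.3g}")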
When the experiment indicates that the treatment is successful (or at least more successful than unsuccessful), these innovations are often adopted and incorporated into the user experience. Mostly, such innovations are small UX things like the color of the background or the size of the fonts used, but they are occasionally for big things as well like the amount of space to be devoted to advertising or even what information the user sees.
After all this time and all the successes that have been obtained, were we to add up the amounts of improvement in various metrics from all the experiments, we would conclude that consumers are spending in excess of 24 hours per day engaging with certain sites and that sales to others will exceed global wealth by a substantial amount. Obviously, the experiments, no matter how well done, are missing something important.
The answer, of course, is game theory, or at least the consideration of strategic responses by rivals, in assessing the effect of certain innovations.
At first blush, this answer seems odd and self-serving (the latter part is correct) in that I made no mention of other firms in any of the above. The experiments were purely about the relationship between a firm and its consumers/browsers/users/visitors, etc. Since there are zillions of these users and since they are very unlikely to coordinate on their own, there seems little scope for game theory at all. Indeed, these problems look like classic decision problems. But while rivals are not involved in any of this directly, they are present indirectly and strategically: they affect the next best use of a consumer's time or money, and changes to their sites to improve engagement, lift, revenue, and so on will be reflected in our own relationship with customers.
To understand this idea, it helps to get inside the mind of the consumer. When visiting site X, a consumer chooses X over some alternative Z. Perhaps the choice is conscious--the consumer has tried both X and Z, knows their features, and has found X to be better. Perhaps the choice is unconscious. The point is simply that the consumer has a choice. Let us now imagine that site X is experimenting between two user experiences, x and x', while firm Z presently offers experience z. The consumer's action, y, then depends not just on x but also on z, or at least on the perception of z. Thus, we predict some relationship
y = a + b x + c z + error
when presented with the control, and a similar relationship but with x' replacing x under the treatment. If we then regress y on x alone, we suffer from omitted variable bias: z should have been in the regression but was not. However, so long as z is uncorrelated with the treatment (and there is no reason it should be), the regression coefficient on the treatment dummy will correctly tell us the change in y from the change in x, which is, of course, precisely what we want to know.
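A quick simulation, with every parameter invented, shows why the randomization does the work here: even though z is omitted from the regression, the simple treatment-control comparison recovers b(x' - x) because z is independent of the assignment.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

treated = rng.integers(0, 2, size=n)       # random assignment to x or x'
x = np.where(treated == 1, 1.2, 1.0)       # hypothetical experience levels x and x'
z = rng.normal(size=n)                     # rival's offering, independent of assignment
y = 0.5 + 2.0 * x + 1.5 * z + rng.normal(size=n)

# With random assignment, regressing y on the treatment dummy alone is just
# a difference in means, and it is centered on b * (x' - x) = 2.0 * 0.2 = 0.4.
print(y[treated == 1].mean() - y[treated == 0].mean())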
Thus, buoyed by our experiment, we confidently implement x' since the statistics tell us it will raise y by 15% (say).
But notice that this analysis is the statistical equivalent of inward thinking. Despite its scientific garb, it is no more valid an analysis than a strategic analysis hypothesizing that the rival will make no changes to its strategy regardless of what we might do. When we think about large decisions, like mergers, such a hypothesis is obviously silly. If Walmart acquired eBay tomorrow, no one would claim that Amazon would have no reaction whatever, that it would keep doing what it had been doing. It would, of course, react, and, were we representing Walmart, we would want to take that reaction into account when deciding how much to pay for eBay.
But it is no less silly to think that a major business innovation undertaken by X will lead to no response from rivals either. To see the problem, imagine we were interested in long run consumer behavior in response to innovation x'. Our experiment tells us the effect of such a change, conditional on the rivals' strategies, but says nothing about the long-term effect once our rivals respond. To follow through with our example, suppose that switching to x' on a real rather than experimental basis will trigger a rival response that changes z to z'. Then the correct measure of the effect of our innovation on y is
Change in y = b(x' - x) + c(z' - z)
The expression divides readily into two terms. The first term represents the inward-thinking effect. In a world where others are not strategic, this measures the effect of the change in x. The second term represents the outward-thinking strategic effect. This is the rival's reaction to the changed relationship that firm X has with its customers. No experiment can get at this term, no matter how large the dataset. This failure is not a matter of insufficient power or the lack of metrics to measure y or even z; it's the problem of identifying a counterfactual, z', that will only come to pass if X adopts the innovation.
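To put rough numbers on the decomposition (all of them invented), suppose the experiment's coefficient is b = 2, the rival's experience enters with c = -1.5, the innovation moves x from 1.0 to 1.2, and the rival responds by upgrading z from 1.0 to 1.2.

# Illustrative decomposition of the true effect of adopting x'.
b, c = 2.0, -1.5             # assumed: a better rival experience pulls y down
x_old, x_new = 1.0, 1.2
z_old, z_new = 1.0, 1.2      # hypothetical rival response once x' goes live

inward = b * (x_new - x_old)     # what the experiment measures: +0.40
outward = c * (z_new - z_old)    # the strategic term the experiment misses: -0.30
print(inward, outward, inward + outward)   # the realized change in y is only +0.10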
Now, all is not lost. There are many strategies one can use to forecast z', but one needs to be open to things that the data can never tell us, like the effect of a hypothetical rival reaction to a hypothetical innovation when viewed through the lens of consumer choice. This is not a problem that statistics or machine learning can ever solve. Game theory is not simply the best analysis for such situations, it is the only analysis available.
Thursday, October 2, 2014
The Folk Theorem
We have talked a fair bit about coordination games and the forces shaping just what happens to be coordinated upon. In that light, it's important to realize how the presence of dynamic considerations, i.e. repeated game settings, fundamentally transforms essentially all games into coordination games. In our usual example of how to take advantage of dynamics to build relational contracts (self-enforcing repeated-game equilibria), we studied a simplified version of Bertrand competition. Firms could price high or low, low was a dominant strategy in the one-shot game, and yet, with the right implicit contract, we can maintain high prices. The key, if you'll recall, was to balance a sufficient future punishment against the temptation to defect.
While coordinating on full cooperation seems the obvious course of action, if available, it is far from the only equilibrium to this game. For instance, it is fairly obvious that coordinating on the low price in every period is also an equilibrium--a really simple one, it turns out. Unlike the high price equilibrium with its nice and fight feedback loops, the no cooperation equilibrium requires no such machinery. The equilibrium consists of simply choosing a low price in every period regardless of past actions, and that's it.
There are many more equilibria in this game. For instance, if maintaining high prices yields each player $3 per period whereas fighting in every period yields $2 per period, then payoffs of any amount in between can also be sustained merely by interweaving the two. Suppose we wished to support payoffs that are high 5/6 of the time and low 1/6. In that case, we need only follow our high priced strategy whenever the period is not a multiple of 6 or if we're in a punishment phase and follow the low price strategy for periods that are a multiple of 6. Any fraction of high and low payoffs may be similarly supported.
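A back-of-the-envelope check of that interweaving claim, using the $3 and $2 figures above (a short Python sketch, nothing more):

# Average per-period payoff from mixing the high-price and low-price equilibria.
high_payoff, low_payoff = 3.0, 2.0

def average_payoff(fraction_high):
    return fraction_high * high_payoff + (1 - fraction_high) * low_payoff

print(average_payoff(5 / 6))   # high in 5 of every 6 periods: about 2.83
print(average_payoff(0.5))     # half and half: 2.50
# Varying fraction_high traces out every average payoff between 2 and 3.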
This observation that just about any set of per-period payoffs can be supported as an equilibrium is known as the Folk Theorem. It is so named since, like many folk tales, no one was quite sure who first had the idea, but a Nobel-winning game theorist, Robert Aumann, was the first to write the argument down (in a much more general and abstract way than my simple sketch above). Notice what the folk theorem implies in terms of our list of archetypal games:
All repeated games are coordination games. The theorem tells us that, regardless of the original form of the game--be it prisoner's dilemma, hawk-dove, matching pennies, and so on--its repeated version amounts to a pure coordination game.
Sunday, September 28, 2014
Game Theory and Fleetwood Mac
Rumours by Fleetwood Mac is, in my opinion, one of the best rock albums ever released. The sales charts bear out this appraisal: Rumours, like Dark Side of the Moon, set records that may never be broken (especially with the advent of digital music) for album sales. But what has this got to do with game theory?
One song on Rumours, which became Bill Clinton's campaign theme song many years after, is Don't Stop Thinkin' About Tomorrow. The song offers the upbeat message that the future always holds something better than the past, and so one ought to concentrate on its possibility rather than dwelling on the shortcomings of the day before. Perhaps useful advice for a heartbroken teenager but hardly the stuff of deep insight. (Indeed, almost bitter advice for someone like me, whose joints and musculature are progressively being turned to Swiss cheese by an autoimmune illness.) But anyway, back to our story.
The broader point of this song is that the actions of the present ought properly to be considered in light of the future. In other words, moving back one step, when choosing how to act one ought never to stop "thinkin' about tomorrow," since it is the consequences of tomorrow that determine the costs and benefits of today's actions.
And, indeed, no better or deeper point can be made about the most important insight in all of game theory: that, by harnessing the future, the present may be tamed.
Rewind to our one-off prisoner's dilemma situation. This situation seems utterly hopeless and, in the vacuum outside time, is hopeless. Neither the timing of moves nor the sophistication of the opponents makes any difference; the inexorable conclusion in such situations is that both parties are doomed to defect and thereby receive the lower rather than the higher payoff. But if we place this situation back in time, where there is a future, then a more palatable (and sensible) conclusion obtains. So long as both parties don't stop thinkin' about tomorrow, and so long as tomorrow is important enough, cooperation is possible. Indeed, this simple insight is at the heart of the vast majority of "contracting" occurring in the world.
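How important does tomorrow have to be? Here is a minimal check, with invented payoffs since none are given here, of when a grim-trigger agreement in a repeated prisoner's dilemma is self-enforcing: cooperating forever must beat grabbing the one-time temptation and living with punishment thereafter.

# Grim trigger: cooperate until the other side defects, then defect forever.
# The payoffs below are illustrative assumptions.
temptation, reward, punishment = 5.0, 3.0, 1.0

def cooperation_is_self_enforcing(delta):
    # delta is the weight placed on tomorrow (the discount factor).
    cooperate_forever = reward / (1 - delta)
    defect_once = temptation + delta * punishment / (1 - delta)
    return cooperate_forever >= defect_once

print(cooperation_is_self_enforcing(0.3))   # tomorrow matters too little: False
print(cooperation_is_self_enforcing(0.6))   # tomorrow matters enough: True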
While the word contract brings to mind the formality of legal documents, its underlying idea is not so formal. A contract is any agreement willingly undertaken by two parties. While we may think of such informal contracts as arm's length "handshake agreements" they need not be. Spouses deciding on a rotation of chores or child care is no less a contract for its informality. Roughly speaking, anything that trades off a present benefit for one in either another form (money, cooking, etc.) or time (tomorrow, a year from now) is a contract.
What game theory says is that, even if such contracts hold no water in any court of law, they still might be fulfilled so long as they are "self-enforcing," which is a fancy way of saying that both sides find it in their interests to execute on the contract rather than renege. Much of game theory is casting about for the circumstances under which such contracts are self-enforcing.
The key, in many instances, is that both sides don't stop thinkin' about tomorrow, which disciplines their behavior today. While defecting on a relational contract might be a fine idea today, when it is your turn to give up value/spend cost, such behavior seems less good in light of what is given up over many tomorrows where no one is willing to engage in future relational contracts.
Curiously, social psychologists independently discovered this idea, albeit without the help of game theory. They talk of "equity theory", the idea that we each keep a mental account of favors granted and favors received for each acquaintance. According to this theory, when the accounts fall too far out of balance, relationships are "liquidated" ---in effect declaring bankruptcy on the friendship.
The point, though, is really the same. If it is better not to honor a relational contract than to do so, such agreements cease to be self-enforcing and breach becomes inevitable. Where psychologists would differ concerns favors never asked for in the first place. For instance, Ann is sick and so Bob makes her pots and pots of chicken soup, which Ann abhors and has never requested. To an economist, Bob's offering places Ann under no particular obligation to repay, whereas a psychologist (and certainly my grandmother) would see this as an odious debt Ann has accrued from Bob's care and attention. Under the psych theory, Ann and Bob's relationship will likely founder over the unrequited chicken soup, whereas an economist might see this as tangential, and indeed irrelevant, to Ann and Bob's other dealings with one another.
Who do you think is correct? Our generic economist or my grandmother?
The IAWLP in auctions
The OPEC auctions are curious in a certain way. While they are conducted as standard English auctions, they differ in that no one goes home without a prize (i.e., a country). Thus, even the loser of these auctions wins something. How does this change bidding?
We know from the IAWLP that the best strategy in a private-values Vickrey/English setting is simply to bid up to your value. But for a variety of reasons, our OPEC setting is not this setting at all. For one thing, values aren't really private--your estimate of the value of the auction is actually useful information for others seeking to determine the value. For another, this is not a one-off auction. Losing a given auction implies that there are a number of other countries up for auction. Finally, there are only finitely many opportunities, so we can apply LFRB to anticipate how things might proceed. All of this makes the usual strategy of bidding "truthfully," whatever that means, a dead letter.
Let's start at the end, our usual game theory zen strategy. If there is only one auction left and only two bidders, how should you bid? Clearly, what matters is not how much you value the UAE, but rather how much more you value it than Nigeria. This, of course, will depend on what you gleaned from earlier winning auction bids and bidders. The higher the expected price over the course of the game, the larger the value of the gap between the production capacities of the UAE and Nigeria. At the same time, achieving that price level might require some curtailment of production by the UAE relative to Nigeria that lessens this gap. None of this invalidates the usual strategy of bidding up to the point where you are indifferent between winning and losing, though it does affect where this indifference value lies.
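To make the indifference point concrete, here is a toy calculation with invented figures: a team's value for the UAE relative to Nigeria is driven by its own forecast of the cartel price and of how much each country can profitably sell at that price, and the gap between the two is the most it should ever bid.

# Toy last-auction calculation; every number is invented for illustration.
expected_price = 80.0                            # the team's forecast of the cartel price
saleable_output = {"UAE": 2.6, "Nigeria": 1.7}   # assumed output each can sell at that price

def country_value(name):
    return expected_price * saleable_output[name]

max_bid = country_value("UAE") - country_value("Nigeria")
print(max_bid)   # bid up to this gap and no further: the point of indifference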
So who wins this auction? Obviously, whoever thinks the UAE is worth more--whichever team is more confident that the price will be high and that the UAE can maintain high production while maintaining this high price will get the item. Otherwise, it will be a toss-up--the value will be the same to both players.
Now let's work our way back. The closer to the beginning of the auctions, the more moving parts in play. Early bidders need to worry both about the vagaries of valuations relative to the "numeraire" item, Nigeria, and about ever-declining future competition. Both of these factors make these earlier bids more risky. But there is another wild card in the mix--leadership. Unlike beanie babies or even oil tracts, the items for bid here have values that are determined enormously by leadership activities. If you win Saudi, will you be able to convince others to refrain from producing and hence benefit? How much production will you yourself have to curtail to make these agreements work? In short, value is socially constructed by the winning bidder. In that sense, such auctions are neither privately nor publicly valued; rather, the value of each country is interdependent--the winner of each country potentially determines the value of all.
In that respect, valuing a country in OPEC has, as its closest analog, valuing a company in a takeover. While the acquired company has a set of assets and IP which might be utilized, exactly how they are utilized, and whether this is effective, has everything to do with the acquirer. For example, the company Digg was bought by Yahoo some years ago. Digg was a very cool company at the forefront of crowdsourcing content and well ahead of its competitors. The synergies with Yahoo, a content curation company above all else, were obvious and important. The market raved over this acquisition. But Yahoo treated Digg like it did all its other properties. Rather than integrating its technology to make for better curation, as the market anticipated, Digg was left to fend for itself as an independent revenue-generating property, something it was never especially good at. As a result, it languished.
While I mention Digg to make a point, it is far from an isolated incident. When valuing acquisitions, leadership in putting the asset to good use is central to the valuation. Saudi in the hands of a poor leader is not a good bet at almost any (reasonable) price. In the hands of a master strategist, it is a bargain at almost any price. The game theory lesson (and it is a tough one to put into practice until you're near the top of the company) is that leadership plays a huge role in dictating the value of an acquisition, no matter what the cash flow or valuation multiple says.
Thursday, September 25, 2014
A/B Test
I am up in Seattle hanging out with the folks at Amazon today and talking about A/B tests, experiments that websites conduct in order to improve the user experience (and profitability too, sometimes). Coming home tonight, I hit upon the idea for a great Hollywood screenplay.
(Aside: I don't write screenplays, but, if any game theory alums do, I'd be happy to collaborate on this idea.)
Twins A and B (girls) have just come off bad breakups. They live in NYC and Boston but are otherwise alike in every way. They talk on the phone, as they do every night, feeling bad about their miserable love lives and determined to fix it. Twin A suggests they visit a popular dating website she read about. Immediately after the call, they each turn on their iPads and bring up the website. It turns out, however, that the site is conducting an A/B test on its matching algorithm just as they query it.
(Sidenote: as each twin calls up the website, a clock timestamps her session to the 1/1000 of a second. Twin A lands on an odd-numbered time while twin B lands on an even-numbered time, since she started a tiny bit later than her sister. This produces the A/B test, which is keyed to the 1/1000-of-a-second time at which a session starts.)
(Back scene: techies in some Silicon Valley startup. Techie 1 talks about how the website has succeeded by having likes attract. Techie 2 tells a story about how, with his girlfriend, opposites attract. What if that strategy actually produces better matches? Techie 1 says that data talks and bullshit walks, so only an A/B test can settle things for sure. He proposes that, on Sunday night, they run such a test on the east coast and then track all the matches that result to see how love really works. Techie 2, confident that opposites attract, agrees and bets $100 that he's right. Techie 1 shakes hands on it.)
Back to our twins. Twin A is matched with someone just like her. He's outdoorsy and easygoing, ruggedly handsome, and with a steady, albeit boring, job. Twin B is matched with her opposite. She likes the outdoors while he is an urban creature favoring clubs, bars, jazz, etc. She is an all-American clean-cut girl while he is rather grungy. After the first date, Twin A has found her perfect match while Twin B is appalled, yet somehow fascinated. Both go on subsequent dates and eventually fall in love.
The rest of the story traces out the arc of their lives. Since this is Hollywood, both have to run into terrible problems. If this is a PG film, then both will turn out to be with the right guys in the end. If rated R, then the person who seemed so alike will turn out to be a different, and controlling, person altogether. Violence will ensue, terrifying and possibly hurting Twin A. The person who seemed to be the opposite of Twin B will turn out to be alike in terms of his heart, so the outside bits don't count for much. He will also be the person to rescue Twin A from her fate.
Or we can go rated R, 70s style. In this case, both guys turn out to be Mr. Wrong and wreck the twins' lives. Divorced and with children to take care of alone, the twins call each other in the ending scene to commiserate over their fate.
The drawback to this idea is that it is a bit like Sliding Doors, a film from a few years ago, but different enough, I suspect, to be interesting.
No idea what this has to do with game theory, but it seemed interesting to me.
Thursday, September 18, 2014
The Paradox of Commitment
Commitment, in the form of renouncing or eliminating certain strategic possibilities ahead of time to gain an advantage, is one of the fundamental insights of game theory. Unlike decision problems, where more options can never leave one worse off, in interactive problems--games, where reactions come not from nature but from the exercise of free will on the part of others--more options, capabilities, capacity, client reach, and so on are not always better. Thus, game theory offers numerous paradoxical situations where discarding or destroying assets, making certain strategies impossible to undertake, and other such seemingly destructive behavior is, in fact, correct strategy.
One never, of course, does such things for one's own benefit. Rather, they are done to influence others playing the game to alter their course of action or reaction to a more favorable path. As such, these negative actions, the dismantling of a plant or the entering into of a binding contract removing a strategic possibility, must be done publicly. It must be observed and understood by others playing the game to have any effect.
One of the most famous illustrations of the folly of commitment in private appears in the movie Dr. Strangelove where, in the climactic scene, it is revealed that the Russians have built a doomsday machine set to go off in the event of nuclear attack. Moreover, to complete the commitment, the Soviets have added a self-destruct mechanism to the device: it also goes off if tampered with or turned off. Since the machine will destroy the Earth, it ought properly to dissuade all countries from engaging in nuclear combat.
But there's a problem--the US is only made aware of the machine after non-recallable bombers have been launched to deliver a devastating nuclear attack at the behest of a berserk air force commander. Why, Dr. Strangelove asks the Soviet ambassador, did the Soviets not tell the US and the world about the machine?
"The premier likes surprises," comes the pitiful answer. And so unobserved commitment is, in effect, no commitment at all.
While paradoxical at first, the idea that fewer options can improve one's strategic position is easy to grasp and was understood at an intuitive level long before the invention of game theory.
But I want to talk about a less well-known paradox: if such commitment strategies succeed by altering others' play in a direction more favorable to the committing party, why would these others choose to observe the commitment in the first place? Shouldn't they commit not to observe in an effort to frustrate the commitment efforts of others? It turns out that this second level of commitment is unnecessary, at least in the formal argument: all that is needed is a small cost to observe the choices made by the committing party for the value of commitment to be nullified.
For example, two players are playing a WEGO game where they choose between two strategies, S and C (labels will become clear shortly). The equilibrium in this game turns out to be (C, C), but player 1 would gain an advantage if she could commit to strategy S, which would provoke S in response, and raise her payoff. Thus, strategy C can be thought of as the Cournot strategy while S represents the Stackelberg strategy in terms of archetypal quantity setting games. Suppose further that, if 1 could be sure that 2 played S, she would prefer to play C in response, so (S, S) is not an equilibrium in the WEGO game.
The usual way player 1 might go about achieving the necessary commitment is by moving first and choosing S. Player 2 moves second, chooses S in response, and lo and behold, IGOUGO beats WEGO as per our theorem. Player 2 is perhaps worse off, but player 1 has improved her lot by committing to S.
But now let us slightly complicate the game by paying attention not just to the transmission of information but also its receipt. After player 1 moves, player 2 can choose to pay an observation cost, c, to observe 1's choice perfectly. This cost is very small but, if not paid, nothing about 1's choice is revealed to 2. After deciding on whether to observe or not, player 2 then chooses between C and S and payoffs are determined.
Looking forward and reasoning back, consider the situation where 2 chooses to observe 1's move. In that case, he chooses S if player 1 chose S and C if player 1 chose C. So far, so good. If he does not observe, then he must guess what player 1 might have chosen. If he's sufficiently confident that player 1 has chosen S, then S is again the best choice. Otherwise, C is best.
So should he observe or not? If commitment is successful, then player 2 will anticipate that player 1 has chosen S. Knowing this, there is no point in observing since, in equilibrium, player 2 will choose the same action, S, regardless of whether he looks or not. Thus, the value of information is zero while the cost of gathering and interpreting the information is not, so, behaving optimally, player 2 never observes and thereby economizes (a little) by avoiding the cost c.
But then what should player 1 do? Anticipating that player 2 won't bother to observe her action, there is now no point in playing S since C was always the better choice. Thus, player 1 will choose C and it is now clear that the whole commitment posture was, in fact, mere stagecraft by player 1.
Of course, player 2 is no fool and will anticipate player 1's deception; therefore, if the equilibrium involves no observation, player 1 must have chosen C, and hence player 2 chooses C. Since we know that player 2 never pays the (wasteful) observation cost in equilibrium, the only equilibrium is (C, C), precisely as it was in the WEGO game. In other words, so long as there is any friction to observing player 1's choice, i.e. to receiving the information, first-mover commitment is impossible.
The issue would seem to be the conflict between players 1 and 2 where the latter has every incentive to frustrate the commitment efforts of the former since, if successful, 2 is worse off. But consider this game: Suppose that (S, S) yields each player a payoff of 3. (C, C), on the other hand, yields each player only 2. If player 2 chooses C in response to player 1's choice of S, both players earn zero while if the reverse occurs, player 2 chooses S and player 1 chooses C, then player 1 earns 4 while player 2 only earns 1. This fits our game above: C is a dominant strategy for player 1 while 2 prefers to match whatever 1 does.
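To make these claims easy to check, here is a small Python sketch (my own illustration, not part of the original argument) that encodes exactly the payoffs above and verifies that C is dominant for player 1 and that player 2's best reply is to match whatever he believes player 1 chose.

# Payoffs (player 1, player 2) for the one-sided trust game described above.
payoffs = {
    ("S", "S"): (3, 3),
    ("S", "C"): (0, 0),
    ("C", "S"): (4, 1),
    ("C", "C"): (2, 2),
}

def best_reply_2(a1):
    # Player 2's best reply to a (believed) action a1 of player 1.
    return max(["S", "C"], key=lambda a2: payoffs[(a1, a2)][1])

# C is dominant for player 1: better against either reply by player 2.
assert all(payoffs[("C", a2)][0] > payoffs[("S", a2)][0] for a2 in ["S", "C"])

# Player 2 matches: the best reply to S is S, and to C is C.
assert best_reply_2("S") == "S" and best_reply_2("C") == "C"

# With a positive observation cost c, looking has zero value once 2 is sure of
# 1's action, so 2 never pays c; an unobserved 1 then plays the dominant C, and
# the outcome collapses to (C, C) worth (2, 2) instead of (S, S) worth (3, 3).
print(payoffs[("C", "C")], payoffs[("S", "S")])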
This game has some of the flavor of a prisoner's dilemma. It is a one-sided trust game. By playing the WEGO game, both players lose out compared to the socially optimal (S, S), yet (S, S) is unsustainable because 1 will wish to cheat on any deal by selecting C. One-sidedness arises from the fact that, while player 1 can never be trusted to play S on her own initiative, player 2 can be trusted so long as he is confident about 1's choice of S.
Player 1 seeks to overcome her character flaw by moving first and committing to choose S, anticipating that 2 will follow suit. Surely now 2 will bother to observe if the costs are sufficiently low? Unfortunately, he will not. Under the virtuous (S, S) putative equilibrium, player 2 still has no reason to pay to observe player 1's anticipated first move since, again, 2 will choose the same action, S, regardless. Knowing this, 1 cannot resist the temptation to cheat and again we are back to (C, C) for the same reasons as above. Here the problem is that, to overcome temptation, 1 must be held to account by an observant player 2. But 2 sees no point in paying a cost merely to confirm what he already knows, so observation is impossible.
What is needed is a sort of double commitment--2 must first commit to observe, perhaps by prepaying the observation cost or by some other device. Only then can 1 commit to play S, and things play out nicely.
While the paradox is logically correct, it seems quite silly to conclude that effective commitment is impossible. After all, people and firms do commit in various ways, their choices are observed, and these commitments have their intended effect. So what gives?
One answer is that, in reality-land, strategies are not so stark as simply S or C. There are many versions of S and likewise of C and the particular version chosen might depend on myriad environmental factors not directly observable to player 2. Now the information may be valuable enough that observation is optimal.
Seeds of doubt about the other's rationality can also fix the commitment problem. Ironically, this cure involves envisaging the possibility of a pathologically evil version of player 1. These evil types always act opportunistically by choosing C. Now there is a reason for 2 to look, since he cannot be certain of 1's virtue.
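Using the payoffs above, a quick back-of-the-envelope sketch (mine, with an assumed probability e of the "evil" type who always plays C while the good type sticks to S) shows why doubt restores observation: the value of looking works out to roughly e, so player 2 observes whenever e exceeds the cost c.

def value_of_observing(e):
    # Player 2 believes player 1 is "evil" (always plays C) with probability e
    # and otherwise committed to S. Payoffs are those of the game above.
    exp_S = 3 * (1 - e) + 1 * e      # play S no matter what
    exp_C = 0 * (1 - e) + 2 * e      # play C no matter what
    unobserved = max(exp_S, exp_C)   # best single action without looking
    observed = 3 * (1 - e) + 2 * e   # match whatever player 1 actually did
    return observed - unobserved

for e in (0.0, 0.05, 0.2):
    print(e, value_of_observing(e))  # equals e for small e, so observe iff e > c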
A third possibility is that observing is simply unavoidable or that observation costs are negative. Curiosity is a trait common to many animals including humans. We derive joy from learning new things even if there is no direct economic value associated with this learning. Thus, individuals pay good money for intro texts on literary theory even though, for most of us, learning about Derrida's theories of literary deconstruction is of dubious economic value. Obviously, if the cost c were negative, i.e. a benefit, the problem vanishes and commitment is restored.
So if the theory is so implausible, then why bother bringing it up? One answer is to point out some countermeasures to commitment strategies. After all, if player 1 can "change the game" by committing to a strategy first, why can't player 2 change the game by committing to be on a boat in Tahoe and hence out of touch with what 1 is up to? A better answer is that it highlights the fact that commitment is a two-way street. Effective commitment requires not just that player 1 transmit the commitment information but that player 2 receive (and correctly decode) this information. Game theorists and others have spent endless hours thinking up different strategies for creating transmittable information, but precious little time thinking about its receipt. My own view is that this is a mistake since deafness on the part of other players destroys the value of commitment just as effectively as muteness on the part of the committing party.
Returning to Strangelove, it's not enough that the Soviet premier transmit the information about the doomsday device ahead of time; for commitment to be effective, such information must be heard and believed. This suggests the following alternative problem--even if the premier had disclosed the existence of the doomsday machine, would the US have believed it? If not, Slim Pickens might still be waving his cowboy hat while sitting atop a nuclear bomb plummeting down to end all life. Yee-hah!
Secrets and Lies
There are many instances where players in a game can control the timing and transparency of their moves. Indeed, a fundamental question firms face when making key strategic decisions is whether to keep them secret or to reveal them. Geographic decisions like plant openings or closings, strategic alliances with foreign partners, or overseas initiatives are often revealed, sometimes with great fanfare. Other decisions, such as the details of a product design or a merger target, are closely guarded secrets. Firms also tell lies (or at least exaggerate) at times, for instance, about the schedule for a software release or about plans to acquire a target, floated merely to jack up its price to a competitor. What can game theory tell us about secrets, lies, and timing?
One way of thinking about secrets is to imagine a game whose timing is fixed but where disclosure is at the discretion of the participants. For instance, firm 1 moves first, followed by firm 2. When firm 1 moves, it can choose to (truthfully) disclose its action to the world or not. This amounts to a choice between a WEGO game and an IGOUGO game from the perspective of firm 1. The key question, then, is when firm 1 should disclose its strategy. For a large class of situations, game theory offers a sharp answer:
Disclosure is always better than secrecy.
On to the specifics: Suppose that both firms are playing a game with finite strategies and payoffs and complete information; that is, both firms know precisely the game that they are playing. Let x denote firm 1's choice and y denote firm 2's. Let y(x) denote how firm 2 would respond if it thought firm 1 were choosing strategy x. Let x(y) be similarly defined for firm 1. A pure strategy equilibrium is a pair (x', y') where y' = y(x') and x' = x(y'). Let P(x, y) be firm 1's payoff when x and y are selected. Thus, in the above equilibrium, firm 1 earns P(x', y') or, equivalently, P(x', y(x')).
Now consider an IGOUGO situation. Here, firm 1 can choose from all possible x. Using look forward, reason back (LFRB), firm 1 anticipates that 2 will play y(x) when 1 plays x. Thus, 1's payoffs in this situation are P(x, y(x)). Notice that, by simply choosing x = x', 1 earns exactly the same payoff as in the WEGO game. OTOH, firm 1 typically will have some other choice x* that produces even higher payoffs P(x*,y(x*)). Thus, IGOUGO is generically better than WEGO and certainly never worse.
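As a concrete, and entirely illustrative, instance of this comparison, here is a short Python sketch using a linear Cournot duopoly with inverse demand p = 12 - x - y and zero costs. The numbers are my own assumption, chosen only so that P(x', y(x')), P(x*, y(x*)), and the temptation payoff P(x**, y*) discussed next are easy to compute.

a = 12.0                                  # assumed demand intercept, zero costs

def P(x, y):                              # firm 1's profit
    return x * (a - x - y)

def y_of(x):                              # firm 2's best response y(x)
    return (a - x) / 2

# WEGO (Cournot): x' is a best response to y(x'), which gives x' = a/3.
x_prime = a / 3
wego = P(x_prime, y_of(x_prime))          # 4 * (12 - 4 - 4) = 16

# IGOUGO (Stackelberg): firm 1 maximizes P(x, y(x)), which gives x* = a/2.
x_star = a / 2
igougo = P(x_star, y_of(x_star))          # 6 * (12 - 6 - 3) = 18

# The temptation behind lies: if 2 could be talked into y* = y(x*), firm 1's
# true best reply x** = x(y*) does even better--which is exactly why a bare
# promise to play x* is not credible.
y_star = y_of(x_star)                     # 3
x_double_star = (a - y_star) / 2          # 4.5
print(wego, igougo, P(x_double_star, y_star))   # 16.0 18.0 20.25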
This raises the question of secrets and lies. Why doesn't firm 1 simply try to persuade firm 2 that it will play x* and thereby induce y* = y(x*), thus replicating the ideal setting of the IGOUGO game? One reason is that, generically, x(y*), i.e., 1's best response to 2's playing y*, is not x*. One might view this as an opportunity, since it means firm 1 has available some strategy x** = x(y*) with the property P(x**, y*) > P(x*, y*). That is, if 1 can persuade 2 to play y*, it can do even better than playing x*, the IGOUGO strategy.
The problem, of course, is that firm 2 will only play y* if it is convinced 1 will play x*. Since 1 will never play x* if this persuasion is successful, then 1 can never convince 2 of its intentions to play x*. As a result, the whole situation unravels to the original WEGO outcome of (x', y'), which is worse for firm 1 than IGOUGO. Put differently, firm 1's promises to play x* are simply never credible.
This story about the unraveling of non-credible pronouncements, however, has a profound and undesirable implication. It implies that, in game theory land, it is impossible to trick or deceive the other party (at least in our class of games). Deceptions will always be seen through and hence rendered ineffective. But we see all sorts of attempts at deception in business and in life. Surely people would not spend so much time and effort constructing deceptions if they never worked. Here again, full rationality is the culprit. Our game theory land firms are hard-headed realists. They never naively take the other's words at face value. Instead, they view them through a cynical lens that says promises not in 1's self-interest will never be honored.
Life is sometimes like this, but thankfully not always. People do keep their word even when keeping it is not in their self-interest. Moreover, others believe these "honey words," acting as though they are true, even if they are possibly not credible when put to the test. Moreover, we teach our children precisely this sort of rationality---to honor promises made, even if we don't want to. As usual, game theory can accommodate this sort of thing simply by amending the game to allow a fraction of firm 2's to be naive, believing firm 1's promises and to allow a fraction of "honorable" firm 1's who keep promises made, even when better off not doing so. Once we admit this possibility, bluffing becomes an acceptable, and occasionally profitable, strategy. A deceitful and selfish player 1 may well spend time trying to convince 2 that he will play x* and therefore 2 should play y* since there is a chance that 2 will act upon this.
This does, however, muddy our result a bit. We now need to adjust this to:
When rivals are sophisticated, disclosure is always best.
With enough naifs in the firm 2 population, 1 may well prefer to take its chances on trickery and deceit rather than committing to the ex post unappealing action x*.
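Carrying over the illustrative Cournot numbers from the sketch above, here is a rough calculation of how many naifs it takes before bluffing beats committing. It leans on a strong simplifying assumption of my own, not the post's: a sophisticated firm 2 simply ignores the announcement and the WEGO outcome results, while a naif believes the announced x* and plays y*.

# Illustrative payoffs carried over from the Cournot sketch:
commit_payoff = 18.0       # P(x*, y(x*)): genuinely commit to x*
bluff_vs_naif = 20.25      # P(x**, y*): a naif believes x*, firm 1 plays x** instead
bluff_vs_cynic = 16.0      # P(x', y'): a sophisticated 2 ignores the talk (WEGO)

def bluff_payoff(q):
    # Expected payoff from bluffing when a fraction q of firm 2s are naive.
    return q * bluff_vs_naif + (1 - q) * bluff_vs_cynic

# Bluffing beats committing once q * 20.25 + (1 - q) * 16 > 18, i.e. q > 8/17.
threshold = (commit_payoff - bluff_vs_cynic) / (bluff_vs_naif - bluff_vs_cynic)
print(round(threshold, 3))   # about 0.471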
Corporate Culture
What does culture, corporate culture no less, have to do with game theory? At first blush the two worlds could not seem further apart. Game theory, with its bizarre concern for turning every single social interaction, from exchanged smiles in the morning to (at least in Mad Men world) exchanged gropings after hours, into a series of tables, trees, and math, seems a faintly ridiculous place to look for insights about something as amorphous, fluid, and nuanced as culture. Yet, perhaps in spite of itself, game theory has something (maybe a lot) to say about this topic.
Ask an economist about culture in the workplace and he or she (mostly he) will respond with stories about the importance of getting incentives right. Such a response seems faintly insulting as culture is obviously much more than a series of carrots and sticks used to induce corporate donkeys to trod some dreary path carrying their pack.
Yet this is correct, to an extent, for cultures, however noble, meaningful or even godly, founder quickly in the face of bad pecuniary incentives. Happily for us (though not necessarily for the individuals so described), tens of thousands of individuals in the US during the first half of the 19th century became unwitting test subjects for examining this hypothesis. At that time, as apparently at all times, people were sure that civilization was going to hell in a handbasket, that godliness, respect for others, politeness, manners, and the work ethic were all pale shadows of their former selves. In short, many were convinced that civilized society was breaking down or in crisis. They were absolutely convinced that their sons and daughters, high on the freedom (or perhaps stronger spirits) of America's rapidly advancing frontiers, were in the process of taking a giant step backwards in terms of civilized society.
(A sidenote: These sons and daughters were, in all likelihood, drunk a great deal of the time. The combination of perishable grains, long winters, bad roads, and knowledge of distilling proved a potent "cocktail" for individuals living in such wild frontier places as Ohio, western New York or western Pennsylvania, far away from the Land that Time Forgot, which lies to the east in the Keystone state. Having a grog--or several--before breakfast, at meals, and at breaks was considered perfectly normal. Rather than having a coffee break, workers would stop for "dram breaks" several times a day. This no doubt contributed to the violence, especially domestic violence, of frontier life, as well as the prevalence of workplace accidents, and possibly, to some degree, the remarkable fecundity of the population, which grew at 4% per annum without any considerable influx of immigrants.)
Anyway, back to our story. Rather than merely bemoaning the sad state of civilization, many people formed utopias--societies set apart and usually dedicated to some prescription or other for the well-led life or for a right and just society. Often these prescriptions were religious, as there was a tremendous religious revival occurring at the time, mainly consisting of the formation of new Christian sects. Sometimes the prescriptions were the product of reasoning and "science" by (mainly pseudo) intellectuals. Many of these utopias saw money and, more broadly, property as the root of the problem and banished it from their communities. All property was joint. All production was to be shared. Individuals should take as they need and work according to their ability and in whatever field they thought best. There were also stringent social rules: tight control over sex, drinking, profane language, and other behaviors deemed societal ills. Incentives, so far as any existed, relied purely on social and godly rewards and punishments. Such societies would have little use for our typical economist above, except possibly as a strong back to contribute to crop growing.
Overall, the results of these utopias were...awful. Most fell apart within five years of their founding, often ending in bankruptcy, numerous legal battles, and sharp acrimony. Utopias frequently ended up starving since most members preferred to write manifestos about the great deeds of their utopia rather than engage in (still labor intensive) food production, house building, animal husbandry, or any other task likely to produce the necessities of life. Laziness in general was a constant problem, as many people would happily do nothing whatever whenever possible. People ate too much from what little food was produced. And worst of all, people bickered constantly about what (or in some cases who) was theirs. The absence of currency or formal property rights did not mean that individuals gave up the pursuit of "stuff." Quite the contrary: individuals spent a great deal of time scheming and conniving to acquire squatting rights to more and better shelter, food, furnishings, and so on.
There were, however, some successes. In New England, one utopia was, in fact if not in name, nothing more than a company mill town run entirely from the labor of young women. Their parents entrusted them to the manager/mayor/chief priest of the utopia (an older and richer man, of course), to keep them out of trouble and earn some money for their families until they married. To be precise, these women worked hard--very hard--and were given food and board in the utopia in exchange for their labor. They also had to adhere to rules on pain of being "sent home." The most important rule was that, other than the manager/mayor/priest, there were to be no men in the community, either as members or visitors. A cynic might see this "utopia" as little more than a scheme to take advantage of cheap, unskilled labor under a mere facade of societal improvement. Nonetheless, it clearly met some sort of need for New England families to make some money and not have to worry about protecting their daughter's "virtue."
Another notable success was the Shakers. They were a more traditional religious utopia founded by a woman who believed herself to be the second incarnation of Jesus Christ. Her charisma, combined with a strong practical streak---well-spelled-out rules and the ultimate sanction, banishment and damnation by eternal hellfire---caused the place to run pretty effectively. Ultimately, it was undone by a central rule--celibacy--no individual, under any circumstance, could reproduce. Punishment for doing so, or even for performing certain activities that might, if the stars were aligned, lead to reproduction, was exile from the community, stringently enforced. The Shakers lasted surprisingly long given this stricture. They also left us with a nice style of furniture.
But back to the main plot---ask an economist about culture and hear about incentives. Well, it seems not to be entirely nonsense. Incentives do matter a great deal to culture, despite the claims of many intellectuals even today.
Ask a specialist in organizational behavior (usually trained as a psychologist) about culture and you'll receive a much different answer. She will talk about the importance of empathy, transparency, safety in the frank exchange of feelings, trust, and other such behaviors. This is not to say that these people do not believe in incentives, they do, but tend to call them by different names than economists. To give but one example, most social psychologists believe in a theory called "relational equity" and argue that good cultures are marked by relative balance in relational equity accounts of the key actors. According to this theory, each side keeps an account of the good deeds done for the other (and presumably offsets these debits with credits for bad deeds though this is not much talked about). A relational equity account is balanced when goodness, measured somehow, is approximately equal. Things break down when inequalities persist or grow. It seems intuitive that I might stop being such close friends with someone to whom I grant a string of gifts and favors while receiving nothing back in return. But psychologists think things run the other way as well. I may wish to divorce a friend who is "too nice," someone who endlessly does me good turns at such a rate that I cannot possibly keep up in reciprocation. Thus, both not nice enough and too nice present problems. At any rate, some version of incentives runs through much of the literature on leadership and culture though the incentives tend to be of a more amorphous and personal character rather than the rough and ready dollar variety that economists like.
So where does this leave game theory? One central insight of game theory is that the same set of external and internal incentives can produce wildly different cultures depending on beliefs about the choices (and feelings) of others. Put differently, the same game (i.e. set of actions, payoffs, preferences, and information) can give rise to multiple cultures/equilibria depending on beliefs. Moreover, many things influence these beliefs. Past history, the shared set of company values, the types of individuals recruited, social norms, and so on are all influencers. In game theory, these features are sometimes referred to by the shorthand of focality, circumstances that make one set of beliefs about the actions others will take more likely than some other set of beliefs. While the list of focality influencers offered above are all concepts our OB specialist would readily recognize and endorse, our economist can readily support the idea of changing the game, or even just changing how the game is presented.
Let's make this concrete: Consider the archetypal game Stag Hunt: Two parties (or perhaps many parties) must choose between an ambitious but risky action that requires them to work together to pull off or a safe but lower-payoff action that does not. Since innovation lies at the heart of the long-term success of nearly any company, most leaders would probably wish to encourage their employees and business units to choose ambitious actions at least some of the time. One school of managerial thought suggests that we treat employees as "owner/operators" and hold them closely accountable for their financial performance, typically measured at quarterly intervals. Thus, if an employee or group/division/branch, etc. does well---meets its numbers---it is rewarded; if it "falls down," it suffers some sanction; and if it does something really great, perhaps an extra award might accrue. Trouble arises in determining how much this extra award might be. While firms have a good sense of the value of normal operations and can reward accordingly, the value of an innovation is rarely immediately apparent and, consequently, a firm might, with good reason, be hesitant to reward it lavishly. Returning to our stag hunt, this implies that the gap between the payoff from meeting the numbers versus missing is likely larger than the gap between the payoff from successful innovation versus meeting the numbers, at least in the short run.
But note what our firm, following "best practices" has done---they've unwittingly made it very risky to undertake that ambitious project. If the project requires cooperation across several managers, then the firm has, in effect, given veto power to their most risk averse manager. Not exactly a prescription for innovation.
It needn't be this way and, in fact, probably was not this way back in the firm's formative years. At that time, the firm was small, everyone knew one another closely, working together nearly all the time in the firm's startup phase, and, moreover, ambitious projects were undertaken and succeeded; indeed, those projects are perhaps the reason the firm is now big and faces this problem in the first place. One could simply write off the two situations as a difference in the personalities of the managers and leave it at that, arguing that those early manager/founders were extraordinary and so they pulled the ambitious projects off whereas the current crop are not made of the same stern stuff. This might be true, but it is hardly useful to a firm that wishes to be innovative.
Finally, we come to the heart of the matter. There seems to be nothing wrong with the firm's incentives, as success is rewarded to differing degrees and failure punished. There may be something wrong elsewhere in the culture--the wrong people, the wrong way of fostering interaction on intergroup projects, and so on--but we have little guide as to where to look. Treating the cultural situation as a game, however, offers some prospects. From this analysis, we might discover that our incentive system is set up to make safe play "risk dominant" and endeavor to fix this. Curiously, one fix would be to make the incentives less high-powered, to make it okay to fail, perhaps even rewarding failure. We might discover that managers across groups are simply too busy, too productive, to get to know each other well enough to form sufficient confidence that promises made by the other will be carried through. This suggests a different set of fixes, including retreats, high-performer training, or even something as simple as social events for managers.
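To see how "make it okay to fail" shows up in the math, here is a small sketch with made-up payoffs (my own illustration, not numbers from the post). Suppose both managers going ambitious pays 5 each, playing safe and meeting the numbers pays 4, and going ambitious alone means missing your numbers. The only thing the fix changes is how harshly that lone failure is punished, yet it flips which equilibrium is risk dominant.

def risk_dominant(a, m, l, s):
    # Symmetric 2x2 coordination game:
    #   both Ambitious -> a each; both Safe -> s each;
    #   Ambitious alone -> l; Safe against an Ambitious partner -> m.
    # Returns the risk-dominant equilibrium and the belief in the partner's
    # ambition needed to make Ambitious a best reply (Harsanyi-Selten).
    threshold = (s - l) / ((a - m) + (s - l))
    return ("Ambitious" if threshold < 0.5 else "Safe"), threshold

# High-powered incentives: missing your numbers is heavily sanctioned (l = 0).
print(risk_dominant(a=5, m=4, l=0, s=4))    # ('Safe', 0.8): need 80% confidence

# "Okay to fail": the sanction for a failed joint project is softened (l = 3.5).
print(risk_dominant(a=5, m=4, l=3.5, s=4))  # ('Ambitious', ~0.33)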
The point is that, by thinking of key challenges in terms of a game, we gain deeper insights into the pathology of the problem and hence a much better idea of which of the many solutions on offer-- monetary, social, coaching, trust, transparency, and so on--to choose. In short, game theory provides a lens through which to understand our culture better.