Wednesday, October 29, 2014

If a tree falls in the forest...

There's an old conundrum that asks: when a tree falls in the forest and no one is there to hear it, does it make a sound? Millions of creators of social media must constantly ask themselves the same question. How many thousands of tweets, Facebook posts, YouTube videos, and blog entries pass through the ether, unheard, unread, and unknown?

In the words of Thomas Gray (Elegy Written in a Country Churchyard):

Full many a flower is born to blush unseen, / And waste its sweetness on the desert air.

And so it is with social media. How many Ansel Adamses or Claude Monets are bound to rest, undisturbed and undiscovered, amidst the detritus of the communication explosion?

At Berkeley-Haas, the Dean's suite often coaxes a reluctant professoriate to embrace the age of social media, to interact with our students outside the classroom in these social spaces. We are advised that the millennials we teach are especially receptive to such bite-sized portions of wisdom; that, in fact, they prefer them to the more traditional long form of the lecture hall or the textbook. We are advised to turn our classes upside down, to compete in every possible way for that ever-elusive mindshare.

What we are not offered, however, is evidence. Does any of this flailing outside the classroom matter? Do the students even want it?

I conducted an A/B test to measure this. Before each of my last two Wednesday classes, I wrote a blog entry. I judged the two entries to be of similar interest to my class; if anything, the treatment entry is more interesting. The key treatment was announcing the entry's existence in one case and saying nothing in the other.

Here are the results of the experiment:

Blog entry with no announcement: +1, 0 comments, 14 views.
Blog entry with announcement: +1, 0 comments, 14 views.

It takes no statistics to see that awareness that a blog entry has been written makes no difference whatsoever.
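
For the statistically insistent, here is a minimal sketch of the comparison, assuming (purely for illustration) that view counts are Poisson; conditional on the 28 total views, the treatment entry's share is then binomial under the null of no effect:

from scipy.stats import binomtest

# Views of the two blog entries, from the results above
views_control, views_treatment = 14, 14
total = views_control + views_treatment

# Under the null that the announcement changed nothing, the treatment
# entry's share of the 28 total views is Binomial(28, 0.5)
result = binomtest(views_treatment, total, 0.5)
print(result.pvalue)  # 1.0 -- no detectable effect, as promised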

What should we make of this experiment? My take is the following: all the hype about social media is a bunch of hooey. Individuals want well-produced, solid entertainment. There may once have been novelty value in the power of individuals to create content, but that point has long passed. What millennials want is what every previous generation wanted: solid amusement for their out-of-class hours. So far as I know, this is the first experiment to test the desire of MBA millennials to read the random thoughts of their blogging, social professors. Regardless, it's a finding worthy of wider circulation. Calls to embrace social media as an important information stream are, quite simply, nonsense.

It's a sad conclusion, and it won't stop me from writing, since I derive joy from the process myself, but it suggests refocusing pedagogy away from the "flavor of the month" and back toward the heart of the matter: providing great experiences for students in the classroom.

A lovely blog/YouTube/Facebook/Twitter stream is all well and good, but it should be seen for what it is: entirely peripheral and largely wasted motion, at least so far as my sample is representative.

Tuesday, October 28, 2014

Pregnant MBAs

When most people think of game theory, they think of situations in which two or more individuals compete in some situation or game; chess is the quintessential example. Yet the thinking that goes into formulating a good plan against others is just as important (and useful) when thinking about your own future path.

An episode of Seinfeld captures this idea beautifully when "morning Jerry" curses aloud the bad choices made by "evening Jerry." Evening Jerry stays out too late and indulges too much, making life difficult for morning Jerry, who has to pay the piper for this excess. Jerry then muses that morning Jerry does have a punishment available against evening Jerry: were morning Jerry to lose his job, evening Jerry would be without funds or friends to pursue his wild lifestyle. Not mentioned is that this punishment is also costly to afternoon Jerry, who would likewise be in no position to pay the bill at Monk's Cafe for meals with Elaine and George.

While the Seinfeld routine is meant to be funny, for many individuals the competing demands of their many selves are no laughing matter. Anyone struggling with their weight curses their night self or their stressed-out self for lack of willpower. That giant piece of cake that the stressed-out self saw as deliverance means a week of salads and many extra hours at the gym for the other selves.

I bring this up because we all suffer from the evening Jerry problem, but for MBAs, it's one of the most serious problems they'll ever face. For most MBAs, the two years spent getting the degree represent the final time in life when complete attention can be paid to learning and single-mindedly building human and social capital. The constraints of work and family offer vastly less time for such activities in the future. While there may be occasional breaks for internal training or executive education, such breaks are rare. Moreover, for busy managers, even these times may not constitute a break. Work does not stop simply because you are absent. Crises still need to be dealt with and deadlines met.

Moreover, the gains made during this time can profoundly affect an MBA's career. They can be the difference between the C-suite and merely marking time in middle management. They can be the difference between the success and failure of a startup. They can be the difference between getting a position offering a desired work-life balance and accepting a position that does not. They can be all the difference in life.

Yet inward thinking would have us see the choices we make quite narrowly. Will an extra weekend in Tahoe be the difference between an A- and a B+? Will it be the difference between making or missing a class? Will it be the difference between actively participating in a case discussion or not? Will it be the difference between seeing or missing some outside speaker in an industry of interest? Viewed in this light, such choices are of decidedly small caliber. An extra day in Tahoe can hardly be the difference in anything of consequence.

Wilhelm Steinitz, the great chess champion, averred that success in chess is a consequence of the accumulation of small advantages. Viewed narrowly, inwardly one might say, the differences Steinitz had in mind seem trivial. To most chess players, the board, and one's chances of winning, are not appreciably different with a slightly better versus a slightly worse pawn formation. Yet a grandmaster does not see things this way at all. Such individuals are consummate outward thinkers, constantly planning for the endgame, for the win, which will only be determined much later. From that perspective, such trivial things are of utmost importance. And indeed they are, as, more often than not, they mark the difference between victory and defeat in championship play.

Outward thinking also allows one to take a longer-term view of the time spent pursuing an MBA. The apparent difference in talent between the middle and the top tier of many organizations is often remarkably small. The CFO does not have appreciably more IQ points than someone lower down. She does not work vastly more hours or have vastly better social capital. Rather, her rise commonly represents an accumulation of small advantages--a win early in her career that marked her as high potential, a management style that avoided or defused a conflict that might have derailed her progress, an unexpected opportunity because a head at some other firm remembered her as standing out. In the end, it is clear that she is CFO material while the middle manager is not, but it didn't start out that way.

We might be tempted to chalk all this up to luck. She got the breaks while her sister, unhappily struggling as a middle manager somewhere, didn't. Admittedly, luck plays a role, but chance, somehow, seems to mysteriously favor some more than others. Consider the situation of poker or blackjack players. To win, one needs to have a good hand, i.e. one needs to be lucky. Yet some people consistently win at these games and others consistently lose. Luck evens out over many hands, or over many events in the course of a lifetime, yet there are genuine poker all-stars.
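
A minimal simulation, with made-up numbers, shows how even a tiny per-hand edge separates the consistent winners from everyone else while luck "evens out":

import random

def lifetime_winnings(edge, hands=10_000):
    # Win 1 unit with probability 0.5 + edge, lose 1 unit otherwise
    return sum(1 if random.random() < 0.5 + edge else -1
               for _ in range(hands))

random.seed(1)
print(lifetime_winnings(edge=0.00))  # pure luck: hovers near zero
print(lifetime_winnings(edge=0.02))  # small edge: roughly +400 over 10,000 hands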

Much like evening Jerry, MBA life is filled with temptations, filled with vast stress-easing slices of chocolate cake. We might think, "What's the harm? We all work hard. I deserve this." But unlike the dieter, who can learn from his mistake and say no to the cake the next time around, an MBA gets only one chance to get it right. Mess it up this time and there is no tomorrow to make amends. There are no do-overs. The MBA equivalent of chocolate cake today may not mean a week of salads and workouts, but a lifetime of them.

So the lesson here is our usual one: look forward, reason back. How will our future selves perceive the choices we are making today? Will they be happy? If not, then it's time to make different choices.

So how does any of this relate to the title of this post, Pregnant MBAs? When a woman learns she is pregnant, she will often alter her lifestyle, sometimes radically. After all, she's now responsible for her unborn baby as well as herself, so she eats better, stops smoking and drinking, exercises more, and so on. She wants her baby to have its best chance in the world. In a sense, every MBA is pregnant. You, too, are responsible for two---your present self and your future self. Outward thinking is nothing more than this awareness, so obvious to a pregnant woman, but submerged beneath the surface at all other times. Give your future self his or her best chance in the world.

Friday, October 24, 2014

(Not) Solving Social Dilemmas

The secret ingredient to solving social dilemmas is punishment. Without the ability to punish transgressors, good behavior must rely entirely on individual self-control in abstaining from the temptation to cheat and obtain a reward. Some individuals are up to this challenge: their consciences chasten them so strongly for dishonorable behavior that temptation poses no problem. But for most, it does. Indeed, a fundamental human weakness is to go for today's reward over tomorrow's, a failing that destroys most attempts at dieting, let alone social dilemmas.

Sufficient punishment fixes this problem by adding the force of extrinsic incentives to intrinsic ones. Given sufficient reinforcement, even more "pragmatic" consciences can be persuaded to do the right thing. Yet even when such punishments are available, a literal-minded view of game theory suggests that they must always be available: individuals must never perceive themselves as getting off scot-free. The usual version of this argument uses look forward, reason back (LFRB) reasoning. Suppose the game will end in period N. Then clearly bad behavior in that period is unpunishable, and hence everyone will behave badly. But this destroys punishment for bad behavior in the penultimate period, and so on. The pure logic of the situation implies that any game known to end at a fixed time, however far into the future, will see cooperation remorselessly break down right from the start.
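
The unravelling argument can be written out mechanically; here is a minimal sketch, with the game and payoffs abstracted away so that only the punishment logic remains:

def supported_cooperation(N):
    # cooperate[t] asks: can cooperation in period t be sustained by the
    # threat of a response in some later period?
    cooperate = [False] * (N + 1)  # index 0 unused; periods are 1..N
    for t in range(N, 0, -1):      # look forward, reason back
        punishment_available = any(cooperate[t + 1:])
        cooperate[t] = punishment_available  # no future leverage, no cooperation
    return cooperate[1:]

print(supported_cooperation(5))  # [False, False, False, False, False]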

But, while logically sound, this is a very silly prediction when the number of periods is long. Despite the logic of LFRB, bad behavior will be punished and good behavior rewarded with reciprocity in the early periods of the game, with cooperation breaking down only when the endgame is sufficiently close. Here again we see the friendly face of irrationality. Suppose there is some chance that others playing the game don't "get it." They instead think they are playing the infinitely repeated version of the game, in which cooperation can be sustained by threats of punishment, and play accordingly.

What is a rational soul to do? The answer is: when in Rome, do as the Romans do. In other words, pretending that you too don't get the game is a perfectly logical and sensible response, at least until some point deep in the game where the knowledge that punishment won't work becomes too hard to ignore. As with some of our other examples, only a seed of doubt about the rationality of others is needed, so long as the endgame is sufficiently far off in the future. Such a seed of doubt seems to me eminently more reasonable than the maintained assumption of the standard model, that everyone understands the game perfectly.

But suppose we're playing a short game consisting of, say, only 8 periods. Now the logic of LFRB has real force, and we would seem to be genuinely doomed. Running such a game in the lab reveals that things are not quite as bad as all that, but they are definitely pretty bad.

Or maybe not. Let's change the game a little. Suppose, at the end of each period, each player has a chance to punish one or more of the others. Punishment involves destroying some of their payoffs. But this punishment is not free; it costs the punishing individual something as well. This would seem of immense help, since the whole problem was that we lacked the ability to punish in the last period, and this wrecked the incentives in all the earlier periods. Now we no longer have this problem. If someone misbehaves in the last period, we punish them via value destruction and all is once again well with the world. But such a plan only works if the punishment is credible, if we can count on the punishment being delivered should an individual test societal resolve. There are two problems with credibility. First, there is a social dilemma in who delivers the punishment. While we all might agree that cheaters should be punished, each of us would rather that someone else deliver the punishment and hence take the hit to their own payoffs. But even if we sorted this out by agreeing on whose job it was to punish, we might still be in trouble. In the last period of the game, who would actually carry through with the punishment?

This is a version of our earlier problem. In the last period, there is no future reward for today's good behavior nor punishment for bad. Since the whole point of punishing is to obtain future good behavior, what is the point of punishing anyone in the last period of the game? Worse yet, not only is punishment pointless, but it is also costly. So why would anyone believe that misbehavior will be punished in the last period? By the same logic, there is no point in punishing in the penultimate period either and again the endgame casts its long shadow back to the first period. Irrationality seems to work less well as a way out of this particular logical bind. The endgame is simply too near.

Yet, remarkably, such schemes seem to work, at least in Western cultures. The economists Ernst Fehr, Simon Gaechter, and a number of co-authors have performed versions of this experiment in labs in the US and Europe. They found that such a scheme proved excellent at producing cooperation. Some individuals tested societal resolve early in the game and invariably received bloody noses for their insolence.
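
For concreteness, here is a minimal sketch of one round of such a game. The structure follows the public goods experiments described above, but the parameter values are illustrative, not Fehr and Gaechter's:

def round_payoffs(contributions, punishments, endowment=20, multiplier=1.6,
                  cost_to_punisher=1, cost_to_target=3):
    # Each player keeps the unspent endowment plus an equal share of the
    # multiplied common pot
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    payoffs = [endowment - c + share for c in contributions]
    # Punishment destroys value: it costs the punisher and hits the target harder
    for punisher, target in punishments:
        payoffs[punisher] -= cost_to_punisher
        payoffs[target] -= cost_to_target
    return payoffs

# Three cooperators and one free rider, who receives a bloody nose from all three:
print(round_payoffs([20, 20, 20, 0], punishments=[(0, 3), (1, 3), (2, 3)]))
# [23.0, 23.0, 23.0, 35.0]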

But why does it work? The answer, in this case, seems to be morality. While conscience is weak in resisting temptation, it is rather stronger when we have a chance to unsheathe the sword of justice to be used on others. That is, even though there was a personal cost to punishment, it seemed to be more than offset by a personal benefit in administering justice.

[A curious side note to this story: in Middle Eastern, Greek, and North African cultures, the sword of justice operated in the other direction---those who did not cheat were punished. That is, individuals cooperating in a prisoner's dilemma were hit with penalties from those who did cheat. This soon produced conformity in failing to solve social dilemmas.]

This solution is all well and good when the punishment can be surgically directed at transgressors, but suppose instead that there is collateral damage--to punish the guilty, some innocents must be punished as well. In work in progress with former Berkeley undergrad Seung-Keun Martinez, I investigate this possibility. We first consider the opposite extreme where, to punish one person, you must punish everyone. One might think that such a scheme would be either marginally useful or utterly useless. In fact, it is neither. Compared to a world in which no punishment is allowed, punishment of this sort actually reduces cooperation. This comes not simply from the value destroyed by punishment (though there is some of that), but rather from a reaction to the possibility of such punishment. When individuals fear suffering punishment for doing the "right thing," they seem to become despondent and to cooperate less than those under no such threat.

We are now in the process of investigating intermediate levels of "collateral damage" from punishment to determine where the "tipping point" might lie.

Tuesday, October 21, 2014

The Costs of Coordination

Large organizations face a fundamental dilemma. They want their employees to coordinate on doing the right thing, on the firm's strategy, but they also want them to coordinate with one another. The two may differ in importance depending on the organization. For some, coordination with one another may be of little consequence while coordination with the strategy is crucial--think of creative industries where artists labor alone to achieve some vision consistent with the roadmap of the company. For others, employees working in tandem is the important thing--think of iPhone production lines in Shenzhen.

For game theorists and economists, the first trouble that comes to mind is what are known as agency problems: employees might not have the right incentives, so their actions diverge from the firm's interests and things go wrong. Let us set these possibilities aside and imagine that somehow these issues have been solved; employees have nothing but the company's interest at heart. This would suggest that all our problems are solved and all is well with the world. Or is it? The world is full of miscommunication. While I try my best to articulate various ideas as clearly and carefully as I can, the sad fact is that I am doomed to failure, as is most anyone else who cares to do likewise.

For CEOs and other leaders, communicating their vision effectively is central to their quality of leadership.

What can game theory tell us about the problem of imperfect communication of a leader's vision on the firm's prospects, even when incentives are well-aligned? Unlike our sad stories about understanding persuasion, things are a bit more fruitful here.

As usual, we make a simple model to describe the complex world of a leader seeking to impart her vision. To be precise, suppose that the leader's vision can be thought of as a normally distributed random variable with mean 0 and known variance, equal to 1 say. The idea here is that, under average conditions, a leader has a standard, long-run vision, which we normalize at zero. However, the business climate changes, which requires some alteration of this vision: new rivals emerge, acquisitions are made, employees innovate into new business areas. Our normal distribution represents these alterations.

Employees are perfectly aware of the long-run vision. It's part of the firm's core DNA. They also know that it changes with conditions, but don't know precisely how it changes. Indeed, understanding how to translate changes in the business landscape into vision is a large part of the leader's value.

But knowing the right vision is only half the battle. Our leader must also articulate it. So our CEO makes a speech, or composes a set of leadership principles, or does any number of things to express the current vision. All of this is transmitted to employees. Employees, however, only imperfectly understand their leader's wishes. Instead of learning the vision exactly, each gets a signal of the vision, which equals the truth plus a standard normal error term. 

If this sounds like a statistics problem so far, that's because it is. Indeed, we'll tear a page out of our notebook on regressions to assess what the leader would like the employees to do in a moment. Meanwhile, note that employees know that they understand the vision only imperfectly, so they form posteriors about the true vision. These consist of placing some weight on the prior--the long-run vision--with the rest going to the signal. The weight on the signal is (optimally) the ratio of the variance of the vision to the sum of the variance of the vision and the variance of the error term, i.e. a version of the signal-to-noise ratio. In our example, the weight is 50-50.
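
To make that weight concrete, here is a minimal sketch of the normal-normal updating rule, using the unit variances assumed in the text:

def signal_weight(var_vision=1.0, var_noise=1.0):
    # Posterior mean = weight * signal + (1 - weight) * prior mean of zero
    return var_vision / (var_vision + var_noise)

print(signal_weight())               # 0.5 -- the 50-50 case in the text
print(signal_weight(var_noise=3.0))  # 0.25 -- a noisier, less articulate leader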

Employees then choose an action to undertake. We will assume there are many employees and that actions are chosen simultaneously. Of course, this is not really the case, but it simulates the idea that it is a big organization and the actions of others are not readily observed. 

How are these actions chosen? Suppose that each employee's payoff depends on matching the strategy, via a quadratic loss function with 50% weight (importance), and on matching each other, via another quadratic loss function with the complementary weight, also 50%. To be precise, each employee wishes to match the average action of the others. Employees seek to maximize their payoffs.

Now, this would seem the most ordinary of coordination problems--everyone has the same goal and, on average, gets the same signal. Better yet, the signal is unbiased and equal to the truth. But let's see how things play out. 

Before proceeding, let's simplify things still further. Suppose that, from the perspective of the company as a whole, mismatched actions are of no consequence. What matters is simply the coordination of action to strategy. To reconcile this with the individual incentives above, suppose that the payoffs from miscoordination among employees is normalized whereby deviations among employees in terms of coordination add up to zero when summed over the entire firm. (It's not hard to do this, but the exact math is beyond the scope of a blog.)

So what would the firm desire of its employees? The answer is simple and is, in fact, equivalent to a regression. The firm wishes to minimize the sum of squared errors between an employee's action and the true vision. The only piece of data an employee has is her signal. Simple statistics will confirm that the "regression coefficient" on this piece of information is equal to 1/2, i.e. the signal-to-noise statistic from above. That is, under ideal circumstances, each employee will merely do her best to conform her action to the expected value of the vision conditional on her signal.

So far, so good, but what will employees actually do? Also from basic statistics, it is apparent that the best choice for an employee is to select an action that places half the weight on the expected vision conditional on her signal, s, and the other half on the expectation of the other employees' actions. The expected vision, as we saw above, is nothing more than half the signal. The latter is more complicated, but becomes much simpler if we assume the firm is large--indeed, so large that we can compute the average action using the law of large numbers, which tells us that the average of others' signals converges to the underlying vision, whose expectation, given one's own signal, is again just half that signal.

Finally, let us suppose, in equilibrium, an individual chooses an action equal to w times her own signal, where w is a mnemonic for weight. Thus, the equilibrium equation becomes:

w s = 0.5 x (s/2) + 0.5 x w (s/2)
where x denotes the "times" symbol. The left-hand side is the equilibrium action while the right-hand side is the weighted average of the expected value of the vision conditional on the signal and the expected average of others' actions conditional on the signal. Since all employees are alike, we suppose they all play the same strategy.

Solving this equation yields the equilibrium weight:
w* = 1/3

or, equivalently, individuals place a weight equal to one-third on their signal, with the remaining two-thirds on the long-term vision (or, more formally, their prior belief) of zero.
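
As a check on this algebra, here is a minimal sketch that iterates the best-response mapping and watches it converge to the equilibrium weight (R is the weight on matching the vision and f the signal-to-noise weight, both 0.5 in the text's example):

R, f = 0.5, 0.5
w = f  # start from the statistically optimal weight
for _ in range(50):
    # Best response when everyone else places weight w on their signal
    w = R * f + (1 - R) * f * w
print(round(w, 6))  # 0.333333 -- one-third, as derived above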

The result, then, is shockingly bad. The weak communication skills of the leader, combined with the general noise of the business environment, mean that, optimally, an employee should give weight of only 1/2 to her signal. But the strategic interaction of employees trying to coordinate with one another creates an "echo chamber" in which employees place even less weight, only one-third, on their signals. As a consequence, the company suffers.

Intuitively, since employees seek to coordinate with one another, their initial conservatism, placing weight one-half on the long-run vision, creates a focal point for coordinating actions. Thus, the best response when all other employees place between one-third and one-half weight on their signals is to place a bit less weight on one's own signal. This creates a kind of "race to the bottom" that ends only when everyone places one-third weight on the signal. In short, coordination produces conservatism in the firm. Put differently, by encouraging coordination amongst employees, an organization builds in a type of cultural conservatism that makes it resistant to change. This does not mean that coordination, or incentives for coordination, are per se bad, only that the inertial incentives created thereby are not much appreciated--or even understood.

Is this something real or merely the fanciful calculations of a game theorist? My own experience suggests that the little model captures something true of the world. While I was working as a research scientist at Yahoo, four CEOs came and went. Each had a markedly different vision. Yet the needs of coordination at Yahoo were such that, despite changes in CEO, new articulations of vision, and so on, the organization was surprisingly little affected. Employees, needing to work in harness with others, willfully discounted the new vision of an incoming CEO. The model seems to capture some, but certainly not all, of these forces. For instance, part of the reason for discounting, absent from the model, was that employees grew skeptical of the likely tenure of any CEO.

For the record, if we let R denote the weight on matching vision and f the signal-noise statistic from above, the general formula is: 

w* = R f/(1 - (1 - R) f)

whereas the optimal weight is f. It may be easily verified that w* <  f. Also, from this equation, it is clear, and intuitive, that the problem is lessened the smaller the importance of coordinating with other employees (i.e. the larger is R) and the more articulate the leader (i.e. the larger is f).

Some academic housekeeping: the model described above is a version of one due to Morris and Shin, American Economic Review, 2002. They give a rather different interpretation to the model, though. Moreover, their version of the model assumes that individuals have no prior beliefs. Under this (peculiar) assumption, the gap between the equilibrium weight on signals and the statistically optimal weight disappears, but its absence is purely an artifact of their setup. The observations in this blog piece come from theory and experiments I'm currently writing up in a working paper with Don Dale at Muhlenberg College.

Thursday, October 16, 2014

Inspiring Words

Leadership is, to a great extent, the gentle art of persuasion. Leaders inspire others to follow them, to work for them, sometimes even to give up their own lives for them. How do they do it? Partially by example, to be sure, but even here persuasion has a role to play. When we say that Jeff Bezos lives the leadership principles articulated and promulgated at Amazon, we make the valid point that individuals credit others for how they behave, but conveniently ignore the fact that it was Bezos who articulated and promulgated those principles in the first place.

One of the most striking examples of leadership purely by the powers of persuasion was the rise of Barack Obama. Obama, for all his subsequent faults, was matchless in using words to inspire many thousands of young people who had never even voted before to give him money, work for his organization, and persuade others to vote for him. Even more remarkably, he duplicated the feat four years later despite, by most accounts, having spent much of the intervening period not leading by example. This no doubt overstates the power of his oratory, for he had a remarkably savvy organization helping him vacuum up all that money and effort, but others less gifted have had equally efficient teams, yet achieved nothing like Obama's success.

What can game theory tell us about the power of leaders to persuade? The answer, if we are to be entirely honest, is surprisingly little. A large part of the problem is that communication in the world of game theory is almost entirely informational, but persuasion, while it will certainly draw upon and convey some information, taps into something much deeper and less purely transactional than what one might learn from hearing the local weather forecast. This is not to say that information-centric communication, or persuasion, is uninteresting; rather, it somewhat misses the boat if we truly wish to understand why some individuals are hailed as visionaries while others, offering the same facts and conclusions, are not.

To get the flavor for the bloodless world of communications viewed through the lens of game theory, consider the following problem. A political leader wants to convince a supporter/follower to perform a certain action. The right action depends on something called a state variable, which you might think of as a shorthand description for a set of factors, political, economic, cultural, etc., that influence what a reasonable person would conclude is the correct action. To keep things simple, suppose that one state, which we will call low, represents a low political threat environment. Some action is needed to secure victory, but not too much. The other state, which we will call high threat, requires frenzied activity such as massive calling campaigns to get out the vote, and so on.

The follower wants to do the "right thing" for the leader, but knows little about political threats, and so on. Knowing nothing about the state, our follower will elect some intermediate range of activity, imperfect for either state but somewhat helpful in both.

Now for the rub or, as we in the profession write, the "central tension of the model." The leader too wants the follower to do the right thing, but prefers that she do more political activity in either state. The degree of difference in views about how much activity to perform in each state represents the conflict between the two. Our leader's job, then, is to inspire his followers to do more than they otherwise would, but this will prove difficult since, in game theory land, the only trump card the leader holds is his knowledge of the state.

So, to inspire his supporters, our leader comes to town and makes a speech attempting to rally them to, in the leader's eyes, the right amount of activity. How does this speech go? What should our leader say? The answer, it turns out, depends on several factors, none of which feel (to me at least) very much like leadership.

Scenario #1: Free Speech
Suppose that our leader is free to say whatever he likes. He can lie about the state, exaggerating the political threat when it is, in fact, low or do the reverse, reassuring followers that there is little to worry about. Or something in between, saying that he's not sure. Or our leader can stonewall, give his standard stump speech, shake hands and kiss babies, Purell his hands and lips afterward, and go home.

So what does he do? To answer this question, we need to make certain assumptions about what, exactly, the followers know. Suppose they know that the leader indeed knows the state and, importantly, they also know that the leader wants them to do more of the activity in each state than they themselves prefer. In the happy scenario, the leader only wants the followers to do a little more in each state, so he informs them about the state truthfully and then harangues them to "exceed themselves" or to "go beyond" or something like that. He gets his applause and leaves, satisfied at a good night's work.

Game theory, however, offers the exceptionally dreary conclusion that, no matter how powerful the words of inspiration, no matter that our leader is a Shakespeare or a Churchill, the followers do precisely what they had initially planned to do in each state. They are grateful for the information, but they can hardly be said to be inspired. Ironically, this situation is, in fact, the best our leader can hope for.

Let's rerun the speech, but now imagine that the leader's vaunting ambitions create a vast gulf between his preferred activity level in each state and the followers' own. So our leader steps up to the microphone before the hushed crowd and proceeds to speak of crisis--the threat level is high, the stakes are huge, and it's all up to you, the supporters, to make the difference. This address, Shakespearean in its majestic, soaring phrases, sends chills down the spines of the audience. The crowd roars. They will do it. They will rise to the challenge. They will be the difference-makers. No activity is too much. Our leader, drenched in sweat from the effort, steps down from the lectern and is congratulated for his remarkably moving address. The lights in the auditorium go down, and everyone goes home.

When his supporters get up the next morning, they do...exactly what they would have done if the leader had never shown up in the first place. In game theory land, people are cynics. While the audience may have been moved in the moment, on reflection they realize that the leader makes this same speech everywhere, to all his followers, whether the state is high or low. The talk, for all its pageantry, rings hollow--full of sound and fury, signifying nothing. Why such an uncharitable view of the leader? The answer is that his own aspirations get in the way. Since he wants a high level of action regardless of the state, the speech lacks all credibility and, since those living in game theory land are neither simpletons nor dupes, it is roundly and universally disbelieved.

As a logical analysis, the above is impeccable. As a description of leadership and persuasion, it seems to miss the boat completely. But, sadly, this analysis, or something similar, quite genuinely represents the state of the art, the research frontier, if you will. Is it fixable? Yes, in a way. We can add fools who believe everything the leader says into the mix. We can add some sort of inspiration variable that magically changes tastes so that followers work harder. But none of it really gets to the heart of what makes some leaders persuasive and others not. Indeed, we learn nothing if we simply assume that leader A can change minds and leader B cannot. The whole point of using our tools is to get at the deeper, and ultimately more interesting and important, question of why some leaders are persuasive.

Scenario #2: Factual Speech
Perhaps we've accorded too much freedom to our leader. After all, exaggerations, dissembling, misrepresentation, or any of the myriad polite words we have for lying can get a politician into terrible trouble. Claiming that the world is hanging by a thread when, in fact, every poll shows that you're 20 points ahead catches up with most leaders eventually. So let's return to our setting, precisely as before, but with the added restriction that our leader cannot simply make up things that are not true. In academic terms, this moves us out of the world of "cheap talk" and into the world of "persuasion" proper. This nomenclature, by the way, has a lot of problems. First, talk is no less cheap, in the sense of being costless to the leader, when we add the no-lying restriction. Second, why the heck is it "persuasion" when we restrict someone from lying outright? Lies can be an important tool in the arsenal of a persuasive individual. Indeed, criminals engaging in confidence schemes are the ultimate persuaders, but would be entirely crippled were they bound by the no-lying restriction. But I digress.

So let's rewind once again and send our leader back to the lectern, but with the following restriction--the heart of his speech can be either the truth or a stonewall, where he says nothing whatever about the state. One might imagine that this changes little. After all, when conflict was low, our leader did not wish to lie even when he could, so the restriction matters not a whit. When conflict was high, our leader wanted to lie about the state, but no one believed him anyway, so the effect is identical to stonewalling. Indeed, our leader in scenario #1 would have been quite happy to make the stonewalling speech instead of what I laid out.

In the case where conflict is low, the above supposition is exactly correct. Our leader steps to the lectern and offers a fact-laden speech truthfully revealing the state. But in the second case, this is wrong. Indeed, remarkably and perhaps absurdly, game theory offers the startling prediction that, no matter how bad the conflict between leader and follower, the leader always makes the truthful speech!

Why in blazes would our leader do that? Let's start with the situation where the threat is high. Here, the leader can do no better than to report the truth. He'd like more effort from his followers to be sure, but there is simply no way to motivate them to work any harder than by revealing the high state. What about the low state? Surely our leader will stonewall here? He might, but it will do no good since, knowing that the leader would have announced high were the state indeed high, our followers treat the stonewall speech as, in effect, a report that the state is low. And they act accordingly. That being the case, the leader might as well report honestly and at least gain the credit, however small, for being straight with followers.

Now, one may suspect that this logic takes too much advantage of the fact that there are exactly two states. What if there were three, or twenty, or a thousand? It turns out that none of it matters, because of something called unravelling. Here's the argument: suppose that there are twenty states in which the leader stonewalls while revealing in the rest. Then, in the highest of these twenty states, he'd be better off revealing than stonewalling since, by stonewalling, followers assume that the average state is lower than the highest state. Repeat this argument ad nauseam to obtain the truth-telling result. In my own work on the topic, I showed how this argument can be extended to virtually any configuration of preferences between leader and follower.
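
Here is a minimal sketch of the unravelling argument in code, assuming twenty states and a leader who always prefers his followers to believe the state is higher:

def stonewalling_states(states):
    silent = set(states)
    while silent:
        belief_if_silent = sum(silent) / len(silent)
        # Any silent state above the silence-induced belief prefers to reveal
        revealers = {s for s in silent if s > belief_if_silent}
        if not revealers:
            break
        silent -= revealers
    return silent

print(stonewalling_states(range(1, 21)))  # {1} -- and silence then reveals state 1 anyway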

The problem is that the conclusion seems completely absurd. That the leader will always tell the truth, irrespective of the conflict between leader and follower, sounds very much unlike the world in which I live. Again, this problem is fixable, but the main fix is even more bizarre than the result. It turns out that the key to credible stonewalling is...drumroll please...stupidity! Or, more precisely, the possibility of stupidity. The idea here is that, if the leader might possibly not know the state, then stonewalling becomes believable. But this hardly seems like a satisfying resolution.

So Where Does This Leave Us?
This post, I'm afraid, is a downer. Game theory does lots of things well, but leadership, sadly, is not one of them. This has not stopped game theorists from trying, and perhaps making some headway. There is a clever paper by Dewan and Myatt looking at leadership through communication. In their model, one of the tradeoffs is between stupidity and incomprehensibility. They ask the following question (that only a game theorist would ask) about leaders: is it better to be smart but incomprehensible, or stupid but clear? The answer seems to be that it depends on the importance of doing the right thing versus doing the same thing. But, like all work in the field, the idea that leaders, with their words, can spark passion and devotion is entirely missing.

Sometimes I despair about my love for game theory in a place devoted to, somehow, creating innovative leaders. I can, however, take some solace that we are no better at articulating how, exactly, that transformation takes place than we are in understanding leadership through game theory.

Thursday, October 9, 2014

The Limits of Experimentation and Big Data

Experimentation represents a critical tool for business decision making, especially in online environments. It is said that Google runs about a quarter of a million experiments on its users every day. Sophisticated online firms will have a built-in toolset allowing managers to quickly and easily code up the experiment they wish to run (within certain limits), and even spit out standard analyses of the results. As a result of these experiments, firms continually innovate, mostly in small ways, to increase engagement, conversion rates, items purchased, and so on.

The basic idea is simple. Firms with millions of visitors each day choose a small percentage of these to be (unwitting) subjects for the experiment they wish to run. These individuals might be picked at random from all individuals or, more likely, selected on the basis of some predetermined characteristics hypothesized to predict response to the experimental treatment. In the simplest design, half of the selected subjects, chosen at random (or at random within each stratum), receive the control, the experimental baseline, which will most often be simply business as usual. The other half receive the treatment, which could be an alteration of the look and feel of the site, but might also be things like exposure to promotions, changed shipping terms, special offers of after-sales service, or a host of other possibilities. Online sites rarely experiment with price, at least in this treatment-control way, owing to the bad publicity suffered by Amazon in the late 90s and early 00s from such experiments.

Following this, the data is analyzed by comparing various metrics under treatment and control. These might be things like the duration of engagement during the session in which the treatment occurred, or the frequency or amount of sales during a session, which are fairly easy to measure. They might also be things like long-term loyalty or other time-series aspects of consumer behavior that are a bit more delicate. The crudest tests are nothing more than two-sample t-tests with equal variances, but the analysis can be far more sophisticated, involving complicated regression structures containing many additional correlates besides the experimental treatment.

When the experiment indicates that the treatment is successful (or at least more successful than unsuccessful), these innovations are often adopted and incorporated into the user experience. Mostly, such innovations are small UX things like the color of the background or the size of the fonts used, but they are occasionally big things as well, like the amount of space to be devoted to advertising or even what information the user sees.

After all this time and all the successes that have been obtained, were we to add up the amounts of improvement in various metrics from all the experiments, we would conclude that consumers are spending in excess of 24 hours per day engaging with certain sites and that sales to others will exceed global wealth by a substantial amount. Obviously, the experiments, no matter how well done, are missing something important.

The answer, of course, is game theory, or at least the consideration of strategic responses by rivals, in assessing the effect of certain innovations.

At first blush, this answer seems odd and self-serving (the latter part is correct) in that I made no mention of other firms in any of the above. The experiments were purely about the relationship between a firm and its consumers/browsers/users/visitors, etc. Since there are zillions of these users, and since they are very unlikely to coordinate on their own, there seems little scope for game theory at all. Indeed, these problems look like classic decision problems. But while rivals are not involved in any of this directly, they are present indirectly and strategically, by affecting the next best use of a consumer's time or money, and changes to their sites to improve engagement, lift, revenue, and so on will be reflected in our own relationship with customers.

To understand this idea, it helps to get inside the mind of the consumer. When visiting site X, a consumer chooses X over some alternative Z. Perhaps the choice is conscious--the consumer has tried both X and Z, knows their features, and has found X to be better. Perhaps the choice is unconscious. The point is simply that the consumer has a choice. Let us now imagine that site X is experimenting between two user experiences, x and x', while firm Z presently offers experience z. The consumer's action, y, then depends not just on x but also on z, or at least on the perception of z. Thus, we predict some relationship
y = a + b x + c z + error

when presented with the control, and a similar relationship, but with x' replacing x, under the treatment. If we then regress y on x, we suffer from omitted variable bias: z should have been in the regression but was not. However, so long as z is uncorrelated with the treatment (and there is no reason it should be), our regression coefficient on the treatment dummy will correctly tell us the change in y from the change in x, which is, of course, precisely what we want to know.
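
To see why random assignment keeps this estimate clean even with z omitted, here is a minimal simulation; every number in it is made up for illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
treated = rng.integers(0, 2, size=n)      # random assignment to x or x'
x_val = np.where(treated == 1, 1.2, 1.0)  # x' = 1.2 under treatment, x = 1.0 under control
b, c = 3.0, -1.5                          # c < 0: a better rival offering pulls users away
z = 0.5                                   # rival's strategy, unchanged during the test
y = 2.0 + b * x_val + c * z + rng.normal(size=n)

lift = y[treated == 1].mean() - y[treated == 0].mean()
print(round(lift, 2))  # about 0.6 = b * (x' - x), exactly what the experiment should find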

Thus, buoyed by our experiment, we confidently implement x' since the statistics tell us it will raise y by 15% (say). 

But notice that this analysis is the statistical equivalent of inward thinking. Despite its scientific garb, it is no more valid an analysis than a strategic analysis hypothesizing that the rival will make no changes to its strategy regardless of what we might do. When we think about large decisions, like mergers, such a hypothesis is obviously silly. If Walmart acquired eBay tomorrow, no one would claim that Amazon would have no reaction whatever, that it would keep doing what it had been doing. It would, of course, react, and, were we representing Walmart, we would want to take that reaction into account when deciding how much to pay for eBay. 

But it is no less silly to think that a major business innovation undertaken by X will lead to no response from rivals either. To see the problem, imagine we were interested in long run consumer behavior in response to innovation x'. Our experiment tells us the effect of such a change, conditional on the rivals' strategies, but says nothing about the long-term effect once our rivals respond. To follow through with our example, suppose that switching to x' on a real rather than experimental basis will trigger a rival response that changes z to z'. Then the correct measure of the effect of our innovation on y is
Change in y = b(x' - x) + c(z' - z)

The expression divides readily into two terms. The first term represents the inward-thinking effect: in a world where others are not strategic, this measures the effect of the change in x. The second term represents the outward-thinking, strategic effect: the rival's reaction to the changed relationship that firm X has with its customers. No experiment can get at this term, no matter how large the dataset. This failure is not a matter of insufficient power or a lack of metrics to measure y or even z; it is the problem of identifying a counterfactual, z', that will only come to pass if X adopts the innovation.
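
Plugging in made-up numbers makes the point starkly; the rival response z' here is entirely hypothetical:

b, c = 3.0, -1.5            # as in the simulation above; c < 0 means rival quality hurts us
x, x_new = 1.0, 1.2         # our innovation
z, z_new = 0.5, 0.9         # the rival upgrades its site in response
inward = b * (x_new - x)    # what the experiment measures: 0.6
outward = c * (z_new - z)   # the strategic term no experiment can see: -0.6
print(inward + outward)     # 0.0 -- the measured lift evaporates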

Now, all is not lost. There are many strategies one can use to forecast z', but one needs to be open to things that the data can never tell us, like the effect of a hypothetical rival reaction to a hypothetical innovation when viewed through the lens of consumer choice. This is not a problem that statistics or machine learning can ever solve. Game theory is not simply the best analysis for such situations; it is the only analysis available.

Thursday, October 2, 2014

The Folk Theorem

We have talked a fair bit about coordination games and the forces shaping just what happens to be coordinated upon. In that light, it's important to realize how the presence of dynamic considerations, i.e. repeated game settings, fundamentally transforms essentially all games into coordination games. In our usual example of how to take advantage of dynamics to build relational contracts (self-enforcing repeated game equilibria), we studied a simplified version of Bertrand competition. Firms could price high or low; low was a dominant strategy in the one-shot game; yet, with the right implicit contract, firms can maintain high prices. The key, if you'll recall, was to balance a sufficient future punishment against the temptation to defect.

While coordinating on full cooperation seems the obvious course of action, if available, it is far from the only equilibrium of this game. For instance, it is fairly obvious that coordinating on the low price in every period is also an equilibrium--a really simple one, it turns out. Unlike the high-price equilibrium with its "nice" and "fight" feedback loops, the no-cooperation equilibrium requires no such machinery. It consists of simply choosing a low price in every period regardless of past actions, and that's it.

There are many more equilibria in this game. For instance, if maintaining high prices yields each player $3 per period whereas fighting in every period yields $2 per period, then payoffs of any amount in between can also be sustained merely by interweaving the two. Suppose we wished to support payoffs that are high 5/6 of the time and low 1/6. In that case, we need only follow our high-price strategy (including its punishments) whenever the period is not a multiple of 6, and follow the low-price strategy in periods that are a multiple of 6. Any fraction of high and low payoffs may be similarly supported.
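
A minimal check of the interweaving arithmetic, using the $3 and $2 per-period payoffs above:

periods = range(1, 601)
payoffs = [2 if t % 6 == 0 else 3 for t in periods]
print(sum(payoffs) / len(payoffs))  # 2.8333... = (5/6) * 3 + (1/6) * 2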

This observation, that just about any set of per-period payoffs can be supported as an equilibrium, is known as the Folk Theorem. It is so named since, like many folk tales, no one is quite sure who first had the idea, but a Nobel-winning game theorist, Robert Aumann, was the first to write the argument down (in a much more general and abstract way than my simple sketch above). Notice what the folk theorem implies in terms of our list of archetypal games:

All repeated games are coordination games
The theorem tells us that, regardless of the original form of the game, be it prisoner's dilemma, hawk-dove, matching pennies, and so on, its repeated version amounts to a pure coordination game.