Tuesday, October 21, 2014

The Costs of Coordination

Large organizations face a fundamental dilemma. They want their employees to coordinate on doing the right thing--following the firm's strategy--but they also want them to coordinate with one another. The relative importance of the two differs across organizations. For some, coordination with one another may be of little consequence while coordination on strategy is crucial--think of creative industries where artists labor alone to achieve some vision consistent with the company's roadmap. For others, employees working in tandem is the important thing--think of iPhone production lines in Shenzhen.

For game theorists and economists, the first trouble that comes to mind is what are known as agency problems: employees might not have the right incentives to act in concert with the firm, and so their actions diverge and things go wrong. Let us set aside these possibilities and imagine that somehow these issues have been solved. Employees have nothing but the company's interest at heart. This would suggest that all of our problems are solved and things are well with the world. Or are they? The world is full of miscommunication. While I try my best to articulate various ideas as clearly and carefully as I can, the sad fact is that I am doomed to failure, as is most anyone else caring to do likewise.

For CEOs and other leaders, communicating their vision effectively is central to their quality of leadership.

What can game theory tell us about the problem of imperfect communication of a leader's vision on the firm's prospects, even when incentives are well-aligned? Unlike our sad stories about understanding persuasion, things are a bit more fruitful here.

As usual, we make a simple model to describe the complex world of a leader seeking to impart her vision. To be precise, suppose that the leader's vision can be thought of as a normally distributed random variable with mean equal to 0 and known variance equal to 1, say. The idea here is that, under average conditions, a leader has a standard, long-run vision, which we will normalize at zero. However, the business climate changes, which requires some alteration of this vision. New rivals emerge, acquisitions are made, employees innovate into new business areas. Our normal distribution represents these alterations.

Employees are perfectly aware of the long-run vision. It's part of the firm's core DNA. They also know that it changes with conditions, but don't know precisely how it changes. Indeed, understanding how to translate changes in the business landscape into vision is a large part of the leader's value.

But knowing the right vision is only half the battle. Our leader must also articulate it. So our CEO makes a speech, or composes a set of leadership principles, or does any number of things to express the current vision. All of this is transmitted to employees. Employees, however, only imperfectly understand their leader's wishes. Instead of learning the vision exactly, each gets a signal of the vision, which equals the truth plus a standard normal error term. 

If this sounds like a statistics problem so far, that's because it is. Indeed, we'll tear a page out of our notebook on regressions to assess, in a moment, what the leader would like the employees to do. Meanwhile, note that employees know that they understand the vision only imperfectly, so they form posteriors about the true vision. These consist of placing some weight on the prior--the long-run vision--with the rest going to the signal. The weight on the signal (optimally) is the ratio of the variance of the vision to the sum of the variances of the vision and the error term, i.e. a version of the signal-to-noise ratio. In our example, the split is 50-50.
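
For readers who like to verify such things, here is a minimal Python sketch of the updating rule (the variable names are mine, not part of the model):

var_vision, var_noise = 1.0, 1.0   # both normalized to 1 in our example

# Weight on the signal: variance of the vision over total variance.
f = var_vision / (var_vision + var_noise)

def posterior_mean(signal, prior_mean=0.0):
    # Posterior expectation of the vision: a weighted average of the
    # signal and the long-run (prior) vision.
    return f * signal + (1 - f) * prior_mean

print(f)                    # 0.5: the 50-50 split described above
print(posterior_mean(1.0))  # 0.5: half the signal, half the prior of zero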

Employees then choose an action to undertake. We will assume there are many employees and that actions are chosen simultaneously. Of course, this is not really the case, but it simulates the idea that it is a big organization and the actions of others are not readily observed. 

How are these actions chosen? Suppose that each employee's payoff depends on matching the strategy, via a quadratic loss function with 50% weight (importance), and on matching each other, via another quadratic loss function with the complementary weight, also 50%. To be precise, each employee wishes to match the average action of the others. Employees seek to maximize their payoffs.
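
In code, one employee's payoff might be sketched like so (a hypothetical rendering of the two loss functions, with R denoting the weight on matching the strategy):

def payoff(action, vision, avg_action, R=0.5):
    # Quadratic loss from missing the vision (weight R) plus quadratic
    # loss from missing the average action of the others (weight 1 - R).
    return -R * (action - vision) ** 2 - (1 - R) * (action - avg_action) ** 2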

Now, this would seem the most ordinary of coordination problems--everyone has the same goal and, on average, gets the same signal. Better yet, the signal is unbiased: on average, it equals the truth. But let's see how things play out.

Before proceeding, let's simplify things still further. Suppose that, from the perspective of the company as a whole, mismatched actions are of no consequence. What matters is simply the coordination of action to strategy. To reconcile this with the individual incentives above, suppose that the payoffs from miscoordination among employees are normalized so that deviations among employees, summed over the entire firm, add up to zero. (It's not hard to do this, but the exact math is beyond the scope of a blog.)

So what would the firm desire of its employees? The answer is simple and is, in fact, equivalent to a regression. The firm wishes to minimize the expected squared error between an employee's action and the true vision. The only piece of data an employee has is her signal. Simple statistics will confirm that the "regression coefficient" on this piece of information is equal to 1/2, i.e. the signal-to-noise statistic from above. That is, under ideal circumstances, each employee would simply conform her action to the expected value of the vision conditional on her signal.
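
A quick simulation makes the regression analogy concrete. The sketch below (assuming numpy, with names of my own choosing) regresses the true vision on the signal and recovers a coefficient of about 1/2:

import numpy as np

rng = np.random.default_rng(1)
vision = rng.normal(size=1_000_000)             # visions: mean 0, variance 1
signal = vision + rng.normal(size=vision.size)  # signals: truth plus noise

# OLS slope of vision on signal = Cov(vision, signal) / Var(signal).
slope = np.cov(vision, signal)[0, 1] / np.var(signal)
print(slope)  # approximately 0.5, the signal-to-noise statistic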

So far, so good, but what will employees actually do? Also from basic statistics, it is apparent that the best choice for an employee is to select an action that places half the weight on the expected vision conditional on her signal, s, and the other half on the expectation of the other employees' average action. The expected vision, as we saw above, is nothing more than half the signal. The latter is more complicated, but becomes much simpler if we assume the firm is large--indeed, so large that we can pin down the average action using the law of large numbers: the average of the others' signals converges to the underlying vision, whose expectation, as we saw, is just half of one's own signal.

Finally, let us suppose that, in equilibrium, an individual chooses an action equal to w times her own signal, where w is a mnemonic for weight. The equilibrium equation then becomes:

w s = 0.5 × (s/2) + 0.5 × w × (s/2)

The left-hand side is the equilibrium action while the right-hand side is the weighted average of the expected vision conditional on the signal and the expected average action of the others conditional on the signal. Since all employees are alike, we suppose they all play the same strategy.

Solving this equation yields the equilibrium weight:
w* = 1/3

or, equivalently, individuals place a weight equal to one-third on their signal, with the remaining two-thirds weight on the long-run vision (or, more formally, on their prior belief), which equals zero.
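
If you'd rather let a computer do the algebra, a few lines of Python with sympy (a sketch; the equation is exactly the one displayed above) confirm the solution:

from sympy import Rational, solve, symbols

w, s = symbols('w s')
# w*s = 0.5*(s/2) + 0.5*w*(s/2), rearranged to equal zero.
equilibrium = w * s - (Rational(1, 2) * (s / 2) + Rational(1, 2) * w * (s / 2))
print(solve(equilibrium, w))  # [1/3]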

The result, then, is shockingly bad. The weak communication skills of the leader, combined with the general noise of the business environment, mean that, optimally, an employee should place weight of only 1/2 on her signal. But the strategic interaction of employees trying to coordinate with one another creates an "echo chamber" wherein employees place even less weight, only one-third, on their signals. As a consequence, the company suffers.

Intuitively, since employees seek to coordinate with one another, their initial conservatism--placing weight one-half on the long-run vision--creates a focal point for coordinating actions. Thus, a best response when all other employees are placing between one-third and one-half weight on their signals is to place a bit less weight on one's own signal. This creates a kind of "race to the bottom" that ends only when everyone places one-third weight on the signal. In short, coordination produces conservatism in the firm. Put differently, by encouraging coordination amongst employees, any organization builds in a type of cultural conservatism that makes the organization resistant to change. This does not mean that coordination, or incentives for coordination, are per se bad, only that the inertial incentives thereby created are not much appreciated--or even understood.
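
The race to the bottom can be seen directly by iterating best responses, as in the following sketch (starting everyone at the statistically optimal weight of one-half):

R, f = 0.5, 0.5   # weight on matching the vision; signal-to-noise statistic
w = f             # start at the non-strategic optimum of one-half
for _ in range(25):
    # Best response when everyone else places weight w on their signals:
    # weight R on E[vision | s] = f*s and weight 1 - R on the expected
    # average action w*f*s, giving a new weight of R*f + (1 - R)*f*w.
    w = R * f + (1 - R) * f * w
print(w)          # converges to 1/3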

Is this something real or merely the fanciful calculations of a game theorist? My own experience suggests that the little model captures something true of the world. During my time as a research scientist at Yahoo, four CEOs came and went. Each had a markedly different vision. Yet the needs of coordination at Yahoo were such that, despite changes in CEO, new articulations of vision, and so on, the organization was surprisingly little affected. Employees, needing to work in harness with others, willfully discounted the new vision of an incoming CEO. The model seems to capture some, but certainly not all, of these forces. For instance, part of the reason for discounting, absent from the model, was that employees grew skeptical of the likely tenure of any CEO.

For the record, if we let R denote the weight on matching the vision and f the signal-to-noise statistic from above, the general formula is:

w* = R f/(1 - (1 - R) f)

whereas the optimal weight is f. It may easily be verified that w* < f. Also, from this equation, it is clear, and intuitive, that the problem is lessened the smaller the importance of coordinating with other employees (i.e. the larger is R) and the more articulate the leader (i.e. the larger is f).
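
Turning the formula into a function makes these comparative statics easy to eyeball (a sketch with illustrative parameter values):

def w_star(R, f):
    # Equilibrium weight on the signal, given the weight R on matching
    # the vision and the signal-to-noise statistic f.
    return R * f / (1 - (1 - R) * f)

print(w_star(0.5, 0.5))  # 0.333..., the example above
print(w_star(0.9, 0.5))  # about 0.474: weaker coordination motive, closer to f
print(w_star(0.5, 0.9))  # about 0.818: a more articulate leader, closer to f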

Some academic housekeeping: The model described above is a version of one due to Morris and Shin (American Economic Review, 2002), though they give a rather different interpretation to it. Moreover, their version of the model assumes that individuals have no prior beliefs. Under this (peculiar) assumption, the gap between the equilibrium weight on signals and the statistically optimal weight disappears, but its absence is purely an artifact of their setup. The observations in this blog piece come from theory and experiments I'm currently writing about in a working paper with Don Dale at Muhlenberg College.
