Wednesday, February 3, 2016

Did Trump Win or Lose in Iowa?

The results of the Iowa Caucuses showed Donald Trump in second place with 24% of the vote, behind Ted Cruz at 28% and just ahead of Marco Rubio at 23%. The difference between first and second place amounts to about 6,000 votes. In other words, had about 3,000 Iowans switched their votes from Cruz to Trump, the outcome would have changed. Pundits, and the GOP establishment, seem to view this result as containing the seeds of destruction for the Donald. They point out that part of his campaign persona is that he's a "winner" and yet, in Iowa, he didn't win. What can game theory say about the GOP presidential race?

Coordination and Duverger's Law

Maurice Duverger was a French political scientist. He noted that, in winner-take-all elections (sometimes called first-past-the-post), there is a strong tendency for just two candidates to receive large vote shares. From this, he concluded that such voting rules tend to produce two-party systems, as in the US and, at the time, the UK. In proportional representation systems, by contrast, many parties receive votes.

Note that the Iowa caucus is actually proportional representation, at least to an extent. Multiple candidates can collect delegates in Iowa, but the top vote getters are overrepresented in the delegate count.

Duverger's Law, it turns out, can be understood using game theory. Here's the idea: The main reason that people vote is to help their candidate to get elected. Let's say that there are three candidates, A, B, and C. All voters have rankings over these candidates and, within these rankings, can feel different levels of passion for each. In other words, you and I might both rank the candidates A > B > C, but I feel very strongly for A whereas you are close to indifferent between A and B.

Now for whom should you vote if solely motivated by the outcome of the election? One possible answer is to vote truthfully: choose A if that is your top choice, or B, or C, if one of those is on top. But now suppose a poll has been taken. It shows that C leads narrowly over B while A trails badly behind. Since I rank the candidates A > B > C, this is very bad news. My least favorite candidate is ahead while my candidate trails badly.

So how should I vote? Since I only care about election outcomes, I should switch my vote from A to B. In a very real sense, A is a wasted vote for a voter who cares about outcomes. Of course, all A voters reason in a similar fashion and so A's vote share dwindles ever lower, a death spiral of switching away. Notice that there is a "snowball" nature to this logic very similar to the information cascade--once my candidate's chances grow sufficiently dim, my love for that candidate no longer influences my vote.
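This death-spiral logic is easy to sketch in a small simulation. Everything numerical here is an illustrative assumption of mine, not from the post: an electorate of 1,000, first-choice shares of roughly 42/40/18, and a 25% "viability" threshold below which a vote looks wasted. Voters back their favorite candidate among those they still consider viable, and each poll updates who counts as viable.

```python
import random

# Toy sketch of Duverger-style strategic switching. The electorate size,
# preference shares, and viability threshold are illustrative assumptions.
random.seed(1)

CANDIDATES = ["A", "B", "C"]

def random_ranking():
    # First choices: ~42% A, ~40% B, ~18% C; lower choices in random order.
    r = random.random()
    first = "A" if r < 0.42 else ("B" if r < 0.82 else "C")
    rest = [c for c in CANDIDATES if c != first]
    random.shuffle(rest)
    return [first] + rest

voters = [random_ranking() for _ in range(1000)]
VIABILITY = 0.25  # below this poll share, a vote looks "wasted"

def poll(viable):
    """Each voter backs their top-ranked candidate among the viable set."""
    counts = {c: 0 for c in CANDIDATES}
    for ranking in voters:
        counts[next(c for c in ranking if c in viable)] += 1
    return counts

viable = set(CANDIDATES)
for _ in range(3):  # repeated polls let the snowball roll
    counts = poll(viable)
    viable = {c for c in viable if counts[c] / len(voters) >= VIABILITY}

print(poll(viable))  # C's support collapses; only A and B receive votes
```

After the first poll, C falls below the threshold, C's supporters defect to their second choices, and the race settles into exactly two candidates, which is Duverger's Law in miniature.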

So from this, we can conclude that Carly and Jeb and all the others in the single digits in Iowa are effectively doomed. Votes for these candidates will be seen as purely "wasted" and so will dry up.

These candidates' supporters make up about 15% of the vote in Iowa, and probably a similar share nationally. Where will they go?

Back to the Donald. From his perspective, Iowa was a benefit because it strongly affirmed what the polls showed--that a vote for the Donald is not a wasted vote. Thus, the missing 15% view him as plausible. But my suspicion is that they will mostly go elsewhere. The Donald is a polarizing figure: you love him or you hate him. He benefits from the passion of his supporters, who provide energy in getting themselves and others out to vote. But this same polarization makes him an unlikely second choice for voters whose first choice was someone else.



Monday, February 1, 2016

Landscaping

Job #1 in strategy is analyzing and characterizing the business landscape in which the opportunity exists. The 5 forces, the value net, and so on all represent frameworks for such analysis. Suppose you evaluated the opportunity of an incumbent, small market NBA team seeking to retain a superstar player. You would, of course, examine the rivalry for this all-important asset and sensibly conclude that rivalry is witheringly intense. From here, you would be forced to conclude that the prospects for making profits from such an opportunity are correspondingly small.

Put simply, the intense rivalry of the landscape will compete away all of the "rents" from the superstar player. And, from here, you might also conclude that the overall opportunity of being a small market NBA owner is not worth much in such a landscape.

If, however, you look at the available data, you'll find that teams like the San Antonio Spurs, the Cleveland Cavaliers, and the Indiana Pacers all more or less mint money with their NBA franchises. That is, far from the prediction of our landscape analysis, these are promising opportunities, not poor ones.

The difference is that the analysis presumes that just because rivals can compete all out, they will compete all out, to the detriment of the smaller teams and the overall opportunity. Yet, as we saw in the experiment, competition under the right of first refusal (ROFR) clause is quite subdued. Rival teams could enter and compete, but they know they won't be successful. Moreover, since competing is itself expensive, there's no point in doing so unless the prospects of success are decent. So, far from the prediction of our models that unbridled rivalry will destroy value, the reality is more of a "gentleman's" labor market in which the incumbent team faces little competition.

On the other hand, take away the ROFR and the labor market becomes as the models predict, brutal and difficult for the small market players. Superstars are retained by the incumbent team only by offering extremely favorable salaries, vastly reducing the quality of the opportunity from owning a small market team.

So the major lesson is one of landscaping. A business landscape may appear unfavorable in pure form, but the details matter. A ROFR clause reshapes the landscape of the NBA labor market in a massively important way. An outward thinker is alert to such things when doing strategic analysis of opportunities. The ROFR is an apparently small thing, put in place officially for entirely different reasons than to suppress competition. Yet it and other distortions in the NBA labor market make the opportunity far better than what a simple 5 forces analysis would imply.

How to Think Outwardly

A good exercise in learning to think outwardly is to perform 5 forces or other similar analysis on opportunities of interest, as you would in strategy. The twist, though, is to pay attention to WITS type moments where the implicit assumptions of such analysis get altered, using your outward thinking.

Friday, January 29, 2016

The Interview Game, Choosing by Voting

At its core, the interview game asked you to make a decision of the following form: given a record of m good interviews and n bad interviews, should you hire the person or not? I suggested that the optimal strategy was to follow a voting rule: if the goods outnumber the bads, you should hire; otherwise you should not. The reason such a simple rule works is that each interview conveys exactly the same amount of information.


Let's do this carefully for the first couple of interviews. Before any interviews, a candidate is 50% likely to be competent. A competent candidate gives a good interview 2/3 of the time and a bad one 1/3 of the time; for an incompetent candidate, the chances are reversed. Suppose you have a good interview. What is the chance the candidate is competent? Formally, what is:

Pr[Candidate is Good | Interview is Good]

Bayes' rule (Data and Decisions) tells us that

Pr[Candidate is Good | Interview is Good] = Pr[ Interview is Good | Candidate is Good] Pr[Candidate is Good] / Pr [Interview is Good]

The denominator is just the chance of a good interview before knowing whether the candidate is competent or not. By symmetry, this chance is 50-50. Thus,

Pr[Candidate is Good | Interview is Good] = (2/3 × 1/2) / (1/2) = 2/3

So we draw the obvious conclusion that the first manager should hire the candidate if she interviews well and not if she interviews poorly.

Now let's turn to the second manager. Suppose the candidate was hired by the first manager, but has a bad interview with the second. Then we are interested in:

Pr[Candidate is Good | One good and one bad interview], which I'll now abbreviate as Pr[G | gb]. The capital letters indicate the type of candidate, Good or Bad, and the small letters the type of interview. Again, using Bayes' rule, this amounts to the calculation:

Pr[G| gb] = Pr[ gb | G] Pr[G] / Pr[gb] = Pr[g|G] Pr[b|G] Pr[G] / Pr[gb]

and since Pr[g|G] = 2/3, Pr[b|G] = 1/3 and Pr[gb] = 2/9, we may easily deduce that the chance the candidate is good is 50-50 in this case.
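Both posteriors can be checked in a few lines of code. The model assumed here is the one above: a 1/2 prior of competence and interview accuracy of 2/3 for both types of candidate.

```python
from fractions import Fraction

# Bayes-rule check for the interview game. Assumed model: a candidate is
# competent ("G") with prior 1/2; a competent candidate gives a good
# interview ("g") with probability 2/3, an incompetent one with 1/3.
prior_G = Fraction(1, 2)
p_g_given_G = Fraction(2, 3)
p_g_given_B = Fraction(1, 3)

def posterior(good, bad):
    """Pr[G | a record of `good` good and `bad` bad interviews]."""
    like_G = p_g_given_G**good * (1 - p_g_given_G)**bad
    like_B = p_g_given_B**good * (1 - p_g_given_B)**bad
    return like_G * prior_G / (like_G * prior_G + like_B * (1 - prior_G))

print(posterior(1, 0))  # 2/3: one good interview
print(posterior(1, 1))  # 1/2: the bad interview cancels the good one
```

The `posterior(1, 1)` result makes the cancellation concrete: one good and one bad interview return the manager exactly to the 50-50 prior.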

What just happened? Since each piece of information carries the same weight, the bad interview completely cancelled the good one, leaving the second manager in the same position as when she had no information whatever.

But this is exactly like voting--each vote carries the same weight, so a Gore vote cancels a Bush vote in Florida in 2000. And, taking the analogy further, we can see that a manager who knows that the candidate had two good interviews and no bad ones can never gain enough evidence from her own interview to change the decision. If her interview is good, the vote count is 3-0 in favor of hiring. If bad, the vote count is 2-1. Either way, hiring is the better choice.

And so, after only two "votes" have been cast/hire decisions have been made, the resume data completely overwhelms any interview data and we end up in a "cascade." If the candidate experienced initial success, she will be hired by everyone thereafter. If she had no initial success, she is doomed to never be given a chance.

One sees this type of thing all the time with technology platforms--the early success or failure of a platform more or less sets the course of affairs thereafter.

The key takeaway, and the whole point of performing the experiment, is that choice data--what people did in response to information, rather than the information itself--may contain very little value. Imagine a job candidate who was hired by the first 100 or the first 1,000 managers. One might think it a sure thing that this candidate is competent based on the data. And if the data were non-strategic, you'd be right. But when strategic actors create the data by their actions, this intuition is completely wrong.

In the situation above, the chance the candidate is competent/good is simply

Pr[G | gg] = Pr[gg | G] Pr[G] / Pr[gg]

And this may be readily calculated to be 80%--a long way away from a sure thing.
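The 80% figure is quick to verify under the same assumed model (prior 1/2, interview accuracy 2/3):

```python
from fractions import Fraction

# Posterior after two good interviews, under the post's assumed model.
p, prior = Fraction(2, 3), Fraction(1, 2)
p_gg_G = p * p            # Pr[gg | G] = 4/9
p_gg_B = (1 - p) ** 2     # Pr[gg | B] = 1/9
post = p_gg_G * prior / (p_gg_G * prior + p_gg_B * (1 - prior))
print(post)  # 4/5, i.e. 80% -- a long way from a sure thing
```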

The situation can be much worse when the data gets noisier. Suppose you are choosing a CEO. CEO talent is notoriously difficult to measure, so, when the CEO is good, there is only a p% chance of a good interview; likewise, there is only a p% chance of a bad interview when the CEO is incompetent. Once again, the voting rule describes optimal behavior and, once again, things snowball after a run of only two consecutive identical choices at the start.

So suppose our CEO was hired twice initially and then "climbed the ladder" successfully being hired/promoted many times. What is the chance that we end up with a bad CEO? Again, this amounts to

1 - Pr[G | gg] = 1 - Pr[gg | G] Pr[G] / Pr[gg]

which we can compute as 1 - p·p / (p·p + (1 - p)·(1 - p))

Here is a chart I drew in Excel showing the chance of a bum CEO as a function of p.
What you should notice is that, when the interview/hiring process is noisy, there is a very good chance of being trapped in a "bad" snowball--a situation where the person exhibits a stellar record and then badly underperforms.
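The chart's numbers are easy to reproduce; here is a short sketch (the grid of p values is my choice):

```python
# Chance of ending up with a bad CEO after two identical early "hire"
# signals, as a function of the interview accuracy p (treated as a
# probability rather than a percent).
def bad_ceo_chance(p):
    return 1 - p * p / (p * p + (1 - p) * (1 - p))

for p in [0.9, 0.8, 0.7, 0.6, 0.55]:
    print(f"p = {p:.2f}: chance of a bum CEO = {bad_ceo_chance(p):.1%}")
```

At p = 0.9 the chance of a bum CEO is only about 1%, but at p = 0.6 it is over 30%: as the signal gets noisier, the "bad snowball" becomes alarmingly likely.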

Placing excess weight on data subject to this type of "herding" breeds one type of overconfidence, an increasingly common trap as we rely ever more on data-driven decision making. The data seem to make the hire a no-brainer, but this is far from the case.

What can you do about it?

If we stopped here, it would be a depressing conclusion--voting is the best decision rule, but it's a lousy decision rule, especially when the data is noisy. So what should you do? The most important thing is to realize you have this potential problem with your data in the first place. Once realized, make a rough estimate of how noisy each piece of data is, and hence the risk of a "bad" snowball outcome. From here, make a cost-benefit assessment of whether new data from other sources is needed before making a decision or not. Also, now being aware of the risk, you might link your decision with various sorts of hedging strategies to try to mitigate this risk.

But the bottom line is this: without outward thinking, once a record has been established, the decision looks like a no-brainer. Those attuned to outward thinking, however, recognize the risk and incorporate it into their overall portfolio of decisions and forecast outcomes.




Friday, November 21, 2014

Altruistic Bacteria?

Biology student Derrick Grunwald told me the following tale of the game theory of bacteria. It turns out that certain bacteria come in two varieties, altruistic and selfish. The classification relates to how they handle a certain molecule they emit. The selfish types emit the molecule and then immediately claim it for themselves using a receptor in another part of the creature. The altruists emit the molecule to the colony and receive the average emissions available, sort of like a public good. Apparently, this process of emitting and receiving creates fitness for the bacteria. Derrick tells me that there is an ideal amount of the molecule to receive, m*. A bacterium exposed to too much is unhealthy, as is one exposed to too little.

The puzzle to biologists is how there can be altruistic bacteria. While other-regarding or even eugenic preferences are possible in higher primates, it seems a stretch to attribute such motives to bacteria.

So how can we resolve this puzzle using game theory and what does this tell us about the nature of these bacterial colonies? First off, why are there volunteers (the altruists) in the first place? What possible benefit is there from volunteering? When a bacterium emits the molecule, it doesn't get the amount exactly right. Sometimes it emits too much, sometimes too little. On average, the amount is correct, but individually it is not. Thus, the volunteer bacteria are engaging in a bit of risk pooling. By emitting to the colony, all of these errors average out, and each individual absorbs just the right amount of the molecule thanks to the magic of the law of large numbers. A colony of selfish bacteria is choosing not to insure. This is obviously less fit than insuring, but it does have some advantage in reliability.

Let us study "equilibria" of this game. Suppose that a colony consists entirely of volunteers and it is invaded by a small number of selfish types. The volunteers will still absorb approximately m* of the molecule while the selfish will absorb 2m*--way too much of the molecule. Thus, the selfish are "killed with kindness" by the volunteers. Hence, an all-volunteer colony constitutes an equilibrium.

What about a colony of all selfish types? While less fit than the volunteers, this colony is also immune to invasion. If a small number of volunteers show up, they hardly affect the absorption of the selfish, which is still approximately m* on average, but the volunteers themselves get almost none of the molecule. Thus, they too die out. So all selfish is an equilibrium despite its inferiority to the insurance scheme worked out by the volunteers.

What about mixed colonies? This is possible, but unstable. If the colony consists of a fraction f of volunteer types, a co-existing equilibrium can arise. In this situation, selfish types are systematically exposed to too much of the molecule since they absorb some of the production of the volunteers. Volunteers systematically have too little of the molecule since the selfish types are "stealing" some. And if the fitness of the two types is approximately equal, they can coexist.

The instability arises from the following problem. If the co-existing colony is invaded by selfish types, this improves the fitness of the selfish, since the emissions of the volunteers are now more diluted by the additional selfish types, but reduces the fitness of volunteers since there are now more selfish types "stealing" the molecule. Hence, such a perturbation would cause the colony to eventually drift to an all-selfish society. By contrast, if the colony is invaded by volunteers, just the opposite occurs--volunteers become more fit while selfish become less fit. Only if invasions of various types are occurring often enough to bring the population back to the equilibrium fraction f, will the colony continue to coexist. This is unlikely if invasions occur randomly, even if both types of invasions are equally likely.
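The two pure equilibria can be illustrated with a toy simulation. The model below is a stylized assumption of mine, not Derrick's actual biology: each cell emits roughly m* of the molecule with noise, volunteers put their emission into a common pool shared by the whole colony, and selfish cells reclaim their own emission while still absorbing their share of the pool.

```python
import random

# Toy colony model (stylized assumptions). Fitness is highest when a
# cell's total absorption is close to the ideal amount m*.
random.seed(0)
M_STAR, SIGMA, N = 1.0, 0.2, 1000

def mean_absorptions(n_volunteers):
    emissions = [random.gauss(M_STAR, SIGMA) for _ in range(N)]
    share = sum(emissions[:n_volunteers]) / N   # pool average, received by all
    selfish = emissions[n_volunteers:]          # selfish reclaim their own output
    vol_mean = share                            # volunteers get only the share
    sel_mean = share + (sum(selfish) / len(selfish) if selfish else 0.0)
    return vol_mean, sel_mean

# All-volunteer colony invaded by a few selfish cells: invaders absorb ~2m*.
v, s = mean_absorptions(990)
print(f"volunteers ~{v:.2f} m*, selfish invaders ~{s:.2f} m*")

# All-selfish colony invaded by a few volunteers: volunteers get almost nothing.
v, s = mean_absorptions(10)
print(f"volunteer invaders ~{v:.3f} m*, selfish ~{s:.2f} m*")
```

The first case is the "killed with kindness" equilibrium (invaders overdose at roughly 2m*); the second shows volunteer invaders starved of the molecule, so the all-selfish colony is also immune to invasion.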

Thus, using game theory, the puzzle of altruistic bacteria may be understood as purely selfish behavior.

Wednesday, October 29, 2014

If a tree falls in the forest...

There's an old conundrum that asks whether, when a tree falls in the forest and no one is there to hear it, it makes a sound. Millions of creators of social media must constantly ask themselves the same question. How many thousands of tweets, Facebook posts, YouTube videos, and blog entries pass through the ether, unheard, unread, and unknown?

In the words of Thomas Gray (Elegy Written in a Country Churchyard)

Full many a flower is born to blush unseen, / And waste its sweetness on the desert air.

And so it is with social media. How many Ansel Adamses or Claude Monets are bound to rest, undisturbed and undiscovered, amidst the detritus of the communication explosion?

At Berkeley-Haas, the Dean's suite often coaxes a reluctant professoriate to embrace the age of social media, to interact with our students outside the classroom in these social spaces. We are advised that the millennials we teach are especially receptive to such bite-sized portions of wisdom, that, in fact, they prefer them to the more traditional long form of the lecture hall or the textbook. We are advised to turn our classes upside-down, to engage in all possible ways in pursuit of the ever-elusive mindshare.

What we are not offered, however, is evidence. Does any of this flailing outside the classroom matter? Do the students even want it?

I conducted an A/B test to measure this. Before each of my last two Wednesday classes, I wrote a blog entry. I viewed the entries as similarly interesting to my class; if anything, the treatment entry is more interesting. The key treatment was announcing the presence of the entry in one case and saying nothing in the other.

Here are the results of the experiment:

Blog entry with no announcement: +1, 0 comments, 14 views.
Blog entry with announcement: +1, 0 comments, 14 views.

It takes no statistics to see that awareness that a blog entry has been written makes no difference whatsoever.

What should we make of this experiment? My take is the following: all the hype about social is a bunch of hooey. Individuals want well-produced, solid entertainment. There may, at one point, have been novelty value in the power of individuals to create content, but that point has long passed. What millennials want is the same thing that all previous generations wanted: solid amusement for their out-of-class hours. So far as I know, this is the first experiment to test the desire of MBA millennials to read the random thoughts of their blogging social professors. Regardless, it's a finding worthy of wider circulation. Simply put, calls to embrace social as an important information stream are, quite simply, nonsense.

It's a sad conclusion, and it won't stop me from writing, since I derive joy from the process myself, but it suggests a refocus in pedagogy away from the "flavor of the month" and back toward the heart of the matter, which is providing great experiences for students in the classroom.

A lovely blog/YouTube/Facebook/Twitter stream is all well and good, but it should be seen for what it is: entirely peripheral and largely wasted motion, at least insofar as my sample is representative.

Tuesday, October 28, 2014

Pregnant MBAs

When most people think of game theory, they think of situations in which two or more individuals compete in some situation or game. Chess is the quintessential example. Yet the thinking underlying formulating a good plan when playing against others is just as important (and useful) when thinking about your own future path.

An episode of Seinfeld captures this idea beautifully when "morning Jerry" curses aloud the bad choices made by "evening Jerry." Evening Jerry stays out too late and indulges too much, making life difficult for morning Jerry, who has to pay the piper for this excess. Jerry then muses that morning Jerry does have a punishment available against evening Jerry: were morning Jerry to lose his job, evening Jerry would be without funds or friends to pursue his wild lifestyle. Not mentioned is that this punishment is also costly to afternoon Jerry, who is likewise in no position to pay the bill at Monk's Cafe for meals with Elaine and George.

While the Seinfeld routine is meant to be funny, for many individuals, the problems of the activities of their many selves are no laughing matter. Anyone struggling with their weight curses their night self or their stressed out self for lack of willpower. That giant piece of cake that stressed-out self saw as deliverance means a week of salads and many extra hours at the gym for the other selves.

I bring this up because we all suffer from the evening Jerry problem, but for MBAs, it's one of the most serious problems they'll ever face. For most MBAs, the two years spent getting this degree represent the final time in life when complete attention can be paid to learning and single-mindedly building human and social capital. The constraints of work and family offer vastly less time for such activities in the future. While there may occasionally be breaks for internal training or executive education, such breaks are rare. Moreover, for busy managers, even these times may not constitute a break. Work does not stop simply because you are absent. Crises still need to be dealt with and deadlines met.

Moreover, the gains made during this time can profoundly affect an MBA's career. They can be the difference between the C-suite and merely marking time in middle management. They can be the difference between the success and failure of a startup. They can be the difference between getting a position offering a desired work-life balance and accepting a position that does not. They can be all the difference in life.

Yet, inward thinking would have us see the choices we make quite narrowly. Will an extra weekend in Tahoe be the difference between an A- and a B+? Will it be the difference between making or missing a class? Will it be the difference between actively participating in a case discussion or not? Will it be the difference between seeing or missing some outside speaker in an industry of interest? Viewed in this light, such choices are of decidedly small caliber. An extra day in Tahoe can hardly be the difference in anything of consequence.

Wilhelm Steinitz, the great chess champion, averred that success in chess is a consequence of the accumulation of small advantages. Viewed narrowly, inwardly one might say, the differences Steinitz had in mind seem trivial. To most chess players, the board, and one's chances of winning, are not appreciably different with a slightly better versus a slightly worse pawn formation. Yet a grandmaster does not see things this way at all. Such individuals are consummate outward thinkers, constantly planning for the endgame, for the win, which will only be determined much later. From that perspective, such trivial things are of utmost importance. And indeed they are as, more often than not, they mark the difference between victory and defeat in championship play.

Outward thinking also allows one to take a longer-term view on the time spent while pursuing an MBA. The apparent difference between the talent at the middle and the top tier of many organizations is often remarkably small.  The CFO does not have appreciably more IQ points than someone lower down. She does not work vastly more hours or have vastly better social capital. Rather, her rise commonly represents an accumulation of small advantages--a win early in her career that distinguished her as high potential, a management style that avoided or defused a conflict that might have derailed her progress, an unexpected opportunity because a head at some other firm remembered her as standing out. In the end, it is clear that she is CFO material while the middle manager is not, but it didn't start out that way.

We might be tempted to chalk all this up to luck. She got the breaks while her sister, unhappily struggling as a middle manager somewhere, didn't. Admittedly, luck plays a role, but chance, somehow, seems to mysteriously favor some more than others. Consider the situation of poker or blackjack players. To win, one needs to have a good hand, i.e. one needs to be lucky. Yet some people consistently win at these games and others consistently lose. Luck evens out over many hands, or over many events in the course of a lifetime, yet there are genuine poker all-stars.

Much like evening Jerry, MBA life is filled with temptations, filled with vast stress-easing slices of chocolate cake. We might think, "What's the harm? We all work hard. I deserve this." But unlike the dieter, who can learn from his mistake and say no to the cake the next time around, an MBA gets only one chance to get it right. Mess it up this time and there is no tomorrow to make amends. There are no do-overs. The MBA equivalent of chocolate cake today may not mean a week of salads and workouts, but a lifetime of them.

So the lesson here is our usual one: look forward, reason back. How will our future selves perceive the choices we are making today? Will they be happy? If not, then it's time to make different choices.

So how does any of this relate to the title of this post, Pregnant MBAs? When a woman learns she is pregnant, she will often alter her lifestyle, sometimes radically. After all, she's now responsible for her unborn baby as well as herself, so she eats better, stops smoking and drinking, exercises more, and so on. She wants her baby to have its best chance in the world. In a sense, every MBA is pregnant. You too are responsible for two--your present self and your future self. Outward thinking is nothing more than this awareness, so obvious to a pregnant woman but submerged beneath the surface at all other times. Give your future self his or her best chance in the world.

Friday, October 24, 2014

(Not) Solving Social Dilemmas

The secret ingredient to solving social dilemmas is punishment. Without the ability to punish transgressors, good behavior must rely entirely on individual self-control in abstaining from the temptation to cheat and obtain a reward. Some individuals are up to this challenge. Their consciences are sufficiently well developed to chasten them for engaging in dishonorable behavior that this is not a problem. But for most, it is. Indeed, a fundamental human weakness is to go for today's reward over tomorrow's, a failing which destroys most attempts at dieting, let alone social dilemmas.

Sufficient punishment fixes this problem by adding the force of extrinsic incentives to intrinsic ones. Given sufficient reinforcement, even more "pragmatic" consciences can be persuaded to do the right thing. Yet, even when such punishments are available, a literal-minded view of game theory suggests that they must always be available: individuals must never perceive themselves as getting off scot-free. The usual version of this argument uses look forward, reason back (LFRB) reasoning. Suppose the game will end in period N. Then clearly bad behavior in that period is unpunishable, and hence everyone will behave badly. But this destroys punishment for bad behavior in the penultimate period, and so on. The pure logic of the situation implies that any game known to end at a fixed time, however far into the future, will see cooperation remorselessly break down right from the start.
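The look-forward-reason-back unraveling can be put in a few lines of code. The payoff numbers below (T=5 > R=3 > P=1 > S=0) are the standard illustrative prisoner's dilemma values, my choice rather than anything from the post:

```python
# Backward-induction sketch of a finitely repeated prisoner's dilemma.
# Because future play is pinned down first, every period collapses to a
# one-shot game in which defection ("D") is dominant.
T, R, P, S = 5, 3, 1, 0
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def best_reply(opp, continuation):
    """The continuation value is history-independent, so it cancels out."""
    return max("CD", key=lambda a: payoff[(a, opp)] + continuation)

N = 8
prediction = []
continuation = 0
for period in range(N, 0, -1):               # reason back from the last period
    action = best_reply("C", continuation)   # defect even against a cooperator
    assert action == best_reply("D", continuation)
    prediction.append(action)
    continuation += payoff[(action, action)]

print(prediction)  # ['D', 'D', 'D', 'D', 'D', 'D', 'D', 'D']
```

The key step is the comment inside the loop: since tomorrow's play is fixed regardless of today's choice, today's choice is governed by the one-shot payoffs alone, and defection dominates in every period.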

But, while logically sound, this is a very silly prediction when the number of periods is long. Despite the logic of LFRB, bad behavior will be punished and good behavior rewarded with reciprocity in the early periods of the game, cooperation only breaking down when the endgame is sufficiently close. Here again we see the friendly face of irrationality. Suppose there is some chance that others playing the game don't "get it." They instead think they are playing the infinitely repeated version of the game, in which cooperation can be sustained by threats of punishment, and play accordingly.

What is a rational soul to do? The answer is: when in Rome, do as the Romans do. In other words, pretending that you too don't get the game is a perfectly logical and sensible response, at least until some point deep in the game where the knowledge that punishment won't work becomes too hard to ignore. As with some of our other examples, only a seed of doubt about the rationality of others is needed, so long as the endgame is sufficiently far off in the future. Such a seed of doubt seems to me eminently more reasonable than the maintained assumption of the standard model, that everyone understands the game perfectly.

But suppose we're playing a short game consisting of, say, only 8 periods. Now the logic of LFRB has real force, and we would seem to be genuinely doomed. Running such a game in the lab reveals that things are not quite as bad as all that, but they are definitely pretty bad.

Or maybe not. Let's change the game a little. Suppose, at the end of each period, each player has a chance to punish one or more of the others. Punishment involves destroying some of their payoffs. But this punishment is not free; it costs the punishing individual something as well. This would seem of immense help, since the whole problem was that we lacked an ability to punish in the last period, and this wrecked the incentives in all the earlier periods. Now, we no longer have this problem. If someone misbehaves in the last period, we punish them via value destruction and all is once again well in the world.

But such a plan only works if the punishment is credible, if we can count on the punishment to be delivered should an individual try to test societal resolve. There are two problems with credibility. First, there is a social dilemma in who delivers the punishment. While we all might agree that cheaters should be punished, each of us would rather that someone else deliver the punishment and hence take the hit to their own payoffs. But even if we sorted this out by agreeing on whose job it was to punish, we might still be in trouble. In the last period of the game, who would actually carry through with the punishment?

This is a version of our earlier problem. In the last period, there is no future reward for today's good behavior nor punishment for bad. Since the whole point of punishing is to obtain future good behavior, what is the point of punishing anyone in the last period of the game? Worse yet, not only is punishment pointless, but it is also costly. So why would anyone believe that misbehavior will be punished in the last period? By the same logic, there is no point in punishing in the penultimate period either and again the endgame casts its long shadow back to the first period. Irrationality seems to work less well as a way out of this particular logical bind. The endgame is simply too near.

Yet, remarkably, such schemes seem to work, at least in Western cultures. Economists Ernst Fehr, Simon Gaechter, and a number of other co-authors performed versions of this experiment in labs in the US and Europe. They found that such a scheme proved excellent at producing cooperation. Some individuals tested societal resolve early in the game and invariably received bloody noses for this insolence.

But why does it work? The answer, in this case, seems to be morality. While conscience is weak in resisting temptation, it is rather stronger when we have a chance to unsheathe the sword of justice to be used on others. That is, even though there was a personal cost to punishment, it seemed to be more than offset by a personal benefit in administering justice.

[A curious sidenote to this story: in Middle Eastern, Greek, and North African cultures, the sword of justice operated in the other direction--those who did not cheat were punished. That is, individuals cooperating in a prisoner's dilemma were hit with penalties from those who did cheat. This soon produced conformity in failing to solve social dilemmas.]

This solution is all well and good when the punishment can be surgically directed at transgressors, but suppose instead that there is collateral damage--to punish the guilty, some innocents must be punished as well. In work in progress, former Berkeley undergrad Seung-Keun Martinez and I investigate this possibility. We first consider the opposite extreme where, to punish one person, you must punish everyone. One might think that such a scheme would be either marginally useful or utterly useless. In fact, it is neither. Compared to the world in which no punishment was allowed, punishment of this sort actually reduces cooperation. This is not simply because of the value destroyed by punishment (though there is some of that), but rather because of a reaction to the possibility of such punishment. When individuals fear suffering punishment while doing the "right thing," they seem to become despondent and to cooperate less than those under no such threat.

We are now in the process of investigating intermediate levels of "collateral damage" from punishment to determine where the "tipping point" might lie.