We want to compute Pr[Candidate is Good | Interview is Good].
Bayes' rule (Data and Decisions) tells us that
Pr[Candidate is Good | Interview is Good] = Pr[Interview is Good | Candidate is Good] Pr[Candidate is Good] / Pr[Interview is Good]
The denominator, the chance of a good interview, is just the unconditional chance of a good interview, before we know anything about whether the candidate is good or bad. By the total probability rule, this is (2/3)(1/2) + (1/3)(1/2) = 1/2, i.e. 50-50. Thus,
Pr[Candidate is Good | Interview is Good] = (2/3)(1/2) / (1/2) = 2/3
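For readers who like to check the algebra in code, here is a minimal Python sketch of this calculation (the variable names and setup are mine, assuming the 50-50 prior and 2/3 interview accuracy above):

```python
# A sketch of the Bayes calculation above: 50-50 prior, 2/3 interview accuracy.
prior_good = 1 / 2        # Pr[Candidate is Good]
p_g_given_good = 2 / 3    # Pr[good interview | Good candidate]
p_g_given_bad = 1 / 3     # Pr[good interview | Bad candidate]

# Total probability of a good interview: (2/3)(1/2) + (1/3)(1/2) = 1/2
p_good_interview = p_g_given_good * prior_good + p_g_given_bad * (1 - prior_good)

# Bayes' rule: Pr[Good | good interview]
print(p_g_given_good * prior_good / p_good_interview)  # 0.666... = 2/3
```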
So we draw the obvious conclusion that the first manager should hire the candidate if she interviews well and not if she interviews poorly.
Now let's turn to the second manager. Suppose the candidate was hired by the first manager, but has a bad interview with the second. Then we are interested in:
Pr[Candidate is Good | One good and one bad interview], which I'll now abbreviate as Pr[G | gb]. The capital letters indicate the type of candidate, Good or Bad, and the small letters the type of interview. Again, using Bayes' rule, this amounts to the calculation:
Pr[G|gb] = Pr[gb|G] Pr[G] / Pr[gb] = Pr[g|G] Pr[b|G] Pr[G] / Pr[gb], where the second equality uses the fact that interview outcomes are independent once we know the candidate's type,
and since Pr[g|G] = 2/3, Pr[b|G] = 1/3 and Pr[gb] = 2/9, we may easily deduce that the chance the candidate is good is 50-50 in this case.
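The same arithmetic generalizes to any sequence of interview outcomes. Here is a sketch (the posterior_good helper is my own construction, not from the post), which leans on that same conditional independence:

```python
def posterior_good(outcomes, prior=1/2, accuracy=2/3):
    """Pr[Good | a string of interview outcomes like 'gb'].
    A sketch assuming interviews are independent given the candidate's type."""
    like_good = like_bad = 1.0
    for o in outcomes:
        like_good *= accuracy if o == "g" else 1 - accuracy
        like_bad *= (1 - accuracy) if o == "g" else accuracy
    return like_good * prior / (like_good * prior + like_bad * (1 - prior))

print(posterior_good("gb"))  # 0.5: the bad interview exactly cancels the good one
```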
What just happened? Since each piece of information carries the same weight, the bad interview completely cancelled the good one, leaving the second manager in the same position as when she had no information whatever.
But this is exactly like voting--each vote carries the same weight, so a Gore vote cancels a Bush vote in Florida in 2000. And, taking the analogy further, we can see that a manager who knew the candidate had two good interviews and no bad ones can never gain enough evidence from her own interview to overturn them. If it's good, the vote count is 3-0 in favor of hiring. If bad, the vote count is 2-1. Either way, hiring is the better choice.
And so, after only two identical "votes" have been cast--two hire decisions, say--the resume data completely overwhelms any interview data and we end up in a "cascade." If the candidate experienced initial success, she will be hired by everyone thereafter. If she had no initial success, she is doomed never to be given a chance.
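To see the snowball mechanically, here is a small simulation sketch of the voting story (the run_cascade function, the 20-manager horizon, and the tie-breaking rule are all my own modeling choices, not from the post):

```python
import random

def run_cascade(candidate_good, accuracy=2/3, n_managers=20, seed=0):
    """Each manager counts earlier hire decisions as +1 'votes' and rejections
    as -1, adds her own interview outcome, and hires when the tally is positive;
    at a tally of exactly zero she follows her own interview.
    (A sketch of the herding story above, not code from the original post.)"""
    rng = random.Random(seed)
    decisions = []
    for _ in range(n_managers):
        p_good_signal = accuracy if candidate_good else 1 - accuracy
        own_vote = 1 if rng.random() < p_good_signal else -1
        tally = sum(1 if d else -1 for d in decisions) + own_vote
        decisions.append(tally > 0 if tally != 0 else own_vote > 0)
    return decisions

# Estimate how often a bad candidate gets locked into a permanent "hire" cascade.
runs = [run_cascade(candidate_good=False, seed=s) for s in range(10_000)]
print(sum(r[-1] and r[-2] for r in runs) / len(runs))  # should land near 0.2
```

Once the running tally reaches two net votes in either direction, no single interview can move it, so the decisions lock in--exactly the cascade described above.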
One sees this type of thing all the time with technology platforms--the early success or failure of a platform more or less sets the course of affairs thereafter.
The key takeaway, and the whole point of performing the experiment, is to illustrate that choice data--what people did in response to information, rather than the information itself--may contain very little value. Imagine a job candidate who was hired by the first 100 or the first 1000 managers. One might think it a sure thing that this candidate is competent based on the data. And if the data were non-strategic, you'd be right. But when strategic actors create the data by their actions, this intuition is completely wrong.
In the situation above, every hire after the first two happens regardless of the interview outcome, so those later hires carry no information. The chance the candidate is competent/good is therefore simply
Pr[G | gg] = Pr[gg | G] Pr[G] / Pr[gg]
And this may be readily calculated: Pr[gg|G] = 4/9 and Pr[gg|B] = 1/9, so Pr[G|gg] = (4/9)(1/2) / [(4/9)(1/2) + (1/9)(1/2)] = 4/5, or 80%--a long way from a sure thing.
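Or, reusing the posterior_good sketch from above:

```python
print(posterior_good("gg"))  # 0.8: strong evidence, but hardly a sure thing
```

Hires 3 through 1000 are cascade hires, so they add nothing: the posterior stays at 80% no matter how long the record grows.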
The situation can be much worse when the data get noisier. Suppose you are choosing a CEO. CEO talent is notoriously difficult to measure, so suppose that, when the CEO is good, there is only a probability p of a good interview, and the same probability p of a bad interview when the CEO is incompetent. Once again, the voting rule describes optimal behavior and, once again, things snowball after a run of only two consecutive identical choices at the start.
So suppose our CEO was hired twice initially and then "climbed the ladder," being hired/promoted many times over. What is the chance that we end up with a bad CEO? Again, this amounts to
1 - Pr[G | gg] = 1 - Pr[gg | G] Pr[G] / Pr[gg]
which we can compute as 1 - p^2 / (p^2 + (1 - p)^2), since the 1/2 priors cancel.
Here is a chart I drew in Excel showing the chance of a bum CEO as a function of p.
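For those who would rather script the chart than rebuild it in Excel, here is a matplotlib sketch of the same curve (my reconstruction, not the original):

```python
import numpy as np
import matplotlib.pyplot as plt

p = np.linspace(0.5, 1.0, 101)            # interview accuracy
bad_ceo = 1 - p**2 / (p**2 + (1 - p)**2)  # Pr[bad CEO | two initial hires], from above

plt.plot(p, bad_ceo)
plt.xlabel("p = chance the interview matches the CEO's true type")
plt.ylabel("Pr[bad CEO | two initial hires]")
plt.title("Chance of a bum CEO as a function of p")
plt.show()
```

At p = 1/2 (pure noise) the chance of a bum CEO is 50%, and even at p = 2/3 it is still 20%.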
What you should notice is that, when the interview/hiring process is noisy, there is a very good chance of being trapped in a "bad" snowball--a situation where the person exhibits a stellar record and then badly underperforms.
Placing excess weight on data subject to this type of "herding" breeds one type of overconfidence, an increasingly common trap as we rely ever more on data-driven decision making. The data seem to make the hire a no-brainer, but this is far from the case.
What can you do about it?
If we stopped here, it would be a depressing conclusion--voting is the best decision rule, but it's a lousy one, especially when the data is noisy. So what should you do? The most important thing is to realize you have this potential problem with your data in the first place. Once you have, make a rough estimate of how noisy each piece of data is, and hence of the risk of a "bad" snowball outcome. From there, make a cost-benefit assessment of whether new data from other sources is needed before deciding. And, now aware of the risk, you might pair your decision with various sorts of hedging strategies to mitigate it.
But the bottom line is this: without outward thinking, once a record has been established, the decision looks like a no-brainer. Those attuned to outward thinking, however, recognize the risk and incorporate it into their overall portfolio of decisions and forecast outcomes.