Absence of Evidence is not Evidence of Absence


[Image: Abraham Wald in his youth, via Wikipedia]

Abraham Wald, a mathematician born in Austria-Hungary, applied his statistical skills during WWII to improve the armor of aircraft returning from battle. His approach was insightful. He examined where the bullet holes and damage were on returning aircraft and recommended that armor be added to all the places where bullet holes and damage did not exist. His reasoning was simple: the aircraft that returned could take the flak where the damage was; the aircraft that did not return must have been hit elsewhere.

In our own analytical work, we sometimes ignore the lack of evidence, even though it can be more important than what we can quantify. In the context of product development, leading change, or any other endeavor, what are the things you cannot see? That absence is itself an indicator. Are we fixated on counting and measuring damage, or are we also thinking about why we don't see any? Just because we cannot see it does not mean it is not there. Absence of evidence is not evidence of absence.

The metrics game is one area where we are vulnerable to the mistake of counting what we can at the expense of identifying what is important. If you are coaching a team, are you looking for the absence of evidence in addition to the evidence itself? Are you examining evidence from both perspectives? False dichotomy and confirmation bias are errors in thinking triggered, among other things, by the absence of evidence or the ignorance of evidence.

One of the challenges of leading change is to look for the things that are not part of the model or the picture, and to focus on outliers and anomalies. The damage to the aircraft is relevant only because the aircraft returned safely. It's the dark side of the moon and the iceberg below the surface. We cannot see it, but it is there.

What is it that you do not see?

Bad Decision Making


I tried an experiment with a few colleagues over the past couple of months, based on Peter Wason's famous 2-4-6 test of confirmation bias (itself inspired by Karl Popper's ideas on falsification). The results have big implications for how easily we can head down the path to making bad decisions.

The experiment goes like this. The experimenter gives the subjects a three-number sequence, "2 4 6," and asks them to discover the rule that the sequence conforms to. The subjects may ask questions in the form of another sequence. For example, a subject may ask whether 1 2 3 conforms, and the experimenter answers yes or no depending on whether the sequence fits the rule. After several attempts, the subjects write down their "rule" for the pattern.

For example, subjects may ask:
“Does 4 8 12 conform?” The answer is yes.
“Does 6 8 10 conform?” Yes.
“Does 3 6 9 conform?” Yes. And so on…

Confirmation bias all but guarantees that you will ask more questions that get answered with a yes than with a no. The reason is that you build a model in your head of what the rule might be, and then you set about proving that you are right.

The answer to this puzzle is that the numbers must be increasing in value from left to right (a < b < c). All someone has to do is ask "Does 2 3 1 conform?", and the no answer quickly starts the convergence. One of my colleagues offered negative integers (-2 -4 -6), and the answer was no. This led to some debate within the pair, and they converged far faster than when they were getting yes answers.
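If you want to tinker with the task before trying it on colleagues, here is a minimal sketch in Python. The function name and the probes are my own, purely illustrative; the point is to show why confirming probes feel productive yet never expose the real rule, while a single disconfirming probe does.

```python
# Minimal sketch of the 2-4-6 task. The hidden rule is simply
# "strictly increasing from left to right": a < b < c.

def conforms(a, b, c):
    """The experimenter's oracle: does the triple fit the hidden rule?"""
    return a < b < c

# Confirming probes: each one also fits a narrower guess ("add 2",
# "even numbers", "multiples of 3"), so every yes leaves the real
# rule untested.
for probe in [(4, 8, 12), (6, 8, 10), (3, 6, 9)]:
    print(probe, "->", conforms(*probe))    # all yes

# Disconfirming probes: chosen to fail if the current guess is wrong.
print((2, 3, 1), "->", conforms(2, 3, 1))        # no: order matters
print((-2, -4, -6), "->", conforms(-2, -4, -6))  # no: rules out "even numbers"
```

Notice that the yes answers are consistent with many candidate rules at once; only the no answers actually eliminate hypotheses.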

I did the experiment with a pair of programmers and with another pair, a programmer and a tester. The programmer-tester pair won. Why? Testers have the mindset of disproving rather than proving. Programmers like to prove their design works; testers like to break features. Interestingly, two testers in a pair don't always do as well as a programmer-tester pair. The diversity of the pair is what helps create the conditions for rapid convergence. Try this simple experiment yourself and share the results. The experiment is described in detail at the Developmental Psychology web site here.

Confirmation bias means you filter and interpret data to support a belief you already hold. The debate over gun control is an example of how the same data is interpreted in different ways by people with differing perspectives. Whether to increase government spending or cut taxes to stimulate the economy is another debate where confirmation bias plays a role. The implications for decision making are huge. Human beings try to fit data to their beliefs; forcing yourself to disprove a hypothesis is a powerful corrective.

If you are responsible for delivering high-quality products, what are the implications of confirmation bias for how you work? What can you do to create healthy team diversity and conflict that lead to better collaboration and, ultimately, better products?