In 1955, Daniel Kahneman was a psychologist in the Israeli army, charged with finding out which soldiers would make good officer material. He devised a simple test: divide the men into groups of eight, remove their insignia to hide rank, and tell them to lift a telephone pole over a six-foot wall. This would reveal who were the leaders, who were the followers, and who were the quitters (or who simply thought hauling ridiculously heavy telephone poles over a wall was a waste of time). After each batch, Kahneman and his team would recommend those soldiers they thought had the right stuff for officer school.
Every now and then, Kahneman would get feedback from the school on how his recruits were doing. The news wasn’t all good. It seemed that talent at pole-lifting didn’t translate directly into being good officer material. In fact, according to Kahneman, “there was absolutely no relationship between what we saw and what people saw who examined them for six months in officer training school”.
Interestingly, this piece of information did not change Kahneman’s mind about the validity of his technique. “The next day after getting those statistics, we put them there in front of the wall, gave them a telephone pole, and we were just as convinced as ever that we knew what kind of officer they were going to be.”
Kahneman thought he was testing the soldiers – but it turned out he was testing himself. He was clinging to his theories even when they conflicted with the data. He later coined a name for the phenomenon: “the illusion of validity”. We put too much faith in our powers of judgement, and tend to ignore or downplay information that doesn’t agree with them.
Although it came too late to save the army careers of a number of soldiers, the insight would eventually help give rise to a new field of study – behavioural economics – which is now influencing policy at the highest levels of government, and reshaping our understanding of economics.