In his best-selling book Moneyball, Michael Lewis tells the true story of how the Oakland A’s general manager Billy Beane built a competitive team on a very limited budget by defying the conventional wisdom of baseball experts. Instead, Beane relied upon an unconventional data analytics approach to scout and evaluate players.
Despite the A’s relatively meager resources, in the nineteen years Beane has used analytics to build his teams, the team has made the playoffs an impressive nine times. Beane’s innovative approach has become a game changer. Today, many of baseball’s leading clubs employ the same key principle to build successful teams: Trust the evidence of data over the opinions of experts.
The story of Moneyball is intriguing because it calls into question the judgments of experts. After all, if expert knowledge in a relatively simple industry such as baseball could be significantly improved upon by an unconventional data-analytics approach, could the same be true for more complex human activities? It was a question that many, if not most, of us had not considered before Lewis popularized this story.
After publication, Lewis was surprised to learn that the ideas in his book weren’t as original as he thought. A chiding review in the New Republic pointed out that Lewis did not seem to be aware of the deeper reason for the inherent flaws in expert judgments: the dynamics of thinking in the human mind. The workings of human thinking, and how they can produce inefficient and sometimes irrational expert judgments, had been described years earlier by a pair of psychologists: Daniel Kahneman, whose groundbreaking work earned a Nobel Prize, and his long-time collaborator, Amos Tversky. The review inspired Lewis to delve into the work of these two psychologists, which became the subject of his subsequent book, The Undoing Project.
Dual Thinking Modes
Kahneman and Tversky discovered that the human brain is a paradox. While it is capable of highly developed analytical and creative intelligence, it is also prone to making apparently senseless errors. Why is this so? The answer, according to the psychologists, is that people are nowhere near as rational as they think and are susceptible to unconscious biases that influence decision-making to a far greater extent than we realize.
Kahneman and Tversky discovered that people engage in two different thinking modes in their day-to-day lives, which they refer to by the nondescript names System 1 and System 2. System 1 is fast thinking, which operates automatically with little or no effort. It is highly proficient at identifying causal connections between events, sometimes even when there is no empirical basis for the connection. System 2, on the other hand, is slow thinking and involves deliberate attention to understanding details and the complex web of relationships among various components. Whereas System 1 is inherently intuitive, deterministic, and undoubting, System 2 is rational, probabilistic, and highly aware of uncertainty. Needless to say, these two ways of thinking suit very different contexts.
Both System 1 and System 2 thinking are distinctly human capabilities that have provided humanity with an immense evolutionary advantage. We are capable of developing complex intellectual structures such as mathematics, physics, and music via applications of System 2, and, thanks to System 1, humans have the unique capability to make judgments and decisions quickly from limited available information. However, in studying these two capabilities, Kahneman and Tversky found that, while we may perceive ourselves as predominantly rational System 2 thinkers, the reality is that most human decisions are based upon the more intuitive System 1.
System 1 thinking, with its inherent mental biases, is what likely guided the judgments of traditional baseball scouts. Similarly, despite their claims of following the science, mental biases may be informing the thinking of the public health experts who are shaping public policy in the current Covid-19 pandemic. And if this is so, it raises the question of whether the guidance provided by the experts is another case of humans making confident decisions from limited available information.
What We Know and Don’t Know
For insight into this question, let’s begin with the data we know and the data we don’t know. We know the number of confirmed cases among those who qualified for one of the limited available tests. We also know the number of Covid-19 hospital admissions, the number of those admissions in intensive care units (ICUs), and the number of deaths associated with the coronavirus.
We are also aware of key trends about who is most vulnerable to the invisible enemy. While people of all ages can be infected with Covid-19, those with an underlying health condition, such as diabetes, hypertension, asthma, heart disease, or obesity, are most at risk. Elderly people with a chronic illness are at particularly high risk. A very important trend that has emerged is that rarely does anyone die from the virus alone; the vast majority of coronavirus deaths involve comorbidities. This may explain why the young, who have far fewer chronic conditions, are generally not at high risk when infected with the virus.
While this information is very helpful in describing the behavior of the infection once it is confirmed, there is a critical data gap that prevents us from accurately gauging the actual threat of Covid-19 to the general population. And this missing piece of data is perhaps the most important number needed to shape public policy for effectively combating the invisible enemy: the actual number of people infected.
This number is critical because it is the foundational denominator for calculating accurate rates. Without it, we do not know the true infection rate, the asymptomatic rate, the mild-symptom rate, the severe-symptom rate, or, most importantly, the true death rate.
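To make the arithmetic concrete, here is a minimal sketch in Python, using entirely hypothetical numbers, of how the same death count yields very different rates depending on which denominator is available:

```python
# Hypothetical numbers for illustration only -- not real Covid-19 data.
deaths = 20_000               # confirmed Covid-19 deaths
confirmed_cases = 1_000_000   # people who qualified for a test and tested positive

# Rate computed against *confirmed* cases only (the biased denominator).
case_rate = deaths / confirmed_cases
print(f"Confirmed-case death rate: {case_rate:.1%}")   # 2.0%

# Suppose random-sample testing revealed that only 1 in 10 infections
# was ever confirmed; the true denominator is then ten times larger.
estimated_true_infections = confirmed_cases * 10
true_rate = deaths / estimated_true_infections
print(f"True infection death rate: {true_rate:.1%}")   # 0.2%
```

The numerator never changes; only the denominator does, which is how the confirmed-case rate and the true rate can differ by an order of magnitude.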
In a New York Times editorial in April, Professor Louis Kaplow of Harvard University urged decision-makers to take the most important step in filling this data gap: Perform large-scale national testing on a stratified random sample of the U.S. population. Until we know the actual infection rate, Kaplow argued, “We are flying blind in the fight against Covid-19.”
“Random population testing is the key to unlocking the mysteries surrounding Covid-19,” he added. It is also the key to enabling a rational System 2 solution to a very complex problem. As long as we make no effort to learn the actual infection rate and continue to monitor only the biased confirmed-case rate, we are indeed flying blind, a clear sign that our decision-makers are informed by System 1.
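As an illustration of what Kaplow’s proposal might look like in practice, here is a minimal sketch of a stratified random-sample estimate. The strata, population counts, sample sizes, and positive counts are all hypothetical, and the variance formula ignores finite-population corrections for simplicity:

```python
import math

# Hypothetical strata (here, age bands) with made-up population sizes,
# sample sizes, and positive test counts -- for illustration only.
strata = [
    # (population, sampled, positives)
    (60_000_000, 3_000, 12),    # ages 0-17
    (170_000_000, 8_000, 96),   # ages 18-64
    (50_000_000, 2_500, 45),    # ages 65+
]

total_pop = sum(pop for pop, _, _ in strata)

# Stratified estimate: weight each stratum's sample prevalence
# by that stratum's share of the total population.
prevalence = sum((pop / total_pop) * (pos / n) for pop, n, pos in strata)

# Standard error of the stratified estimator:
# Var = sum over strata of W_h^2 * p_h * (1 - p_h) / n_h
variance = sum(
    (pop / total_pop) ** 2 * (pos / n) * (1 - pos / n) / n
    for pop, n, pos in strata
)
se = math.sqrt(variance)

print(f"Estimated infection rate: {prevalence:.2%} ± {1.96 * se:.2%} (95% CI)")
```

An estimate of this kind would supply the missing denominator, so the death rate, hospitalization rate, and symptom rates could all be computed against the true number of infections rather than the biased confirmed-case count.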
Without the true denominator, we don’t know whether the death rate is 2.0 percent or 0.2 percent, which could make a big difference in how we make policy decisions. In the absence of this key data, there is understandably a natural bias on the part of the public health experts and government leaders to err on the safe side by promoting one-size-fits-all mandates.
Framing and Decision Making
Billy Beane was able to change the way he exercised leadership because he had the wherewithal to reframe his thinking and create a game-changing solution for building a baseball team. The ability to think differently made all the difference because, as Kahneman and Tversky discovered, how we frame a situation heavily influences how we decide between alternative courses of action.
The two psychologists applied the label “framing effects” to what they described as the unjustified influences of formulation on beliefs and preferences. Kahneman and Tversky noticed that people did not choose between things; rather, they chose between descriptions of things. Thus, by simply changing the framing, that is, the description of a situation, they could cause people to completely flip their attitude on how to respond to it.
For example, in an experiment conducted at Harvard Medical School, Tversky divided physicians into two groups. Each group was given statistics about the five-year survival rates for two treatments for lung cancer: surgery and radiation. While the five-year survival rates were clearly higher for those who received surgery, in the short term surgery was riskier than radiation.
The two groups were then given two different descriptions of the short-term outcomes and were asked to choose the preferred treatment. The first group was given the survival rate: the one-month survival rate is 90%. The second group was given the corresponding 10% mortality rate. Although these two descriptions are logically equivalent, 84% of physicians in the first group chose surgery, while the second group was split 50/50 between the two options.
If the preferences were completely rational, the physicians would make the same choice regardless of how the descriptions were framed. However, System 1 thinking is not rational and can be swayed by emotional words. Thus, while 90% survival sounds promising, 10% mortality is shocking. This experiment showed that physicians were just as vulnerable to the framing effect as hospital patients and business school graduates. As Kahneman observed, “Medical training is, evidently, no defense against the power of framing.”
This troubled Kahneman, who noted, “It is somewhat worrying that the officials who make decisions that affect everyone’s health can be swayed by such a superficial manipulation - but we must get used to the idea that even important decisions are influenced, if not governed, by System 1.”
Reframing the Coronavirus
When the coronavirus first erupted into our lives, it was immediately framed as a public health crisis, which is why the guiding expertise for handling the pandemic came from the public health community. But what if we had reframed the problem and treated Covid-19 not as a public health crisis but as a social system crisis? This reframing would almost certainly have given greater voice to advisors who were not public health experts.
If the primary advisors had been a diverse group of contributors, including intensive care medical professionals, social psychologists, sociologists, economists, mental health professionals, small and large business managers, and legal scholars as well as public health experts, there would have been a greater opportunity to formulate a holistic solution addressing the many concurrent dimensions of this social system crisis. Such a group almost certainly would have insisted upon performing the random-sample testing that Kaplow suggested.
It is likely that such a diverse group would have focused on balancing three goals. The first goal, of course, would be to minimize the number of deaths from Covid-19. A second goal would be to minimize the number of unintended deaths resulting from actions taken to curb the virus. And a third goal would almost certainly be to maintain social and economic cohesion to the fullest extent possible.
If we had entrusted the crafting of the strategy for what is surely the greatest social system crisis of our lifetimes to a diverse group of contributors, we likely would not have defaulted to System 1 thinking and command-and-control management. We would have been able to avoid the hazards of expert bias by enabling System 2 thinking to grapple with all aspects of a complex social problem. More importantly, our leaders would have had the opportunity to put aside simplistic one-size-fits-all responses and formulate a more intelligent solution, one that trusts the evidence of data over the opinions of experts.