David Spiegelhalter is emeritus professor of statistics at the University of Cambridge, a former president of the Royal Statistical Society, and the author of the best-selling book The Art of Statistics. Knighted in 2014 for his services to medical statistics, he lives in Cambridge, UK.
What’s the big idea?
In The Art of Uncertainty, renowned statistician David Spiegelhalter reveals how understanding risk and probability can help us make smarter decisions in an unpredictable world. From medical choices to artificial intelligence, he shows how data can clarify the unknown, helping us distinguish real patterns from mere coincidences. Through storytelling and real-world examples—including historic miscalculations and surprising statistical truths—Spiegelhalter makes uncertainty less daunting and more navigable. The Art of Uncertainty teaches us how to embrace the unknown while staying humble about what we can never fully predict.
Below, David shares five key insights from his new book, The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk, and Luck. Listen to the audio version—read by David himself—in the Next Big Idea App.
1. Uncertainty is a relationship.
We all have to live with uncertainty. We may not know what will happen in the future, what is happening at the moment, what has happened in the past, or why it occurred. Uncertainty has been described as the “conscious awareness of ignorance,” which emphasizes that uncertainty is a relationship between ourselves and the outside world. It has a subject, an individual or perhaps a group of people, and an object, something we are uncertain about.
Suppose I take a coin out of my pocket and ask you, “If I flip this coin, what’s the probability it will come up heads?” You will probably say, “Fifty percent.” Then I flip it, cover it up, but look at it without showing you, and ask, “What’s the probability that this is heads?” At this point, people often become quite reluctant to give an opinion, but eventually they might concede, “Fifty-fifty.”
I’ve done something quite major here. Before I flip the coin, we have “aleatory” uncertainty—dependent on chance, which you can’t know. After I flip the coin and cover it up, we have “epistemic” uncertainty—what you don’t know. In each case, our uncertainty is specific to us; I know how the coin landed, but you don’t.
When I do this in front of audiences, I often use a two-headed coin, so even before I flip it, I know what the answer will be. This simple example demonstrates that any assessment of uncertainty, in particular any use of probability, depends on our knowledge, our personal judgments, our assumptions, and our trust.
2. We should try to put our uncertainty into (maybe rough) numbers.
We often use words like “perhaps,” “maybe,” and “likely” in everyday conversation. However, such vague language can be very dangerous when making important decisions. Back in 1961, shortly after his inauguration, President Kennedy learned about the CIA plot to invade Castro’s Cuba at the Bay of Pigs with 1,500 Cuban exiles. Kennedy commissioned an intelligence report by the Joint Chiefs of Staff to evaluate the plan, and they put the odds of success at only 30:70, meaning a 70 percent chance of failure.
But in the final report that went to Kennedy, these numbers were replaced by the phrase “a fair chance of success,” which its authors intended to mean “not very good.” Kennedy approved the raid, and it was a complete fiasco. Of course, the report was not the only reason for approval, but the military later regretted not having been more precise about their doubts.
“In the UK, there’s a fixed scale for translating words into probability ranges.”
Modern intelligence services have learned their lesson. In the UK, there’s a fixed scale for translating words into probability ranges. If the term “likely” is used in an intelligence report, it must mean between 55 percent and 75 percent probability. A 30 percent probability of success, as in the Bay of Pigs, would be termed “unlikely.” Similar scales are used in climate science to translate between words and numbers.
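A fixed scale like this is, in effect, a lookup table from probability to an agreed verbal term. Here is a minimal sketch in Python. Only the “likely” band (55–75 percent) and the fact that 30 percent counts as “unlikely” come from the text above; the other bands and their labels are hypothetical placeholders, not the actual UK scale.

```python
# Illustrative word-to-probability scale. Only the "likely" band (0.55-0.75)
# and the classification of 0.30 as "unlikely" are from the text; the other
# bands are invented for the sketch.
BANDS = [
    (0.00, 0.10, "remote chance"),          # hypothetical band
    (0.10, 0.40, "unlikely"),               # 30% falls here, as in the text
    (0.40, 0.55, "realistic possibility"),  # hypothetical band
    (0.55, 0.75, "likely"),                 # band given in the text
    (0.75, 1.00, "highly likely"),          # hypothetical band
]

def term_for(p: float) -> str:
    """Translate a probability into the agreed verbal term."""
    for lo, hi, term in BANDS:
        if lo <= p <= hi:
            return term
    raise ValueError(f"probability out of range: {p}")

print(term_for(0.30))  # "unlikely", like the Bay of Pigs assessment
print(term_for(0.60))  # "likely"
```

The point of such a table is not the exact boundaries but that everyone writing and reading a report agrees on them in advance, so “likely” cannot quietly drift into “a fair chance.”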
Of course, not everyone will agree on the chances. In 2011, President Obama had multiple teams assess the probability that Osama bin Laden was in the compound in Abbottabad, and these estimates ranged from 30–40 percent to 80–90 percent. After a lengthy discussion, Obama determined it was essentially 50:50 and approved the raid. I believe it was beneficial for him to hear the significant variation in opinion among his expert advisors.
3. We should acknowledge the role of luck in our lives.
Luck has been called the “operation of chance, taken personally.” I don’t believe it’s an external force in our lives, but it’s a useful way of describing events that are unpredictable and outside our control, yet have an impact on us.
Philosophers have identified three main types of luck. The first, which I consider the most important, is constitutive luck—who you are born as, when in history, your genes, and your early environment, over which you had no control, but that has a huge influence on the rest of your life. Then there’s circumstantial luck—being in the right place at the right time or the wrong place at the wrong time. Finally, there’s outcome luck—just how it happened to work out at a particular instant.
I use my grandfather as an example. He had the bad constitutive luck of being born at just the time that meant he would serve in the British Army in World War I. Then he had the terrible circumstantial luck of being appointed Brigade Gas Officer in the Ypres salient in January 1918, just after the Battle of Passchendaele. This was a very dangerous job, and his diary records “narrow escape,” “lucky to get through in time,” and so on. He lasted three weeks in the job, and then on January 29th, 1918, a shell exploded close to him, and he was, as he later said, “blown up.”
But he had the extraordinarily good outcome luck of not being seriously injured, and of being taken out of the front lines for the rest of the war. His unit suffered terribly later in 1918, and as a Second Lieutenant, he would have been first over the top in counterattacks, blowing his whistle and encouraging his men to follow him. Had he stayed, almost certainly, I would not be here now.
4. Have humility about statistical models.
A lot of work in science, climate change, risk analysis, gambling, and so on, is based to some extent on mathematical and statistical models. These are intended to represent the important parts of reality, but it is vital to remember that they are the map, not the territory. They’re always inaccurate. As statistician George Box famously said, “All models are wrong, but some are useful.”
It’s important to remember this when conducting statistical analysis, where the output from statistical packages provides us with estimates, confidence intervals, P-values, and so on. However, these are all, to some extent, incorrect, as they are based on statistical models that make assumptions which are never precisely correct. Nevertheless, many of the analyses remain useful.
“These are all, to some extent, incorrect, as they are based on statistical models that make assumptions which are never precisely correct.”
During the pandemic, the UK had eight statistical teams attempting to estimate the current average value of R, the average number of people infected by someone with COVID-19. They employed 12 different models that produced somewhat different results; many of their uncertainty intervals did not even overlap, so they could not all be correct. These intervals were too narrow because each assumed its own model was accurate. Fortunately, all the teams met weekly and combined their results to produce a composite answer with appropriately wide uncertainty, reflecting the differences between the models. This was published alongside the 12 individual estimates.
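The reason the composite interval was wider than any individual one can be sketched in a few lines: combine the point estimates, then add the between-model spread to the within-model uncertainty. The numbers below are invented for illustration, and this simple variance-pooling is my own sketch of the idea, not the teams’ actual method.

```python
# Sketch: combining interval estimates from several models so the composite
# reflects between-model disagreement. Data are invented for illustration.
import statistics

# Hypothetical (point estimate, interval half-width) for R from four models
estimates = [(0.9, 0.05), (1.1, 0.04), (1.3, 0.06), (0.8, 0.05)]

points = [p for p, _ in estimates]
pooled = statistics.mean(points)

# Each model's own interval captures only within-model uncertainty; when the
# intervals don't even overlap, the spread BETWEEN models must be added too.
within = statistics.mean(hw ** 2 for _, hw in estimates) ** 0.5
between = statistics.stdev(points)
composite_halfwidth = (within ** 2 + between ** 2) ** 0.5

print(f"composite R = {pooled:.2f} +/- {composite_halfwidth:.2f}")
```

With these invented numbers, the composite half-width is several times larger than any single model’s, which is exactly the honest admission of uncertainty the teams’ published composite made.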
This was an admirable demonstration of transparency in science. It also showed the value of having multiple independent teams tackle the same problem, just as Obama did when judging whether bin Laden was in the compound. Therefore, we need humility about statistical models.
5. Acknowledging uncertainty is part of being trustworthy.
Experts want to be trusted, but philosopher Onora O’Neill argues that this is the wrong aim. Instead, they should be trying to demonstrate trustworthiness. This is a duty in itself and may, of course, lead to increased trust. But how do we show our trustworthiness?
Of course, we need to be honest but also balanced, describing the pros and cons of any decision or the potential benefits and harms of any health-care intervention. Crucially, we need to look inside ourselves and decide whether we’re trying to persuade someone, manipulate them into thinking or doing something, or genuinely inform them, empowering them to make better decisions.
Another aspect of being trustworthy is to be open about your uncertainty and to clarify the quality of evidence, whether it is low, medium, or high. Finally, we should strive to pre-empt misunderstandings by explaining what the evidence means and what it does not mean, with the goal of preventing misinformation. So, for example, I believe we should not say vaccines are “safe and effective,” but that they are safe enough and effective enough to give to some people in some circumstances. We put this into practice when advising the UK government on their communication about the side effects of the AstraZeneca vaccine during the COVID pandemic.
Randomized trials of different modes of communication have shown that if we use a balanced format—trying to inform rather than persuade—we increase trust in those who are skeptical of, say, vaccines. Another way of looking at this is that by giving a one-sided persuasive message, authorities are actively decreasing trust in the very group they might want to reach.
To listen to the audio version read by author David Spiegelhalter, download the Next Big Idea App today: