The Primacy of Doubt: From Quantum Physics to Climate Change, How the Science of Uncertainty Can Help Us Understand Our Chaotic World



Tim Palmer is a Royal Society Research Professor in the Department of Physics at the University of Oxford who pioneered the development of operational ensemble weather and climate forecasting. He is a Fellow of the Royal Society, an International Member of the US National Academy of Sciences, a recipient of the Institute of Physics Dirac Gold Medal and a lead author of the Intergovernmental Panel on Climate Change. His PhD was in general relativity theory.

Below, Tim shares 5 key insights from his new book, The Primacy of Doubt: From Quantum Physics to Climate Change, How the Science of Uncertainty Can Help Us Understand Our Chaotic World. Listen to the audio version—read by Tim himself—in the Next Big Idea App.


1. Whoa, where did that come from?

Systems can be stable and predictable much of the time and then suddenly become dangerously unpredictable. The economy proceeds from year to year with growth and inflation under control, until suddenly an unpredicted global financial crash brings economic mayhem lasting years. Similarly, global health gradually improves year on year, until out of the blue a pandemic strikes, killing millions. The weather presents the most vivid example of this intermittent phenomenon. Most of the time it’s boring and predictable. Then, out of the blue, a devastating extreme weather event hits with an intensity that was not predicted at all.

This behavior is a feature of chaos theory. Most people have heard of the butterfly effect: small, uncertain causes can have big, unpredictable effects. What is less well known is that the butterfly effect can be intermittent. Most of the time, the flaps of butterflies’ wings do not affect the weather; occasionally, however, they do. With my colleagues, I developed a way to tame this chaotic demon: ensemble prediction. Now standard in virtually all weather services, the technique runs the weather model 50 or more times with similar but not quite identical starting conditions, and with small amounts of noise added to the model equations. When the weather is in a benign, predictable state, all 50 members will show similar types of weather and the meteorologist can be confident in the prediction.

However, when the 50 members diverge rapidly from one another, with some showing extreme weather, the meteorologists know that we are heading for an intermittent, unpredictable moment with quantifiably possible disastrous consequences. Disaster relief agencies use these ensemble predictions to decide when to take anticipatory action, providing regions at risk with emergency food, water, shelter, and medicine.
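The idea can be illustrated with a toy model. In this sketch (my own construction, not an operational forecast system), the logistic map stands in for the weather model; each ensemble member starts from the observed state plus a small random error, and the spread of the members after a given number of steps measures how predictable the current situation is:

```python
import random

def step(x, r=3.97):
    """One step of the logistic map, a standard toy chaotic system."""
    return r * x * (1.0 - x)

def ensemble_forecast(x_obs, n_members=50, obs_error=1e-4, n_steps=30, seed=0):
    """Run the 'model' n_members times from slightly perturbed initial states."""
    rng = random.Random(seed)
    members = [min(max(x_obs + rng.gauss(0.0, obs_error), 0.0), 1.0)
               for _ in range(n_members)]
    for _ in range(n_steps):
        members = [step(x) for x in members]
    return members

members = ensemble_forecast(0.4)
mean = sum(members) / len(members)
spread = (sum((x - mean) ** 2 for x in members) / len(members)) ** 0.5
# Small spread: confident forecast. Large spread: an unpredictable moment.
print(f"ensemble mean {mean:.3f}, spread {spread:.3f}")
```

When the spread is small, the members agree and the forecast is trustworthy; when it is large, the honest answer is a wide range of possible outcomes, exactly the warning signal the disaster agencies act on.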

The ensemble technique is now being applied to economics, health, and conflict prediction, though in these fields it is still at a preliminary stage of development. Perhaps in a few years we’ll no longer be faced with unpredictable “Whoa, where did that come from?” moments.

2. Climate change is not an issue of belief or disbelief.

If you were told there’s a 70 percent chance it will rain tomorrow, do you believe it will rain or do you believe it will be dry? It’s a silly question, because you neither believe nor disbelieve. Instead, you weigh whether a 70 percent chance of rain is large enough to make you adjust your plans. Similarly, when scientists predict a 40 percent chance that, with continued burning of fossil fuels, the temperature of the globe will increase by at least four degrees in the next century, do you believe them or not? This is also a silly question.

“It’s about deciding whether the estimated risk is large enough to warrant reducing the risk.”

The issue is not about belief or disbelief. It’s about deciding whether the estimated risk is large enough to warrant reducing the risk. No beliefs are needed to answer this question, but how to do it in practice? One idea is to use the concept of the value of a statistical life (VSL). We can estimate VSL by looking at how much extra money individuals would accept to do a job involving a significantly increased risk of death or disability. The absolute value of VSL in dollars depends on the country the individual lives in: someone in a poorer country will typically accept less to do a dangerous job than someone in a richer country. Indeed, a multiplier of roughly 100 times per capita GDP is commonly accepted as a country-independent rule of thumb for VSL: about $10 million for somebody in the U.S. and about $200,000 for someone in Bangladesh.

By contrast, it’s generally accepted that the cost of mitigating climate change will be around 2 percent of GDP, or for an individual, 1/50th of the per capita GDP. If we accept that the weather associated with a four-degree warmer world would be such a hell on earth as to be equivalent to incurring a serious disability, then a 40 percent probability of such climate change makes it overwhelmingly worth taking action to reduce the risk. Forty percent of 100 is much greater than one over 50. Of course, making objective decisions about whether to take preventative measures depends critically on the reliability of estimates. This, in turn, depends on the accuracy of the models used. There is in fact a need to improve the accuracy of climate models.

If countries pooled resources to fund dedicated supercomputing, it would transform our ability to simulate climate change at the regional level and give greater certainty to estimates of risk.
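The back-of-envelope comparison above can be written out explicitly. All quantities here are in units of one person’s per capita GDP, using the probability and cost figures quoted in the text:

```python
# All quantities are in units of one person's per capita GDP.
vsl = 100.0                   # value of a statistical life: ~100x per capita GDP
p_four_degrees = 0.40         # estimated chance of at least 4 degrees of warming
mitigation_cost = 1.0 / 50.0  # ~2 percent of GDP, per person

# Treat a four-degree world as a VSL-scale loss, as the argument above does.
expected_loss = p_four_degrees * vsl

print(expected_loss)    # 40.0
print(mitigation_cost)  # 0.02
print(round(expected_loss / mitigation_cost))  # 2000
```

The expected loss exceeds the cost of prevention by a factor of 2,000, which is why no belief about whether warming *will* happen is needed to justify acting.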

3. Noise is not a nuisance.

Noise is typically thought of as a nuisance: we seek ways to maximize the signal-to-noise ratio by reducing noise, the very epitome of uncertainty, to a minimum. However, for many non-linear systems, noise is a resource that can be used constructively; it can actually strengthen a signal. A non-linear system is one where outputs are not in proportion to inputs. If you win a million dollars, you’ll be very happy. But if you win $10 million, you will likely not be 10 times as happy, maybe only twice as happy. The climate system is a non-linear system.
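This signal-strengthening effect is known as stochastic resonance, and a toy version takes only a few lines. In this sketch (an illustration I have constructed, not a model from the book), a simple threshold detector stands in for the non-linear system: a periodic signal too weak to cross the threshold on its own becomes detectable once noise is added:

```python
import math
import random

def detections(noise_level, threshold=1.0, amp=0.8, n=2000, seed=1):
    """Count how often a noisy, sub-threshold signal crosses the threshold.

    The periodic signal alone (amp < threshold) can never fire the
    detector; added noise lets the signal's peaks push it over the top."""
    rng = random.Random(seed)
    count = 0
    for t in range(n):
        signal = amp * math.sin(2.0 * math.pi * t / 100.0)
        if signal + rng.gauss(0.0, noise_level) > threshold:
            count += 1
    return count

print(detections(0.0))  # no noise: the signal is never detected
print(detections(0.3))  # moderate noise: the signal becomes detectable
```

With no noise the detector never fires; with moderate noise it fires mostly near the signal’s peaks, so the noise has made the hidden signal visible. (Too much noise, and the firing decorrelates from the signal again.)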

The brain is a nonlinear system too, running on just 20 watts—enough to power a light bulb, and a million times less than is needed to power a supercomputer. When these 20 watts are spread across 80 billion neurons in the brain, the signals propagating along the neurons must compete with the thermal noise of the warm environment in which our cognitive processes operate. As a result, we have eureka moments where the noise helps our cognitive processes make connections that would never occur in the deterministic environment of a traditional computer. As many Nobel laureates will attest, eureka moments frequently occur when the recipients are relaxing, doing nothing in particular. States of relaxation are likely the ones in which cognitive processes are most susceptible to noise. Put this way, we may be a creative species because of the constructive effects of noise in the brain.

“We have eureka moments where the noise helps our cognitive processes make connections that would never occur in the deterministic environment of a traditional computer.”

Computers are designed to give the same answer every time we perform a particular computation, but such reproducibility costs energy. Perhaps this extra energy can be put to better use. Suppose that for a unit of energy, you could choose between a computer which can do one reproducible calculation or 50 approximate, irreproducible calculations. For many scientific computations, the latter is preferable. Perhaps AI will become truly intelligent once we utilize noise constructively in computers. As Alan Turing said, “If a machine is expected to be infallible, it cannot also be intelligent.”
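One concrete way to spend less energy per calculation is to compute at low precision, and one idea studied in low-precision scientific computing is stochastic rounding. The sketch below is my own toy, with an absurdly coarse precision of 0.1: deterministic rounding silently loses a signal that unbiased, noisy rounding preserves on average:

```python
import math
import random

STEP = 0.1  # pretend our hardware can only store multiples of 0.1

def round_deterministic(x):
    """Round to the nearest representable value, the conventional way."""
    return round(x / STEP) * STEP

def round_stochastic(x, rng):
    """Round up with probability equal to the leftover fraction, so the
    expected result equals x: the rounding noise is unbiased."""
    lo = math.floor(x / STEP) * STEP
    frac = (x - lo) / STEP
    return lo + STEP if rng.random() < frac else lo

rng = random.Random(42)
det = stoch = 0.0
for _ in range(1000):  # true sum: 1000 * 0.04 = 40
    det = round_deterministic(det + 0.04)    # 0.04 always rounds away
    stoch = round_stochastic(stoch + 0.04, rng)

print(det)    # stuck at 0.0: the increments vanish every time
print(stoch)  # scattered around the true sum of 40
```

Each individual noisy run gives a slightly different, irreproducible answer, but the answers scatter around the truth instead of all being wrong in the same way, which is exactly the trade Turing’s remark invites.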

4. Re-imagining the laws of physics for our holistic universe.

Quantum mechanics, our best theory for the physics of atoms and elementary particles, is typically considered counterintuitive. Take the notion of uncertainty. Next week’s weather is uncertain because we can’t observe every butterfly in the world. Uncertainty in quantum mechanics appears differently. According to quantum mechanics, uncertainty in the properties of atoms does not arise because we can’t observe them precisely, rather it arises because the laws of physics on the atomic scale are inherently uncertain. Einstein thought that this idea was preposterous, famously claiming God does not play dice.

In the half century since Einstein’s death, the physics of elementary particles has grown by leaps and bounds, culminating 10 years ago with the discovery of the Higgs boson. All of this physics is based on quantum mechanics. As a result, Einstein’s concerns about quantum mechanics have been dismissed as fundamentally misguided, and most physicists believe that quantum uncertainty and chaotic uncertainty are different things. On the other hand, in the last 10 years our understanding of fundamental physics has rather stalled. We still have no idea what dark matter is. Neither do we know the origin of the dark energy which seemingly causes the universe’s expansion to accelerate. Above all, we still haven’t synthesized quantum and gravitational physics into a unified whole. Some physicists believe we must ask whether our most basic assumptions about quantum physics are right after all.

Maybe the first place to look is a notion that has underpinned physical theory for centuries: that to understand the building blocks of nature, we have to look at smaller and smaller scales (molecules, atoms, nuclei, protons, and finally quarks). Philosophers call this idea methodological reductionism.

“The large scale nature of the universe may be just as important for understanding the structure of quarks, as quarks are for understanding the large scale structure of the universe.”

However, chaos theory provides a more holistic view of systems, one where methodological reductionism fails. If the universe can be thought of as a chaotic system, then the implications are profound. The large scale nature of the universe may be just as important for understanding the structure of quarks, as quarks are for understanding the large scale structure of the universe. We may need to reimagine the laws of physics to understand the nature of our holistic universe.

5. The mystery of free will.

The question of whether humans have free will must surely be the oldest problem in philosophy. If the laws of physics are deterministic, then by definition we don’t have the freedom to have done otherwise. What we do is determined by what the universe was like yesterday and what happened yesterday is determined by what happened the day before. This logic implies that all our decisions were determined by the cosmological initial conditions at the time of the Big Bang. The problem is that this seems to absolve us from moral responsibility. Does the judge have no alternative than to free the convict who says that if he hadn’t murdered his wife, then his actions would have been inconsistent with the cosmological initial conditions?

It hardly solves the problem to suppose that the laws of physics are indeterminate. Now the crime was not predetermined, but the convict can again claim he had no control. Indeterminate randomness doesn’t explain moral responsibility either.

There is, however, an alternative idea from chaos theory. Chaos theory explains why many of the systems in our daily lives are unpredictable. But underpinning chaos theory is a timeless geometric structure known as a fractal. A fractal looks much the same however far you zoom in to reveal its ever smaller structure. Meteorologist Ed Lorenz discovered the fractal geometry of chaos in 1963—one of the greatest discoveries of 20th century science.
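Lorenz’s 1963 system is simple enough to integrate in a few lines. This sketch uses crude forward-Euler time-stepping (my simplification; serious work would use a higher-order scheme) to show the butterfly effect on his attractor: two trajectories starting a hundred-millionth apart end up in thoroughly different states:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz's 1963 equations (crude, but enough
    to illustrate sensitivity to initial conditions)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)  # differ by one part in a hundred million
for _ in range(8000):       # integrate to t = 40
    a, b = lorenz_step(a), lorenz_step(b)

gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(gap)  # the tiny initial difference has been enormously amplified
```

Plotting x against z for either trajectory traces out the famous butterfly-shaped attractor: the timeless fractal geometry on which both unpredictable trajectories live.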

Suppose the universe evolved on a fractal. One consequence would be that the laws of physics would be unpredictable, but not indeterminate. However, the timeless nature of the fractal means that the Big Bang initial conditions would be no more fundamental than the state of the universe when the murderer committed his crime. In a self-referential way, this means that the murderer’s actions determine the structure of the fractal geometry as much as the fractal geometry determines the actions of the murderer. Going one step further, if the laws of physics describe this fractal geometry, then the murderer’s actions contribute as much to the laws of physics as the laws of physics contribute to the murderer’s actions. From this perspective, the concepts of determinism and moral responsibility can mutually coexist. The judge can finally send the convict to prison without worrying about metaphysical implications.

To listen to the audio version read by author Tim Palmer, download the Next Big Idea App today.
