Why You Should Care About Future People

If the human race lasts as long as a typical mammalian species and our population continues at its current size, then there are 80 trillion people yet to come. Oxford philosophy professor William MacAskill says it’s up to us to protect them.
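That 80 trillion figure falls out of simple arithmetic. As a rough sketch, using illustrative round numbers rather than MacAskill’s precise calculation, suppose a typical mammalian species survives about one million years, humanity is already roughly 300,000 years old, the population holds near 8 billion, and each life lasts around 70 years:

$$\frac{1{,}000{,}000 - 300{,}000 \text{ years}}{70 \text{ years per life}} \times 8 \times 10^{9} \text{ people} \approx 8 \times 10^{13} = 80 \text{ trillion future people}$$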

In his bold new book, What We Owe the Future, MacAskill makes a case for longtermism. He believes that how long we survive as a species may depend on the actions we take now.

Listen to Will’s appearance on the Next Big Idea podcast below, or read a few key highlights. And follow host Rufus Griscom on LinkedIn for behind-the-scenes looks into the show.

Rufus Griscom: Perhaps the most profound impact on your career path was an essay you read, at the age of 18, by Peter Singer, the ethicist, titled “Famine, Affluence, and Morality.” Do you want to share the thought experiment in that essay and how it affected you?

William MacAskill: The thought experiment is very simple. Imagine a man walking past a shallow pond. Let’s say he’s on his way to a business interview and wearing a very nice suit that costs several thousand pounds. While he’s walking past that pond, he sees that there’s a child drowning in it. He could run into the pond and save the child, and that child would live to see another day. However, the suit he’s wearing would be ruined. Now imagine that the man sees the child, does the calculation, and thinks, Yep, I don’t wanna lose a few thousand pounds by ruining my suit, so I’m gonna just walk on by. It seems pretty intuitive, morally speaking, that that would be very badly wrong. We think it’s just obvious that if you’re weighing the loss of a few thousand pounds against the loss of a child’s life, you are morally required to wade into that pond and save the child’s life. But here’s the twist. If we think that’s true in this case, then what’s the difference between the man who walks on by and most of us, who buy luxury goods and things that are just not necessary for our survival or even basic happiness rather than donating that money to save the lives of the extreme poor, where just a few thousand dollars can save the life of a child who would otherwise have died of malaria?

That’s the argument. I found it extremely compelling. In all my years as a philosopher, I’ve never heard a compelling counterargument. And so I think we should accept the conclusion that failing to donate when we know we could save lives by doing so is just as wrong as failing to wade into a shallow pond and save a child’s life.

Rufus: And so this thought experiment from Peter Singer changed the course of your career and your life, right? You made a pledge to live on, I think, £26,000 a year—about $30,000 a year—and give the rest to charitable causes. You’ve helped to develop this effective altruism movement, which has been remarkably successful at changing the behavior of a large number of people. One thing I find kind of beautiful about this is the power of an idea—of a piece of writing, an essay written in 1972—to change the course of a human’s life, and then, in turn, to cause a kind of copycat behavior, in a positive sense. You know, we at The Next Big Idea Club are all about this notion that ideas change people’s lives. And it seems that you have been in the business of trying to figure out how to replicate positive ideas, to create a kind of healthy contagion of good ideas. Is that accurate?

“We should accept the conclusion that failing to donate when we know we could save lives by doing so is just as wrong as failing to wade into a shallow pond and save a child’s life.”

Will: Yeah, that’s exactly right. When Giving What We Can, the organization that encourages people to pledge 10% of their income, started back in 2009, we had 23 people. It was a pretty small-scale affair. Now we have over 7,000 people who’ve committed at least 10% of their income. We’ve moved over a billion dollars to the most cost-effective nonprofits in the world, saving, by my estimate, over a hundred thousand lives. So we really have seen this argument leap from the philosophy seminar room out into the world, changing behavior.


Rufus: Your focus has shifted in recent years towards the interests of future people. Can you tell us about longtermism?

Will: So I’m still extremely committed to effective altruism and plan to do as much good as I can with my time and money. The question is just how best to do that. With all the many problems the world faces, what should we focus on? In particular, there are certain risks we currently face: the risk of nuclear war, the risk from engineered pandemics that could be far worse than even COVID-19, the risk from other new technologies like artificial intelligence. Some of these risks could get so bad, I think, that they would be utterly catastrophic not just for the present generation but for many generations to come, maybe even all future generations.

We’re familiar with this in the case of climate change. Climate change is already causing major problems today, and it will cause major problems for many generations into the future. But I came to see that this issue isn’t limited to climate change. In fact, there are many things society is currently doing that impose great risks on the present generation as well as on future generations. And so the idea of longtermism is to really focus on trying to reduce those risks, trying to steer society in a way that we avoid unnecessarily destroying civilization altogether, steering us away from oblivion, steering us away from some long-lasting dystopia, and instead steering us toward a better future where everyone is happy and healthy and free.


Rufus: Let’s talk a little more about wonderful versions of our possible future. You write, “The future could be wonderful. We could create a flourishing and lasting society where everyone’s lives are better than the very best lives today.” How would we arrive at that? What would that look like?

Will: So I make the argument that people’s lives in the future might be much, much better than even the best lives today by comparing the best lives of, say, 300 years ago with the best lives today. Think of some rich, aristocratic man in England in 1700. He would not have been able to travel. He would’ve had no pain relief. He could easily have died of cholera or syphilis. He would’ve had to have surgery without anesthetic. Even his diet would’ve been pretty boring compared to the diets of basically any of the listeners here. If he was gay, he couldn’t have loved openly. He wouldn’t have lived in a democracy either. And this is all for someone who was very well off in 1700, to say nothing of people in poorer countries, women who didn’t have the benefits of feminism or access to university education, or the majority of people in the world who were subject to some form of forced labor.

“The idea of longtermism is to really focus on trying to steer society in a way that we avoid unnecessarily destroying civilization altogether.”

What that shows me is that if we look more speculatively a few hundred years out, and we think about some fixed-up version of our current world where everyone has lives as good as the best-off people in countries like Sweden or Costa Rica, I think that actually dramatically underestimates how good life in the future could be if we manage to sustain continued moral progress and continued technological progress.


Rufus: When we think of threats to the future of humanity, like global warming, biohacking, and artificial general intelligence, I think global warming is probably the concern that is most prevalent among people I know. Do you think that concern is misplaced?

Will: Firstly, I want to say that the environmentalist and climate change movement has been just outstanding in my view. The fact that people are taking science seriously, are concerned about future generations, and have had really major successes in terms of steering the world onto a better trajectory with respect to our carbon emissions is very inspiring to me.

In What We Owe the Future, I focus on issues like AI and pandemics because they get a tiny fraction of our resources and attention. Global spending on climate change is on the order of a few hundred billion dollars per year. How much gets spent on AI safety? It’s more like one or two hundred million dollars per year. That’s a factor of a thousand difference.
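To sanity-check that factor-of-a-thousand claim, here is the arithmetic with the round figures Will cites, treated as orders of magnitude rather than precise budget numbers:

$$\frac{\sim \$2 \times 10^{11} \text{ per year (climate)}}{\sim \$2 \times 10^{8} \text{ per year (AI safety)}} = 10^{3}$$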

I wouldn’t say that concern for climate change is misplaced; it’s just that I think a sane world would have solved the climate problem decades ago. And I think we should generalize from this. We should realize, Oh, it’s not just about carbon emissions. Actually, there are very many ways in which society is barreling toward an uncertain future that perhaps could be very good but also could be very bad indeed. It could be oblivion. It could be a long-lasting dystopia.

“Fifty or a hundred years ago, a long-term perspective would have alerted us to the risks from climate change.”

Rufus: I love that comment. We should learn from the case of global warming, because a sane world would have solved these problems decades ago. We need, as a species, to be able to act to protect ourselves against risks about which we’re not a hundred percent certain.

Will: Exactly. Sometimes people ask me, “Oh, well, I’m skeptical of this long-term stuff. Suppose I’d had a long-term perspective in the past. What would I have done differently?” And the answer is that fifty or a hundred years ago, a long-term perspective would have alerted us to the risks from climate change. It was very speculative, but as early as 1896, climate scientists had a quantitative estimate of the warming that CO2 would cause, and it wasn’t wildly off. We could have been building a movement concerned about carbon emissions many decades earlier. The large majority of carbon emissions have been emitted in the last 30 years. Imagine if we’d gotten started moving society off of carbon emissions in the fifties and sixties. We could be in a much, much better place now.

I think the same is true with respect to developments in artificial intelligence and the engineering of new viruses. This work is often more speculative than other things we could be doing. However, we are able to get in at an early stage and steer the course of these technologies. And that means that even though we’re acting on somewhat more speculative evidence, the impact we can have is absolutely huge, because we’re getting in before the problems have arisen. We can help put sensible regulation in place and develop safe technology ahead of time, such that we never have to deal with those problems in the first place, rather than dealing with an entrenched lobby or with catastrophes after they’ve occurred.
