Humans Are Great at Arguing but Bad at Reasoning. Julia Galef Explains Why.

Julia Galef is the president and co-founder of the Center for Applied Rationality, a Berkeley, California-based non-profit whose workshops teach participants to apply the lessons of cognitive science and rationality to their lives. Julia is also a writer, speaker, and co-host of the podcast Rationally Speaking. Her work has appeared in Slate, Science, Scientific American, Popular Science, and more. Recently, she joined Heleo’s Mandy Godwin for a discussion about motivated reasoning, political polarization, and how to use rational thinking to achieve your goals.

Mandy: You define rationality a bit differently than most people conventionally think of it. Could you talk about it a little?

Julia: The colloquial definition of rationality, the way that most people actually use the word, is to mean “whatever I happen to believe.” The colloquial definition of irrational is “whatever disagrees with me.” Which is neither very useful—because we already have words for those things—nor the way that academics use the word rationality.

The way that the word is used in fields like cognitive science, philosophy, and decision theory is that it’s broken down into two different types. One is epistemic rationality, which is a mode of thinking about and incorporating evidence about the world that makes your beliefs more accurate over time. The absolute, pure truth about the world—we’re never going to quite get there, of course, we’re never going to have 100% confidence in our ideas—but just making stuff up out of thin air is less likely to give you an accurate view of the world than collecting information and talking to experts and testing your beliefs. That’s epistemic rationality.

The other type of rationality that academics talk about is instrumental rationality, which is doing things that are likely, over time, to get you closer to your goals. I know the word “goals” tends to be used for things like professional success, earning money, and being more productive, but your goal could be anything. It could be having fun, providing for the people you care about, learning about the world. Whatever your goal is, instrumental rationality is defined as taking actions that efficiently move you towards that goal.

Mandy: That makes sense that those would be separate things.

Julia: They’re related in interesting ways, but they’re definitely distinct enough to deserve different labels.

Mandy: What got you into thinking about it in the first place?

Julia: It’s such a hard question, especially because I’m very reluctant to put too much weight on any one causal story. I can tell plausible stories about my parents, for example: they’re both more rational than most. Even when I was a kid, after we had an argument about rules or bedtimes or homework, they would sometimes come back and say, “You know, we’ve thought about it, and we thought you made a couple of good points, and we’ve changed our minds.” Which was so impressive to me as a seven-year-old, and inspired me to want to be as much like that as I could.

Beyond that, my specific career path now is focused on how we can get better at rationality in areas where it really matters.

Mandy: Regarding instrumental rationality and trying to help people reach those goals, how have you started to apply that in people’s lives?

Julia: A few years ago I co-founded a non-profit called the Center for Applied Rationality. We’re based in the Bay Area in Berkeley, California. We’ve been running workshops aimed at helping people apply this theory and cognitive science of decision making to their lives, to get better at noticing the kinds of biases that are built into everyone’s brains, noticing how they’re affecting our decision making and practicing some ways of trying to overcome those biases.

People who come to our workshops are generally focused on things like choosing a career, or making a career change. Should I quit my job? Should I start a start-up? Should I go to grad school? Should I leave grad school? As well as personal things, like, how do I deal with social anxiety? How can I find a relationship that’s satisfying? How do I deal with the arguments that I keep getting into with my parents? That kind of thing.

The other big topic that we’re focused on is how we can improve the world. How can we make better decisions to have a positive impact on the world? We give a lot of scholarships to people who are either currently doing, or planning to do, something that could have a large, positive impact on the world.

Mandy: I watched your TED Talk on scout versus soldier mindset. That was interesting in how we don’t often think of the way that emotions are tied into the decision making process as something rational. Can you talk a little bit about the scout/soldier mindset and what you’ve gleaned from that?

“We still form the same intuitive, gut judgments we always did, but now we have reason as a tool to convince the other apes around us that we’re right and the other guy is wrong.”

Julia: I think probably the most important cognitive bias for us to be aware of is what scientists call motivated cognition, and what I call soldier mindset. This is the phenomenon where we have all of these unconscious motivations. We want the respect of other people, we want to think well of ourselves, we want to not have to be stressed out. These unconscious motivations have a powerful effect on the way that we reason, but it’s in the background. We don’t even notice.

Let’s say I’m unhappy at my job and I’m thinking, “What should I do about that?” I may very well have had this unconscious motivation not to be a quitter because I’ve grown up thinking that quitting things makes you a bad person. Even if I’m not consciously aware of that, I’m going to reject any arguments why it might be a good idea to quit my job.

It will feel like I am reasoning very objectively. You can show me an article about how the job market in data science is booming and how I would have lots of opportunities if I left my current job that I hate, and I can look through that article and find some reason to reject it. “But this author didn’t talk to enough people,” or, “This was written two years ago and it might be different now.” I can come up with some justification, but that justification isn’t my real reason for rejecting the article. The real reason is unconscious, that I’m afraid of being a quitter. This kind of mindset affects all kinds of reasoning, including the way that we think about politics, religion, or other ideologically charged topics.

I call it a soldier mindset because it’s very similar to the way a soldier approaches other soldiers on the battlefield. Some ideas are friendly soldiers, they’re on our side, and we want to support them, defend them, and help them win. Other ideas are enemy soldiers and we’re motivated to shoot them down, attack them, and defend ourselves from them.

As an alternative, I advocate something that I call scout mindset. The scout also cares about his or her cause, but the scout is motivated not to attack or defend, but to survey the landscape as accurately as possible. Are there bridges to cross the river? How strong is the enemy? How strong are their fortifications? How does the size of our army compare to theirs?

You’re going out and gathering information, and you’re motivated to try to understand the situation as accurately as possible. That is the mindset that is much more conducive to making good decisions, whether for your life or for the world.

Mandy: It seems as though the soldier mindset still contains a logic, but it’s a logic that’s within a certain frame that you want to exist.

Julia: Right, you’ve written the bottom line already, and now you’re just trying to find a vaguely logical way to get to that bottom line. Another way to think about it would be what a lawyer does.

The lawyer’s role is not to figure out what the truth is, the lawyer’s role is to find the most convincing-sounding arguments for whatever the lawyer’s already committed to defending. There’s an interesting theory about why our brains would have evolved to have that approach to reasoning. It’s called the argumentative theory of reasoning, and it was popularized by a couple of cognitive scientists named Dan Sperber and Hugo Mercier.

They were trying to address this mystery in cognitive science and behavioral economics: humans evolved this capacity for reasoning that animals, by and large, don’t have. Generally, new capacities evolve because they were useful in some way. You evolve better vision if you’re a hawk because that gives you an evolutionary advantage. It makes it easier for you to find food and avoid obstacles. You would think that humans evolved this capacity to reason about the world, to think consciously and not just with our lizard brains, because it was useful to us. It helped us make better decisions, right? At least that was the assumption that everyone has traditionally had.

If that’s true, it’s weird that we are so bad at reasoning. We seem to not use reason to try to figure out what the right answer is, but instead use it to defend something that we already believe. What Sperber and Mercier suggested is that reason did not evolve because it was useful in helping us make better decisions. Instead, it evolved to help us defend our views to our fellow tribe members. It evolved, basically, to help us win arguments. We still form the same intuitive, gut judgments we always did, but now we have reason as a tool to convince the other apes around us that we’re right and the other guy is wrong.

Viewed in that light, our tool is working perfectly. It’s helping us generate all these convincing reasons why we’re right—but we’ve made the mistake of thinking that it’s supposed to help us arrive at the truth. It was never intended to do that.

“They’ll tell people about a cognitive bias, and then see if people avoid that bias when they’re answering questions on a survey. I’m not surprised it doesn’t work, for the same reason that telling people about proper nutrition and exercise doesn’t cause them immediately to eat lots of vegetables and go to the gym every day.”

Mandy: Arrival at truth is an evolutionary accident.

Julia: Basically. I’m not going to say that it’s unimportant to be able to be convincing and compelling to your peers. I understand why that’s valuable, but I think it’s also valuable to be able to think clearly and objectively, even in the privacy of your own head. Being aware that your built-in capacity for reason is not necessarily optimized for doing that is important.

Mandy: A lot of different fields of study seem to be converging on cognitive biases right now. There’s the rationality approach, the behavioral economics approach, the biology approach. Do you feel like this is a particularly important moment for looking at human cognitive biases?

Julia: I feel like cognitive biases have come into their own as an important field in the last decade or two. To be clear, they’ve been a central focus of a lot of psychology, of cognitive science, for many decades. They only started really entering the mainstream, and propagating to other fields, like economics, in maybe the last twenty years.

Now we’re at the point where large chunks of the public are aware of cognitive biases and they’re interested in how to improve their own decision making. People are really ready for some solutions, and finding solutions is harder than identifying the existence of the problem.

Mandy: What have you seen as the preliminary look into what we should be doing about it?

Julia: There’s not that much research on this yet. The research that exists is mixed. A lot of studies have found that it’s really hard to “fix” cognitive biases, but the interventions that academics have tried, for the most part, have been pretty small scale. They’ll tell people about a cognitive bias, and then see if people avoid that bias when they’re answering questions on a survey. I’m not surprised it doesn’t work, for the same reason that telling people about proper nutrition and exercise doesn’t cause them immediately to eat lots of vegetables and go to the gym every day. Ingrained habits of mind are not going to be overcome with a little bit of extra knowledge.

To have any real hope of change, we need a couple of things. We need, first, a lot of practice. Practice on real world decisions and not on abstract problems. Second, we need to address the underlying motivations that cause us to want to use these flawed forms of reason in the first place. We need to notice that our identity is bound up in things like being a Democrat or a Republican, or in not being a quitter, or in not getting rejected, and that the decisions we’re making are being shaped by these motivations.

There are exceptions to the rule of “de-biasing doesn’t work.” One really cool exception—Phil Tetlock and Barbara Mellers pioneered this project for the government called the Good Judgment Project. They wrote about this in an excellent book called Superforecasting.

They’ve trained this team of smart amateurs to be able to make really accurate predictions about geopolitical events: political and economic forecasting. These were questions that the forecasters didn’t have any particular expertise in. Things like, “Who will win the next election in Syria six months from now?” They gave forecasters this simple training program, and they ended up being far more accurate than the next best team of experts who had been trained in intelligence analysis.

Mandy: Hearing you talk about how identity is so bound up in being a Democrat or a Republican makes me wonder—we hear talk in the media about how this coming election feels like a “post-truth” election. How do you combat that with rationality?

Julia: I tend to focus on how we can change our own minds, and not on how we can change other people’s minds, which is a huge field of study in its own right: the study of persuasion.

I would agree that the factual content of the candidates’ positions, or of the claims made about the candidates, is not really what’s driving most of the decision-making in this election. I think this is true of most elections, but I suspect it’s especially true of this one. My friend did a survey where he asked people first who they were supporting for president, and second whether they could name anything bad about their own candidate, or anything good about the other candidate. A surprisingly large fraction, I think the majority of people, said they literally couldn’t think of a single bad thing about their own candidate, and couldn’t think of a single good thing about the other candidate. That’s a sign of how polarized and identity-based this election has gotten.

Mandy: It seems as if there’s a return to the strength of ideological argument not based on rationality. Does that seem stronger to you now?

Julia: One thing I’ve noticed is that people’s sources of information are more fragmented, and more niche than they were twenty years ago. I’m not a Trump supporter, I’ll just come out and say that. A lot of the things that Trump supporters believe seem ludicrous to me. Like believing that Hillary Clinton has a double, or believing that Hillary and Bill conspired to murder dozens of people, or believing that flaws in Trump’s past are just made-up conspiracies.

“There is irrationality in everyone, but a lot of the polarization and the difficulty that we have convincing each other comes down to basically rational interpretation of evidence, conditional on all of your prior beliefs about how the world works.”

But if you are someone who has heard a lot about conspiracy theories in the government, then to hear an additional claim that Hillary’s team is running some kind of conspiracy doesn’t sound so ludicrous against the backdrop of all these other conspiracy theories that you’ve heard credible-sounding people on Fox News talk about. Yes, there is irrationality in everyone, but a lot of the polarization and the difficulty that we have convincing each other comes down to basically rational interpretation of evidence, conditional on all of your prior beliefs about how the world works. Because we’ve been consuming such different diets of information from each other, things are going to sound either plausible or completely ridiculous to us based on how they fit into that prior network of beliefs. Does that make sense?
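To make that point concrete, here is a minimal sketch of Bayesian updating (in Python, with made-up numbers; it illustrates the general idea described above, not anything computed in the interview). Two readers apply the same update rule to the same piece of evidence but start from different priors, and they reach very different conclusions.

    # Hypothetical illustration: two readers see the same credible-sounding
    # report about a claim, and both update "rationally" using Bayes' rule.
    # Assumed numbers: such a report is 4x more likely to appear if the claim
    # is true (0.8) than if it is false (0.2).

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(claim is true | evidence) from the prior and likelihoods."""
        numerator = prior * p_evidence_if_true
        denominator = numerator + (1 - prior) * p_evidence_if_false
        return numerator / denominator

    for prior in (0.01, 0.50):  # a skeptical reader vs. an already-primed reader
        posterior = bayes_update(prior, 0.8, 0.2)
        print(f"prior={prior:.2f} -> posterior={posterior:.2f}")

    # prints: prior=0.01 -> posterior=0.04
    #         prior=0.50 -> posterior=0.80
    # Same evidence, same update rule, very different conclusions.

The entire difference in the output comes from the prior, which is the point being made here: consuming a different diet of information shifts the background beliefs against which each new claim is judged.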

Mandy: Yeah, and thinking through it via “motivated reasoning” engenders more empathy, because you can begin to understand why someone would believe what they do. Even if it seems strange to you.

Julia: That is a very valuable lens through this very frustrating election. It’s very easy to say, “Ugh, I can’t understand how anyone can think” fill in the blank. A more useful way to approach a frustrating disagreement is to say, “Huh, why would someone believe” fill in the blank. You can start by saying, “I don’t understand how someone could believe this,” but that should not be your ending point. The next step is, “Let me see if I can figure out why this person seems to hold this belief.” Maybe the answer is just motivated reasoning, but even then you can go a little further and try to understand what motivations they have to want to hold this belief. Or you can ask, “What background assumptions do they have about the world that makes this seem like a plausible thing that could be true?”

It doesn’t mean you have to end up agreeing with them. I’m not even claiming that this person might be right. I just think it’s more valuable to approach these disagreements with curiosity instead of just frustration. Although, believe me, I empathize with the frustration.

Mandy: Something that’s surprising to me is that rationality research is so based in understanding the emotions. The normal way to talk about rationality is in opposition to emotions, and it’s nice to come full circle and realize that these are all part of the same thing.

Julia: I also under-appreciated the role of emotions when I first started getting into this field. When I first started running workshops, the classes were straightforward: “I’m going to teach you about probabilistic reasoning and how to use it in decisions.” People liked the classes, but they weren’t as impactful as we were hoping they would be. We kept noticing that we would have long back-and-forths with people about some decision, and at some point the person would realize, “Oh, the real reason that I don’t think it’s a good idea to quit law school is not all these justifications I’d been giving myself. It’s really that I don’t want to have that conversation with my parents.”

That was the point at which we were able to actually make some progress in finding a solution to their problems. We started noticing that these unconscious drives and assumptions and emotions were much more important to shaping people’s decision-making than their explicit knowledge about “how to make a good decision.” Independently, academia has been coming to the same conclusion.

When you asked me about research on overcoming biases, an even better example might be the research on self-affirmation. Researchers have found again and again that people are very resistant to evidence that challenges some core worldview of theirs. If you’re a Republican who doesn’t believe that global warming is a thing, I can show you data on rising temperatures and you will find some reason to reject it. Or if you’re a Democrat who is opposed to the war in Iraq, I can show you data that attacks went down after George W. Bush increased troop levels in Iraq, and you won’t believe it.

In some cases you’ll even hold to your original belief even more strongly than you did before you saw the new evidence. It’s called the backfire effect. Researchers have been exploring ways to make people more receptive to evidence even if it challenges their worldview. One approach is realizing, “Okay, if the reason people are resistant to challenging evidence is that it feels like an attack on their identity, or sense of self-worth, then what if we help people bolster their sense of self-worth so that they don’t feel as vulnerable or threatened?” They had people go through a self-affirmation exercise, where they recall a time that they did something that they were really proud of, or think about traits in themselves that they are happy with. The goal of this was to make people feel good about themselves. Then, they showed people the data about attacks in Iraq or about rising global temperatures.

Those people who had gone through the self-affirmation were much more receptive to the new evidence, and much more likely to acknowledge that it contradicted their belief—or even change their minds—than the people who had not gone through the self-affirmation. It’s a nice demonstration about how important these unconscious emotions and motivations are to our seemingly objective conscious reasoning.

This conversation has been edited and condensed. 
