Brian Christian is the author of The Most Human Human, a Wall Street Journal bestseller, New York Times editors’ choice, and a New Yorker favorite book of the year. Tom Griffiths is a professor of psychology and cognitive science at UC Berkeley, where he directs the Computational Cognitive Science Lab. The two recently sat down with Heleo’s Assistant Editor, Jeremy Price, to discuss their latest book, Algorithms to Live By, and how we can use the power of computer science to make better decisions, live smarter lives, and be kinder people.
This conversation has been edited and condensed.
Jeremy: First of all, what exactly is an algorithm?
Brian: The underlying concept is really very simple. It’s any series of steps that you follow to perform an action or make a decision. This includes everything we think of computers doing, but it also includes a lot of familiar human activities. Baking bread from a recipe is following an algorithm, and a sequence of steps for chiseling a stone tool also constitutes an algorithm.
Tom: Throughout most of history, an algorithm was a procedure that a human being followed; it’s only in the last 50 or 60 years that we’ve come to think of algorithms as procedures that computers execute. We wanted to talk about recognizing those human procedures as algorithms, and then applying the same kind of analytic tools used [for] evaluating computer algorithms to the things that people do.
Jeremy: Let’s dive in. You guys wrote about the trade-off between exploring and exploiting, and ever since I read that, I’ve been seeing this concept everywhere in my life. Can you give an introduction to this whole explore/exploit thing?
Tom: Sure. It shows up in basically any situation where you have to make a choice among a set of options, and then in the future, you’re going to make a choice again among the same or a very similar set of options. In that situation, you have this tension between going with something that, based on experience, you already know is pretty good, or going with something that’s new, learning about something else that you could benefit from in the future. That’s the trade-off between exploiting the knowledge that you already have, and exploring and acquiring more knowledge that you can exploit in the future.
One of the most prominent examples is, if you’re a technology company, figuring out what ads to put on a webpage. “Do I show an ad that I’ve already shown to a bunch of people and I know is likely to get clicked on, or do I try this new one out, and then see if someone’s going to click on that even more?”
In terms of human lives, this shows up in doing something like choosing what restaurant to go to. Do you go to your favorite place, or do you try something new? Do you hang out with your good friends, or do you try to make some new friends? Do you listen to your favorite band, or try out something new? There’s a whole host of human decisions where this matters.
“Acting a little randomly is another good way of exploring.”
Jeremy: So what should we be doing more of? Trying new things, or going with what we already know and like? Should we be going for the latest, or the greatest?
Brian: Looking at the way that some of these algorithms work, it all comes down to how much time you have and where you are in that interval of time. For example, the odds of making a great new discovery are going to be the greatest when you know the least about what’s out there. If you’re doing a semester abroad in Spain, and you go out to dinner on the first night there, it’s got a 100% chance of being the best restaurant you know about in Spain.
The second night, you’ve got a 50/50 chance of this new place being the greatest restaurant that you know about in Spain. As you can see, the odds of an exploration paying off in terms of dethroning your previous favorite option go down as a function of the experience that you accumulate.
Conjoined with that is the fact that the value of making a new discovery is greatest when you have the most time to enjoy it. Finding an amazing café on week one is more valuable than finding it on your final night in town.
Conversely, the pleasure you get from going to your favorite place only goes up as you learn more about the environment. By the time you leave Spain, your favorite restaurant is probably better than the favorite you had in week two. For all of these reasons, we should be on a trajectory from exploration to exploitation as we move through an interval of time.
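To see the arithmetic concretely, here is a quick Python sketch (not from the book; restaurant qualities are just hypothetical random scores). The chance that the k-th new place you try turns out to be the best so far comes out to roughly 1/k, exactly the falling odds Brian describes:

```python
import random

def prob_new_place_is_best(k, trials=100_000):
    """Estimate the chance that the k-th restaurant you try
    is the best of the first k, given random quality scores."""
    wins = 0
    for _ in range(trials):
        qualities = [random.random() for _ in range(k)]
        if qualities[-1] == max(qualities):  # tonight's place dethrones the rest
            wins += 1
    return wins / trials

for night in (1, 2, 5, 10):
    print(f"night {night}: {prob_new_place_is_best(night):.2f}")
# night 1: ~1.00, night 2: ~0.50, night 5: ~0.20, night 10: ~0.10
```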
As we say in the book, this actually gives us a framework for understanding the entire arc of a human life. We think of babies as stereotypically random, having a short attention span. No matter how great a toy you give them, they’re ruthlessly interested in the next thing. This makes sense from the explore/exploit framework; that really is what you should be doing. If you’re at the very beginning of an 80-year process, you should just cavort around at random, putting things into your mouth.
[On the other hand,] during the older years of life, we have these stereotypes that people are very fixed, they’re set in their ways. [And] you can think of that as optimal behavior, the appropriate way to interact with the world as a function both of the finite time remaining and the value of all the experiences that you’ve had, the exploration that you’ve done to date.
Jeremy: So when we’re agonizing over our next decision to try something new or stick with something that we like, we should take into account how much time we have left, how much time we will have to potentially enjoy a new discovery.
Tom: The other thing is that maybe you shouldn’t agonize quite so much, particularly if you’re early in the process. Acting a little randomly is another good way of exploring. Not making the optimal decision at that moment is actually a good way of getting extra information that might be useful in the future.
Jeremy: Another thing I wanted to ask about is overfitting, which is also everywhere [in real life]. Can you tell me about what that is?
Tom: Yeah. A lot of the time we have to make a decision or prediction based on limited data. We’re trying to make the best guess about what’s going to happen in the future. The problem is that there’s a gap between what we can measure and what matters, [like] how things are going to be in the future. Overfitting is about focusing too much on the thing that we can measure, and as a consequence, losing sight of the thing that matters.
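Tom’s definition has a direct statistical analogue, and a tiny sketch makes the gap visible. In the Python below (the linear trend and noise levels are invented for illustration), a wiggly ninth-degree model fits the ten points we can measure almost perfectly, yet a plain line does better on fresh data, the thing that actually matters:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# What matters: a simple underlying trend. We can only measure
# ten noisy training points of it.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)

# Fresh data from the same trend -- the "future" we care about.
x_new = np.linspace(0, 1, 100)
y_new = 2 * x_new + rng.normal(0, 0.2, size=100)

for degree in (1, 9):
    model = Polynomial.fit(x_train, y_train, deg=degree)
    measured = np.mean((model(x_train) - y_train) ** 2)  # error we can see
    future = np.mean((model(x_new) - y_new) ** 2)        # error that matters
    print(f"degree {degree}: measured {measured:.3f}, future {future:.3f}")
```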
Brian: ‘Teaching to the test’ is essentially a case of overfitting, where you have an instrument that’s designed to measure how well you’ve learned something, but mastering the test and actually learning the content can come apart.
We give some striking examples in the book of how this emerges in law enforcement, because there’s a gap between the training and whatever it is you’re preparing for. There was a case where police officers found themselves, at the end of a firefight, with shells in their pockets, which seemed completely bizarre. You’re in the middle of a gunfight, so why are you picking up the shells from the ground and putting them in your pockets?
They realized this is good firing range etiquette. Part of the point of any training exercise is to drill certain things into your muscle memory so that you’re not thinking about them when you’re really in that situation. [But] be careful what you put into your muscle memory, because it captures the entire training exercise, including things that are just artifacts of the training process itself.
The most striking example was a case where a police officer is intervening in a robbery of a convenience store and manages to grab the weapon from the robber, then immediately hands the gun back to the guy.
“This is the thinking person’s argument for not thinking too much.”
Jeremy: What?
Brian: Because they had drilled this in training so many times. After you do the maneuver, you give it back.
Jeremy: That’s insane.
Brian: These are very vivid illustrations of this deeper point: any time there is a difference between how you’re preparing for something and what the thing is that you’re preparing for, you can do yourself a disservice by preparing too much. You can start to [prepare for] the preparation itself.
At a decision-making level, this is the thinking person’s argument for not thinking too much. The more uncertainty you have about something, the better you are served by a very simple, straightforward model with fewer factors and less deliberation.
Jeremy: As a chronic overthinker, I definitely appreciate that.
As you guys were going through the process of researching and writing this book, what was the most striking, most practical insight that you [learned]?
Tom: For me it was scheduling, trying to figure out how to manage your time. [There’s] literature about how to optimally use the time of a big computer or a factory machine to minimize downtime and maximize throughput. One important thing that goes into that is deciding what your goal is. Do you want to minimize the amount that you’re late in terms of hitting your deadlines, or do you want to most quickly get things off your to-do list? Once you make that decision, there are optimal algorithms for solving that problem. This is not just a recommendation that you might get from a time management book. It’s provably optimal.
If you want to minimize the maximum amount you go past a deadline, then you just work through jobs in order of their deadline, whereas if you want to get things off your to-do list as quickly as possible, then you work through jobs in order of their length. You start with the shortest job and work your way up. Neither of those perfectly characterizes the normal situation that I’m in, but knowing that those are optimal strategies for those cases makes it straightforward to think about, “Well, how can I be productive?”
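The two rules Tom mentions are known in the scheduling literature as Earliest Due Date (minimize the maximum lateness) and Shortest Processing Time (clear the list as quickly as possible). Both reduce to a single sort, as this sketch with made-up tasks shows:

```python
# Each task: (name, hours_to_finish, deadline_in_hours). Numbers are invented.
tasks = [("report", 4, 10), ("email", 1, 3), ("slides", 2, 12)]

# Earliest Due Date: minimize the max lateness -- sort by deadline.
by_deadline = sorted(tasks, key=lambda t: t[2])

# Shortest Processing Time: get things off the list fastest -- sort by length.
by_length = sorted(tasks, key=lambda t: t[1])

print([name for name, *_ in by_deadline])  # ['email', 'report', 'slides']
print([name for name, *_ in by_length])    # ['email', 'slides', 'report']
```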
Jeremy: Same question to you, Brian. Was there one insight from the book that you’ve taken and really applied to your own life?
Brian: Yeah. We as humans can beat ourselves up for not being more optimal or more deliberate about something. But in fact, there’s literature showing that sometimes the intuitive thing is already optimal. My favorite example of this is stacks of paper. A lot of us, myself included, have giant piles of paper. We say to ourselves, “I ought to get organized,” right? “I ought to control this mess in some fashion.”
In our chapter on caching, we look at optimal cache eviction policies and optimal storage management schemes, and one of the things that emerges is that you really can do no better than to just put the last thing you handled down on top of a stack.
There’s this beautiful result, which is that [the stack] is as organized as it possibly can be. You are being optimal just by doing what feels lazy and natural. Having that framework to validate some of those intuitions is very powerful.
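In computer science terms, the pile is a self-organizing list following the “move to front” rule: every document you touch goes back on top, so the stack stays sorted by how recently each thing was used, which is just what the Least Recently Used caching principle recommends. A minimal sketch (document names are hypothetical):

```python
def find(pile, name):
    """Search the pile from the top down, then put the found
    document back on top; the pile self-organizes by recency."""
    depth = pile.index(name)          # how far you had to dig (assumes it's there)
    pile.insert(0, pile.pop(depth))   # last thing handled goes on top
    return depth

pile = ["taxes", "lease", "receipts"]
find(pile, "lease")
print(pile)  # ['lease', 'taxes', 'receipts'] -- recently used floats to the top
```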
Jeremy: One last question: toward the end of the book, you talk about this idea of ‘computational kindness.’ You point out that computation is costly, even for computers. You want to make solving problems as easy for the computer as possible, and the same is true of humans. [Here you talk about conversations like,] “What do we want to do tonight?” “It’s up to you.”
“Seemingly innocuous language like ‘Oh, I’m flexible’ or, ‘What do you want to do tonight?’ has a dark computational underbelly that should make you think twice. It has the veneer of kindness about it, but it does two deeply alarming things. First, it passes the cognitive buck: ‘Here’s a problem. You handle it.’ Second, by not stating your preferences, it invites the others to simulate or imagine them. And as we have seen, the simulation of the minds of others is one of the biggest computational challenges a mind or machine can ever face.”
I loved that because I’ve always been the guy to say, “Whatever you want to do, let’s do that.” It’s stressful for me to think about making a decision that someone else is unhappy with. [But] I read that, and thought, “I’ve just been making life worse for all these people around me for years!” So tell us a bit more about how we can be computationally kind to each other.
“Accepting a certain level of intrinsic complexity to the world gives us a bit of peace of mind.”
Tom: The basic idea is to give people easier computational problems. A simple example is [when] you’re working with other people, you can say, “Here’s a job to do, here’s the deadline for it,” and give people the information they need in order to execute their scheduling algorithm. If you have an employee, tell them what the objective function is. Say, “You should try to get through things as quickly as possible,” or, “You should try to minimize the max lateness of any one of these things.” If you tell them that, then there’s a simple optimal algorithm they can follow. If they don’t know what they’re supposed to be doing, then they have to come up with some complicated way of trying to satisfy the constraints that they think you might be following.
[You can also] think about it as a more general design principle for life, [when] interacting with friends and partners. There’s a tension between being polite and giving all the information that [they] require to make some decision.
Brian: I think you can also be computationally kinder to yourself. A lot of the problems we face in life are just hard. In the chapter on optimal stopping, we get into this famous 37% rule, which turns out to only succeed 37% of the time. Here’s a case where the best that you can do, following the optimal strategy, will fail almost two-thirds of the time. It’s just an objectively hard problem.
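For the skeptical, the 37% figure is easy to check: in the optimal-stopping setup, you look at the first 37% of candidates without committing, then take the first one better than everything you have seen. A quick simulation (a sketch; candidates are just random ranks) wins about 37% of the time, as Brian says:

```python
import random

def thirty_seven_percent_rule(n=100, trials=100_000):
    """Simulate the look-then-leap rule on n candidates in random order."""
    cutoff = int(n * 0.37)  # look phase: observe, but commit to nobody
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)  # rank 0 is the best candidate
        best_seen = min(ranks[:cutoff]) if cutoff else n  # guard for tiny n
        # Leap phase: take the first candidate who beats everyone so far,
        # or get stuck with the last one if nobody does.
        chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        wins += chosen == 0
    return wins / trials

print(f"{thirty_seven_percent_rule():.2f}")  # ~0.37
```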
Even when you follow the best procedure, you’re not always going to get the perfect outcome. You’re not always going to get what you want. I think we have a very human tendency to beat ourselves up if things don’t go our way, or cast our minds back over where we went wrong, what we should have done differently. [But] accepting a certain level of intrinsic complexity to the world gives us a bit of peace of mind. You can rest easy knowing that you did the best that you could.