Futureproof: 9 Rules for Humans in the Age of Automation

Kevin Roose is an award-winning technology columnist for the New York Times, and the bestselling author of three books: Futureproof, Young Money, and The Unlikely Disciple. He is the host of Rabbit Hole, a New York Times-produced podcast about what the internet is doing to us, and a regular guest on The Daily, as well as other leading TV and radio shows.

Below, Kevin shares 5 key insights from his new book, Futureproof: 9 Rules for Humans in the Age of Automation (available now from Amazon). Download the Next Big Idea App to listen to the audio version—read by Kevin himself—and enjoy Ideas of the Day, ad-free podcast episodes, and more.

1. Do things machines can’t do.

For many years, we have been preparing people for the future in exactly the wrong way. When I was growing up, it was common wisdom that in order to compete in a world filled with computers and new technology, we had to become more like machines: study engineering and computer science, optimize our time, and become as efficient and productive as possible.

But what I heard from researchers and experts in AI was that, essentially, the opposite is true. In order to succeed in a world filled with intelligent machines, we need to bring things to the table that machines can’t do, to differentiate ourselves from AI and machine learning. I found three buckets of things that can be done by humans, but can’t be done by machines nearly as well, and those buckets are labeled surprising, social, and scarce.

Surprising tasks involve lots of chaos, complicated scenarios, and coping with uncertainty. For example, an AI can beat a human at chess, which is a very structured game with the same rules every time. But if you asked an AI to teach a kindergarten class, it would fail miserably. We are much better than machines at what’s called zero-shot learning, a term from computer science that basically means taking a completely new situation and making sense of it.
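For a sense of what the machine-learning version of that term looks like in practice, here is a minimal sketch using the Hugging Face transformers library (the example sentence and labels are mine, not from the book): a zero-shot classifier scores labels it was never explicitly trained on, a narrow, laboratory-sized slice of the open-ended sense-making humans do effortlessly in genuinely new situations.

```python
# A minimal sketch of zero-shot classification with the Hugging Face
# "transformers" library (assumes it is installed; labels and text are
# illustrative only). The model scores candidate labels it was never
# explicitly trained on.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier(
    "Half the class is crying and the other half is gluing glitter to the dog.",
    candidate_labels=["kindergarten chaos", "chess strategy", "tax audit"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```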

The second category of jobs protected from AI is social jobs—work that involves making people feel things rather than making things. These are jobs like nurse, therapist, clergy, teacher, or even bartender and flight attendant. These people are creating experiences rather than objects. Right now, AI just isn’t very good at tapping into our social desires.

The third category of safe-from-AI work is scarce work, which involves rare skills or scenarios, combinations of talents, or extraordinary ability. These are jobs like 911 operator: when we have an emergency and call 911, we want to connect to a human, not an automated phone tree, because some things are too important to entrust to machines. We trust humans because humans are good at making sense of emergencies and other high-stakes scenarios. For this reason, scarce jobs are going to be done by humans well into the future.

2. Beware of boring bots.

When I started researching this book, I assumed that the biggest risk to humans would come from super-sophisticated, cutting-edge robots, like the stuff we see in sci-fi movies where machines overtake us in intelligence and turn us all into robots’ slaves. But the much bigger danger, in the near term, comes from much simpler forms of automation.

“In order to succeed in a world filled with intelligent machines, we need to bring things to the table that machines can’t do.”

I call these boring bots. There is a whole industry out there called robotic process automation. It’s roughly a $20 billion industry for automating common back-office business tasks, like accounting, tax auditing, bill processing, invoicing, and sales projection. These tasks have been done by humans for many years, but now, through AI and machine learning, they are being automated at a furious rate.

Especially since the pandemic, corporations have stepped up their use of robotic process automation, and they’re automating jobs we never thought could be automated—jobs in middle management, in highly skilled fields where people with college degrees earn six-figure incomes. Research by the Brookings Institution shows that white-collar workers are more at risk from AI and automation than workers in manufacturing and other blue-collar fields.

One thing that worries me about this wave of corporate automation is that a lot of it is not very good. Economists Daron Acemoglu and Pascual Restrepo coined the term so-so automation, which basically means any automation that is just barely good enough to replace humans in the workplace, but isn’t good enough to generate substantial productivity gains or make the economy more dynamic. In the past when we’ve had big technological transformations, like the Industrial Revolution, some people lost their jobs because of new technology, but the technology was also really, really good, and created new industries which could then employ the people who had been displaced out of the old industries.

Now, a lot of the automation we’re seeing is so-so automation, which doesn’t actually make the economy more productive—it just puts humans out of jobs. One example of this would be an automated customer service line. I don’t know about you, but when I get a robot on the other end of a customer service call, I’m pressing zero. I want to talk to a human, because the likelihood is high that humans are going to be better at answering my question than a machine. This boring, so-so automation, which just barely reaches the human threshold, is the kind we’re seeing more of today. That’s what worries economists: until now, advancements in automation haven’t produced mass unemployment, because the new technologies also created new industries that could absorb displaced workers. So-so automation, which displaces people without creating much that is new, threatens to break that pattern.

3. Leave handprints.

As I was trying to figure out the kinds of work that would remain safely in the hands of humans for the foreseeable future, I came across this principle from psychology known as the effort heuristic. The effort heuristic basically means that we value things more highly when we think other people worked really hard on them.

For example, there have been studies where two groups of people are given identical bags of candy. One group is told that the candy was randomly selected for them, and the other is told that a person who understood what kinds of candy they liked picked it out specifically for them. Almost universally, the people who thought other humans had worked to pick out specific candies for them said those candies tasted better.

“When most physical objects can be made by machines, the thing that we will value is people’s time, expertise, and effort.”

We like things that require effort from other people. That’s why, for example, you can get a very cheap flat-screen TV on Amazon: flat-screen TVs are made by robots. But if you want a nice piece of art, it’s going to cost you more than the flat-screen TV. As AI and automation change how we perceive products’ value, we’re going to see the emergence of a split economy.

One economy consists of things that are made primarily by machines, and then there’s the other, human economy, which is more artisanal. The research bears this out: those handmade things are going to become much more valuable in the years ahead. When most physical objects can be made by machines, the thing that we will value is people’s time, expertise, and effort. We should be working to make our output more obviously human. For me, as a journalist, leaving handprints means putting more of myself in my work by stating my opinions and trying to convey my personality through what I’m writing, not just assembling facts—which machines are getting much better at.

There are lots of other examples of people who are succeeding because they’re leaving handprints. My accountant, Russ Garofalo, works in an industry that has been decimated by AI and automation. Millions of people have started using programs like TurboTax, so the accountants who are left are the ones who bring something more to the table. Russ is a former standup comedian, and as a result, he’s really fun to talk to. I genuinely enjoy doing my taxes with him, and he’s managed to turn what could be a boring chore into an entertaining human experience. He’s been able to survive even as his industry becomes highly automated. That kind of automation is happening to workers in every industry, so to survive, we need to figure out how to express our humanity in our work rather than trying to hide it or apologize for it. Humanity is what’s going to give us our value.

4. Treat AI like a chimp army.

This is a metaphor that I use to describe the danger of over-automating, of placing too much faith in the abilities of AI and machine learning, and later coming to regret it.

Take Mike Fowler, for instance. He’s an Australian entrepreneur who came up with an algorithm for generating T-shirt designs. It could take popular catchphrases, like, “Keep calm and carry on,” or, “Kiss me, I’m Irish,” and it could plug words from people’s social media profiles into those templates to generate millions of T-shirts that would automatically be listed for sale online.

It was a brilliant idea and he made a lot of money doing it—until one day the algorithm went haywire. It turns out that he had forgotten to take some words out of the database that the algorithm used to fill in these catchphrases. T-shirts were appearing for sale that said really offensive things, like, “Keep calm and hit her,” or, “Keep calm and rape a lot.”
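To see how small the underlying system can be, here is a minimal, hypothetical sketch in Python (not Fowler’s actual code) of a template-filling slogan generator. The generation logic is trivial; the costly part was shipping it without filtering the word list it drew from.

```python
# Hypothetical sketch of a template-filling slogan generator like the one
# described above (not the real system). Words are plugged into fixed
# templates; the failure was skipping any filtering of the word list.
TEMPLATE = "Keep calm and {verb} {obj}"

# Word lists pulled from some external source -- unvetted, as in the incident.
VERBS = ["carry", "dance", "hit"]
OBJECTS = ["on", "a lot"]

BLOCKLIST = {"hit"}  # the kind of safeguard that was missing


def generate_slogans():
    for verb in VERBS:
        for obj in OBJECTS:
            # The fix: reject any combination containing a blocklisted word.
            if verb in BLOCKLIST or obj in BLOCKLIST:
                continue
            yield TEMPLATE.format(verb=verb, obj=obj)


for slogan in generate_slogans():
    print(slogan)
```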

“We need to recognize when technology is making us less human, and we need to resist that. In the end, if we’re indistinguishable from robots, we have nothing left to offer.”

When these designs became public, Mike Fowler’s career was ruined. This kind of thing is happening all over the economy to companies that go ahead with their AI and automation plans, assuming that the risk is very low, and finding out that actually what they’ve done is akin to letting an army of chimpanzees into their office, giving them computer terminals and saying, “Okay, go to work.” The metaphorical chimps can really mess up the operations of the company. We need to be realistic about what AI can and can’t do, and be really careful before we start turning important tasks over to machines that might not be ready to handle them.

5. Don’t be an endpoint.

There are basically two types of jobs that have emerged in the last 10 or 15 years. The first is jobs that are assisted by AI and automation. These are jobs like radiologist: radiologists use AI to help them diagnose tumors because the algorithms are actually more accurate than human judgment. There’s nothing wrong with that kind of AI-assisted work—in fact, it can be a really good thing.

But there’s another category of jobs that I am worried about, and I call these endpoint jobs. In software development, endpoints are the points of connection between two software programs. Today, we have a lot of workers who take instructions from one machine and then do something with a different machine. They are human endpoints. The most obvious examples of these are jobs in places like fulfillment centers or warehouses, where workers are told by an algorithm what to put in which boxes, and they’re monitored by algorithms all throughout the process.

These endpoint jobs aren’t just in warehouses; there are doctors, lawyers, security guards, and retail workers who are serving as a kind of human endpoint. Endpoint jobs don’t use AI and automation to do humans’ dirty work and free people up to focus on more creative and fulfilling tasks. Instead, the technology makes the work more structured, more robotic. It essentially treats humans as an extension of a machine.

These are dangerous jobs, not just because they’re very likely to be automated, but because they are downgrading human potential. They are turning us from creative, thoughtful, idea-generating people into automatons. As we look to the future and try to make ourselves more human to succeed in a world filled with AI and automation, we need to recognize when technology is making us less human, and we need to resist that. In the end, if we’re indistinguishable from robots, we have nothing left to offer.

To listen to the audio version read by Kevin Roose, and browse through hundreds of other Book Bites from leading writers and thinkers, download the Next Big Idea App today:
