
Discuss AI Like a Pro With This Conversation Guide to Emerging Tech


Arvind Narayanan is a professor of computer science at Princeton University and director of its Center for Information Technology Policy. Sayash Kapoor is a PhD candidate in computer science at Princeton who previously worked as a software engineer at Facebook.

What’s the big idea?

Conversations about artificial intelligence are often confusing and misguided because “AI” is an umbrella term for very different technologies. To better understand AI’s capabilities, limitations, and possibilities, we need to learn to differentiate between types of AI and discuss them accordingly. This matters because each of us shares responsibility for guiding the future of AI in our society.

Below, co-authors Arvind and Sayash share five key insights from their new book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Listen to the audio version—read by Arvind—in the Next Big Idea App.


1. Artificial intelligence is not a single technology.

AI is an umbrella term. It refers to a collection of loosely related technologies. Imagine an alternate universe where people don’t have words for different forms of transportation. They use the collective noun “vehicle” to refer to cars, buses, bikes, spacecraft, and any other way of getting from point A to point B. Imagine how confusing this world would be. There are furious debates about whether vehicles are environmentally friendly, but no one realizes that one side of the debate is talking about bikes, and the other side is talking about trucks. If you replace the word “vehicle” with “artificial intelligence,” you’d have a pretty good description of our world.

In the book, we break down the different applications of AI. ChatGPT, a form of generative AI, has almost nothing in common with the AI a bank might use to assess the credit risk of a loan applicant. That kind of AI tries to predict the future: Will this applicant pay back the loan or not? A third kind of AI is used by social media companies, for instance, to moderate content on their platforms. In some areas, AI has made remarkable and widely publicized progress, which has allowed companies to exploit public confusion and slap the AI label on whatever they’re selling. To avoid confusion about what AI can and can’t do, we must be clear about which kind of AI we’re talking about.

2. AI can’t predict the future.

Across the United States and in many other countries, risk prediction algorithms help determine what happens to defendants when they’re arrested—who will be released and who will be jailed until their trial. When these AI algorithms are used, that decision is based on a prediction of who will commit a crime if released, rather than a determination of guilt. But we can’t know who will commit a crime, and the research shows that these algorithms are biased and don’t work well: they perform only slightly better than a coin flip.

“These algorithms are biased and don’t work well.”

This kind of predictive logic has proliferated in many consequential areas of our lives, like hiring, education, and healthcare. We’re deeply skeptical of making decisions about people by predicting their future. It’s in this area that we think AI snake oil is heavily concentrated. By AI snake oil, we mean AI that doesn’t work and probably never will. We discuss many horror stories in the book. For example, a health insurance algorithm predicted that a patient would need 16.6 days in a nursing home to recover after her surgery. What an oddly precise number! When that time elapsed, the patient’s coverage stopped even though she still couldn’t walk.

3. AI is better as a tool.

AI has an 80-year history. Companies and governments have long used various forms of AI, but generative AI is the first form of AI to become a consumer technology, and in that role it is still new and immature. In the long run, we are broadly positive about generative AI, but the rollout of this technology has been haphazard. Companies have unleashed powerful but unreliable large language models and let people figure out what to do with them. The problem with this approach is that sketchy actors have found it easier to exploit generative AI than everyday people have found it to benefit from it. For example, there are whole AI-generated books on Amazon. Someone trying to make a quick buck doesn’t care whether the AI output is any good.

Fortunately, things are changing. There is more effort to integrate generative AI into everyday tools. Generative AI can be useful to every knowledge worker, and people should be curious and experiment with new AI tools or features in existing software. But that doesn’t mean you should look at news about AI capabilities and panic. For example, ChatGPT can reportedly pass the bar exam and the medical licensing exam. Of course, that doesn’t mean it has replaced lawyers or doctors. It’s not like a lawyer’s job is to answer bar exam questions. In fact, the impact of generative AI on professions like law and medicine has been relatively small. We’re only slowly seeing the integration of generative AI into existing legal technology and electronic health records.

Generative AI is often compared to other general-purpose technologies like electricity. But electricity is useful because it powers specific appliances that do concrete things. In the same way, you don’t need to feel pressured to stare at a chat box and figure out what to do with it. It’s okay to wait for applications that cater to your everyday work and life.

“Technologies such as spellcheck and robotic vacuum cleaners, when at the cutting edge of what was possible, were called AI.”

There is an amusing definition of AI that says, “AI is whatever hasn’t been done yet.” In other words, we generally use the term “AI” for fairly new technology that doesn’t yet work well and whose societal implications are still double-edged. As it starts working better, it fades into the background: we take it for granted and stop calling it AI. Technologies such as spellcheck and robotic vacuum cleaners, when at the cutting edge of what was possible, were called AI.

Many generative AI applications will follow this path gradually. To imagine what this might look like a couple of decades from now, a useful analogy is the internet. A few decades ago, when the internet was new, we would log on to do specific things and then log off. Today, the internet is a pervasive background medium for a vast amount of knowledge work. We predict a similar evolution for generative AI as it becomes a more useful tool and is better integrated into existing workflows across professions.

4. You should evaluate AI tools for yourself.

How well AI works for you can vary greatly from person to person and organization to organization. An AI vendor might claim that their tool is 97 percent accurate, but a number like that means nothing without context, because accuracy is highly sensitive to the specific data the tool was evaluated on. The good news is that with generative AI tools, it’s straightforward to try them yourself and judge how well they handle your specific use cases. With predictive AI tools, organizations, rather than individuals, should be responsible for evaluating how well they work.

A good example from the medical domain is sepsis prediction. Sepsis is a life-threatening condition. When a hospitalized patient is at risk of developing sepsis, it helps to know ahead of time so that doctors can take preventive measures. Epic, a healthcare technology company, built a sepsis prediction tool that it claimed was very accurate—roughly 83 percent. The tool was deployed to hundreds of hospitals. Each of those hospitals should have evaluated the tool on its own patient population to see whether it delivered the claimed accuracy. But most hospitals are under a lot of pressure and may lack the expertise to do so, so they skipped that important step.

“Part of the responsibility for this failure lies with the hospitals for taking that developer’s claims at face value.”

Years later, one hospital system ran its own evaluation and found that the accuracy it was getting was much lower than what the developer claimed. Part of the responsibility lies with the developer, but part of the responsibility for this failure lies with the hospitals for taking the developer’s claims at face value.
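The book doesn’t include code, but for readers who want to picture what “evaluating the tool on your own patient population” involves, here is a minimal sketch in Python. Everything in it is illustrative and ours, not Epic’s: the file name, the column names, and the assumption that the claimed figure is a ranking metric such as AUC (where 0.5 is a coin flip and 1.0 is perfect).

```python
# Minimal sketch: check a vendor's claimed accuracy on your own data.
# All names here are hypothetical; adapt them to your records.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Your own patient outcomes, not the vendor's benchmark dataset.
# Columns assumed: "risk_score" (the tool's output) and
# "developed_sepsis" (1 if the patient actually developed sepsis).
data = pd.read_csv("local_patient_outcomes.csv")

# AUC measures how well the scores rank true cases above non-cases:
# 0.5 is no better than a coin flip, 1.0 is perfect.
local_auc = roc_auc_score(data["developed_sepsis"], data["risk_score"])

print("Vendor-claimed accuracy (AUC): ~0.83")
print(f"AUC on our own patients: {local_auc:.2f}")
```

The point is not the particular metric but the habit: run the tool on data that looks like your patients, compare the result to the sales pitch, and only then decide how much to trust it.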

5. We have agency over the future of AI.

The future of AI is up to all of us. We have agency at the level of individuals, organizations, and societies. We can choose which applications of AI we consider ethically acceptable or unacceptable, how we integrate AI into companies and organizations, and how we regulate AI.

But to exercise this agency, we have to think ahead instead of perpetually reacting to AI developments. Consider children: We think that AI will play a huge role in the lives of children born today, and that role can be enormously positive and fulfilling, negative and addictive, or anything in between. In a few years, we’ll be having the same kinds of conversations about kids and AI that we’re having today about kids and social media. We are very critical of how society has responded to kids being on social media. If we want to avoid those mistakes with AI, now is the time to act.

I have two young children myself, and when I spend time with them, we frequently use AI together as a tool for learning and entertainment. It’s also proved to be a good opportunity to help them understand the risks and limitations of AI early on in their lives. That’s one way in which I try to be proactive in exercising my own agency over the future of AI.

To listen to the audio version read by co-author Arvind Narayanan, download the Next Big Idea App.


