No, AI Isn’t About to Take Over the World. Here’s Why

Cade Metz has spent years chronicling the rise and rise of artificial intelligence, first as a reporter at the New York Times and now in his new book, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.

In this forward-looking conversation, Cade joins the Next Big Idea podcast to tell host Rufus Griscom what AI can do, where it’s headed, and whether we should be worried that supercomputers will wage war against humanity.

A single breakthrough led to the rise of AI. It’s called the “neural network.”

Loosely speaking, a neural network is designed in the image of the human brain. Our brains are networks of neurons, and each neuron is essentially making a tiny calculation and passing that on to the next neuron.

And that’s the way a neural network—this mathematical system—works. You have these faux neurons, each doing a tiny, meaningless calculation on its own. But in combination with all the other calculations being done by all the other faux neurons, the network is able to recognize patterns.

If you look at a neural network when it’s learning to recognize a cat, for instance, particular faux neurons are learning particular parts of a cat. There are clusters of neurons that actually learn what the nose looks like, or what the curve of the ear is. And then this all comes together to recognize the cat as a whole. Ultimately it’s just mathematics, but it’s mathematics on this enormous scale that can learn complex images, sounds, or languages.
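
To make that concrete, here is a minimal sketch of such a network in Python with NumPy. The layer sizes, the random weights, and the “cat score” output are purely illustrative assumptions; a real image recognizer has far more of these faux neurons and learns its weights from labeled photos rather than starting from random numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative input: a flattened 8x8 grayscale image, 64 pixel values between 0 and 1.
inputs = rng.random(64)
hidden_weights = rng.standard_normal((16, 64))   # 16 "faux neurons"
output_weights = rng.standard_normal((1, 16))    # one final neuron: the "cat score"

# Each faux neuron does a tiny calculation: a weighted sum of its inputs, then a squashing step.
hidden = np.maximum(0, hidden_weights @ inputs)  # ReLU activation

# The final neuron combines all those small results into one overall pattern judgment.
cat_score = 1 / (1 + np.exp(-(output_weights @ hidden)))  # sigmoid, between 0 and 1

print(f"Illustrative 'is this a cat?' score: {cat_score[0]:.3f}")
```

Each line of arithmetic is trivial on its own; the pattern recognition only emerges when millions of these tiny calculations are combined and the weights are tuned against real examples.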

Neural networks can read, write, and tweet.

The neural network has completely changed the way systems can recognize the spoken word. And what it’s really doing now is increasing the ability of machines to understand natural language, the way that you and I piece words together.

“If you can identify a target in a video, you can put a gun on your system, and then you’ve got an autonomous weapon. And that’s what concerns a lot of people.”

Essentially, you take giant amounts of text from the internet—Wikipedia articles, digital books—and you feed it into a giant neural network. This neural network will spend months analyzing all that text and trying to identify the patterns. What those systems are doing, fundamentally, is just trying to predict the next word in any sequence of words. As they keep analyzing, they get better and better at that one task: predicting the next word in a sequence.

But what’s so fascinating is that once it learns that one task, that same system can apply what it has learned to all sorts of other tasks. You just have to give it a little bit more data. You show it a few tweets, for instance, and that same system can then learn to generate its own tweets. You show it a few blog posts, and it learns to write its own blog posts. You show it examples of conversation, and it can learn to converse. That’s what we’re seeing now.
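
To make the “predict the next word” idea concrete, here is a minimal sketch in plain Python with made-up training text. It only counts which word tends to follow which, rather than using a neural network with billions of parameters, but the objective is the same one described above, and a little extra data in a new style changes what it generates.

```python
import random
from collections import defaultdict

def train_next_word(corpus: list[str]) -> dict:
    """Count, for each word, which words tend to follow it: 'predict the next word'."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        words = text.lower().split()
        for current, following in zip(words, words[1:]):
            counts[current][following] += 1
    return counts

def generate(counts: dict, start: str, length: int = 8) -> str:
    """Repeatedly pick a likely next word to produce new text in the style it has seen."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

# "Pretraining" on stand-in internet text, then a little extra data in a new style.
internet_text = ["the cat sat on the mat", "the dog sat on the log"]
example_tweets = ["just saw the cutest cat on my walk", "the cat is judging me again"]

counts = train_next_word(internet_text + example_tweets)
print(generate(counts, "the"))
```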

AI has a bias problem.

These systems learn from data, and when you have human researchers choosing that data, either consciously or unconsciously, they’re going to choose data that suits their worldview. If most of those researchers are white men, then they’re going to choose particular types of data. And that means the systems that train on that data will exhibit the biases of those researchers.

It’s hard to deal with this bias problem, and companies are still struggling with it. Yet they keep pushing the technology out into the world, even though they don’t quite know how to solve it, because they know how important it is to their bottom lines and their futures.

“It’s not as if the car is on a path to becoming sentient.”

We are already dealing with serious ethical dilemmas because of AI.

Google was helping the military in a way that was essentially a path toward autonomous weapons. They were taking the neural network idea, and they were using it to identify objects in drone footage. That’s a way of doing lots of things. It’s a way of potentially identifying targets. It’s a way of doing surveillance. But if you look down the line, it’s a path toward autonomous weapons. If you can identify a target in a video, you can put a gun on your system, and then you’ve got an autonomous weapon. And that’s what concerns a lot of people.

Some people are on the other side of this—they think Google should absolutely be working with the military. They believe other countries, including China, are going to make a real push toward applying this technology to military uses, and they think the big American companies need to be doing the same thing.

It’s an example of a real tension in this field, where the ideals of some of these AI researchers clash with these giant public companies that are driven by the profit motive.

Elon Musk’s predictions about sentient AI distort how the technology actually works.

Musk has talked about AI improving at an alarming rate. But what was improving were things like speech recognition, image recognition, and natural language understanding. That is very different from a system that is recursively self-improving by learning from the world around it. The systems do not work like that today—period. A neural network is not learning in real time from everything around it.

A good way to think about this is a self-driving car. The way a self-driving car recognizes the world around it is by recognizing stop signs, other cars, and pedestrians on the side of the road. But it’s not as if the neural network is learning from everything that’s going on as the car drives. You have to drive the car around and gather all the data—all the photos and videos—and then you take it back to your data center, and you feed it into your neural network and train it to do something. Then you put the new neural network on the car, and it does better. So it’s not improving itself; you very much have a human in the loop, and it’s a multi-step process. It’s not as if the car is on a path to becoming sentient.
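
To make the multi-step, human-in-the-loop nature of that process concrete, here is a minimal sketch in Python. The function bodies, data, and version numbers are stand-ins for illustration, not a description of how any actual self-driving stack works.

```python
# A toy version of the cycle described above: drive and gather data, retrain in the
# data center, deploy the new network to the car. Humans are involved at every step.

def drive_and_collect(car_id: str) -> list[dict]:
    """Stage 1: the car drives around and records photos and video frames."""
    return [{"image": f"frame_{i}.jpg", "label": "stop_sign"} for i in range(3)]

def train_in_data_center(model: dict, new_data: list[dict]) -> dict:
    """Stage 2: back at the data center, the neural network is retrained on the gathered data."""
    updated = dict(model)
    updated["examples_seen"] = model["examples_seen"] + len(new_data)
    updated["version"] = model["version"] + 1
    return updated

def deploy_to_car(car_id: str, model: dict) -> None:
    """Stage 3: the new network is pushed back onto the car, which now does a bit better."""
    print(f"Deployed model v{model['version']} ({model['examples_seen']} examples) to {car_id}")

model = {"version": 1, "examples_seen": 0}
for cycle in range(2):                      # each pass through the loop is driven by people
    data = drive_and_collect("car-42")
    model = train_in_data_center(model, data)
    deploy_to_car("car-42", model)
```

The point of the sketch is the structure of the loop: the car itself never updates its own network while driving, so nothing in the cycle amounts to the system improving itself.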

 
