What Brains of the Past Teach Us About the AI of the Future

Max Bennett is the co-founder and CEO of Alby, a start-up that helps companies integrate large language models into their websites to create guided shopping and search experiences. Bennett holds several patents for AI technologies and has published numerous scientific papers in peer-reviewed journals on the topics of evolutionary neuroscience and the neocortex. He has been featured on the Forbes 30 Under 30 list as well as Built In NYC's 30 Tech Leaders Under 30.

Below, Max shares five key insights from his new book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. Listen to the audio version—read by Max himself—in the Next Big Idea App.


1. To understand the brain, we must go back in time.

We have been trying to understand the brain for centuries, and yet we still don’t have satisfying answers. The problem is that the brain is really complicated. The brain contains over 86 billion neurons and over 100 trillion connections all wired together in a tangled mess. Within a cubic millimeter of the brain, which is about the width of a single letter on a penny, there are over a billion connections. Even if we mapped all 100 trillion connections, we still wouldn’t know how the brain works.

The fact that two neurons connect to each other doesn't tell us much about what they are communicating—neurons pass hundreds of different chemical signals across these connections, each with unique effects. Worse still, evolution doesn't design systems in coherent ways: there are duplicated, redundant, overlapping, and vestigial circuits that obscure how different brain systems fit together.

These problems have proven so difficult that some neuroscientists believe it will be many more centuries before we ever make sense of the brain.

But there is an alternative approach, one that searches for answers not in the human brain, but within fossils, genes, and the brains of the many other animals that populate our planet. In recent years, scientists have made incredible progress reconstructing the brains and intellectual faculties of our ancestors. This emerging research presents a never-before-possible approach to understanding the brain. Instead of trying to reverse-engineer the complicated modern human brain, we can start by rolling back the evolutionary clock to reverse-engineer the much simpler first brain. We can then track the changes forward in time, observing each brain modification that occurred and how it worked. If we keep tracking this story forward from the simple beginnings through each incremental increase in complexity, we might finally be able to make sense of the magical device in our heads.

2. The brain evolved in five steps.

As the evidence continues to roll in, a story has begun to reveal itself. The first brain evolved over 600 million years ago; one might think that over such an astronomical stretch of time, the story of brain evolution would contain so many small changes that it would be impossible to fit into a single book. But instead, amazingly, it turns out that the major reconfigurations of the brain occurred in only five key steps, referred to as the "five breakthroughs."

Each breakthrough emerged from a new set of brain modifications and gifted our ancestors with a new suite of intellectual faculties.

“Some neuroscientists believe it will be many more centuries before we ever make sense of the brain.”

Each breakthrough was built on the foundation of those that came before. Just as the ancestors of lizards took fish-like fins and reconfigured them into feet to enable walking, and the ancestors of birds took those same feet and reconfigured them into wings to enable flying, brain evolution too worked by repurposing the available biological building blocks to face new challenges and enable new feats.

If we want to understand the human brain, and what is missing in current AI systems, the framework of these five breakthroughs offers a wonderfully instructive and simplifying approach.

3. The first brain was designed for steering.

Before brains evolved, animals didn't move around much. They were much like today's sea anemones and corals; they waited for food particles to come to them, at which point they would snatch the food out of the water with their tentacles. But they did not actively pursue prey or avoid predators.

However, around 600 million years ago, our ancestors evolved into a small worm-like creature the size of a grain of rice. These worm-like ancestors were the first animals to survive by moving towards food and moving away from danger. Not so coincidentally, these were the first animals to have brains.

This worm had no eyes or ears—it perceived the world only through a small portfolio of individual sensory neurons, each of which detected something vague about the outside world. Some neurons were activated by the presence of light, others by the presence of specific smells. Despite perceiving almost nothing detailed about the external world, these worms could still navigate using a clever technique called "steering." This was the first breakthrough.

“The first brains had two primary motor programs—one for moving forward, and one for turning.”

When a piece of food is placed in water, molecules break off it and disperse throughout the surrounding water. This produces what is called a "smell gradient": the concentration of these molecules is highest directly around the food source and becomes progressively lower the farther away you get. It is this physical fact that evolution exploited to enable the first form of navigation.

The first brains had two primary motor programs—one for moving forward, and one for turning. Although these worms couldn't see, they could find the origin of food by applying two simple rules: whenever the concentration of a food smell increases, keep going forward; whenever the concentration of a food smell decreases, turn randomly. Because of how smell gradients work, a worm that keeps applying this algorithm will eventually make its way to the source of the food smell.
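
To make the algorithm concrete, here is a minimal sketch of this run-and-turn steering strategy. It is my own illustration rather than code from the book, and the food location, smell gradient, starting point, and step size are all invented for the example.

```python
import math
import random

# Hypothetical setup: a food source at the origin and a smell gradient that
# falls off smoothly with distance from it.
FOOD = (0.0, 0.0)

def smell(x, y):
    """Concentration of the food smell at a point; higher closer to the food."""
    return 1.0 / (1.0 + math.hypot(x - FOOD[0], y - FOOD[1]))

def steer(steps=10_000, step_size=0.1):
    """Apply the two rules: go forward if the smell grows, turn randomly if it fades."""
    x, y = 20.0, 15.0                          # start far from the food
    heading = random.uniform(0, 2 * math.pi)
    last = smell(x, y)
    for _ in range(steps):
        # Motor program 1: move forward along the current heading.
        x += step_size * math.cos(heading)
        y += step_size * math.sin(heading)
        now = smell(x, y)
        # Motor program 2: if the smell got weaker, turn in a random direction.
        if now < last:
            heading = random.uniform(0, 2 * math.pi)
        last = now
        if math.hypot(x - FOOD[0], y - FOOD[1]) < 0.5:
            return True                        # reached the food source
    return False

print(steer())  # almost always True: the two rules alone find the food
```

Even though the simulated worm never knows where the food is, climbing the smell gradient with these two motor programs is enough to reach it.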

In other words, "steering" worked by categorizing things in the world into "good" and "bad": worms steered towards good things like food smells and away from bad things like predator smells. This was the function of the first brain, and from it emerged many familiar features of intelligence, from associative learning to emotional states.

4. AI is missing the mammalian “world model.”

There are many debates about what the final steps are on the road to human-like artificial intelligence. From the perspective of the five breakthroughs, what is missing is not the earliest breakthroughs in the evolution of the human brain—steering and reinforcement learning—nor the most recent breakthrough, which was language. Instead, AI systems have skipped the breakthroughs that evolved halfway through our brain's journey: the ones that emerged in early mammals and primates.

Early mammals emerged 150 million years ago, as small squirrel-like creatures in a world filled with massive predatory dinosaurs. They survived by burrowing underground and emerging only at night to hunt for insects. From the crucible of this incredible pressure to survive was forged a new brain region called the neocortex. The neocortex enabled these early mammals to imagine the future and remember the past; in other words, to simulate a state of the world other than the current one.

This was the breakthrough of simulation. It enabled these animals to plan their actions ahead of time. It enabled our squirrel-like ancestors to peek out from their burrow, spot nearby predators, and simulate whether or not they could successfully make a dash across the forest floor without getting caught. Simulation also gifted these mammals fine motor skills, as they could plan their body movements ahead of time, effortlessly figuring out where to place their paws to balance themselves and jump between tree branches. This is why lizards and turtles, lacking a neocortex, move slowly and clumsily on the forest floor, while mammals like squirrels and monkeys crack open nuts and climb in trees.

“Planning requires dealing with imperfect noisy information, an infinite space of possible next actions, and ever-changing internal needs.”

To accomplish all this, the neocortex creates an internal representation of the external world, what AI researchers call a “world model.” The world model in the neocortex contains enough details of how the world actually works that animals can imagine themselves doing something and accurately predict the consequences of their actions. In order for a mouse to imagine itself running down a path and correctly predict whether a nearby predator will catch it before it gets to safety, its imagination needs to accurately capture the nuances of physics: speed, space, and time.
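
As a toy illustration of this idea (entirely my own, with made-up numbers and a deliberately crude physics model), a world model lets an agent roll an imagined future forward and commit to an action only if the predicted outcome is good:

```python
# A minimal sketch of deciding by simulation, in the spirit of the mouse example
# above. The "world model" here is just straight-line kinematics; all speeds and
# distances are invented for illustration.

def dash_succeeds(distance_to_safety, mouse_speed, predator_distance, predator_speed):
    """Imagine the dash: does the mouse reach safety before the predator reaches it?"""
    time_for_mouse = distance_to_safety / mouse_speed
    time_for_predator = (predator_distance + distance_to_safety) / predator_speed
    return time_for_mouse < time_for_predator

# Decide without moving a muscle: act only if the imagined outcome is favorable.
if dash_succeeds(distance_to_safety=3.0, mouse_speed=2.0,
                 predator_distance=6.0, predator_speed=5.0):
    print("dash for the burrow")   # simulated future looks safe
else:
    print("stay hidden")           # simulated future ends badly
```

The point of the sketch is only that the decision is made inside the simulation, before any action is taken in the real world.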

We already have AI systems that can make plans and simulate potential future actions, the most famous modern example being AlphaZero, the AI system that recently beat the best Go and chess players in the world. AlphaZero works, in part, by playing out possible future moves before deciding what to do. But AlphaZero and other AI systems still can’t engage in reliable planning in real-world settings, outside of the constrained and simplified conditions of a board game.
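
The core trick of playing out futures before committing to a move can be written in a few lines. Below is a self-contained sketch of lookahead search on a toy take-away game (take 1 to 3 stones; whoever takes the last stone wins). It is not AlphaZero's actual algorithm, which combines neural networks with Monte Carlo tree search, but it shows the same basic idea of simulating future moves before deciding:

```python
# Exhaustive lookahead on a toy game: before choosing, simulate every possible
# continuation and pick a move that leaves the opponent in a losing position.

def best_move(stones):
    """Return a winning move (1-3 stones) for the player to act, if one exists."""
    def wins(remaining):
        # The player to move wins if some legal move ends the game
        # or leaves the opponent in a position from which they cannot win.
        return any(remaining - take == 0 or not wins(remaining - take)
                   for take in (1, 2, 3) if take <= remaining)

    for take in (1, 2, 3):
        if take <= stones and (stones - take == 0 or not wins(stones - take)):
            return take            # the simulated futures say this move wins
    return 1                       # no winning line found; play on and hope

print(best_move(10))  # 2: taking two stones leaves 8, a losing position
```

On a board the size of Go, brute-force enumeration like this is hopeless, which is exactly why AlphaZero pairs its search with learned evaluations; but the principle of deciding by simulation is the same.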

In real-world settings, planning requires dealing with imperfect noisy information, an infinite space of possible next actions, and ever-changing internal needs. A squirrel dashing from one tree to the next has, literally, an infinite number of possible actions to take, from the low-level choices of exactly where to place each individual paw, to the higher-level choices of exactly which path to take. How the neocortex enables mammals to plan in such complex environments is still beyond our understanding; this is why we do not yet have robots that can wash our dishes and do our laundry. The secret to those skills lives within the minuscule brains of squirrels, rats, and all the other mammals in the animal kingdom.

5. The secret to AI safety lives in monkey brains.

One of the key problems in the field of AI alignment is ensuring that AI systems understand the requests that we make of them. This has also been called the "paperclip problem," after Nick Bostrom's allegory of asking an AI system to run a paperclip factory as efficiently as possible, at which point his imagined AI system goes on to convert all of Earth into paperclips. This thought experiment reveals that AI can be dangerous without being intentionally nefarious: the AI system did exactly what we told it to do, but failed to infer the true intent of our request and our actual preferences. The paperclip problem is one of the biggest outstanding challenges in the field of AI safety.

When humans speak to each other, we automatically infer the intent of each other’s words. This ability was part of the fourth breakthrough, the breakthrough of “mentalizing.” It emerges from parts of the neocortex that appeared with early primates. These primate areas endow monkeys and apes with the ability to simulate not only the external world but also their own inner simulation itself, enabling them to think about their own thinking and the thinking of others.

Early primates got caught in a political arms race; their reproductive success was defined by their ability to build alliances, climb political hierarchies, and cozy up to those with high status. We see this in the social groups of modern nonhuman primates like chimpanzees, bonobos, and monkeys. The most powerful tool for surviving the political world of primate life was the evolution of mentalizing, which enables primates to predict the consequences of their social choices, to "imagine themselves in other people's shoes," to infer how others might feel, what they might do, and what they want.

The new areas of the neocortex in primates contain the algorithmic blueprint for how to build AI systems that do the same. One way or another, in order to create safe AI systems, we will have to endow these systems with a reliable understanding of how the human mind works; without it, our AI systems will always risk turning an innocuous request, like optimizing a paperclip factory, into a world-ending cataclysm.

To listen to the audio version read by author Max Bennett, download the Next Big Idea App today:
