Below, Tom Griffiths shares five key insights from his new book, The Laws of Thought: The Quest for a Mathematical Theory of the Mind.
Tom is a professor of psychology and computer science at Princeton University and Director of the Princeton Laboratory for Artificial Intelligence.
What’s the big idea?
How can we study something we can’t see or touch? Mathematics allows us to develop rigorous theories about how minds work. It also lets us use those theories to build artificial intelligence systems. Just as physicists seek to identify Laws of Nature, cognitive scientists hope to discover the Laws of Thought.
1. The story of AI goes back hundreds of years.
For many people, AI seems to have come out of nowhere. In late 2022, it suddenly became possible for anyone to have a conversation with chatbots that could draw on more knowledge than any human. Dig a little deeper and you might discover that the approach behind those chatbots—building bigger and bigger artificial neural networks—had its first dramatic demonstration in 2012, when it was used to significantly improve how well computers identify images. But the story goes back much further than that.
When Enlightenment thinkers, like René Descartes or Gottfried Wilhelm Leibniz, first began using mathematics to effectively describe the physical world around us, they also suggested that the same kind of approach might be used to describe the mental world inside us. Those early efforts led to the development of mathematical logic and digital computers, which in turn led to the creation of cognitive science by psychologists who used mathematical ideas to come up with new theories about the mind. Modern AI springs from that tradition: key advances in the development of artificial neural networks came from psychologists seeking to understand how the human mind works.
2. No single piece of mathematics describes the mind.
Cognitive scientists started using mathematical logic to describe thought, but after a couple of decades realized that wasn’t going to work. Concepts have fuzzy edges that logic just can’t capture. Artificial neural networks were developed in parallel and became much more powerful after a group of psychologists showed how they could be used to learn more complex relationships than anyone had thought possible.
“Concepts have fuzzy edges that logic just can’t capture.”
Continuing to scale up those neural networks takes us to modern AI. But understanding how neural networks learn—and how to create systems that learn more like people—requires a different approach, one that uses ideas from probability theory. These three mathematical traditions intertwine to give us a more complete picture of how the mind works.
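To make that contrast concrete, here is a small sketch of how probability handles a fuzzy concept where a hard logical rule cannot. It is my own illustration, not an example from the book, and the features and numbers are invented.

```python
# A strict logical definition draws a hard boundary around a concept,
# while a simple Bayesian combination of the same evidence gives a graded
# degree of belief. All likelihoods below are made up for illustration.

def is_bird_logical(has_feathers: bool, flies: bool) -> bool:
    # Rigid definition: a bird is something with feathers that flies.
    # This misclassifies penguins, which have feathers but do not fly.
    return has_feathers and flies

def is_bird_probabilistic(has_feathers: bool, flies: bool) -> float:
    # Graded judgment: combine each piece of evidence with Bayes' rule,
    # returning a probability rather than a hard yes/no verdict.
    prior = 0.5                    # prior probability of "bird"
    p_feathers = (0.99, 0.02)      # P(feathers | bird), P(feathers | not bird)
    p_flies = (0.80, 0.10)         # P(flies | bird),    P(flies | not bird)

    belief_bird, belief_not = prior, 1 - prior
    for observed, (p_if_bird, p_if_not) in [(has_feathers, p_feathers),
                                            (flies, p_flies)]:
        belief_bird *= p_if_bird if observed else 1 - p_if_bird
        belief_not *= p_if_not if observed else 1 - p_if_not
    return belief_bird / (belief_bird + belief_not)

# A penguin: feathers, but flightless.
print(is_bird_logical(True, False))                   # False: the hard boundary excludes it
print(round(is_bird_probabilistic(True, False), 2))   # 0.92: still very probably a bird
```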
3. Crucial discoveries come from pursuing unpopular ideas.
The first neural networks that could learn were built by a computer scientist who abandoned the project after deciding that, in order for them to learn anything interesting, they would have to be much larger than he considered practical. But a psychologist worked out how to make them learn better, which caused a lot of excitement about the potential of that approach. However, that same computer scientist then showed that even those neural networks had fundamental limitations, and they decreased in popularity.
A decade later, some psychologists became interested in neural networks as tools for understanding human cognition and cracked the problem of how to get them to learn more complex relationships, making neural networks popular again. Then machine learning researchers became interested in the statistical foundations of learning, and neural networks fell out of favor once more. Eventually, more powerful computers and larger datasets made it possible to use neural networks to solve even more challenging problems, bringing us to the present day.
This back-and-forth between disciplines—where an unpopular idea in one discipline is picked up and improved upon by researchers in another discipline—is a nice illustration of how an interdisciplinary field like cognitive science can have a huge impact.
4. We are closer than ever to understanding the human mind.
I used to tell my students that cognitive scientists had made a lot of progress in figuring out how to ask questions about the mind but were still a long way from having answers. Now, however, the progress in AI over the last decade is beginning to suggest answers to some of our deepest questions about human intelligence.
“Artificial neural networks give us important hints about how that might work.”
Mathematical frameworks like logic and probability theory are fundamental to describing the nature of thought and learning, but the abstract rules and inferences they identify need to be implemented in real human brains. Artificial neural networks give us important hints about how that might work. Putting these pieces together gets us remarkably close to fulfilling the vision that Descartes and Leibniz had centuries ago of having a mathematical framework for describing thought.
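As a toy illustration of that implementation story (my own sketch, not an example from the book), take XOR: a logical rule that no single threshold unit can compute, the very limitation mentioned in the third insight, yet one that a small network of simple neuron-like units implements easily once a hidden layer is added.

```python
# Hand-wired network of threshold units computing XOR. The weights are chosen
# by hand for illustration; a trained network would find similar structure.
import numpy as np

def step(x):
    # A crude model neuron: output 1 if total input exceeds zero, else 0.
    return (np.asarray(x) > 0).astype(int)

def xor_network(a: int, b: int) -> int:
    x = np.array([a, b])
    # Hidden layer: one unit detects "a OR b", another detects "a AND b".
    hidden = step([x @ np.array([1, 1]) - 0.5,    # OR unit
                   x @ np.array([1, 1]) - 1.5])   # AND unit
    # Output unit fires for "OR but not AND", which is exactly XOR.
    return int(step(hidden @ np.array([1, -1]) - 0.5))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {xor_network(a, b)}")
```

The hidden units act like intermediate concepts, which is one concrete sense in which abstract rules can be realized in networks of simple elements.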
5. There are still big differences between human minds and AI.
Despite all that progress, modern AI still has some important gaps. One of the biggest concerns learning. If you read aloud all of the text used to train today's chatbots, it would take tens of thousands of years. By contrast, a human child becomes a fluent speaker of their native language in less than ten years. Something in human brains must therefore differ from what is inside our AI algorithms. Figuring out what that might be is a problem we study in my lab, and a preoccupation of many cognitive scientists.
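That scale gap is easy to check with a rough calculation. The figures below are assumptions chosen for illustration (the book does not supply them), but any plausible choice lands in the same range.

```python
# Back-of-envelope check of the "tens of thousands of years" claim.
# The token count and reading rate are assumptions, not figures from the book.
training_tokens = 10e12     # assume roughly 10 trillion tokens of training text
words_per_token = 0.75      # rough rule of thumb: a token is about 3/4 of a word
reading_rate_wpm = 150      # a comfortable read-aloud pace, in words per minute

words = training_tokens * words_per_token
years = words / reading_rate_wpm / (60 * 24 * 365)
print(f"Reading the training text aloud would take roughly {years:,.0f} years")
# -> roughly 95,000 years, versus under ten years for a child to master a language
```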
There are also interesting questions about what exactly artificial neural networks are learning, and whether they represent the world in the same way we do. In some cases they seem to, but in others we can show that their representations are quite different. Figuring out what AI systems know, and when they are likely to succeed or fail at a task, is a great opportunity to use the methods that cognitive scientists have honed by studying humans. For a long time, we had only one species that demonstrated this kind of intelligent behavior, so having another one to study opens the door not just to understanding more about AI but to understanding more about ourselves.