Jeff Sebo directs the Center for Environmental and Animal Protection and the Center for Mind, Ethics, and Policy at New York University. He is also co-director of NYU’s Wild Animal Welfare Program, as well as an associate professor of environmental studies and affiliated professor of bioethics, medical ethics, philosophy, and law.
What’s the big idea?
We share the world with a vast number of non-humans: vertebrates, invertebrates, plants, fungi, and even AI systems. There is disagreement and uncertainty about our ethical obligations to non-humans. If there is any chance that an entity is sentient, agentic, or otherwise morally significant, then we should take a stance of humility and caution in how we allow our actions to affect it.
Below, Jeff shares five key insights from his new book, The Moral Circle: Who Matters, What Matters, and Why. Listen to the audio version—read by Jeff himself—in the Next Big Idea App.
1. If you might matter, we should assume that you do.
In ethics, we have a lot of disagreement and uncertainty about what it takes to matter. Some people think you need to be sentient—able to consciously experience pleasure and pain. Others think you need to be agentic—able to set and pursue your own goals based on your own beliefs and desires. Still others think you need to be alive—able to perform basic life functions associated with survival and reproduction.
In science, we also have disagreement and uncertainty about which beings have these features. With other mammals and birds, we can be confident that they are alive, sentient, agentic, and morally significant. They can feel and think, and they matter for their own sakes. But what about reptiles, amphibians, or fishes? What about invertebrates with more distributed cognitive systems?
What about plants and fungi with radically different kinds of cognitive systems? What about chatbots and robots made of silicon instead of carbon? In these cases, we might genuinely be uncertain whether it feels like anything to be them. But we should not wait for certainty before taking basic steps to treat these non-humans well. If there is at least a realistic, non-negligible chance that they matter for their own sakes, based on the best information and arguments currently available, then we should take reasonable, proportionate steps to consider and mitigate the risks that our actions and policies might be imposing on them.
2. Many beings might matter.
How can we tell which non-humans are sentient, agentic, or otherwise morally significant? When proof and certainty are unavailable, we can at least collect evidence and estimate probabilities. In particular, we can use a marker or indicator method to assess non-humans for behavioral or anatomical evidence associated with capacities like sentience and agency.
With animals, we can use behavioral tests. We can ask:
- Do they nurse their wounds?
- Do they respond to analgesics and antidepressants?
- Do they make behavioral trade-offs between the avoidance of pain and the pursuit of other valuable goals?
To the extent that the answer is yes, it increases the probability of moral significance. When we ask these questions about animals, the answer is often yes. Many experts in many fields are prepared to say that there is at least a realistic, non-negligible chance of moral significance in all vertebrates (mammals, birds, reptiles, amphibians, and fishes) and even in many invertebrates, such as cephalopod mollusks, crustaceans, and insects.
We might not be able to trust behavioral evidence in the same way with AI systems, but we can look at underlying architectures. We can ask: do AI systems have computational features that we associate with capacities like sentience and agency? Do they have their own forms of perception, attention, learning, memory, self-awareness, social awareness, language, and reason? To the extent that the answer is yes, that increases the probability of moral significance.
While current AI systems might not have many of these capacities, we can expect near-future AI systems to have advanced and integrated versions of many of them. In the spirit of caution and humility, we should give at least minimal moral consideration to all vertebrates, many invertebrates, and many near-future AI systems.
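To make the shape of this marker method concrete, here is a toy sketch in Python. Everything in it is hypothetical: the marker names, weights, and prior are illustrative placeholders, not values from the book or from any scientific assessment. The point is only to show how individually inconclusive markers can add up to a realistic, non-negligible chance.

```python
# Toy sketch of a marker-based assessment. All marker names, weights,
# and the prior below are hypothetical placeholders; the method in the
# text is qualitative, not a formula.

MARKERS = {
    "nurses_wounds": 0.15,            # behavioral marker
    "responds_to_analgesics": 0.20,   # behavioral marker
    "makes_pain_tradeoffs": 0.25,     # behavioral marker
    "integrated_self_model": 0.20,    # architectural marker (e.g., AI)
    "flexible_goal_pursuit": 0.20,    # agency marker
}

def estimate_chance(observed: dict[str, bool], prior: float = 0.05) -> float:
    """Bump a small prior upward for each marker observed.

    Each positive marker closes a fixed fraction of the remaining gap
    to certainty, so no single marker is decisive on its own.
    """
    chance = prior
    for marker, weight in MARKERS.items():
        if observed.get(marker, False):
            chance += (1.0 - chance) * weight
    return chance

# Example: an invertebrate that shows several behavioral markers.
octopus = {
    "nurses_wounds": True,
    "responds_to_analgesics": True,
    "makes_pain_tradeoffs": True,
}
print(f"Estimated chance: {estimate_chance(octopus):.2f}")  # ~0.52
```

On this sketch, a being that shows several markers clears the bar for at least minimal moral consideration, even though no single marker comes close to proof.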
3. If we might be affecting you, we should assume that we are.
We also have disagreement and uncertainty about what we owe everyone who matters. In ethics, we debate whether we have a general responsibility to help each other. Some people think we do: if I can prevent something bad from happening without sacrificing anything comparably significant, then I should help. Other people think we do not: I should consider the risks that I might be imposing, and I should reduce and repair the harms that I cause, but beyond that, helping is optional.
In science, we often have disagreement and uncertainty about whether our actions and policies are imposing risks and harms on vulnerable others. Suppose you dump toxic waste in a lake, and the next day you walk by that lake and see a rabbit drowning in the water. Did you play a role in this predicament? You might not have directly imperiled the rabbit—you might not have picked her up and plopped her in the middle of the lake—but you might have indirectly imperiled her: your toxic waste might have played a role in her getting stuck. We should cultivate caution and humility in the face of disagreement and uncertainty about these ethical and scientific issues.
“How often are we in a position of at least reducing and repairing the harm that we cause to vulnerable others?”
In this case, you should help the rabbit, either because you might have a general responsibility to help others where possible, or at least because your own actions might have indirectly imperiled her. Helping her is a way of reducing and repairing the harm that you are personally causing in the world. But if we do have these responsibilities, then we must ask how often they arise. How often are we in a position of at least reducing and repairing the harm that we cause to vulnerable others?
4. We might be affecting many beings.
We now live in the Anthropocene, a geological epoch where humanity is a dominant influence on the planet. We affect non-humans worldwide, whether we like it or not, both directly and indirectly, individually and collectively. Consider industrial animal agriculture. This food system kills hundreds of billions of captive vertebrates and trillions of captive invertebrates every year, to say nothing of all the wild animals killed for food or other purposes.
This food system also significantly increases global health and environmental threats, including threats associated with the spread of diseases, antimicrobial resistance, pollution, biodiversity loss, and human-caused climate change. When these threats occur, they imperil humans and non-humans alike. They imperil us directly by exposing us to diseases, fires, and floods, and they imperil us indirectly by amplifying ordinary threats that we already face, like hunger, thirst, illness, or injury. For animals, they amplify threats associated with human violence and neglect.
In the future, we can expect similar dynamics with emerging technologies like advanced AI systems. If and when AI systems are sentient, agentic, or otherwise morally significant, we could be using them at even greater scales than we currently use non-human animals. In doing so, we could also create and amplify a wide range of threats. We could lose control of AI, and AI could harm us. We could retain control of AI and use it to harm each other. AI might amplify ordinary threats we already face, like bias, disinformation, and misinformation. We have a responsibility to consider all affected stakeholders equitably when making decisions about our effects on the world.
5. We should reject human exceptionalism.
Human exceptionalism is the presumption that humanity always matters most and takes priority. If such a vast number and wide range of non-humans might matter and our actions and policies might be affecting them at a global scale, then we owe them a lot.
Many humans assume that we should nevertheless prioritize fellow humans because we have higher capacities for welfare: I can suffer more than a mouse, for example. But we might not always have higher capacities for welfare. I might not be able to suffer more than an elephant or a whale or a sophisticated AI system. Even if we have higher capacities for welfare than non-humans individually, we might not have higher capacities for welfare than them in the aggregate because the non-human population is and will be much larger and more diverse than the human population. They have more at stake than we do overall, even if we have more at stake than they do individually.
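A toy arithmetic sketch, with numbers that are entirely made up rather than estimates from the book, shows how the aggregate point can hold even when the individual point does not:

```python
# Made-up numbers, purely to illustrate aggregate vs. individual stakes.
human_capacity, human_population = 100.0, 8e9    # hypothetical welfare units
animal_capacity, animal_population = 1.0, 1e13   # hypothetical count

print(f"Human aggregate:  {human_capacity * human_population:.1e}")    # 8.0e+11
print(f"Animal aggregate: {animal_capacity * animal_population:.1e}")  # 1.0e+13
# At 1/100th the individual capacity but more than a thousand times the
# population, the non-human side still has more at stake overall.
```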
“As the dominant species, we have a responsibility to ask how we might be affecting all stakeholders.”
Some humans also assume that we should prioritize fellow humans because we have closer bonds with fellow humans, but that might not always be true either. If we are affecting non-humans everywhere, then we have morally significant bonds with them, too. We might not be able to sustain our assumption that we always take priority. There might be a limit to how much we can support non-humans because we lack the knowledge, capacity, and political will needed to help them.
But we can still do more than we are at present. We can also try to build knowledge, capacity, and political will toward helping them in the future. When we can prioritize them effectively and sustainably, perhaps we should prioritize them then.
Some of these ideas might seem like a distraction from other important issues, but I think that we should take them seriously alongside those other issues. As the dominant species, we have a responsibility to ask how we might be affecting all stakeholders, including humans, animals, and, eventually, potentially AI systems. And if that leads us to uncomfortable conclusions about our treatment of non-humans, so be it. We can accept those conclusions and try to build a better world for humans and non-humans alike. Whether or not you agree with all of my conclusions in the book, I hope you at least find the arguments interesting and thought-provoking.