Beena Ammanath is a global thought leader in AI ethics. She is the Executive Director of the Global Deloitte AI Institute, and Founder of Humans for AI. Beena has worked with companies such as General Electric, Bank of America, Hewlett Packard Enterprise, Thomson Reuters, and British Telecom, and she has served on the boards of several tech startups, nonprofits, and universities.
Below, Beena shares 5 key insights from her new book, Trustworthy AI: A Business Guide for Navigating Trust and Ethics in AI. Listen to the audio version—read by Beena herself—in the Next Big Idea App.
1. Trustworthy AI is more than fairness and bias.
Whenever the topic of AI ethics and trust comes up, the most common point of concern is bias and fairness. And it’s a valid concern—there have been quite a few examples of artificial intelligence recommending stiffer or more lenient prison sentences depending on the defendant’s skin color, or of a risk assessment AI giving a mortgage applicant a lower score because of their racial or ethnic heritage. So bias is a very valid concern, but trust in AI is much more than that. Qualities of trustworthy AI include robustness and reliability. Does the AI remain accurate over time? The real world is a messy place when it comes to data, but trustworthy AI does what it was designed to do no matter what data it encounters.
Trustworthy AI is also explainable, meaning we understand how it works and can explain that to others, even if they’re not data scientists. You shouldn’t need a PhD to understand how an AI is working. This leads to another quality, transparency, which refers to how the AI is used. If you call up a business and the entity on the other end of the phone is a natural language processing AI, wouldn’t you want to know that it’s not human? No one likes to be tricked, so transparency is important.
Now, even though there are no Terminator types around the corner, we do need AI to be safe. When it is put out into the world to make our lives better, we need to make sure that it doesn’t inadvertently cause physical, financial, or emotional harm. Likewise, we need AI to be secure so it cannot be corrupted by criminals or bad actors.
And therein lies another quality: respect for privacy. Every person and organization has a valid expectation of privacy, and the way AI works must align with and support those expectations. At its core, AI is a set of highly sophisticated equations, which can never be punished or held to account in a meaningful way. And so human accountability is a critical part of trustworthy AI.
“You shouldn’t need a PhD to understand how an AI is working.”
And lastly, responsibility. To quote Jeff Goldblum’s character in Jurassic Park, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” This is true for AI as well. We must always ask, “Is it really a responsible choice to send this AI out into the world?” And sometimes the answer is no.
2. It is impossible to have one set of universal rules for AI.
Let’s take reliability as an example. Imagine an artificial intelligence tool that can identify humans in the real world, whether they are walking or running, smiling or wearing a mask, four feet tall or seven feet tall. One company takes this AI and puts it into a self-driving car to help identify and avoid pedestrians. Another company takes the same AI and uses it at a sports stadium to keep track of how many people visit the concession stands during a game. Now, how important is it for the AI tool to be reliably accurate at the stadium? And how important is it for the tool to be accurate in a self-driving car? You see the difference, of course: a miscount at the concession stand is trivial, while a missed pedestrian could cost a life. How the tool is used directly impacts which trust qualities are most important.
This is true across the board—no two AI tools are the same, nor are the AI use cases. This means that the ethical implications of the AI are intimately tied to how the AI is being used. In short, it’s impossible to have just one set of rules for every possible scenario. And if this makes you raise an eyebrow, you’re not alone. How can we create rules, regulations, and best practices when ethics depend on largely unique circumstances and applications? It is complex. However, there are solutions, ideas, and answers in the book that can help you get started.
“The ethical implications of the AI are intimately tied to how the AI is being used.”
3. The only way to solve for AI ethics is by going deep into the details.
One of the ways we can make progress in AI ethics is by taking a much closer look at what AI ethics means in practice, and how it relates to the development of the technology. Take transparency—what does that really mean? On a superficial level, it could mean that end users, whether that’s consumers, business employees, or government officials, know they are interacting with an AI tool. Okay, but why should they know? Dig down a level. When people understand their engagement with an AI, they can make informed decisions. Do you want to share your data with AI? Do you trust that interactions with the AI will lead to favorable outcomes? Why do you trust it?
But why is human decision-making so important if artificial intelligence is designed to automate things? Aha, an even deeper insight. Artificial intelligence does not replace human decision-making. In an ideal scenario, it enables better decision-making. Humans are always part of the AI equation, and because of that, we need meaningful awareness of the artificial intelligence around us. Each element of AI trust and ethics can fold out like a near-endless accordion of considerations, details, concerns, and opportunities. This is where we need to spend our intellectual effort, and we should delight in the opportunity. AI is the transformational technology of our era. This is our chance and our obligation to make it as valuable and as trustworthy as it can be.
4. Trustworthy AI is not just for data scientists.
Trustworthy AI is a team sport. It used to be that AI was the province of data scientists and PhDs. Math and computer programming were prerequisites for taking part in AI. That used to be true of computers, too. Before the availability of the personal computer, not many people understood how computers worked, and they had neither the technical knowledge to use one nor the opportunity to do so.
“As our day-to-day tasks and responsibilities come to include the use of AI, we will all need to think about what role we will play in AI ethics.”
AI is on a similar trajectory. In time, as more interfaces are developed and more use cases are established, the number of people using artificial intelligence in their work and personal life will only grow. As our day-to-day tasks and responsibilities come to include the use of AI, we will all need to think about what role we will play in AI ethics.
5. AI ethics is not just a big tech issue.
The large technology companies are famous for using AI. Search engines, social media, consumer tech—companies in these spaces are powerful innovators with AI. However, AI ethics is not only something for that industry to solve. Just as all of us will in time use artificial intelligence in our work, every organization in every sector and industry will eventually use AI.
Many already are. Manufacturers, medical researchers, retailers, hospitals, government and public services…select any industry, and you will find an organization using AI. The important lesson is that AI ethics is not something for another industry to solve; we can’t just wait for a perfect solution to emerge. Instead, each person and organization is called upon to take charge of AI ethics. What does that mean exactly? Well, it’s our task to define it and pursue it to AI’s greatest potential and benefit.