One of TIME’s 100 Most Influential People in AI, Verity Harding is director of the AI & Geopolitics Project at the Bennett Institute for Public Policy at the University of Cambridge and founder of Formation Advisory, a consultancy firm that advises on the future of technology and society. She worked for many years as Global Head of Policy for Google DeepMind and as a political adviser to Britain’s deputy prime minister.
Below, Verity shares five key insights from her new book, AI Needs You: How We Can Change AI’s Future and Save Our Own. Listen to the audio version—read by Verity herself—in the Next Big Idea App.
1. All technology is political.
Science and technology are often presented as neutral tools that can then be used for either good or ill. But this isn’t quite true. Most inventions and discoveries are not developed in isolation; they are heavily shaped by the politics and culture of the societies in which they emerge. Politics and culture decide whose work gets funded and whose doesn’t, and how those technologies are received and regulated. AI is no different.
2. Because AI is being built by human beings, it will reflect human flaws.
There is a light and a dark side to AI: this new technology holds great promise, but it also has troubling potential outcomes. I’m perfectly comfortable, for example, with AI recommending me a TV show, but I’d be much more nervous about an AI doctor or lawyer.
AI reflects humanity in all its complicated glory. So, to ensure that we end up with AI that does more good than harm, we must be alive to both possibilities and careful about where we allow AI to enter.
3. We have a lot to learn from the past.
Having worked in the tech industry for a long time, I know it’s not a field well known for its humility. But while AI is new, the history of invention is not. In my book, I take three examples of transformative technologies (the space race, IVF, and the internet) to show how we’ve managed change in the past.
“The more people get involved in guiding the future of AI, the better.”
In these case studies, I look less at the technical details than at the political ones: not, for example, the United States’ satellite capabilities relative to the Soviet Union’s, but the 1967 UN Outer Space Treaty, which made outer space the province of all mankind, a lofty and noble ambition, even against the backdrop of the Cold War. Looking at how we accomplished that then can teach us a lot about what we need to do today to save our future.
4. History gives us hope.
In that same example of the UN Outer Space Treaty, we see that even in an unstable and dangerous world, where the threat of nuclear war between the United States and the Soviet Union was all too real, compromise and cooperation were still possible. The same pattern appears in my other two examples. A similar arc of hope runs through each case study: scientists and politicians working together with the public to steer technological change towards the greater good.
We have managed great change peacefully before, and we can do it again.
5. AI needs you.
The good news about AI being so heavily shaped by politics and culture is that we all have a role to play in how it evolves. The more people get involved in guiding the future of AI, the better. So ask yourself: What do you want the future of AI to look like? What do you want from AI? How do we achieve the AI you want to see in the world? I hope my book helps you think through your answers to those questions. There is a place for all of us at the table, and AI needs you to have an opinion and to express it.