
AI’s Journey from Good Intentions to Supremacy and Exploitation


Parmy Olson is a technology columnist at Bloomberg, covering artificial intelligence, social media, and tech regulation. She was previously a tech reporter for the Wall Street Journal and Forbes, and is the author of We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency.

Below, Parmy shares five key insights from her new book, Supremacy: AI, ChatGPT, and the Race that Will Change the World. Listen to the audio version—read by Parmy herself—in the Next Big Idea App.


1. Today’s AI boom was sparked by two utopian dreamers.

Sam Altman and Demis Hassabis grew up on opposite sides of the ocean—Sam in St. Louis and Demis in London—but they had the same, wildly ambitious dream: to create computer systems that could surpass humans in intelligence. The goal wasn’t just AI but AGI, or artificial general intelligence: godlike software that could process information, speak, create, and reason as well as any human being.

Sam became a powerful figure in Silicon Valley who believed AGI would bring abundance to humanity and trillions of dollars of new wealth. Demis was an introspective scientist in London, more philosophical in his outlook. He believed AGI would make critical scientific breakthroughs, helping humans cure diseases and unlock the mysteries of the universe—perhaps even discover God. Steeped in their humanitarian ideals and aware of how powerful this new technology could be, both men tried to create responsible oversight for their systems. Each sought to ensure that whoever held ultimate supremacy over AI was not profit-driven like the large tech companies that desperately wanted it to grow their businesses. On that front, both Sam and Demis would fail.

2. How humanitarian-oriented AI submitted to Big Tech.

When Demis co-founded DeepMind in 2010, it was the first company aiming to build AGI, and it received funding from wealthy tech tycoons like Elon Musk and Peter Thiel. Demis was obsessed with games and measured success by which games his AI systems could conquer, like the Chinese game of Go. He made huge progress, but when Sam announced that he was also building AGI with a new research lab called OpenAI, Demis became paranoid. He worried that Sam had stolen some of his ideas. His past investor, Elon Musk, was now not only backing OpenAI but trash-talking Demis, telling OpenAI’s engineers that he was an evil genius who wanted to dominate the world. By now, Google had bought DeepMind, and Sam believed his lab was safer and more human-oriented as a non-profit. But not for long.


Building powerful AI systems was hugely expensive. Sam soon reversed course on his founding principles, turning OpenAI into a for-profit company and striking a major deal with Microsoft. Over in the U.K., Demis was struggling to keep to his humanitarian goals, too. He spent years on a plan to break away from Google, and even got the search giant to agree to let DeepMind spin out into something like a non-governmental organization. Demis really didn’t want Google to steer AGI, but just as Microsoft was gaining control of OpenAI, Google was tightening its grip on DeepMind and its AI technology.

3. The AI-fueled power of tech giants.

Very few organizations on the planet have the money, computing power, and talented engineers needed to make the most powerful generative AI—the new form of AI sparked by OpenAI’s release of ChatGPT. Once that little chatbot was out in the world, writing poems and generating photorealistic images, an arms race was on to make something even more capable. But the race was mostly taking place between a handful of giant companies: Google, Microsoft, Amazon, and Meta.

Over the last two years, these companies, collectively known as Big Tech, acted like a giant squid sucking all the oxygen out of the room. They took over promising AI startups that had hoped to become the next DeepMind or OpenAI, structuring the deals in clever ways to avoid antitrust scrutiny. As AI hype gathered pace, the six biggest tech companies watched their market caps skyrocket to nearly $16 trillion, rivaling the GDP of China.

I believe this consolidation of power will have far-reaching side effects. It will continue to entrench the market dominance of tech giants, giving them greater control over the user data of billions of people and even greater influence over public opinion and culture. The economic disruption of AI, which will likely displace jobs in industries like entertainment, customer service, and media, also looks increasingly like it will benefit big tech firms more than anyone else.

4. AI is being built in secret.

In the last two years, artificial intelligence has advanced more rapidly than scientists had expected, but its research is also dominated by tech giants. Over the last decade, the share of academic AI papers with ties to companies like Google and Microsoft has more than tripled, reaching 66 percent in 2022. Some academics say these strategies by Big Tech mirror those used by Big Tobacco to shape research and serve corporate interests.


As the race to create ever more powerful AI models has intensified, that research has become more secretive. For instance, archrivals OpenAI and DeepMind refuse to share details about their training data and methods, making it almost impossible for independent researchers to scrutinize their technology for potential harm. In 2021, a small group of female scientists at Google and the University of Washington tried to fight back, publishing what would become a seminal paper laying out how Google’s newest AI models could amplify racial and gender biases and reinforce stereotypes in society. The researchers warned that these models could spread misinformation and carry huge environmental costs, and that big companies like Google were overlooking ethical considerations and societal impacts as they raced ahead. Google fired two of those scientists. If you didn’t toe the line that bigger is better, you were out.

5. The biggest problem in AI is a complete lack of oversight.

Sam and Demis set out to create powerful technology that would benefit humanity. But that has become a distant dream as new forms of AI have been shaped and controlled by a handful of tech monopolies. They have raced to deploy software that “hallucinates,” or makes mistakes, that can reinforce biases, that threatens to erode human agency and trust in everything we see and hear online, and that is likely to further invade our privacy—all with little to no oversight.

Aside from the European Union’s AI Act (which won’t come into force until 2026 and whose impact is still uncertain), there are no laws regulating AI. The flimsy ethics boards that Sam and Demis set up turned out to be a whitewash. This all points to a troubled future for AI, one increasingly dominated by monopolistic corporate entities and steered towards acting as a tool for corporate gain rather than our collective human benefit. That’s not what Sam and Demis set out to do, but the rest of us are about to find out the cost.
