Gary Rivlin is a Pulitzer Prize-winning investigative reporter who has been writing about technology since the dawn of the internet. A two-time Gerald Loeb Award winner, he has covered the beat for Wired, for the New York Times, and in previous books.
What’s the big idea?
A veteran tech reporter who covered the dot-com boom firsthand now turns his attention to our AI era. He has spent the past two-plus years reporting from the front lines—following founders, venture capitalists, and tech titans—to document today’s race to cash in on this AI moment.
Below, Gary shares five key insights from his new book, AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence. Listen to the audio version—read by Gary himself—in the Next Big Idea App.
1. AI has been just around the corner for 70 years.
AI’s early pioneers were wildly optimistic. Consider Alan Turing, the English mathematician who famously cracked Nazi codes during World War II. He was also a pioneer of AI, or what he called “intelligent machines.” In 1950, he predicted that, by the year 2000, computers would be able to convincingly fool people into thinking they were talking to another person. He was off by over two decades.
At a landmark 1956 gathering at Dartmouth College, a group of researchers confidently predicted that, within ten years, a machine would defeat the world’s best chess players. It wouldn’t happen for another 40 years. That’s been the story of AI for much of its history: a technology that has hovered tantalizingly just around the next bend, forever a decade away.
I tell the early history of AI largely through two figures. One is Frank Rosenblatt. In the late 1950s, Rosenblatt pioneered the concept of neural networks: computers that don’t just follow instructions but learn and get better with training—the foundation of today’s AI revolution. The other is Marvin Minsky, who pushed the field in a totally different direction by favoring a brute-force approach where human coders told computers specific instructions.
Minsky was a contentious figure. He successfully discredited Rosenblatt’s ideas, leading the field down a dead-end path that delayed meaningful progress in artificial intelligence by about half a century. The irony? In 1967, Minsky confidently declared that AI would be “substantially solved” within a generation.
2. Safety takes a distant second to making money.
A decade ago, AI safety was top of mind for those working in the field. We’ve all seen enough sci-fi movies to know that if AI goes wrong, it can go catastrophically wrong.
DeepMind was the first hugely successful AI startup. When its founders agreed to sell to Google for $650 million in 2014, they insisted on two major conditions:
- Google could never use its AI for state surveillance or military purposes.
- Google would create an independent ethics board to oversee AI development.
The founders of DeepMind even turned down more money from Facebook because they didn’t trust its “move fast and break things” philosophy when it came to AI.
Likewise, OpenAI (the company behind ChatGPT) started as a nonprofit in 2015 so that it wouldn’t be pressured by investors to choose profits over safety. But then ChatGPT’s release in late 2022 was like a starter’s pistol going off in Silicon Valley. Suddenly, everyone was in a mad dash to cash in on AI.
Reid Hoffman—the book’s main character—described the divide between two camps: the “zoomers,” who wanted to push ahead as fast as possible, no matter the risks, and the “bloomers,” who believed in AI’s potential but also wanted sensible safeguards.
By the start of 2025, it was clear the zoomers had won. At an AI summit in Paris that February, the focus had shifted entirely from safety and ethics to profit and speed. As U.S. Vice President J.D. Vance declared at the summit, “The AI revolution is not won by handwringing about safety. It will be won by building.” The U.S. and the U.K. even refused to sign a non-binding declaration on AI safety.
As for Google’s promises to DeepMind’s founders, Google announced at the start of 2025 that it had dropped its ban on using its AI technologies for surveillance and military applications, and it had discontinued the independent ethics board it promised to create.
3. AI favors the giants of tech.
Building AI is extremely expensive. Training, fine-tuning, and operating the latest AI models costs hundreds of millions of dollars, if not billions. It also requires massive computing power, scarce high-performance chips, and top-tier researchers, many of whom command multi-million-dollar salaries.
Consider Inflection, the high-profile startup at the center of my book. The company was founded by Reid Hoffman and DeepMind cofounder Mustafa Suleyman. Inflection raised more than $1.5 billion and built a chatbot called Pi with millions of devoted users.
Yet by early 2024, Inflection’s founders realized they needed to raise another $2 billion just to fund their ambitions for the next 12 months. After that? Maybe $4 billion, $6 billion—who knew? Meanwhile, some of their biggest rivals—Microsoft, Google, Meta—each had tens of billions of dollars in cash reserves. These giants could simply dip into their enormous war chests to fund AI development, while startups like Inflection had to frantically circle the globe seeking investors. Inflection didn’t seem to stand a chance. After giving up on his startup dreams, one founder said, “My best assessment is that in the next five to ten years, none of the startups in the consumer AI space are going to make it.”
Some argue that the rise of DeepSeek, a scrappy Chinese AI startup, changes the calculus. DeepSeek built a competitive chatbot at a fraction of the cost that companies like OpenAI or Google paid. But DeepSeek is a well-funded startup whose creators most likely built on the breakthroughs of American AI firms. The fundamental truth remains: training and running these models that can spit out a five-page report in seconds or deliver a picture or video clip on demand require staggering sums of money. The traditional Silicon Valley dream—starting a company in the garage and growing into the next Google—may be over in the AI era. When experts predict that, by 2027, a single AI model might cost $100 billion to train, how can any startup compete with tech’s ruling class?
4. We are not ready.
Right after ChatGPT took off, some people called for a pause in AI development. But meaningful change doesn’t come from trying to halt progress; it comes from adapting to it.
I witnessed this firsthand while covering the rise of the internet in the 1990s. I saw bookstores, media outlets, and others wishing the web would fade away. It didn’t. Likewise, we can’t stop AI. However, we can prepare.
Tech pioneer Mustafa Suleyman, who has been working in AI since 2010, puts it bluntly: “Spend time in tech or policy circles, and it quickly becomes obvious that head-in-the-sand is the default ideology.”
We need to move past the debate over whether AI is good or bad and start focusing on managing its risks—like job loss and the dangers of autonomous weapons—while maximizing benefits in medicine, science, and education.
OpenAI rushed to release ChatGPT because it was hearing footsteps and wanted to get its bot out before anyone else did. But CEO Sam Altman offered another reason: “We could have gone off and just built something jaw-dropping. But if we built advanced AI in the basement, with the world blissfully walking blindfolded along, I don’t think that makes us very good neighbors.”
By launching an early, relatively tame version of AI, OpenAI forced society to start wrestling with AI’s implications—grappling with its risks, exploring its potential, and figuring out how to prepare for what’s next. The good news? We still have time. The bad news? That window won’t stay open forever.
5. AI is both overhyped and underhyped.
AI today reminds me of the dot-com boom. Investors are throwing fistfuls of money at AI companies with only the vaguest idea of what they might build or how they might make money. Every startup claims it will change the world by next Tuesday.
But we tend to overestimate the short-term impact of a new technology while underestimating its long-term impact. I saw that pattern play out with the internet. The internet didn’t reshape society in eighteen months, as people were imagining during the dot-com bubble. But in the long run, it became even more central to our lives than the optimists predicted. It just took 15 or 20 years to unfold.
I expect that AI will be the same. Yes, it’s overhyped. AI agents won’t be booking our airline flights or handling our calendars by the end of the year. The technology will not remake the workplace over the next twelve months. It won’t revolutionize the world in two years. But in 10 or 15 years? It will change medicine, education, scientific research—you name it. AI is an inevitability. The real question isn’t if AI will reshape our world but how and by whom.
To listen to the audio version read by author Gary Rivlin, download the Next Big Idea App today.