Bill Gates Says Superhuman AI May Be Closer Than You Think


Where is AI headed, and how quickly will it get there? Should we be early adopters or keep our distance? Will it make our lives better or put us out of work?

We can’t think of a better person to answer these questions than Bill Gates. He’s played a leading role in every major tech development over the last half-century, and he’s got a pretty good track record when it comes to forecasting the future. Back in 1980, he predicted that one day there’d be a computer on every desk; today on the show, he says there will soon be an AI agent in every ear.

Rufus and Bill are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called AI-First. Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.


Rufus Griscom: I’m Rufus Griscom, and this is the Next Big Idea. Today: Bill Gates on AI, the path to superintelligence, and what it means for all of us.

I suspect that every moment in human history has felt pivotal, precarious, as if anything could happen. But it also must be true that some moments are more pivotal than others. This is one of those moments. We’ve seen the impact of transformative technological change. The internet has sped the world up, and social media, now on most every phone, in most every hand, has polarized our communities and hyperbolized our politics. And now we are in the early moments of the AI revolution. What will the next decade bring? There are few people I would rather ask this question than Microsoft co-founder and global philanthropist Bill Gates. Bill has been at the forefront of the race to build machines that can empower humans for 50 years, ever since he declared it his mission to put a computer on every desk in every home.

He was instrumental in driving the development of personal computing in the ’80s and the growth of the internet in the ’90s, and more recently he’s led the charge to eradicate malaria and other diseases. In the last few years, he’s been on the front lines of Microsoft’s partnership with OpenAI and the development of GPT. How is it, you may be wondering, that Bill Gates has ended up joining us today? Well, for the last few months, I’ve been reading a book that’s being published serially by Harvard Business Review. It’s called AI-First. And it features interviews with folks like Reid Hoffman, Mustafa Suleyman, Sam Altman, and Bill, who collectively make the case that AI isn’t overhyped, it’s underhyped. We thought it would be interesting to interview the co-authors of this book, career technologist Andy Sack, an old friend of mine, and former Starbucks Chief Digital Officer Adam Brotman, and they suggested also inviting one of their most interesting interviewees, Bill Gates.

And so what’s Bill’s take on the AI revolution? Superintelligence is coming. There’s no clear way to slow it down. And the technology available today is already a game changer. This is largely a good thing. We can harness AI to solve our biggest global problems. We are likely to live in decades to come in a world of superabundance. But it will take vigilance to make sure it’s the world we want for ourselves and generations to come. By the way, the format of today’s show is a little different from what you’re used to. First we’ll hear a conversation I had with Andy and Adam, co-authors of AI-First, about how they came to write this book. Then we’ll bring on Bill for a wide-ranging conversation about artificial intelligence and our collective future.
Welcome, Andy and Adam, to the Next Big Idea.

Andy Sack: Thanks for having us.

Adam Brotman: Happy to be here.

Rufus: Andy, you’re a serial entrepreneur. You’ve built and invested in countless startups. You advised Microsoft CEO Satya Nadella. You’re the founder and managing director of Keen Capital, a blockchain fund. And you have the rare distinction of being an old friend of mine. And you, Adam, are no slouch. You were the first Chief Digital Officer at Starbucks, where you led the development of their app and payment platform. Quite a good app, by the way.

Adam: Thank you.

Rufus: You were co-CEO of J.Crew. And now the two of you have joined forces to start a new company, Forum3, to help companies take advantage of the power of AI. Does the world really need another consulting firm?

Andy: No.

Rufus: But you wouldn’t define Forum3 as a consulting firm. What are you guys setting out to do?

Adam: It’s a great question. We provide software and we’re building software, we provide consulting and other services, and we’re writing a book, which we’re going to talk to you about, called AI-First, that’s being published by Harvard Business Review. But they’re all related to the topic of taking advantage of AI to transform your business and transform your marketing efforts in building your brand. And so we’ve actually taken to describing Forum3 as an AI lab, because we can’t come up with a better, more descriptive term. But it’s actually an appropriate term, and it gives you a sense of how Andy and I think about the space. We’re not taking a traditional approach to building Forum3 around AI, and I think that’s related to how non-traditional this new technology is.

Rufus: So you’ve written this book, you’re publishing it serially, which is very interesting. It’s called AI-First. Why AI-First?

Andy: It’s worth noting that our original title, the one we used when we wrote the proposal for Harvard Business Review, was Our AI Journey. Harvard Business Review approached us a bit over a year ago, and at the time we had just pivoted Forum3 to become a generative AI company. And both Adam and I, and our company Forum3, were on a collective journey to explore what this generative AI was, which felt like a very significant technological development. Having been a career technologist, I started my first internet company in 1995, and a bit over a year ago I was like, “This is a big frigging deal.” Little did I know just how big of a frigging deal it was. The title, Our AI Journey, started that way. We started with a bunch of interviews with thought leaders, one of whom we’re going to get to talk with today, Bill Gates. But we also spoke with Sam Altman and Reid Hoffman and Mustafa Suleyman, to name a few.

And it’s really been about Adam and me educating ourselves: what is this technology, what does it mean for business leaders, what does it mean for society, how does it change the rules of the game? At one point I argued with Harvard Business Review, I wanted to call the book The Holy Shit Moment. That title was not approved, understandably. But I think it is a holy shit moment, certainly for business, certainly for technology. It’s a really groundbreaking technology, and we’re mostly excited about the possibilities and opportunities that come with it. And really, when we talked about what to title it, AI-First was something we arrived at because, as we went along, we realized that a total shift in mindset was required, for myself and for Adam, in how we think about our specific little business, but also in how we approach business. And when you think about it, from the individual to the organization, you need a shift in mindset. And thus the name, AI-First.

Rufus: Well, I had a few holy shit moments reading the first four chapters of your book, which I think is what’s been published so far. You’re publishing it serially, and I wouldn’t be surprised if we see more of that approach to book writing in the future. One holy shit moment for me was when Sam Altman told you that he thought we’d have AGI, which of course is artificial general intelligence, defined as machine intelligence that matches or exceeds human intelligence, within five years. Within five years. I think most people would put it out further, if they think it’s going to happen at all.

You asked Sam what AGI would mean for business, for example, for marketing teams. And he said, “It will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will nearly instantly, at almost no cost, be handled by the AI. And the AI will likely be able to test the creative against real or synthetic customer focus groups. Again, all free, instant, and nearly perfect. Images, videos, campaign ideas? No problem.” That’s pretty astonishing. Do you guys buy it? Do you think this might be five years out?

Adam: Yeah. It is worth remarking that when he said that to us, we stepped outside the office and didn’t talk, which is rare for Andy and me. We didn’t talk for a couple of minutes. We just sat there looking at the San Francisco scenery and taking it in, because of both how fast this was moving and what it really meant. And then we got into the book, and we talked to Reid Hoffman next, we talked to Bill, we talked to Mustafa Suleyman. These are the top people in the field. And they started reinforcing and validating what Sam was saying and giving us more details about it. So yeah, while we were quiet and stunned and had to step aside after that Sam meeting, now we’re more like ringing the alarm bell, saying, “I don’t know if it’s five years, whatever your definition is, but this thing is coming fast, and the genie’s out of the bottle, for good and for bad.”

Rufus: So you’ve interviewed Bill Gates, Mustafa Suleyman, Reid Hoffman, you mentioned. What surprises have you encountered along the way?

Andy: The biggest surprise for me, I would say, is that I don’t think people have an awareness of just how fundamental and significant a technology shift this is, how fast it’s coming, and that it’s now. As I talked about, it’s such a significant moment, and it’s going to change the rules of business, the game of business, what’s defensible, how to approach strategy. You need to start to wrap your mind around what it means, because it’s happening today.

Rufus: Certainly many of us have a certain amount of concern and fear when it comes to thinking about this pace of tech acceleration and moving beyond the AGI inflection point. And we’ll talk about that with Bill. But I’m experiencing equal parts adrenaline rush and concern. On the adrenaline rush side, what I remember from the mid ’90s, the early days of the internet, is seeing the first Mosaic browser. I think the three of us were all just out of college at that time. And the decision to get in early, to try to figure out this new technology, and to think in advance about how it would play out, I think that was a decision that really benefited all three of us.

When I think back on the inflection point of the advent of the smartphone, I was not thinking enough about that. We could have sat in a room and said, “You know what? You’ve got a mobile device that’s a powerful computer with a GPS unit in it. We can create Uber.” I did not have that sequence of thoughts. But this feels like another such moment. My pattern recognition is just exploding with “we’ve seen this movie before,” and we should all be paying really intense attention to what’s happening.

Adam: What’s wild about this one is that we’re all applying the same pattern recognition. However, this one is different. It’s more powerful, but it’s also more dangerous and more confusing. It’s like intelligence as a service, production-level intelligence. And so on the one hand, I’m like you, and I think, Rufus, you and I and Andy have talked about this in the past, so this isn’t new, but we’re applying our pattern recognition and there’s this feeling of excitement: “Okay, we see this, let’s get on it.” But there is a feeling of apprehension as well, about what it means for misinformation and jobs and maybe even worse. And that wasn’t the feeling we had with the other seminal moments.

Rufus: That’s true.

Adam: So that’s a key difference here and I think it’s good that we’re acknowledging that.

Rufus: Yeah. Back in those prior revolutions, I think I felt nothing but “let’s hit the accelerator.” And I find myself now thinking “let’s hit the brakes.” And there’s a separate question that Bill’s uniquely suited to answer, which is: even if we thought it made sense to apply a braking mechanism to this process, is there any effective way to do that, given its global nature and given that the entities building these technologies are not all a bunch of friends? So I think that’ll be an interesting thing to get Bill’s take on.

Adam: You couldn’t ask a better person a more perfect question for him to answer. So I’m excited to hear what he says.

Rufus: Coming up after the break, we’ll hear from Bill, and what he has to say may surprise you. We’ll be right back.


Rufus: Bill, Andy says you win about as frequently as he wins on the pickleball court. Does that sound right to you?

Bill Gates: Pretty equal, yeah.

Andy: Hey, Bill.

Bill: Hi.

Rufus: Bill Gates, welcome to the Next Big Idea.

Bill: Thank you.

Rufus: Bill, Andy and Adam and I were just talking about the digital transformations we’ve seen in our own lives in the last 40 years. And you haven’t just seen these transformations, you’ve played an instrumental role in moving them forward. You’ve said that the demo you saw last September of GPT-4 was mind-blowing. Was it more mind-blowing than the first demo of the graphical user interface that you saw at Xerox PARC in 1980?

Bill: I’d say yes. I mean, I’d seen graphical interfaces prior to the Xerox PARC stuff. And that was an embodiment that helped motivate a lot of what Apple and Microsoft did with personal computing in the decade after that. But compared to unlocking a new type of intelligence that can read and write, the graphical interface is clearly less impactful. Which is saying a lot.

Rufus: Well, I was interested to learn that AI is not a new interest of yours. You were intrigued as a student way back in the ’70s. And I gather you wrote, I think, a letter to your parents and said, effectively, “Mom, dad, I may miss out on the AI revolution if I start this company.” Which is the company that became Microsoft. The AI revolution took a little longer than maybe you might’ve guessed back then. Now it’s happening. What interested you about AI in those early days? And is it becoming what you’d imagined back then?

Bill: Well, certainly anybody who writes software is thinking about what human cognition is able to achieve and making that comparison. And when I was in high school, there were things like Shakey the Robot at Stanford Research Institute, which could engage in reasoning, come up with an execution plan, figure out how to move the ramp, go up the ramp, and grab the blocks. And it felt like some of these key capabilities, whether it was speech recognition or image recognition, would be fairly solvable. There were a lot of attempts, so-called rule-based systems and things, that just didn’t capture the richness.

And so our respect for human cognition constantly goes up as we try to match pieces of it. But we saw with machine learning techniques, we could match vision and speech recognition. So that’s powerful. But the holy grail that even after those advances I kept highlighting was the ability to read and represent knowledge like humans did, just nothing was good at all. Then language translation came down, but still that was a very special case thing. But GPT-4 in a very deep way, far beyond GPT-3, showed that we could access and represent knowledge and the fluency in many respects, although not the accuracy, is already superhuman.

Rufus: Yeah. It’s just astounding. We never would’ve guessed that moving the chess pieces on the chessboard would be harder than becoming a better chess player than Kasparov. But it is interesting to see what the challenges turn out to be. And as you’ve said, that Xerox PARC demo set the agenda for Microsoft for maybe the next 15 years: the development of Windows and Office. Do you think that what’s happening right now in AI is going to set the agenda for the next many decades, and even more so?

Bill: It’s absolutely the most important thing going on, and it’ll shape humanity in a very dramatic way. It’s coming at the same time that we have synthetic biology and robotics being controlled by the AIs. So we have to keep those other things in mind. But the dominant change agent will be AI.

Rufus: In 1980, you had a light bulb moment when you famously declared, “There will be a computer in every home, on every desk.” What do you think the equivalent is for AI? Do you think we’ll have an AI advisor in every ear?

Bill: Well, the hardware form factor doesn’t matter that much, but the idea of the earbud that’s both adding audio and canceling out audio and enhancing audio clearly will be a very primary form factor. Just like glasses that can project arbitrary video into your visual field will be the embodiment of how you’re interacting. But the personal agent that I’ve been writing about for decades is superior to a human assistant in that it’s tracking and reading all the things that you wanted to read, it’s just there to help you, and it understands the context. Today you don’t trust software to even order your email messages. They sit in a stupid, dumb, time-ordered form, because software lacks the contextual understanding of: okay, what am I about to do next? What’s the nature of the task that these messages relate to? You don’t trust software to combine all of the new information, including new communications. You go to your mail, and that’s time-ordered. You go to your texts, and that’s time-ordered. You go to your social network, and that’s time-ordered. Computers are operating at an almost trivial level of semantics in terms of understanding your intent when you sit down with the machine or helping you with your activities. And now that they can essentially read like a white-collar worker, that interface will be entirely agent-driven: agent executive assistant, agent mental therapy, agent friend, agent girlfriend, agent expert, all driven by deep AI.

Rufus: It seems like it will be useful in proportion to how much it knows about us, and I imagine at some point in the not-too-distant future, all four of us will probably be asked if we want to turn on audio so our AI assistant can effectively listen to our whole life. And I would think that there’ll be benefits to doing that, because we’ll get good counsel, good advice. Do you think that’s true? And will you turn it on when invited to?

Bill: Well, computers today see every email message that I write, and certainly digital channels are seeing all my online meetings and phone calls, so you’re already disclosing a lot about yourself in digital systems. And so yes, the value added of the agent, in terms of summarizing that meeting or helping me with those follow-ups, would be phenomenal. And the agent will have different modes in terms of which of your information it’s able to operate with. So there will be partitions that you have, but for your executive assistant agent, essentially, you won’t exclude much at all from that partition.

Andy: Rufus, before we go further down the agent pathway, one question that I’ve been thinking about since our interview with you, Bill, for AI-First, in which you compared your experience at Xerox PARC with your experience of GPT-4. You’re in the most unique position, though there are probably a couple of other people I could think of, to have that understanding of computer technology as well as of building businesses and of how computers affect human beings. If GPT-4 was as big as, it sounded like you even said bigger than, your Xerox PARC moment, what does that make you think about when you think about your grandchild’s life? And what advice do you have for the next generation of leaders for tackling the challenges that are unique to AI? I’m curious about that perspective.

Bill: There are certainly novel problems, in that other technologies develop slower and the upper bound of their capabilities is pretty identifiable. This technology, in terms of its capability, will reach superhuman levels. We’re not there today, if you put in the reliability constraint. A lot of the new work is adding a level of metacognition that, done properly, will solve the erratic nature of the genius that is easily available today, in the white-collar realm and, over time, in the blue-collar realm as well. So yes, this is a huge milestone. Some of those past things are helpful too, but it’s novel enough that nobody’s faced the policy issues, which are mostly of a very positive nature in terms of white-collar labor productivity.

Andy: What’s the thing that excites you the most about the invention?

Bill: Well, all these shortages. There’s no organization that faces white-collar shortages as much as the Gates Foundation, where we look at health in Sub-Saharan Africa and other developing countries, or the lack of teachers who can engage you in a deep way, ideally in your native language. And so there’s the idea that, by using the mobile phone infrastructure that continues to drive pretty significant penetration even in very poor countries, medical advice and personal tutors can be delivered. Because it’s meeting you in your language and your semantics, there isn’t some big training thing that has to take place; you just pick up your phone and listen to what it’s saying. So it’s very exciting to take on the tragic lack of resources that people in developing countries in particular have to deal with.

Rufus: You’ve been working for 20 years on the Gates Foundation and really tackling these issues of global healthcare, education, climate change, do you think that AI will be an accelerant that will make it possible to accomplish in five or 10 years what it took the last 20 years to accomplish, or how meaningful do you think the acceleration is likely to be in these areas?

Bill: Well, with the very tough problems of some diseases that we don’t have great tools for, AI will help a lot. The last 20 years were pretty miraculous in that we cut childhood deaths in half, from 10 million a year to 5 million a year. That was largely by getting tools like certain vaccines to be cheaper and making sure they were getting to all the world’s children. And so that was kind of low-hanging fruit, and now we have tougher issues. But with the AIs doing the upstream discovery part of, okay, why do kids get malnourished? Or, why has it been so hard to make an HIV vaccine? Yes, we can be way more optimistic about those huge breakthroughs. AI will help us with every aspect of these things: the advice, the delivery, the diagnosis. The scientific discovery piece is moving ahead at a pretty incredible clip, and the Gates Foundation is very involved in funding quite a bit of that.

Rufus: Yeah, we had your friend Sal Khan on the show recently and got the chance to spend a bunch of time with Khanmigo, and I was just astonished by what it can do. I know you were recently in New Jersey visiting schools that are implementing Khan Academy’s new programs, and that’s pretty exciting, this idea of improving education at scale for billions of people. The impact of that is pretty hard to measure.

Bill: Yeah. I mean, Sal’s book doesn’t ask, okay, what world are we educating kids for? But if education were all AI was available for, that would be pretty miraculous. Because you have the other things shifting at the same time, it’s a little more confusing. But that realm, where he says, okay, what if it was just in education, is incredibly positive.

Rufus: Yeah. Well, that gets to the personal part of this. I think you have a new granddaughter. I know Adam has a seven-year-old. And when we think of this question of what it will look like: I mean, it’s fantastic that our kids will have an Aristotle-level private tutor to help further accelerate their educational process. But there is the question of what they will need to know to be effective in the world. My kids and Andy’s kids are a little older, but I know, Adam, you’ve got a younger daughter, and Bill, you’ve got a new granddaughter.

Adam: It’s interesting, because, Bill, I wanted to come at this from a slightly different direction, but since you brought it up: she watches me use whisper mode on ChatGPT, she’s seen me live in an AI world, and it’s fascinating to watch her be very comfortable with a voice interface, especially at her age. It’s actually easier for her to use a voice interface, since she’s still learning how to spell; I mean, she just figured out how to read. So I thought that was an interesting, I’ll call it, look into how much this is going to be not just natural-language chat but even voice chat versus point and click.

But Bill, I was going to ask you something, coming at this from a slightly different direction. What do you think about this debate, there’s a little bit of a debate going on, and maybe that’s too strong a word, about the fact that all these frontier or foundation models have clustered around GPT-4 on the benchmarks? There are some people on the side that we’re plateauing or something like that, but most of the smartest researchers I follow tend to stay with the view that the scaling laws are going to continue to apply for at least the next couple of years. I’d love to get your take: A, where do you come out on that discussion, and B, do you find yourself rooting for it to plateau, or are you emotionally agnostic because of some of the concerns around the technology?

Bill: Well, the big frontier is not so much scaling. We have probably two more turns of the crank on scaling: by accessing video data and getting very good at synthetic data, we can scale up probably two more times. But that’s not the most interesting dimension. The most interesting dimension is what I call metacognition: understanding how to think about a problem in a broad sense, stepping back and saying, “Okay, how important is this answer? How could I check my answer? What external tools would help me with this?” The overall cognitive strategy is so trivial today, it’s just generating, through constant computation, each token in sequence, and it’s mind-blowing that that works at all. It does not step back like a human and think, “Okay, I’m going to write this paper, and here’s what I want to cover. I’ll put some facts in. Here’s what I want to do for the summary.”

And so you see this limitation when you have a problem like various math things, like a Sudoku puzzle, where just generating that upper left-hand thing first causes it to be wrong on anything above a certain complexity. So we’re going to get the scaling benefits, but at the same time, the various efforts to change the underlying reasoning algorithm, from the trivial one we have today to more human-like metacognition, that’s the big frontier. It’s a little hard to predict how quickly that’ll happen. I’ve seen that we will make progress on that next year, but we won’t completely solve it for some time after that. So your genius will get to be more predictable. Now, in certain confined domains, we are getting to the point of being able to show extreme accuracy on some of the math or even some of the health-type domains, but the open-ended thing will require general breakthroughs on metacognition.

Rufus: And do you think that metacognition will involve building in a looping mechanism, so the AI develops an ability to ruminate as we homo sapiens do? I’ve heard some people, like Max Tegmark, suggest that this ability to have conversations with ourselves could be part of what makes us conscious.

Bill: Yeah, consciousness may relate to metacognition. It’s not a phenomenon that is subject to measurement, so it’s always tricky, and clearly these digital things are unlikely to have any such equivalent. But metacognition is the big frontier, and it will be human-like in terms of knowing to work hard on certain hard problems, having a sense of confidence, and having ways of checking what you’ve done.

Andy: One thing I’ll just say: the process of writing AI-First and interviewing you, as well as Reid Hoffman, Sam Altman, and Mustafa, has been an education for Adam and me. I come away from these conversations regularly going, “Oh my goodness.” I’m paying attention every day to the pace of the technological advance by really many different companies, large companies; there’s a lot of money and a lot of talent being poured into it. And the pace of that development and its potential impact, I’m astounded by, and I have only a limited understanding. Do you think we’re moving too fast?

Bill: If we knew how to slow it down, a lot of people would probably say, “Okay, let’s consider doing that.” As Mustafa writes in his book, the incentive structures don’t really have some mechanism that’s all that plausible for how that would happen, given the individual and company and even government-level incentives. If the government-level incentive structure were understood, that alone might be enough. And the people who say, “Oh, it’s fine that it’s open source,” they’re willing to say, “Well, okay, if it gets too good, maybe we’ll stop open sourcing it,” but will they know what that is, and would they really say, “Okay, maybe the next one”? So you pretty quickly get to: let’s not let people with malintent benefit from having a better AI than the good-intent side of cyber defense or war defense or bioterror defense. You’re not going to completely put the genie back in the bottle, and yet that means that somebody with negative intent will be empowered in a new way.

Rufus: So perhaps not a good idea for the most sophisticated AI models to be open source, in your judgment, given this global environment?

Bill: Yeah. And people sort of see that point in principle. But then when you try to get them to say, “Okay, specifically, where would you apply that?” It gets a bit less clear.

Rufus: I mean, Adam and I were talking yesterday about how even if it were possible hypothetically to stop AI development exactly where it is right now, it would probably take 10 years of Forum3 and other folks helping companies and individuals figure out how to apply the technology that currently exists.

Bill: I’m not sure about that because it’s pretty clear I want to make an image. Okay, what do I have to learn? I have to learn English. This is the software meeting us, not us meeting the software. So it’s not like there’s some new menu, file, edit, window, help and oh, you got to learn that. You have to type the formula into the cell. This is you saying, “Hmm, I wish I could do data analysis to see which of these products is responsible for the slowdown.” And it understands exactly what you’re saying.

So the idea that there’s an impedance to adoption, it’s not the normal thing. Yes, company processes that are very used to doing things the old way will have to adjust. But if you look at telesupport, telesales, data analytics: give somebody a week of watching an advanced user, no manual of any kind, just learning by example how this stuff is being used, and the uptake, assuming there’s no limit in terms of the server capacity that connects these things up, which I don’t expect to be a gigantic limitation, certainly in rich countries, you’re talking about an adoption rate that won’t be overnight, but it won’t be 10 years. Take human translation: the idea that a free product provides arbitrary audio and text human translation, I mean, that was a holy grail. “Oh my God, if you ever had a company that could do that, it would collect tens of billions in revenue and solve the Tower of Babel.” And here a small AI company is providing that as an afterthought free feature. It’s pretty wild.

And you say, “Well, how are people going to adapt to free translation?” I don’t think it’s going to take them that long to know, “Hey, I want to know what that guy was saying.” And yes, the quality of that a year from now, and the coverage of, say, all African languages, will get completed. The foundation’s making sure that even obscure languages that are not written languages are covered; we’re in partnership with others gathering the data for those, and the Indian government’s doing that for Indian languages. So I don’t buy the line, “Hey, calm down, it takes a long time to figure out how to utter the description of the birthday card you want,” and that it’ll therefore take 10 years for the lagging people to switch their behavior.

Rufus: Well, I think Sam Altman said on your podcast, Unconfuse Me, which I enjoy, that they’re seeing a productivity improvement of up to 300% among their developers. And in other sectors we’ve seen reports of 25 to 50% increases in productivity. It’s the great Gibson line: “The future is here, it’s just not evenly distributed.” It does feel like getting all companies to fully benefit from that level of productivity enhancement will certainly be a process of some kind.

I was interested in your comment in the first chapter of AI First, which is about productivity. You said, “Productivity isn’t a mere measure of output per hour. It’s about enhancing the quality and creativity of our achievements.” What do you mean by that?

Bill: Well, whenever you have a productivity increase, you can take your X percent increase and increase the quantity of the output, you can improve the quality of the output, or you can reduce the human labor hours that go in as input. And so you’re always trading off those three things.

There are some things where, when they get more productive, like when the tire industry went from non-radial to radial tires, even though the cost per year of tire usage went down by a factor of four, people didn’t respond by saying, “Okay, I’m going to drive four times as much.” For other things, like computing or the quality of a news story, there’s very high demand elasticity. If you can do a better job, you just leave the human labor hours alone and take most of the gain in the quality dimension.

And then you have a lot of things where that’s not the case at all. The appetite for miles driven did not change. So society’s full of things all across that spectrum whenever you have rapid productivity increases. There was a memo inside Microsoft about how we were going to make databases so efficient that it would become a zero-sized market. Now, in that case, we’re still in the part of the curve where you have demand elasticity, but someday even in that domain we’ll get past incremental demand.

Adam: If you were making a guess right now, and you mentioned healthcare and education, how would you respond to the question about what do you think the first big, I’ll call it breakthrough application will be?

For example, on one of the podcasts that Andy and I listen to, they were saying this weekend, “Oh, we haven’t seen the big breakthrough application,” which is interesting because I’m not sure that’s true. But let’s take it at face value that we’re still in this, I’ll call it, experimentation phase, which is what they were trying to say. I’d be curious to get your thought. Where do we see the first big one? Like, with location services and mobile cloud, the first big app was kind of Uber, and everyone talked about Uber being an example of that.

Andy: Before that, it was probably Google Maps. It was probably map technology.

Adam: Yeah, that’s right. That’s right. So Bill, when you just think out, do you go right to education, healthcare? Where does your head go when you think, oh, I’ll bet you the first big breakthrough app, consumer app or even industrial app will be what?

Bill: Well, I guess the naysayers are pretty creative to be able to say that something gigantic hasn’t happened.

Adam: I agree.

Bill: They don’t think summarizing meetings, or doing translation, or making product development more productive counts.

Adam: That’s right.

Bill: It’s mind-blowing. This is white-collar capability, with a footnote that in many open-ended scenarios it’s not as reliable as humans are. You can hire humans and they can go haywire, so you have some monitoring, but these things, if put into new territory, are somewhat less predictable. And there are some domains where we can bound what goes on, like support calls or telesales calls, where you’re not pushing off the edge at all. So I don’t know, I just can’t imagine what they’re talking about that-

Andy: I think the comment when people say that, notwithstanding what you just said, Bill, and they’re creative in their naysaying capabilities, because your response is accurate for sure, is about the second-order effect. When the car was developed, it could get you from point A to point B, and you might even have been able to predict the development of roads and highways, et cetera. But you might not have been able to predict Los Angeles, or suburbs, or drive-in movie theaters.

In a more modern instance, the worldwide web came along and there was a lot of brochureware; then for travel agents Expedia came along, and that was all sort of run-of-the-mill first-order effect. But people point at Uber as a second-order effect of the technology: oh, you couldn’t have predicted that. Now, maybe you could, maybe you couldn’t. But that’s what Adam’s question, I think, is going for. When you look at AI, in many ways the game of search has already changed, which is a ubiquitous consumer activity, and certainly ChatGPT was monumental, the fastest-adopted technology ever. So I’m not minimizing or giving credence to the naysayers, but it’s really about the second-order effects.

Bill: GPT-3 was not that interesting. It was interesting enough that a few people at OpenAI felt the scaling effect would cross a threshold. I didn’t predict that, and very few people did. And we only crossed that threshold less than two years ago, a year and a half in terms of general availability. So we’re very much at the stage where the people who are open-minded and willing to try out new things are the ones using it.

But you just demo it: okay, here’s image editing, and no, I’m not teaching you 59 menus and dialogs in Photoshop to do editing. I’m telling you to type, “Get rid of that green sweater.” And people are like, “Oh, I don’t know if I could do that. That sounds very hard for me.” And when you show them, it’s like, what? “Make that photo bigger.” I didn’t take a shot that was bigger, but I’d like the photo to be bigger, so fill in the missing piece to make it bigger. And it’s like, what?

Or patient follow-up, where it calls you up and talks to you: did you fill your prescription? How are you feeling? What are you doing? If people really expose themselves to the various examples, I do think they’d be saturated with, oh my God, this is a lot of extremely concrete capability. And then you think, okay, when I call up to ask about my taxes, when I want my medical bill explained, that white-collar-work-is-almost-free mentality is the best way to predict what this thing suffuses into. Even though I fully admit there’s a footnote there, that it’s in some ways still a little bit of a crazy white-collar worker, we’re going to get rid of that footnote over a period of years.

Rufus: I know one of those crazy white-collar workers who’s a CEO of a company that’s growing very quickly, who asked his top salespeople, what takes you the most time during the day? And they said, drafting follow-up emails after sales calls. So he created an instance of GPT that pulled in all their best practices and best communications, automatically transcribes every phone call, and automatically generates the follow-up email. And he’s laying off half of his sales team so that the best half can now work twice as efficiently.
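[Editor’s note: the workflow Rufus describes, combining a call transcript with a team’s best example emails and asking a language model to draft the follow-up, can be sketched in a few lines. The function name, prompt wording, and injectable `llm` callable below are illustrative assumptions, not details from the company he mentions.]

```python
from typing import Callable

def draft_followup(transcript: str, best_practices: str,
                   llm: Callable[[str], str]) -> str:
    """Draft a sales follow-up email from a call transcript.

    Combines the transcribed call with the team's best-practice
    examples, then asks a language model to write the email.
    `llm` is any function mapping a prompt string to a completion,
    e.g. a thin wrapper around a hosted model.
    """
    prompt = (
        "You draft follow-up emails after sales calls.\n"
        f"Best-practice examples:\n{best_practices}\n"
        f"Call transcript:\n{transcript}\n"
        "Write a concise follow-up email for this call."
    )
    return llm(prompt)

# With a stub in place of a real model, the wiring can be checked offline.
email = draft_followup("Discussed pricing for 50 seats.",
                       "Keep it short; restate next steps.",
                       lambda p: "Thanks for your time today...")
```

Injecting the model as a plain callable keeps the pipeline testable without an API key; in production the lambda would be replaced by a call to whatever hosted model the team uses.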

So there we have a success story, in the sense that it’s a highly efficient and wildly impressive implementation of the technology, but for the other half of the sales team, it’s not quite as exciting, unless they can use new AI technologies to build a competing company or do something else. Which I guess gets to this broader question: to what extent do we think this empowers the little guy versus the big guy? We’re seeing that just a few big companies seem to be the dominant players in the development of the technology. But on the other hand, everyone now has access to GPT-4o for free. So there’s also an equalizing element.

Bill: Well, it’s important to distinguish two parts of economic activity. One is the activity of building AI products, both base-level AI products and vertical AI products. And we can say for sure that the barriers to entry there are uniquely low, in that we’re in this mania period where somebody literally raised $6 billion in cash for a company and many others raised hundreds of millions. There’s never been as much capital going into a new category, you could even say a new mania category; this makes the internet or the early auto industry mania look quite small in terms of the percentage of IQ and the valuations that come out of it.

There was no company before the turn of the century that had ever been worth a trillion dollars. Here we have one chip company, which doesn’t even make chips, it’s a chip design company, that in six months adds a trillion dollars of value. And so the dynamics within the AI space are hyper-competitive, but with lots of entry. And yes, Google and Microsoft have the most capital, but that’s not really stopping people, either in the base capabilities or in those verticals.

Once you leave the AI tools domain, which, as big as it is, is a modest part of the global economy, there’s the question of how it gets applied. Okay, I’m a small hospital chain versus a big hospital chain. Now that I have these tools, does that level the playing field or not? You would hope that it would, and that you can offer a far better level of service for the same price or less.

All of these things are in the furtherance of getting the value down to the customer, and of figuring out, early in an industry, where the barriers are, so that some of the improvements stick with companies, versus perfect competition, where it all goes to the end users. That’s very hard to think through. The picks-and-shovels idea says: look to the side industries as well as the primary industry. Savings and loans did better than home builders because there was a more scarce capability there, and a few did better than others. It’s asking a lot, but people are being forced to think about the competitive dynamics in these other businesses.

When you free up labor, society is essentially richer; through your tax system you can take that labor and put it into smaller class sizes or helping the elderly better, and you’re net better off. Now, the person involved may like that transition or not, and it requires some political capacity to do that redirection. You can have your own view of our current trust in our political capacity to reach consensus and create effective programs. But the frontier of possibilities is improved by increased productivity. You’d never want to run the clock backwards and say, “Thank God we were less productive 20 years ago.”

Rufus: We were talking earlier about the impossibility of slowing down or the great difficulty of slowing down the current pace of AI development. Do you think AI companies should be governed? And if so, by whom? By boards, by government, by all of the above?

Bill: Well, government is the only place that looks after the overall wellbeing of society as a whole, including defense against attack, a judicial system that’s fair, and creating educational opportunities. You can’t expect the private sector to walk away from a market-driven opportunity unless the government decides what the rules are. So although the private sector should help educate government and work with government, the governments will have to play a big role here. That’s a dialogue that people are investing in.

Now, governments will take the things that are most concrete, like what are the copyright rules, or what are the abuses of deepfakes? Or, in some applications, does the unreliability in, say, health diagnosis or hiring decisions mean that you ought to move more slowly or create some liability for those things? They’ll tend to focus in on those short-term issues, which is fine, but the biggest issue has to do with the adjustments to productivity, which, overall, should be a phenomenal opportunity if political capacity and the speed with which it’s coming were paired very well.

Rufus: Our environment of polarization doesn’t help the effectiveness of our government. And I think you mentioned on your podcast that in a worst-case scenario, we could imagine polarization breaking our democracy. Do you think AI can help us all get along? And if so, how would it do that?

Bill: Well, it’s such a powerful tool that we ought at least to consider it for all our tough problems, asking where it can be beneficial and where it can exacerbate things.

So certainly if somebody wants to understand, okay, where did this article or this video come from? What is the provenance? Is it provably a reliable source? Is this information accurate? Or, in general: in my newsfeed, what am I seeing versus what is somebody who’s voting for the other side seeing, and can you explain to me what has pushed them in that direction? You’d hope, again going back to the paradigm of white-collar capability being almost free, that well-intended people who want to bridge those misunderstandings would have the tools of AI to highlight misinformation for them, or highlight bias for them, or help them get into the mindset and understand, okay, how do we bridge the different views of the world that we have?

So yes, although it sounds outlandish. It’s like when people say, “Oh, let’s do geoengineering for climate,” and others go, “Oh no.” You always think technology might be the answer, and okay, I’m somewhat guilty of that, but here the AIs are going to be part of the solution while, if we’re not careful, also potentially exacerbating these things. And you can almost say it’s good that the blue-collar job substitution is more delayed than the white-collar stuff, so that it’s not just any one sector, and actually it’s the more educated sector that’s seeing these changes first.

Rufus: I hadn’t thought of that. Okay, last question. You’ve said that a possible future problem that befuddles you is how to think about our purpose as humans in a world in which machines can solve problems better than we can. Is this a nagging concern that you continue to wrestle with? How do you think about it now?

Bill: Well, somebody who’s spent 68 years in a world of shortage… I doubt that, either at that absolute age or having been immersed in such an utterly different environment, the ability to imagine this post-shortage world will come from anyone near my age. So I view it as a very important problem that people should contemplate. But no, it’s not one that I have the solution to, or would expect to have.

Rufus: Although you have some experience with living in a post-scarcity world, in the sense that you haven’t had scarcity in your own personal life for a few years now.

Bill: I haven’t had financial scarcity. But for somebody who’s had the enjoyment of being successful and sees problems out there like malaria or polio or measles, the number of people who work on those, the amount of research money for them, is very, very scarce. So I feel a unique value-add in taking my own resources and working with governments to orchestrate: okay, let’s not have any kids die of malaria. Let’s not have any kids die of measles. So you’re right, financially. And what I do for fun, like playing pickleball, is the kind of thing people can still do; the fact that machines will be good at pickleball won’t bother us. We’ll still enjoy that as a human thing. But the satisfaction of helping reduce scarcity, which is the thing that motivates me, that also goes away.

Rufus: Yeah. Yeah. Yeah. So, the true last question. Rumor has it, you’re working on a memoir. Can you tell us anything about that?

Bill: Yeah. We announced that next February a first volume called Source Code will come out, covering my life up to the first two or three years of Microsoft, about age 25 or so. I’m working on editing that right now, since we’re about to hit deadlines. But yeah, we got a good reception to the pre-announcement of that first volume.

Rufus: Is GPT helping you out with that?

Bill: Actually, no. Not because I’m against it or anything. I suppose, in the end, maybe we should. But, no. We’re being a little traditional in terms of how we’re both writing and editing.

Adam: Will there be two volumes or three volumes, do you think?

Bill: Three. So, we’ll probably wait three years before we do a second one. But, there’s a period that’s Microsoft oriented and a period that’s giving-all-the-money-away focused.

Rufus: Well, if you and Andy play enough pickleball, maybe you’ll live long enough to write a fourth volume.

Andy: We hope so.

Bill: Making AI Good. We’ll make that the fourth volume.

Rufus: Exactly. Well, Bill. Thank you so much for joining us today. Such an interesting conversation.

Bill: Yeah. Fantastic.

Adam: Thanks, Bill.

Andy: Thanks, Bill.


Rufus: Wow! Adam and Andy, so interesting. Let’s unpack some of our favorite moments. Adam, for me, there was the moment when you said, “Some people say we’re waiting for the breakout application for AI. What’s it going to be?” And Bill said, “The naysayers are pretty creative to be able to say that nothing transformative has happened. What’s happening is mind-blowing.” I thought that was a great moment.

Adam: There are several; I’m sure we’ll talk about them. That was definitely my favorite, because it’s classic Bill, in the sense that he’s got such a great and unique perspective in the way he sees and explains the world. And he’s right: the killer app is here. It relates to another moment, where he said, “Look, one of the holy grails for a long time was a perfect translator app, real-time, natural language. And this is a free afterthought feature of the foundational AI systems that are out there.” And his comment, which I agree with: it’s interesting that people are saying we’re still waiting for the Uber of AI, and yet this white-collar intelligence as a service is available at production level. He pointed out, “It’s still got issues and it hallucinates and it has problems and whatever. But, as it is today, it is quite the killer app.”

Andy: Yeah. I don’t think that sentiment can be emphasized enough: both just how profound the technology is today, and the fact that we take for granted that in an instant this podcast could be translated into, I think, 150 different languages. Taking that technological leap for granted, along with the whole plethora of other capabilities that exist today, while looking for, and scoffing at the absence of, the next consumer app like Uber, completely underappreciates the moment that we’re in.

Adam: Yeah. And it relates to another point Bill was just making, about how it’s not like it does all this and you need to go to school on how to use it. He said the software is meeting the human. You just need to say what you want it to do, and to the extent it can do it, it just does it. That’s unlike any other software we’ve ever experienced. So it’s universally, democratically accessible, both in its ease of use and in its ability to show up at production scale on a smartphone, with its full capability set. I thought that was a really poignant moment for Bill.

Rufus: Well, and then he made the point about the acceleration of the capital and of the businesses. And Bill’s not someone who’s easily impressed by business growth. But he pointed out that no company in the world before 2000 was worth $1 trillion, and we just had one chip design company add $1 trillion of value in six months, obviously referring to Nvidia. Someone just raised $6 billion for an AI company; I think he was referring to Elon Musk. Clearly, Bill Gates himself is wide-eyed about the pace of investment and the acceleration of business value.

Adam: Yeah. I thought another interesting moment, tell me what you guys think, was when we asked him about where this is going, and the scaling laws, and whether they still apply. I thought he gave a pretty specific answer, which I learned from. He was saying we get two more turns of the crank on scaling, literally in terms of how much more data we can feed it. My guess is we get quite a few more turns of the crank when it comes to compute, and we’ll see how much of the scaling relates to compute versus data. But his point was that it’s not about that as much as it’s about “metacognition,” I think was the word he used: this idea of how you get the systems to think deeper, with system-two thinking, et cetera. That was a great answer, and I thought it was a new way, I don’t know about you guys, of thinking about the scaling laws and the progress these systems are making.

Rufus: Yeah. What’s astonishing is that we have this GPT-4o-level intelligence when the systems are really quite inefficient, as I understand it, and we’re in the process of building in much more intentional and efficient storage of information and ways of thinking. And then, of course, I have a geeky obsession with human consciousness and the question of whether it may become possible to build some version of consciousness on silicon. So I was pretty interested in his comment that, yes, metacognition is the next capability we need to build into AI, and yes, consciousness may be related to metacognition. He did say computers are unlikely to mirror humans in being conscious. But unlikely doesn’t mean it won’t happen.

Andy: What was his point, where he was being humorous, I think, that thank goodness it’s the white-collar knowledge workers that AI is coming for? What was his point at that juncture?

Adam: Because we were talking about the societal implications and the inference, things he didn’t say outright, like, are we going to need universal basic income? And what happens if you’re displaced from your current job, or need to be retrained into a new one? I think his point was that white-collar workers, I think he literally said, “tend to be more college educated and, therefore, in theory are probably more malleable to being retrained,” into another white-collar job, learning how to use these systems. Whereas, I think he was saying, and I don’t know this to be true, it may be harder to retrain out of a blue-collar job than a white-collar job. But I think that was his point, whether it’s true or not. I took it as: maybe there’s more of a safety net for white-collar workers, that kind of stuff.

Rufus: Well, think of how destabilizing it would be for society to suddenly have every truck and taxi driver in the world out of a job; that’s what we all thought was going to happen 10 years ago. And it was a great nuance that I had not thought about, that it’s actually good for social stability that it’s a whole bunch of attorneys and other white-collar people who are losing their jobs first. And you know what? They’re going to be okay.

Adam: Yeah. I hadn’t thought of it, because Bill does such a good job of thinking macro, to his point about the work he’s done to reduce child mortality and all that kind of stuff. But white-collar workers and college-educated people, I’m guessing statistically, I don’t know this, are probably more likely to have home equity and a 401(k). I’ll bet there’s more of a safety net in general that has been built up under that group. So yeah, I think that was his point, Andy. And it’s funny, remember Sam Altman mentioned that to us when we met with him. He actually said to Andy and me, I don’t know if it made its way into the book, but I’ll give you a behind-the-scenes: he said, “I thought the thing it would be worst at would be creative thinking, like creativity.”

So he wasn’t talking about white-collar versus blue-collar, but it’s similar. He was saying, “I thought it would be better at, I’ll call it, rote summarization and data analysis.” And he was shocked at how creative it could be. The diffusion models can produce an image, can produce a video. But it can also be creative in its thinking, in its strategic thinking, which is why we write about, and really emphasize to our business clients, that you need to be inviting AI to the table all the time. People don’t think of it as a creative tool, but creative thinking, helping you come up with solutions to your thorny problems as a white-collar worker, it’s actually quite good at that.

Rufus: Adam Grant made the point, I think in his book Originals, that creative success is highly correlated with the quantity of ideas generated. Look at Picasso and the quantity of drawings and paintings he generated. And BuzzFeed famously used to generate 20 headlines for every article and pick the best one, creating this incredible clickbait. It strikes me that having AI as a creative partner will make it easier for people in business to generate not just one or two or three ideas for a given angle on a marketing campaign or a communication, but a dozen or several dozen. And it will still, at least for some time, be the human doing the critical editorial selection.

Adam: What’s interesting about that point, Rufus, is that one of the things we’ve learned about the best practice of prompting, if you want to be a really good prompter, is that there are a couple of techniques that work really well. One of them is called chain-of-thought prompting, where you’re making the AI go through its steps and show its reasoning, just like a human would, as opposed to skipping straight to the answer. And related to chain of thought, you can ask the AI to produce 30 answers. For example, if it’s a tagline, you tell it, “I want you to produce 30. And then, before you stop, rank the top five of the 30 you produced and tell me why.” All of a sudden you get an answer that’s so much better than if you just said, “Give me a tagline.”
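[Editor’s note: the generate-then-rank technique Adam describes can be captured as a reusable prompt template. This is a minimal sketch; the function name and prompt wording are illustrative, not from any particular prompting library.]

```python
def build_tagline_prompt(product: str, n: int = 30, top_k: int = 5) -> str:
    """Build a generate-then-rank prompt in the style Adam describes.

    Asks the model to brainstorm `n` candidates and reason step by
    step, then rank the best `top_k` with justifications, instead of
    jumping straight to a single answer.
    """
    return (
        f"Write {n} candidate taglines for {product}.\n"
        f"Think step by step about what makes each one work.\n"
        f"Then rank your top {top_k} of the {n} and explain why each "
        f"made the cut before you finish."
    )

# The resulting string can be sent to any chat-style model.
prompt = build_tagline_prompt("a neighborhood coffee roaster")
```

Keeping the counts as parameters makes it easy to experiment with how many candidates (and how deep a ranking) gives the best results for a given task.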

Rufus: Well, getting to the AI risk topic, I was interested to hear Bill say that, “Yes. If there was actually a way to slow down,” in response to, I think it was your question, Andy … “If there was a way to slow down AI development, a lot of people leading companies would probably choose to do so.” I thought that was his subtle way of saying, “Yes. If we could slow down AI development now, that would be a good idea.” He didn’t say that outright, but I think that was the implication. But then, he went on to the practical matter that-

Andy: Which is there’s more capital and it’s charging ahead.

Rufus: Yeah. It’s charging ahead.

Andy: And incentives.

Rufus: Incentives and it’s a global environment and the good guys have to have better technology than the bad guys.

Adam: I thought it was also interesting how he mentioned government regulation. And if I heard what he just said correctly, if I interpret it correctly, he was saying, “Yeah. It’s the only way. It’s the only way that we have a chance of-

Andy: It’s the only party.

Rufus: Right.

Andy: The United States is just so far behind on a regulatory basis, policing privacy in particular, compared to, say, Europe. They just have so many more protections. And do I think that either the US or Europe is going to get regulation right for AI? It’s really tricky. It’s a very tricky topic.

Rufus: And if anyone would have a negative association with government regulation, it would be Bill Gates. Right? He had the antitrust-

Andy: Yeah. It was super painful.

Rufus: … stuff. What Bill and Microsoft went through was extremely painful. So the fact that he’s saying, and we’ve heard Sam Altman say this too, “Please regulate this sector. It’s important.” Those weren’t his exact words, but clearly everybody agrees it’s important. Well, Andy and Adam, I’d love to pose a question to you that we posed to Bill, which is: what’s your advice for your kids when it comes to how to respond to this AI journey of ours, this AI transformation? Is it, “Jump in with two feet. Learn how to deploy and engage with AI as fast as you can”?

Andy: Yes. I’m reminded of different points along the way. I remember seeing my first browser, the Mosaic browser, back in 1994. When it comes to technology, it’s a tool. It can be really, really useful and powerful. I’ve been fortunate enough to be a career technologist, and I’ve enjoyed the career, but I think AI is as significant, if not more significant, than the browser. So I’ve encouraged both my kids, who are in their twenties, to dive in, be aware, and use it for their professional and personal enjoyment and advancement.

Adam: It’s interesting. I would say to my daughter the same thing I would say to an adult right now, which is what AI doesn’t change: to be successful in life, in my opinion, you still need to demonstrate a growth mindset, intellectual curiosity, and, most important, passion for something. The interesting meta point here is that Andy and I are passionate about connecting the dots between technology and business and brands and experiences, and we’ve made a career out of it. But to be honest, I would do what I’m doing with Andy for free. Don’t tell Andy that. If I could pay my bills some other way, I’d do it for free. And I mean that. I love what I do.

So it’s cliche, but how does that relate to your question? Well, if I were talking to someone whose kid is more like Andy’s kid’s age, in law school, and they were worried, “Oh, my God, because of AI they’re not going to be lawyers and accountants,” I can tell you this much. If they love law or accounting, if they love the craft and the profession, then, as we say to all of our clients, the leading law firms are going to be the best at using AI to further what they do. So my advice would be: yes, definitely be literate, be proficient, and experiment with these platforms as much as you can. That alone isn’t going to be what makes you successful. But if you don’t do it, then whatever your passion is, you won’t have that tool in your tool belt, and you’ll feel like you can’t succeed as well, because you don’t have that AI literacy.

Edited and condensed for clarity.
