Charlotte Blease is an Associate Professor in the Medical Faculty at Uppsala University in Sweden. She is also a researcher in the Digital Psychiatry Program at Harvard Medical School.
What’s the big idea?
Humans are fallible…and unfortunately, that applies to doctors too. Misdiagnosis and error lead to a considerable number of deaths. Medical professionals have their own biases, along with limited bandwidth and time to keep up with the latest research. Doctors are doing their best, but it’s possible that AI could do even better. As eerie as it is to consider entrusting your healthcare to a bot, it could be a lifesaver.
Below, Charlotte shares five key insights from her new book, Dr. Bot: Why Doctors Can Fail Us―and How AI Could Save Lives.

1. Medicine is failing us more than we think.
Every four to five days, a tragedy on the scale of 9/11 strikes, yet hardly anyone notices. Every day, the equivalent of four airplanes (each carrying 170 people) falls out of the sky, killing everyone on board. These figures don’t make the 24-hour news cycle, but the inconceivable is happening. Medical error is one of the leading causes of death in the United States and is responsible for over a quarter of a million fatalities annually.
Around one-third of this death toll is due to misdiagnosis. Globally, most people will face a diagnostic error at least once in their lifetime. In Europe, 22 million patients with rare diseases don’t have a diagnosis, and 8 million wait an average of a decade to get one. In low- and middle-income countries, the misdiagnosis rate is likely considerably higher.
Errors and misdiagnoses are not the only problems. Modern medicine prides itself on being scientific, yet studies show that evidence-based treatments are offered only about half the time. And when it comes to accessing medical expertise, healthcare is upside down: Those most in need—the sickest, the poorest, the elderly, and the most marginalized in society—are the most likely to be left behind.
People with disabilities, parents, and part-time workers (including those with gig economy jobs) often struggle to attend check-ups. American Time Use Survey data show that patients sacrifice an average of two hours for a 20-minute doctor’s visit, with low-income and unemployed people facing burdens up to 28 percent longer. Even when patients manage to set foot in the doctor’s office, they are not treated equitably. Hippocratic Oaths are sometimes hypocritical oaths, leaving some of us at a disadvantage in the clinic.
2. Doctors are medicine’s second victims.
Underneath the commanding professional garb, physicians face realities that patients seldom see. In the U.S., half of all doctors say they are burned out, with 20 percent reporting they are depressed. An estimated 300 to 400 doctors in the U.S. take their own lives every year. That’s the equivalent of one medical school graduating class dying by suicide annually.
“By graduation, half of what medical students learn is already outdated.”
As patient numbers surge, these pressures are only mounting. Doctors are officially a scarce resource: we are not producing enough of them to meet patients’ needs. The UN forecasts that by 2037, we will share the planet with a billion more people. Longer lives and larger populations carry consequences for doctors and our healthcare systems. In the U.S., by 2050, most people aged 50 or older will live with one or more chronic illnesses.
Making matters worse, medical knowledge moves faster than doctors can keep up. It takes approximately 17 years for research to transition from bench to bedside. By graduation, half of what medical students learn is already outdated. And with a new biomedical article published every 39 seconds, even skimming 2 percent of the summaries would take more than 22 hours a day. Nor is the knowledge treadmill slowing down: There are over 7,000 rare diseases, with around 250 more identified each year. Viewed from another angle, it’s remarkable that doctors get it right as often as they do.
3. AI is resilient.
Digital tools defy traditional doctor stereotypes. They don’t don classic white coats with stethoscopes draped around silicon necks. But they are remarkably resilient. Being devoid of brows and brains, bots don’t sweat or stress. They are not hostage to circadian rhythms, low blood sugar, or distractibility. AI’s brute computational power means it isn’t limited by fleshy constraints.
Physicians have barely any time to read, never mind absorb, the latest research. Many are chronically sleep-deprived. But machines can crunch their way through open-source data at breakneck speed without needing to stop for a breath, a break, or even a pee. Like a speed-freak bookworm, AI has a stunning capacity to ingest medical publications and data in seconds, 24/7. Where doctors vary in unwanted ways, AI can be more consistent. AI chatbots make errors, too, but the question is: who, or what, makes fewer mistakes? While much more research is needed, tantalizing studies are proving hard to ignore, demonstrating that some AI tools vastly outperform human doctors in clinical reasoning, including for complex medical conditions.
A particular AI superpower is spotting patterns humans miss. In one recent study, researchers fed 50 clinical cases (including 10 rare conditions) into a popular chatbot. It accurately identified 90 percent of the rare disease diagnoses within its first eight suggestions, routinely outperforming the doctors in the study. For the one in 10 people worldwide who live with a rare disease, AI could be a lifeline.
4. Bots could be less prejudiced than people.
Bots are sometimes biased because AI reflects individual and societal prejudices. Its training data can embed unwanted biases—giving rise to the slogan “garbage in, garbage out.” When unwanted biases are baked into machines, leading to unfair recommendations, this is called algorithmic discrimination. In medicine, there is huge scope for machines to perpetuate unfair treatment via coded biases. In a 2024 study of GPT-4, researchers found the model was far more likely to diagnose men than women with conditions that are equally common in both sexes, such as COVID-19 and colon cancer. It also recommended fewer CT, MRI, and ultrasound scans for Black patients than for white patients, and it judged white men as more prone to exaggerating pain than any other group.
Examining the scope for bias, errors, and safety with AI is crucially important. But this focus often comes with a selective amnesia about the creaking, inherited systems we already rely on. It assumes the status quo of human doctors working in a traditional way is inherently superior. Unfortunately, a wealth of research demonstrates that doctors are biased, too.
“De-biasing AI is likely a more achievable goal than de-biasing doctors’ split-second decisions in high-pressure clinics.”
We’re better at spotting bias in others than in ourselves. That’s why it’s a game changer to train AI to do some of the heavy lifting in healthcare: to flag missing demographics, expose skewed findings, and identify prejudice. AI studies have identified, at scale, discriminatory language embedded in electronic medical records for patients with chronic pain, diabetes, and addictions. AI also shows us that doctors are more likely to use negative descriptors for Black Americans than for white Americans. As uncomfortable as it sounds, de-biasing AI is likely a more achievable goal than de-biasing doctors’ split-second decisions in high-pressure clinics.
Pain is too often dismissed in clinical settings, especially for marginalized patients. In one study, AI was used to read knee X-rays to predict arthritis pain, capturing 43 percent of the differences in pain across race, income, and education, compared to just 9 percent captured by radiologists. That nearly five-fold jump shows AI could finally give every patient’s pain the attention it deserves.
5. People pour their hearts out to machines.
“You know, doctor, I really like this computer better than the physicians upstairs.” So proclaimed the very first person to “talk” to a computer about their health. The year was 1966, the location was a hospital at the University of Wisconsin, and the physician was Warner Slack. On hearing the patient’s candid admission, Dr. Slack was not insulted. He recognized something unique was unfolding. Dr. Slack later reflected in an academic article, “The physician presents an authoritarian figure,” and yet, “the patient was very comfortable with the machine and criticized it freely.”
Many doctors still believe that AI can never replace them when it comes to face-to-face interactions. In multiple international surveys, I have asked doctors what they think the technology might do. Most believe that AI can handle the routine chores, but that when it comes to genuine human connection, doctors will always be indispensable.
However, in the nearly 60 years since Dr. Slack’s first study, a wealth of research has demonstrated that patients tend to be more talkative with machines. They’re more likely to disclose sensitive or embarrassing symptoms, challenge opinions, and ask questions. Sitting in front of a physician can sometimes interfere with time-critical medical disclosures, including red-flag cancer symptoms and mental health struggles. The very humanness of our doctors can undermine the delivery of healthcare.