
DEEP MEDICINE

Acutely aware of my inferiority in the medical field, I will not be writing any public actionable takeaways from this book - I'll just use the author's own (amazing) words to get the points across. Apologies in advance for any issues you may find in the summary, attribute them to my restructuring of notes and not the original content.


I've divided the book into three main parts: 1) how we have been caught off-guard by technological advances in medicine and the host of issues in our healthcare; 2) where AI can be better than doctors in healthcare; and 3) how we can supercharge healthcare by combining AI with medical professionals. (A fourth part covers where medicine is headed in the future in terms of medical professionals.)


The book is amazing for medical students and healthcare workers, those interested in tech and ethics, and really anyone who thinks they may be a patient at some point (so everyone).


PS. Coming from a country (Finland) where health data is public to one where it isn't (the UK), I am especially interested in, and would urge everyone to read, the 24 Reasons Why You Need To Own Your Own Medical And Health Data at the end of the summary.


1. How Technology Has (Currently) Negatively Impacted The Healthcare System/Shallow Medicine

This is where we are today: patients exist in a world of insufficient data, insufficient time, insufficient context, and insufficient presence. Or, as I say, a world of shallow medicine.

Communication With Patients

Over 2,000 years ago, Hippocrates said, “It is more important to know what sort of person has [a] disease than to know what sort of disease a person has.”

Imagine if a doctor can get all the information she needs about a patient in 2 minutes and then spend the next 13 minutes of a 15-minute office visit talking with the patient, instead of spending 13 minutes looking for information and 2 minutes talking with the patient. - LYNDA CHIN

As William Osler said, “Just listen to your patient; he is telling you the diagnosis.”

In learning to talk to his patients, the doctor may talk himself back into loving his work. He has little to lose and much to gain by letting the sick man into his heart.—ANATOLE BROYARD

  • Now, the highest-ever proportion of doctors and nurses are experiencing burnout and depression owing to their inability to provide real care to patients, which was their basis for pursuing a medical career. What's wrong in healthcare today is that it's missing care. That is, we generally, as doctors, don't get to really care for patients enough. And patients don't feel they are cared for. As Francis Peabody wrote in 1927, “The secret of the care of the patient is caring for the patient.” The greatest opportunity offered by AI is not reducing errors or workloads, or even curing cancer: it is the opportunity to restore the precious and time-honored connection and trust—the human touch between patients and doctors.

  • Because of electronic health records, eye contact between the patient and doctor is limited. Russell Phillips, a Harvard physician, said, “The electronic medical record has turned physicians into data entry technicians.” Attending to the keyboard, instead of the patient, is ascribed as a principal reason for the medical profession’s high rates of depression and burnout. Nearly half of doctors practicing in the United States today have symptoms of burnout, and there are hundreds of physician suicides per year. In a recent analysis of forty-seven studies involving 42,000 physicians, burnout was associated with a doubling of the risk of patient safety incidents, which sets up a vicious cycle of more burnout and depression. Abraham Verghese nailed this in the book’s foreword: the electronic record as an “intruder,” with an impact on doctors’ mental health that extends beyond clinicians to patient care itself.

  • David Meltzer, an internist at the University of Chicago, has studied the relationship of time with doctors to key related factors like continuity of care, where the doctor who sees you at the clinic also sees you if you need care in a hospital. He reports that spending more time with patients reduced hospitalizations by 20 percent, saving millions of dollars as well as helping to avoid the risks of nosocomial infections and other hospital mishaps.

  • Based on more than 60,000 visits by nurses, physical therapists, and other clinicians, researchers found that for every extra minute a visit lasts, there was an 8 percent reduction in the risk of readmission. For part-time providers, the decrease in hospital readmission was 16 percent per extra minute; for nurses in particular it was a 13 percent reduction per minute. Of all the factors the researchers found could influence the risk of hospital readmission, time was the most important.



Unnecessary Mass Screening And Tests

  • Subsequent evaluation from a national sample showed that the top seven low-value procedures were still being used regularly and unnecessarily. Two primary factors seem to account for this failure. The first reason, called the therapeutic illusion by Dr. David Casarett of the University of Pennsylvania, was the established fact that, overall, individual physicians overestimate the benefits of what they themselves do. Physicians typically succumb to confirmation bias—because they already believe that the procedures and tests they order will have the desired benefit, they continue to believe it after the procedures are done, even when there is no objective evidence to be found. The second reason was the lack of any mechanism to effect change in physicians’ behavior. Although Choosing Wisely partnered with Consumer Reports to disseminate the lists in print and online, there was little public awareness of the long list of recommendations, so there was no grassroots, patient-driven demand for better, smarter testing. Furthermore, the ABIMF had no ability to track which doctors order what procedures and why, so there was no means to reward physicians for ordering fewer unnecessary procedures, nor one to penalize physicians for performing more.

  • So that leaves us stuck where we have been—physicians regularly fail to choose wisely or provide the right care for patients. David Epstein of ProPublica wrote a masterful 2017 essay, “When Evidence Says No, But Doctors Say Yes,” on the subject.6 One example used in the article was stenting arteries for certain patients with heart disease: “Stents for stable patients prevent zero heart attacks and extend the lives of patients a grand total of none at all.” As Epstein concluded about stenting and many other surgeries: “The results of these studies do not prove that the surgery is useless, but rather that it is performed on a huge number of people who are unlikely to get any benefit.” Part of the problem is treatment that flies in the face of evidence, but another part of the problem is the evidence used to make the decisions to treat. In medicine, we often rely on changes in the frequency of so-called surrogate endpoints instead of the frequency of endpoints that really matter. So with heart disease we might treat based on changes in blood pressure, a surrogate, even when we have no evidence about whether the treatment actually changes the frequency of the endpoints that really matter.

  • Shallow evidence, either obtained from inadequate examination of an individual patient like Robert, or from the body of medical literature, leads to shallow medical practice, with plenty of misdiagnoses and unnecessary procedures. This is not a minor problem. In 2017 the American Heart Association and American College of Cardiology changed the definition of high blood pressure, for example, leading to the diagnosis of more than 30 million more Americans with hypertension despite the lack of any solid evidence to back up this guideline.7 This was misdiagnosis at an epidemic scale.

For God's Sake, Teach Doctors Maths

  • One important tool that has widespread awareness in medicine but is nevertheless regularly ignored is Bayes’s theorem, which describes how knowledge about the conditions surrounding a possible event affects the probability that it happens. So, although we know about 12 percent of women will develop breast cancer during their lifetime, that does not mean that every woman has a 12 percent chance of developing breast cancer (a small worked example follows this list).

  • Rule-based thinking can also lead to bias. Cardiologists diagnosing heart disease in patients evaluated in the emergency department demonstrate such bias when they assume that a patient must be over age forty before they really suspect a heart attack. There is a discontinuity in the data indicating that doctors were classifying patients as too young to have heart disease, even though the actual risk of a forty-year-old having a fatal heart attack is not much greater than that of a thirty-nine-year-old. This matters: having examined the ninety-day follow-up data for the patients in question, the study’s author, Coussens, found that many individuals who were incorrectly deemed too young to have heart disease subsequently had a heart attack.

  • One of the greatest biases prevalent among physicians is overconfidence, which Kahneman called “endemic in medicine.” To support his assertion, he recalls a study that assessed physicians’ confidence in their diagnoses and compared causes of death as ascertained by autopsy with the diagnoses the physicians had made before the patients died. “Clinicians who were ‘completely certain’ of the diagnosis antemortem were wrong 40 percent of the time.” Lewis understood this bias, too: “The entire profession had arranged itself as if to confirm the wisdom of its decisions.” Tversky and Kahneman discussed a bias toward certainty in a classic 1974 paper in Science that enumerated the many types of heuristics that humans rely on when dealing with uncertainty. Unfortunately, there has never been a lack of uncertainty in medicine, given the relative dearth of evidence in almost every case, and dealing with that uncertainty often leads to a dependence on expert opinions.

  • A classic experiment further reinforces this lack of simple probabilistic reasoning. A survey of cancer doctors at Stanford asked them to choose an operation for patients with terminal cancer. When given a choice described as having a 90 percent chance of survival, 82 percent chose it. But when it was described as a 10 percent chance of dying, only 54 percent selected the option. Just flipping the terms “survival” and “dying,” and the corresponding percentages, led to a marked change in choice.
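
To make the Bayes point concrete, here is a minimal Python sketch of the screening arithmetic. The prevalence, sensitivity, and specificity values are illustrative assumptions of mine, not figures from the book; the point is only that a reasonably accurate test applied to a low-prevalence condition still produces far more false alarms than intuition suggests.

```python
# Minimal Bayes's theorem sketch: probability of disease given a positive test.
# All numbers below are illustrative assumptions, not figures from the book.

def posterior_probability(prior: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive test) via Bayes's theorem."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1.0 - specificity
    p_positive = prior * p_pos_given_disease + (1.0 - prior) * p_pos_given_healthy
    return prior * p_pos_given_disease / p_positive

# Assume 1% prevalence at the time of screening and a test that is
# 90% sensitive and 91% specific (hypothetical values).
print(f"{posterior_probability(prior=0.01, sensitivity=0.90, specificity=0.91):.1%}")
# ~9.2% - a positive result is still far more likely to be a false alarm than cancer.
```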

Where AI Makes A Bigger Mess

  • If there’s one thing that the human brain and AI certainly share, it’s opacity. Much of a neural network’s learning ability is poorly understood, and we don’t have a way to interrogate an AI system to figure out how it reached its output.

  • The first pedestrian fatality caused by a driverless car occurred in an Uber program in Arizona in 2018. The car’s algorithm detected a pedestrian crossing the road in the dark but did not stop, and the human backup driver did not react because she trusted the car too much.


2. Where AI Is Better Than The Doctor

Computing science will probably exert its major effects by augmenting and, in some cases, largely replacing the intellectual functions of the physician.—WILLIAM B. SCHWARTZ, 1970

Medical diagnostic AI can dig through years of data about cancer or diabetes patients and find correlations between various characteristics, habits, or symptoms in order to aid in preventing or diagnosing the disease. Does it matter that none of it “matters” to the machine as long as it's a useful tool? -GARRY KASPAROV

AI is how we can take all of this information and tell you what you don't know about your health. —JUN WANG

  • A highly optimistic, very exuberant projection of Watson’s future appears in Homo Deus by Yuval Noah Harari: “Alas, not even the most diligent doctor can remember all my previous ailments and check-ups. Similarly, no doctor can be familiar with every illness and drug, or read every new article published in every medical journal. To top it all, the doctor is sometimes tired or hungry or perhaps even sick, which affects her judgment. No wonder that doctors sometimes err in their diagnoses or recommend a less-than-optimal treatment.”

  • With Gratch’s work, however, it seems that for deep thoughts to be disclosed, avatars have a distinct advantage over humans. Indeed, at a 2018 Wall Street Journal health conference that I participated in, the majority of attendees polled said they’d be happy to, or would even prefer to, share their secrets with a machine rather than a doctor. On a related note, in an interesting Twitter poll, nearly 2,000 people responded to “You have an embarrassing medical condition. Would you rather tell and get treatment from (1) your doctor, (2) a doctor/nurse, or (3) a bot?” The bot narrowly beat “your doctor” by 44 percent to 42 percent.

  • A small study of thirty-four youths, with an average age of twenty-two, undertook a “coherence” analysis of many features of speech such as length of phrases, muddling, confusion, and word choice to predict whether patients at risk of schizophrenia would transition to psychosis. The machine outperformed expert clinical ratings.

  • Geoffrey Hinton proclaimed, “I think that if you work as a radiologist, you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath. People should stop training radiologists now. It’s just completely obvious that in five years deep learning is going to do better than radiologists.”

  • “To avoid being displaced by computers, radiologists must allow themselves to be displaced by computers."

  • The use of AI in this field extends beyond facilitating drug discovery to predicting the right dose for experimental drugs. Since the optimal drug dose may depend on so many variables for each individual, such as age, gender, weight, genetics, proteomics, the gut microbiome, and more, it’s an ideal subject for modeling and deep learning algorithms. The challenge of getting the dose right is heightened by the possibility of drug-drug interactions.

Or, We Could Just Use Pigeons

  • A considerable body of data gathered over five decades has shown that pigeons can discriminate between complex visual stimuli, including the different emotional expressions of human faces as well as the paintings of Picasso and Monet. In 2015, Richard Levenson and colleagues tested whether pigeons could be trained to read radiology and pathology images.45 The team placed twelve pigeons in operant conditioning chambers to learn and then to be tested on the detection of micro-calcifications and malignant masses that indicate breast cancer in mammograms and pathology slides, at four-, ten-, and twenty-times levels of magnification. Their flock-sourced findings were remarkably accurate. This led the researchers to conclude that one could use pigeons to replace clinicians “for relatively mundane tasks.” (OK, this is the biggest LOL and, in terms of funny, the best argument against the use of AI in radiology. HOWEVER, part of the case for AI is that it's so much cheaper than a radiologist; training pigeons and then blowing up the images would be so expensive and pointless that I'm not sure the argument holds. But I love the study, and I'm sure they had the biggest laughs running it.)


3. The Superpower Of Doctors + AI

In Diagnosis

  • In a study that compared more than 200 doctors with computer algorithms for reviewing a diagnostic vignette, the diagnostic accuracy for doctors was 84 percent but only 51 percent for algorithms. That’s not too encouraging for either the doctors or AI, but the hope is that the collective intelligence of doctors and machine learning will improve diagnostic accuracy.

  • PathAI advertises an error rate with algorithms alone of 2.9 percent, and by pathologists alone of 3.5 percent, but the combination drops the error rate to 0.5 percent (a toy sketch of combining the two follows this list).

  • Up-to-the-minute biomedical research would be useful, but it isn’t the goal. Ralph Horwitz and colleagues wrote a thoughtful perspective, “From Evidence Based Medicine to Medicine Based Evidence,” that quoted Austin Bradford Hill, an eminent English epidemiologist, on what doctors weren’t getting from research. “It does not tell the doctor what he wants to know,” Hill said. “It may be so constituted as to show without any doubt that treatment A is on the average better than treatment B. On the other hand, that result does not answer the practicing doctor’s question: what is the most likely outcome when this particular drug is given to a particular patient?” To make the best decision for a particular patient, a physician or an AI system would incorporate all of the individual’s data—biological, physiological, social, behavioral, environmental—instead of relying on the overall effects at a large-cohort level.
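
The book reports only those PathAI error rates, not how the two judgments are actually fused, so the following is just a toy sketch of the general idea of combining a clinician's call with an algorithm's score; the function name, threshold, and disagreement margin are all assumptions for illustration.

```python
# Toy doctor-plus-algorithm combination (not PathAI's actual method):
# average the two malignancy probabilities and flag disagreements for review.

def combined_call(doctor_prob: float, model_prob: float,
                  threshold: float = 0.5, disagreement: float = 0.3):
    """Return (decision, needs_second_review) from two probability estimates."""
    combined = (doctor_prob + model_prob) / 2            # simple unweighted ensemble
    needs_review = abs(doctor_prob - model_prob) > disagreement
    return combined >= threshold, needs_review

print(combined_call(0.7, 0.2))   # (False, True)  -> flagged for a second look
print(combined_call(0.9, 0.7))   # (True, False)  -> confident combined call
```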

Building a Specialist GP

  • As I noted, there are not very many dermatologists in the United States: fewer than 12,000 dermatologists to look after more than 325 million Americans. So the story here isn’t so much replacing dermatologists with machines as empowering the family physicians and general practitioners who are called on to do most of the dermatological grunt work. A fully validated, accurate algorithm would have a striking impact on the diagnosis and treatment of skin conditions. For dermatologists, it would reduce the diagnostic component of their work and shift it more to the excision and treatment of skin lesions. It would make primary-care doctors, who are the main screening force for skin problems, more accurate. For patients, who might otherwise be subject to unnecessary biopsies or lesion removals, some procedures could be preempted.

  • The machine accuracy for detection of depression was 70 percent, which compared favorably to general practice doctors, whose previously published false-positive rate for diagnosing depression is over 50 percent. Psychiatrists are better, but the vast majority of people with depression are seen by a primary-care doctor or aren’t seen by any clinician at all, let alone a psychiatrist.

Allowing Doctors To Better Interact With Patients

If a physician can be replaced by a computer, then he or she deserves to be replaced by a computer.- WARNER SLACK, HARVARD MEDICAL SCHOOL

By these means we may hope to achieve not indeed a brave new world, no sort of perfectionist Utopia, but the more modest and much more desirable objective—a genuinely human society. - ALDOUS HUXLEY, 1948

Today, we're no longer trusting machines just to do something, but to decide what to do and when to do it. The next generation will grow up in an age where it's normal to be surrounded by autonomous agents, with or without cute names. RACHEL BOTSMAN

In learning to talk to his patients, the doctor may talk himself back into loving his work. He has little to lose and much to gain by letting the sick man into his heart. —ANATOLE BROYARD

  • AI speech processing already exceeds the performance of human transcription professionals. Why not have the audio portion of the visit captured and fully transcribed, and then have this unstructured conversation synthesized into an office note? The self-documented note could be edited by the patient and then go through the process of both doctor review and machine learning (specific to the doctor’s note preferences and style). After fifty or more notes processed in this way, there would be progressively less need for a careful review of the note before it was deposited into the electronic record. This would make for a seamless, efficient way of using natural-language processing to replace human scribes, reduce costs, and preserve face-to-face patient-doctor communication.
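
As a rough sketch of the workflow this bullet describes, the pipeline might look something like the Python below. The book doesn't name any particular tools; transcribe() and summarize_to_note() are hypothetical placeholders for whatever speech-recognition and natural-language-processing services a real system would plug in.

```python
# Sketch of the voice-to-note workflow: audio -> transcript -> draft note,
# then patient edits and doctor sign-off. The two helper functions are
# hypothetical placeholders, not real library calls.
from dataclasses import dataclass

@dataclass
class OfficeNote:
    transcript: str
    draft: str
    patient_edits: str = ""
    doctor_approved: bool = False

def transcribe(audio_path: str) -> str:
    """Placeholder: run automatic speech recognition on the visit audio."""
    raise NotImplementedError("plug in an ASR service here")

def summarize_to_note(transcript: str, doctor_style: dict) -> str:
    """Placeholder: condense the conversation into an office note,
    conditioned on this doctor's documented note preferences."""
    raise NotImplementedError("plug in an NLP summarizer here")

def visit_to_note(audio_path: str, doctor_style: dict) -> OfficeNote:
    transcript = transcribe(audio_path)
    draft = summarize_to_note(transcript, doctor_style)
    # The draft would then go to the patient for edits and the doctor for
    # sign-off; their corrections feed back into doctor_style, so after
    # fifty or so notes the review step needs progressively less attention.
    return OfficeNote(transcript=transcript, draft=draft)
```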

Personalized Medicine

  • (An Apple Watch-like device) It uses deep learning to peg the relationship between a person’s heart rate and physical activity, to prompt the user to record an electrocardiogram if his or her heart goes off track, and to look for evidence of atrial fibrillation (a toy sketch of this idea follows this list).

  • Personalized medicines, diets, etc.
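
To illustrate the heart-rate idea in the first bullet above: learn a person's usual relationship between activity and heart rate, then flag readings that stray far from that baseline as a prompt to record an ECG. A real device would use a deep learning model on far richer data; the linear fit, tolerance, and sample numbers below are my own simplifications.

```python
# Toy heart-rate-vs-activity monitor: fit a personal baseline, then flag
# off-track readings. A real device would use deep learning on richer data.
import statistics

def fit_baseline(activity, heart_rate):
    """Least-squares line: expected heart rate as a function of activity."""
    mean_a, mean_h = statistics.fmean(activity), statistics.fmean(heart_rate)
    num = sum((a - mean_a) * (h - mean_h) for a, h in zip(activity, heart_rate))
    den = sum((a - mean_a) ** 2 for a in activity)
    slope = num / den
    return slope, mean_h - slope * mean_a

def off_track(activity_now, hr_now, slope, intercept, tolerance_bpm=25):
    """True if the observed heart rate deviates far from the personal baseline."""
    return abs(hr_now - (slope * activity_now + intercept)) > tolerance_bpm

# Hypothetical (steps-per-minute, beats-per-minute) history for one user.
activity = [0, 20, 40, 60, 80, 100]
hr = [62, 70, 82, 95, 108, 120]
slope, intercept = fit_baseline(activity, hr)
print(off_track(10, 135, slope, intercept))  # True  -> prompt an ECG recording
print(off_track(60, 98, slope, intercept))   # False -> nothing unusual
```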


4. The Future Of Medicine

The Age Of The Empathic Doctor Is Upon Us

Are we selecting future doctors on a basis that can be simulated or exceeded by an AI bot?

  • As machines get smarter, humans will need to evolve along a different path from machines and become more humane.

  • He winds up with burnout and requests a six-month sabbatical because “the problem is your requirement to develop empathy”! He writes, “It doesn’t matter how powerful the human or machine software: ask it to do something impossible, and it will fail.”

  • We also know that medical professionals generally have low scores on empathy quotient (EQ) tests. Altruists have EQs in the 60–70 range, artists and musicians in the 50s, doctors in the 40s, and psychopaths less than 10.

  • “Being present is essential to the well-being of both patients and caregivers, and it is fundamental to establishing trust in all human interactions.” He gave us his definitive definition: “It is a one-word rallying cry for patients and physicians, the common ground we share, the one thing we should not compromise, that starting place to begin reform, the single word to put on the placard as we rally for the cause. Presence. Period.”

  • The fundamentals—empathy, presence, listening, communication, the laying of hands, and the physical exam—are the building blocks for a cherished relationship between patient and doctor. These features are the seeds for trust, for providing comfort and promoting a sense of healing. They are the building blocks that enable genuine caring for the patient and a doctor’s professional fulfillment that comes from improving a person’s life. All these humanistic interactions are difficult to quantify or digitize, which further highlights why doctors are irreplaceable by machines.


24 Reasons Why You Need To Own Your Own Medical And Health Data

  1. It’s your body.

  2. You paid for it.

  3. It is worth more than any other type of data.

  4. It’s being widely sold, stolen, and hacked. And you don’t know it.

  5. It’s full of mistakes that keep getting copied and pasted, and that you can’t edit.

  6. You are/will be generating more of it, but it’s homeless.

  7. Your medical privacy is precious.

  8. The only way it can be made secure is to be decentralized.

  9. It is legally owned by doctors and hospitals.

  10. Hospitals won’t or can’t share your data (“information blocking”).

  11. Your doctor (>65 percent) won’t give you a copy of your office notes.

  12. You are far more apt to share your data than your doctor is.

  13. You’d like to share it for medical research, but you can’t get it.

  14. You have seen many providers in your life; no health system/insurer has all your data.

  15. Essentially no one (in the United States) has all their medical data from birth throughout their life.

  16. Your electronic health record was designed to maximize billing, not to help your health.

  17. You are more engaged and have better outcomes when you have your data.

  18. Doctors who have given full access to their patients’ data make this their routine.

  19. It requires comprehensive, continuous, seamless updating.

  20. Access to or “control” of your data is not adequate.

  21. ~10 percent of medical scans are unnecessarily duplicated due to inaccessibility.

  22. You can handle the truth.

  23. You need to own your data; it should be a civil right.

  24. It could save your life.

