Just Finished Reading: How Doctors Think #books
My daughter’s godmother is studying to be an MD and has started her internship. The start of her internship coincided with her birthday, which meant that many of the presents she received were related to medicine. One of those gifts, which she graciously allowed me to borrow before she had read it herself, was How Doctors Think, by Jerome Groopman, MD.
Groopman’s book covers a subject I love: heuristics and bias. Heuristics are the stuff the practice of medicine is made of, which makes it a little strange that this isn’t always taught. The influence of the intuitive, fast, effortless System 1 thinking versus the slower, conscious System 2 thinking is reasonably well known. System 1 allows us to unconsciously come to conclusions based on the information at hand; as Groopman says: “When you hear hoofbeats, think horses, not zebras.” The practice of medicine is such that most of the diseases encountered fit into a nice pattern, but that same pattern-matching is a burden that makes cognitive bias possible. When a doctor sees nine patients suffering from flu symptoms, System 1 will quickly conclude that the diagnosis of the tenth patient with the same symptoms is also flu, and will even ignore facts to the contrary.
If it looks like a duck, it walks like a duck, and it quacks like a duck, guess what? It’s a duck.
Take the TV series House: certain doctors making suggestions during the differential diagnosis are prone to prefer a particular kind of diagnosis, whether an autoimmune disease, cancer, or a psychological cause. The conscious mind (System 2), having found no fault in the choice that System 1 has made, will defend that choice. As Groopman says in his book: “The doctor becomes increasingly convinced of the truth of his misjudgment, developing a psychological commitment to it. He becomes wedded to his distorted conclusion.”
Then I realized that most occurrences of disease are evaluated through what amounts to Bayesian inference, and much like the quants and their use of statistical arbitrage, there are aberrations which can cause the diagnostics to fail. While brushing up on the fundamentals of Bayesian probability I rediscovered the nitty-gritty of Markov chains, something in which I once had more than a passing interest, and came to the realization that to reliably diagnose a specific disease the sample rate, and thus the number of expensive tests for specific markers, should go up. Whether the first test confirms or denies a hypothesis, System 1 takes that first result as given; any further answer is seen as a deviation from the first “true” answer. Ockham’s razor doesn’t hold when the premise is reached based on incorrect data. Groopman calls this search satisfaction: you expected to find something, found something, and so you stop searching rather than finishing your search.
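The hoofbeats heuristic is really a statement about priors. As a rough sketch of the Bayesian updating described above (the prevalence, sensitivity, and specificity numbers here are invented for illustration, not taken from any real disease or test):

```python
def posterior(prior, sensitivity, specificity, positive):
    """P(disease | test result) via Bayes' theorem."""
    if positive:
        p_result_disease = sensitivity       # true positive rate
        p_result_healthy = 1 - specificity   # false positive rate
    else:
        p_result_disease = 1 - sensitivity   # false negative rate
        p_result_healthy = specificity       # true negative rate
    numerator = p_result_disease * prior
    evidence = numerator + p_result_healthy * (1 - prior)
    return numerator / evidence

# A rare "zebra": 1% prevalence, tested with a fairly good test.
p = posterior(prior=0.01, sensitivity=0.95, specificity=0.90, positive=True)
print(round(p, 3))  # -> 0.088
```

Even with a good test, a single positive result for a rare disease leaves the probability below 9 percent, which is exactly why the base rate (the horses) dominates and why a doctor who ignores it goes wrong.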
When you hear hoofbeats, think horses, not zebras.
This means that in a simulation a higher sample rate may make the diagnosis more accurate, while in reality additional samples with aberrant results may not cause as large a shift in the accuracy of the diagnosis. And naturally, the speed at which a laboratory result is returned matters: in the worst cases the patient is dead before the result is known, which means that the only diagnostic tool that can be relied on suffers from flaws.
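To see why the first result shouldn’t be anchored on as the “true” answer, here is a toy sequential update, again with made-up numbers and the (unrealistic) simplifying assumption that repeated tests are independent:

```python
def update(prior, sensitivity, specificity, positive):
    """One Bayesian update of P(disease) given a single test result."""
    tp = sensitivity if positive else 1 - sensitivity
    fp = (1 - specificity) if positive else specificity
    return tp * prior / (tp * prior + fp * (1 - prior))

p = 0.01  # prior: 1% prevalence
# A run of results with one aberrant negative in the middle.
for result in [True, False, True, True]:
    p = update(p, sensitivity=0.95, specificity=0.90, positive=result)
    print(f"after {'positive' if result else 'negative'} result: p = {p:.3f}")
```

Each additional sample keeps moving the estimate, and the aberrant negative is absorbed rather than treated as either gospel or noise. That is what a doctor’s System 1, anchored on the first result, fails to do.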
Current medicine seems to be held back by the limitations of man, and although this can be improved upon, the basic limitation of human beings means that even with the knowledge that humans are fallible, it is impossible to take the human out of the current equation. Therefore, a three-step solution:
- critical reasoning
- clear communication
- compassion

Consider that there are often many unknowns with disease; with knowledge of the pitfalls, apply critical reasoning to what you’re thinking. Communicate clearly with your doctor or patient: get clear what the issue is and what the possible solutions could be. And show compassion; doctor and patient rely on each other.