Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing

DSA ADS Course - 2021

Applied Probability, Probability Theory, Medical Diagnosis, Medical Testing, Probability of Diagnosis 

Probability theory and accuracy in medical diagnosis. Estimating the probability of a diagnosis is a critical job of front-line physicians, and many overestimate those probabilities, leading to unwarranted tests and overtreatment.

A textbook example of how many physicians fail to accurately estimate probability: a JAMA study published in 2021.

One of the things that always needs to be estimated in any individual consultation is probability. What is the probability that the breast lump is cancer? What is the probability that the fever is due to a serious bacterial infection? When faced with these questions, I think most doctors are more like experienced chess players than robots. They act on a feeling, not on a conscious weighing of probabilities. Doctors with a nervous disposition therefore order more tests and prescribe more antibiotics, while those with a more relaxed disposition order fewer tests and prescribe fewer antibiotics.

But how good is the average doctor?

That is what a study recently published in JAMA Internal Medicine sought to find out. The study was conducted in the United States, and funded by the National Institutes of Health. 492 physicians working in primary care in different parts of the United States filled in a survey, in which they had to estimate the probability of disease in four different common clinical situations, both before and after a commonly used test.

So, what were the results?

In the pneumonia scenario, the doctors overestimated the pre-test probability of pneumonia by 78%. In other words, they thought the likelihood that the patient had pneumonia was almost double what it actually was. Not good. Unfortunately, that was their best performance. When it came to angina, they overestimated the pre-test probability by 148%. When it came to breast cancer, they overestimated the pre-test probability by 976% (i.e. they thought it was roughly ten times more likely than it actually was). And when it came to the urinary tract infection scenario, they overestimated the pre-test probability by 4,489%! (i.e. they thought it was roughly 45 times more likely than it actually was).
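To be clear about what these percentages mean: they are the relative error of the practitioners' median estimate against the evidence-based probability. Here is a minimal Python sketch; the probabilities in it are illustrative placeholders, not the study's exact figures.

```python
# Relative overestimation: how far an estimate exceeds the evidence-based
# probability, expressed in percent. The inputs below are illustrative.

def overestimation_pct(estimate: float, evidence: float) -> float:
    """Relative error of an estimate against the evidence-based value."""
    return (estimate - evidence) / evidence * 100

# E.g., if the evidence-based pre-test probability were 4.5% and the median
# practitioner estimate 8%, the overestimation would be about 78%.
print(f"{overestimation_pct(0.08, 0.045):.0f}%")  # -> 78%
```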

Doh! What are doctors being taught in medical school these days?

What I think is particularly interesting here is that the error was always in the same direction – in each of the four scenarios the doctors thought that the disease was more likely than it is in reality. If this reflects real world outcomes, then that would mean that doctors probably engage in an enormous amount of overtreatment. Obviously, if you think a patient likely has a urinary tract infection, you’re going to prescribe an antibiotic. And if you think a patient likely has angina, you’re going to prescribe a nitrate. You might even refer the patient for some kind of interventional procedure.

I think the over-estimation has more to do with cognitive bias than with fear of litigation. Once you anchor on a diagnosis, say pneumonia in someone with a fever and a cough, you will almost certainly overestimate the probability of that diagnosis.

Let’s move on. When it comes to how much a positive test changes the estimation of probability, the doctors overestimated the effect of a positive chest x-ray by 92%, of a positive mammogram by 90%, and of a positive cardiac stress test by 804%! They were relatively on the mark, however, when it came to estimating the impact of a positive urine culture, overestimating its effect by only 10%.

When it comes to how much a negative test changes the estimation of probability, the doctors actually did OK, being close to the mark for the chest x-ray, the urine culture, and the cardiac stress test, but wildly underestimating the predictive value of a negative mammogram (in other words, they thought breast cancer was far more likely than it actually was after getting back a negative mammogram, so again, they overestimated the probability of disease).
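For readers who want the mechanics behind these adjustments: a test result moves the pre-test odds by the test's likelihood ratio (post-test odds = pre-test odds × LR). Here is a minimal Python sketch using the chest x-ray likelihood ratios reported in the study's abstract further down; the 30% pre-test probability is an assumed illustrative value, not a figure from the study.

```python
def posttest_probability(pretest_prob: float, lr: float) -> float:
    """Update a pre-test probability with a likelihood ratio
    (the odds form of Bayes' theorem)."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Assumed pre-test probability of pneumonia, for illustration only.
pretest = 0.30

# Chest x-ray LRs from the study's abstract: practitioners' imputed
# positive LR was 4.8 vs an evidence-based 2.6; their negative LR (0.3)
# matched the evidence.
print(f"positive result, evidence LR 2.6:     {posttest_probability(pretest, 2.6):.0%}")  # ~53%
print(f"positive result, practitioner LR 4.8: {posttest_probability(pretest, 4.8):.0%}")  # ~67%
print(f"negative result, LR 0.3:              {posttest_probability(pretest, 0.3):.0%}")  # ~11%
```

Inflating the positive LR from 2.6 to 4.8 turns a 53% post-test probability into 67%; the further the assumed LR drifts from the evidence, the bigger the downstream error.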

What can we conclude from this? Doctors have a pretty poor understanding of how the tests they use influence the probability of disease, and they heavily overestimate the likelihood of disease after a positive test. They are, however, generally better at understanding the impact of a negative test than the impact of a positive test.

Finally, the survey asked the doctors to consider a hypothetical scenario in which 1 in 1,000 people has a certain disease, and estimate the probability of disease after a positive and negative result for a test with a sensitivity of 100% and a specificity of 95%. Sensitivity is the probability that a person with the disease will have a positive test result. Specificity is the probability that a person without the disease will have a negative test result.

If you test 1,000 people, you will get one true positive (since the sensitivity is 100%, you will catch every single case of disease) and roughly 50 false positives (a specificity of 95% means five false positives per 100 disease-free people tested). The probability that any one person with a positive test actually has the disease will thus be roughly 2% (1/51). So what did the doctors answer?
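The same arithmetic can be written out directly with Bayes' theorem. A minimal Python sketch of the survey's hypothetical (the function names are mine, not the study's):

```python
def p_disease_given_positive(prevalence: float, sensitivity: float,
                             specificity: float) -> float:
    """P(disease | positive test), via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

def p_disease_given_negative(prevalence: float, sensitivity: float,
                             specificity: float) -> float:
    """P(disease | negative test)."""
    false_neg = prevalence * (1 - sensitivity)
    true_neg = (1 - prevalence) * specificity
    return false_neg / (false_neg + true_neg)

# The survey's hypothetical: 1-in-1,000 prevalence, 100% sensitivity,
# 95% specificity.
print(f"{p_disease_given_positive(0.001, 1.0, 0.95):.1%}")  # ~2.0%
print(f"{p_disease_given_negative(0.001, 1.0, 0.95):.1%}")  # 0.0%
```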

The average doctor in the study thought that the probability of a person with a positive test actually having the disease was 95%. In other words, they overestimated the probability by 4,750%!

Apart from that, they thought that a person with a negative test still had a 3% probability of disease, even though the sensitivity was listed as 100% (which means that the test never fails to catch anyone with the disease, so the probability of disease after a negative test is zero). Oops. I should add that there were no meaningful differences in how correct the answers were between attendings (more senior doctors) and residents (more junior doctors).

What can we conclude?

Doctors suck at estimating the probability of common conditions in scenarios they face on a daily basis, are not able to correctly interpret the tests they use, and don’t understand even very basic diagnostic testing concepts like sensitivity and specificity. It’s kind of like a pilot not being able to read an altitude indicator. Be afraid. Be very afraid.

Medical schools should be thinking long and hard about the implications of this study. What it tells me is that medical education needs a massive overhaul, on par with the one that happened a hundred years ago after the Flexner report. We don’t send pilots up into the air without making sure they have a complete understanding of the tools they use. Yet that is clearly what we are doing when it comes to medicine. Admittedly, the practice of medicine is much more complex than flying a plane, but I don’t think that changes the fundamental point.
 

------------------------------------------------------------------------------

Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing - April 2021 JAMA Study

Key Points

Question  Do practitioners understand the probability of common clinical diagnoses?

Findings  In this survey study of 553 practitioners performing primary care, respondents overestimated the probability of diagnosis before and after testing. This posttest overestimation was associated with consistent overestimates of pretest probability and overestimates of disease after specific diagnostic test results.

Meaning  These findings suggest that many practitioners are unaccustomed to using probability in diagnosis and clinical practice. Widespread overestimates of the probability of disease likely contribute to overdiagnosis and overuse.

Abstract

Importance  Accurate diagnosis is essential to proper patient care.

Objective  To explore practitioner understanding of diagnostic reasoning.

Design, Setting, and Participants  In this survey study, 723 practitioners at outpatient clinics in 8 US states were asked to estimate the probability of disease for 4 scenarios common in primary care (pneumonia, cardiac ischemia, breast cancer screening, and urinary tract infection) and the association of positive and negative test results with disease probability from June 1, 2018, to November 26, 2019. Of these practitioners, 585 responded to the survey, and 553 answered all of the questions. An expert panel developed the survey and determined correct responses based on literature review.

Results  A total of 553 (290 resident physicians, 202 attending physicians, and 61 nurse practitioners and physician assistants) of 723 practitioners (76.5%) fully completed the survey (median age, 32 years; interquartile range, 29-44 years; 293 female [53.0%]; 296 [53.5%] White). Pretest probability was overestimated in all scenarios. Probabilities of disease after positive results were overestimated as follows: pneumonia after positive radiology results, 95% (evidence range, 46%-65%; comparison P < .001); breast cancer after positive mammography results, 50% (evidence range, 3%-9%; P < .001); cardiac ischemia after positive stress test result, 70% (evidence range, 2%-11%; P < .001); and urinary tract infection after positive urine culture result, 80% (evidence range, 0%-8.3%; P < .001). Overestimates of probability of disease with negative results were also observed as follows: pneumonia after negative radiography results, 50% (evidence range, 10%-19%; P < .001); breast cancer after negative mammography results, 5% (evidence range, <0.05%; P < .001); cardiac ischemia after negative stress test result, 5% (evidence range, 0.43%-2.5%; P < .001); and urinary tract infection after negative urine culture result, 5% (evidence range, 0%-0.11%; P < .001). Probability adjustments in response to test results varied from accurate to overestimates of risk by type of test (imputed median positive and negative likelihood ratios [LRs] for practitioners for chest radiography for pneumonia: positive LR, 4.8; evidence, 2.6; negative LR, 0.3; evidence, 0.3; mammography for breast cancer: positive LR, 44.3; evidence range, 13.0-33.0; negative LR, 1.0; evidence range, 0.05-0.24; exercise stress test for cardiac ischemia: positive LR, 21.0; evidence range, 2.0-2.7; negative LR, 0.6; evidence range, 0.5-0.6; urine culture for urinary tract infection: positive LR, 9.0; evidence, 9.0; negative LR, 0.1; evidence, 0.1).
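A note on the “imputed” likelihood ratios in the Results: an LR can be recovered from a pre-test/post-test probability pair by converting each probability to odds and dividing. A minimal Python sketch; the 80% pre-test figure below is an assumed placeholder for illustration, not a number from the study.

```python
def imputed_lr(pretest_prob: float, posttest_prob: float) -> float:
    """Impute the likelihood ratio implied by a pre-test and a
    post-test probability: LR = post-test odds / pre-test odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = posttest_prob / (1 - posttest_prob)
    return posttest_odds / pretest_odds

# Illustration only: moving from an assumed 80% pre-test probability of
# pneumonia to the 95% post-test estimate reported above implies
# (0.95/0.05) / (0.80/0.20) = 19/4 = 4.75, close to the imputed LR of 4.8.
print(f"{imputed_lr(0.80, 0.95):.2f}")  # -> 4.75
```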

Conclusions and Relevance  This survey study suggests that for common diseases and tests, practitioners overestimate the probability of disease before and after testing. Pretest probability was overestimated in all scenarios, whereas adjustment in probability after a positive or negative result varied by test. Widespread overestimates of the probability of disease likely contribute to overdiagnosis and overuse.
