A 68-year-old man with heart disease sits in the doctor's office.
His LDL is high, his heart muscle is weak from previous heart attacks, and his doctor is considering prescribing him a statin.
"Don't worry," the doctor thinks, "there's a study for that." A major clinical trial called CORONA tested these exact medications in this exact type of patient.
The doctor looks up CORONA and finds that despite successfully lowering LDL and reducing inflammation, patients in the CORONA trial didn’t live longer than those who got a placebo.
So does that mean statins don't work in patients like this?
It's easy to feel like you're making the right decision in medicine when a clinical trial backs up your plan. "There's a study to answer that question" comforts those of us who constantly make decisions for patients under uncertainty.
Evidence exists, so we know what to do!
Except "a study for that" is almost never the final word. A single trial, even a large one, rarely tells the whole story. To apply evidence to real patients, you have to think about who the trial enrolled, what it actually tested and how, which outcomes it measured, and whether all of those match your patient, your treatment, and your outcomes of interest.
Let's look at a few trials that illustrate some common pitfalls. After all, you can't just read the conclusion in a study's abstract and assume it answers your question.
When Competing Risk Crowds Out Benefit
Example: CORONA trial, testing statins in ischemic cardiomyopathy
One of the tenets of evidence-based medicine is that a benefit should be easiest to find in the highest-risk patients: because more bad outcomes occur, an effective treatment should need fewer patients studied over a shorter period to show its effect.
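As a rough illustration of why event rates drive trial size, here's a back-of-the-envelope sample-size sketch in Python. The event rates and effect size are invented numbers for illustration, not figures from any particular trial:

```python
from statistics import NormalDist

def n_per_arm(p_control, rrr, alpha=0.05, power=0.8):
    """Approximate patients per arm needed to detect a relative risk
    reduction `rrr` from a control event rate `p_control`, using the
    normal approximation for comparing two proportions."""
    p_treat = p_control * (1 - rrr)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_treat) ** 2

# Same 20% relative risk reduction at two hypothetical baseline risks:
low_risk = n_per_arm(0.05, 0.20)    # 5% event rate: ~6,700 per arm
high_risk = n_per_arm(0.30, 0.20)   # 30% event rate: ~850 per arm
print(round(low_risk), round(high_risk))
```

The higher the baseline event rate, the smaller and shorter the trial needed to detect the same relative effect, which is exactly why sicker populations are attractive to trialists.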
And while that’s true, if your treatment doesn’t target the thing that makes people sicker, you may get a neutral result even if that treatment is actually helping prevent other outcomes.
That’s the case with the CORONA trial, which randomized older adults with heart failure due to coronary artery disease (ischemic cardiomyopathy) to rosuvastatin or placebo.1
These were really high-risk patients: prior symptomatic coronary disease, low ejection fraction, high risk of hospitalization and mortality. Their LDL was over 130 mg/dL and their hsCRP was elevated. In theory, this is the perfect population in which to see a benefit of statin treatment.
But despite that, the trial was neutral. LDL and CRP dropped, but there was no improvement in survival or in the composite of cardiovascular death, MI, and stroke.
Why might that be? Do statins not work in this group?
I don’t think that’s the right conclusion. About 30% of people in this study died by 3 years, mostly from sudden cardiac death and worsening heart failure.
In this really sick group with advanced heart failure, competing risks dominate. Deaths from progressive pump failure or arrhythmia aren’t outcomes a statin can prevent. Even if you prevent heart attacks, you can’t stop the failing muscle from failing.
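To see how competing risks can swamp a real effect, here's a minimal arithmetic sketch. Only the 30% overall mortality echoes the trial; the cause-of-death split and statin effect size are assumptions chosen purely for illustration:

```python
# Toy model (invented numbers, not CORONA data): suppose 30% of patients
# die over 3 years, and only a small fraction of those deaths are
# atherosclerotic events a statin could plausibly prevent.

total_mortality = 0.30   # hypothetical 3-year death rate
share_from_mi = 0.15     # assumed fraction of deaths that are MI-related
statin_effect = 0.25     # assumed relative reduction in MI-related deaths

deaths_prevented = total_mortality * share_from_mi * statin_effect
mortality_on_statin = total_mortality - deaths_prevented

print(f"Control mortality: {total_mortality:.1%}")
print(f"Statin mortality:  {mortality_on_statin:.1%}")
# 30.0% vs ~28.9%: a real but tiny absolute difference, easily lost in
# trial noise when pump failure and arrhythmia dominate the deaths.
```

Even a genuinely effective drug moves all-cause mortality by only about one percentage point under these assumptions, well within the noise of a trial this size.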
This doesn't mean statins are useless in this population. They still prevent MIs and slow atherosclerosis progression. But when the primary drivers of morbidity and mortality are unrelated to what your intervention targets, even effective treatments can appear neutral in trials.
The lesson: CORONA doesn't tell us to skip statins in ischemic cardiomyopathy. It tells us that statins are not heart failure drugs, and when risk from other causes is really high, it can crowd out the possibility of detecting benefit from the outcome you can actually influence.
When the Intervention Doesn’t Match Real Practice
Example: TRAVERSE trial of testosterone replacement therapy
Testosterone replacement therapy (TRT) is more common than ever.
Low testosterone has been described as an epidemic that’s afflicting lots of men. And it’s true - a surprisingly large proportion of men have low testosterone levels, often developing alongside metabolic syndrome.2
There’s been a longstanding concern in the cardiovascular world that replacing testosterone might increase the risk of heart disease, based mostly on low-quality observational data and some extrapolation from anabolic steroid use (which is very different from TRT).
TRAVERSE was billed as the definitive look at TRT and cardiovascular safety. It found no significant effect (either good or bad) on major cardiovascular events. Sounds reassuring.
But the average testosterone level on treatment in TRAVERSE was about 350 ng/dL. In real-world TRT clinics, targets are often 500–800 ng/dL or higher.
The study tells us nothing about what happens at those higher levels.
The lesson: A trial can answer the question it asked. In TRAVERSE, that question was, “is TRT safe when dosed to ~350 ng/dL?”
But TRAVERSE doesn’t really answer the slightly different question we face in clinic, where patients are often treated to much higher levels.3
If your treatment is meaningfully different from the one in the study, the results can’t necessarily be extrapolated.
When Your Patient Isn’t the Study Population
Example: HOPE-2 testing homocysteine-lowering with folate and B vitamins
A high blood homocysteine level is a clear risk factor for heart disease - elevated homocysteine promotes inflammation, oxidation, and blood clotting.
You may have heard of the MTHFR gene, a popular test in the wellness world for vascular risk. MTHFR variants mediate their risk by raising homocysteine, so if you have an MTHFR mutation but normal homocysteine levels, that doesn’t seem to translate into increased cardiac risk.
Because elevated homocysteine levels can be reduced by taking folic acid and other B vitamins, it seems like that should be an easy win - test homocysteine, and if it’s elevated, treat to lower levels in order to reduce heart disease risk.
HOPE-2 tested whether lowering homocysteine with folic acid, B₆, and B₁₂ would reduce cardiovascular events. They randomized over 5,500 patients with vascular disease or diabetes to supplementation or placebo.
The supplementation group had lower homocysteine, but there was no difference in cardiovascular death, heart attacks, or strokes.
This result is often used to argue that homocysteine testing is pointless and that lowering it has no role in heart disease prevention.
But the trial didn’t enroll patients with particularly elevated homocysteine - levels averaged about 12 μmol/L, and the normal range goes up to about 16 μmol/L.
So HOPE-2 essentially tells us nothing about whether lowering high homocysteine levels in high-risk patients would change outcomes.
The lesson: If your patient’s baseline risk is much higher (or lower) than the trial population’s, the results may not translate.
The bigger picture
The point isn't that statins in ischemic cardiomyopathy, TRT, or homocysteine-lowering do or don't work.
The lesson is that trials test specific clinical questions, and it's hard to extrapolate results to different interventions or patient populations.
These limitations don't invalidate evidence-based medicine. They remind us that it often requires more nuance than many realize. While systematic reviews and meta-analyses can help overcome some individual trial limitations, publication bias, selective outcome reporting, and industry funding can still skew our evidence base.
Perfect information rarely exists.
This uncertainty isn't a failure of the system; it's an inherent feature of applying group data to individual patients. The challenge lies in recognizing when evidence applies directly and when clinical judgment must bridge the gaps.
Before you lean on “there’s a study for that” as the end of the conversation, ask yourself:
Does the trial's population match my patient?
Is the intervention the same as what I'm doing?
Are the outcomes relevant to my decision?
Does the timeline match the disease process I'm trying to influence?
What competing risks might be crowding out the benefit I'm looking for?
Sometimes evidence-based medicine means taking the top-line result literally.
But most of the time, it means recognizing the trial’s limits and the uncertainty in what it tells us, and using judgment to bridge the gap between what was studied and what’s in front of you.
Remember: "there's a study for that" should be the start of the conversation, not the end.
The real art of medicine lies not just in knowing the evidence, but in knowing how and when to apply it to the individual human being sitting across from you.
The nomenclature here can be confusing if you don’t live in the world of cardiology. Coronary artery disease = heart disease. When doctors say “heart disease” we’re actually talking about disease of the arteries, not disease of the heart. Cardiovascular = blood vessels around the heart. Other synonyms here: ischemic heart disease, coronary artery disease, atherosclerotic cardiovascular disease, atherosclerosis. Heart failure is a problem with the actual heart function. Heart disease can cause heart failure, but you can also have heart failure not caused by vascular issues (nonischemic cardiomyopathy), heart failure not caused by vascular issues but also have vascular issues (nonischemic cardiomyopathy with superimposed coronary artery disease), and vascular issues without heart failure.
Not always, of course, and there are plenty of doctors who know way more about this than I do. But abdominal obesity, insulin resistance, and low T go hand in hand quite often.
My read on the data - both from TRAVERSE as well as the observational literature, is that TRT is probably fine when not supplemented to supra-physiologic levels. I don’t have a super high level of confidence about that, but I also don’t really have concern that there’s a huge potential for harm to give people physiologic levels of a hormone back.
Interesting article. Based on the title, I thought it would be in reference to “evidence gaps”- patient profiles or scenarios that were not directly addressed or accounted for in landmark trials…in particular as pertains to age, racial makeup-up, sex distribution, and comorbidities…such that even when there is a “trial for that condition”, there may not be a “trial for that particular patient” you are seeing with that condition.
But this was a very informative summary of some EBM principles. In particular for me, it’s good insight into how I should look at some trials with “negative” results.
My brother, a "skeptic," asked me just this week, to defend statins and sent me an interview of someone on the Joe Rogan Experience who "taught" by misleading. My first inclination was to send - as you have astutely stated - a "study." Instead, I sat back and decided to write, in my own words, a response that I believed was what I would explain to a patient who was asking a legitimate question, and not "baiting" me. Surprisingly, he was satisfied. I absolutely agree that research, studies, and medicine in general is "complicated," and it is our responsibility to not complicate it further. Somehow, the idea has taken hold that "emergent," changing scientific data means we didn't/don't know what we are talking about. For several years, I knew the last living Nobel Prize in Medicine winner in our medical school faculty. His opinion was quite to the contrary. He believed that this was simply the nature of science and medicine, ever evolving, and ever revealing, and his advice was, "A good scientist never paints himself into a corner." It is always in the best interest of a patient to say, "I honestly cannot answer your question now, but I will investigate it, and I will get back to you."