How to understand what that "new study" means
Essentially every newsletter that you’ve received from me comments on the quality of scientific research.
As with most things in life, the quality of scientific research exists on a continuum. It’s not that some studies are good or bad, per se, but that their findings are more or less useful based on the quality of the research.
You can’t simply assume that a published study is a good quality study.
And - this is really important - just because a good quality study demonstrates something doesn’t mean that you can extrapolate that finding to fit everyone.
While I have no plans to stop critiquing specific scientific research and presenting those distilled thoughts here, I wanted to spend this newsletter on what to look for in media reports (as well as in actual papers) to help you separate the signal from the noise when it comes to research.
This is important because most of the research you read about is nonsense
A landmark paper published in 2005 by the brilliant (and sometimes controversial) scientist John Ioannidis suggests that most published research findings are actually false.
The Ioannidis paper is one of the landmark pieces that has influenced the way that countless physicians and scientists approach new experimental findings. It’s certainly been influential for me.
But I would argue that Ioannidis actually understates the gravity of the problem when it comes to the way that science is communicated to the world.
My friends, family, and patients are always asking about some new research finding or wondering how an article they read in the New York Times applies to their own health.
Take a look at the recent article “scientists may have found the perfect bedtime to keep hearts healthy.”
I’ve been asked about this by three different patients this week, on top of the emails, texts, and phone calls from other people in my life.
While I’m not surprised that this got media attention - everyone in the media wants eyeballs, and this is fundamentally clickbait - I am surprised by how credulously even smart people accept this stuff.
The degree to which this study is applicable to you is exactly zero. Not only should you ignore this research, but you should fundamentally question whether a news outlet that reports on this as though it’s worthwhile information can be trusted on anything.
The first clue that this is total nonsense: they use the term “associated with” in the article
When you see any article about a scientific study, read it closely and look for the words “associated with.”
“Associated with” means that the researchers found a correlation. They studied different groups and found that one group differed from the other in a way that the researchers then reported.
I’ve lost count of how many times I’ve written the words “correlation does not equal causation,” but when you’re thinking about interpreting a scientific trial, it’s worth really understanding why that matters.
Go back to the example of what time you should go to bed. The researchers in this study used accelerometer data to look at what time people went to bed and then monitored their risk of heart attack, stroke, or death over the next few years. They found that a bedtime between 10-11pm was associated with the lowest heart disease incidence.
Is bedtime the thing that caused the lower incidence of heart disease? Or was it something about who these people are that both led them to go to bed between 10 and 11 and gave them a lower risk of heart disease?
In just a couple of minutes of thinking about it, I can come up with a few potential confounders:
Maybe people who prioritize going to bed between 10-11pm are healthier overall, prioritizing getting enough sleep along with healthy diet and exercise. The Healthy User Effect is a well known problem with drawing conclusions from observational research.
Maybe people who are depressed go to bed either early or late, rather than in the magical 10-11pm window. We know there’s a higher heart disease risk in those with depression.
Maybe people who work a lot of hours and have high levels of stress miss the 10-11pm window and go to bed later. Is it the sleep timing that causes their heart disease or is it the high levels of stress that they deal with?
And there are a hundred other potential confounders here.
Repeat after me: correlation does not equal causation.
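If you want to see how this works mechanically, here’s a minimal simulation sketch in Python (every number in it is made up for illustration): being health-conscious drives both bedtime and heart disease risk, bedtime itself does nothing, and an “association” appears anyway.

```python
# Hypothetical simulation: a hidden confounder (being health-conscious)
# influences both bedtime and event risk; bedtime has zero causal effect.
import random

random.seed(0)
groups = {"10-11pm": [0, 0], "after 11pm": [0, 0]}  # [events, people]

for _ in range(100_000):
    health_conscious = random.random() < 0.5
    # Health-conscious people are more likely to hit the 10-11pm window...
    in_window = random.random() < (0.7 if health_conscious else 0.3)
    bedtime = "10-11pm" if in_window else "after 11pm"
    # ...and independently have a lower event risk. Note: no bedtime term here.
    event = random.random() < (0.02 if health_conscious else 0.05)
    groups[bedtime][0] += event
    groups[bedtime][1] += 1

for name, (events, people) in groups.items():
    print(f"{name}: {events / people:.1%} event rate")
# Prints roughly 2.9% for 10-11pm vs 4.1% for after 11pm: a clear
# "association" with bedtime, generated entirely by the confounder.
```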
The other clues to look for when you’re reading an article and wondering whether what’s being reported is just a correlation: “may be” and “linked to.” When you see those descriptors, it’s a good sign that you should take the findings with a grain of salt.
The next thing to look at: what was the effect size?
When you’re thinking about making a fundamental change to your way of life, you need to know how much of an impact it’s going to make.
This type of analysis is important if you’re thinking about adjusting when you go to bed, what you eat, how (or whether) you exercise, what medications you take, or what medical procedures you get.
The news articles that you read are always going to report big numbers - like the 25% increased risk of heart disease from going to bed too late.
But that’s a relative change, not an absolute change. If the risk goes up from 2% to 2.5%, that’s a 25% relative increase, but only a 0.5 percentage point absolute change. The 25% makes it sound like a really big deal, but the actual half-point change just isn’t impressive.
These numbers matter because they measure the magnitude of impact. If you ignore the absolute impact in favor of the relative impact, you will often be misled.
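Here’s that arithmetic spelled out, using the same hypothetical 2% and 2.5% risks:

```python
# Relative vs. absolute risk, using the hypothetical 2% -> 2.5% example.
baseline_risk = 0.020   # risk without the exposure
exposed_risk = 0.025    # risk with the exposure

relative_increase = (exposed_risk - baseline_risk) / baseline_risk
absolute_increase = exposed_risk - baseline_risk

print(f"Relative increase: {relative_increase:.0%}")  # 25% - the headline number
print(f"Absolute increase: {absolute_increase:.1%}")  # 0.5% - your actual change in risk
```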
Take a look at the graph from the sleep study:
What do we see about the effect size? About 3% of people who went to bed at the “perfect” time had a cardiovascular event over 6 years versus about 5% of the people who went to bed the latest.
If you just look at the shape of the graph and ignore the vertical axis, you’re going to miss that the scale goes from 95% to 100%, not 0% to 100%.
It’s a totally misleading way of depicting the effect size, designed to make the findings look much more impressive than they were.
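If you want to reproduce the trick yourself, here’s a rough matplotlib sketch (the event rates are approximated from the figure, roughly 3% versus 5% over 6 years) plotting the same two curves on the published 95-100% axis and on a full 0-100% axis:

```python
# Same data, two axes: the published zoomed-in scale vs. an honest full scale.
import matplotlib.pyplot as plt

years = list(range(7))  # 0 through 6 years of follow-up
survival_best = [100 - 3 * t / 6 for t in years]   # "10-11pm" group: ~3% events over 6 years
survival_worst = [100 - 5 * t / 6 for t in years]  # latest-bedtime group: ~5% events

fig, (zoomed, honest) = plt.subplots(1, 2, figsize=(8, 3))
for ax, ylim, title in [(zoomed, (95, 100), "As published"), (honest, (0, 100), "Full axis")]:
    ax.plot(years, survival_best, label="10-11pm bedtime")
    ax.plot(years, survival_worst, label="latest bedtime")
    ax.set_ylim(*ylim)
    ax.set_title(title)
    ax.set_xlabel("Years of follow-up")
    ax.set_ylabel("% free of cardiovascular events")
    ax.legend()
plt.tight_layout()
plt.show()
# The left panel makes the gap look dramatic; the right panel shows
# two nearly indistinguishable lines.
```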
Given all of the potential confounding variables, this minuscule difference between the groups means even less.
Don’t ignore the way that the study was done
If you’re going to draw conclusions about a behavior, a drug, or an intervention based on a study, you need to understand how that study was actually performed.
Do you know how they measured bedtimes for the people in this study?
They certainly didn’t monitor their bedtime every day for the years of follow-up. The researchers had the participants wear an Axivity monitor for 7 days and then tracked their cardiovascular problems for several years.
In other words, the data on what time people go to bed came from only a 7-day sample, while their cardiovascular risk was assessed over a period of years.
Do you see any problem in this?
Is a 7-day sample of when people go to bed all the information you need about sleep timing and cardiovascular disease?
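It isn’t. As a thought experiment, here’s a small Python sketch (using an entirely made-up bedtime pattern) showing how the same person can land in different bedtime bins depending on which 7 days happened to be recorded:

```python
# Toy illustration: one week of data is a shaky basis for labeling
# someone's "habitual" bedtime. All numbers here are invented.
import random

random.seed(1)

def one_night_bedtime():
    # Suppose this person's bedtime varies night to night around 11:15pm,
    # with about an hour of spread (23.25 = 11:15pm in decimal hours).
    return 23.25 + random.gauss(0, 1.0)

def classify(week):
    avg = sum(week) / len(week)
    return "10-11pm" if 22 <= avg < 23 else "outside window"

# Sample many independent "study weeks" for the same person.
labels = [classify([one_night_bedtime() for _ in range(7)]) for _ in range(10_000)]
print(f"Weeks classified in the 10-11pm window: {labels.count('10-11pm') / len(labels):.0%}")
# Roughly a quarter of sampled weeks put this person in the "perfect"
# bedtime bin; the rest don't. Same person, different label.
```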
When I read the paper, I was actually incredulous that this was the method of data collection.
Why doesn’t a science editor pick up on this and force this limitation to be included in the article?
Why would you publish this quote from the lead author of the study suggesting any validity to this information in the absence of robust data collection?
“We can’t help what we’ve evolved to be. We’ve evolved to be daytime creatures … that don’t live at night,” study author David Plans, the head of research at Huma, a British health-care technology company, told The Washington Post. “The circadian clock has a much stronger influence on overall health than we thought.”
Another red flag: a finding in mice that’s implicitly extrapolated to people
I can’t tell you how often someone sends me a report about a study in mice (or rats, or some other non-human animal) that is written as though we can draw any conclusions about people whatsoever.
It’s a really long road from showing that something works in mice to showing that it has any impact in humans. We’ve known this for a long time.
For reasons that are unclear to me, these findings still get reported all the time without the necessary caveats in place.
You can’t forget to follow the dollars - who is making money from this?
When you look at the lead author, David Plans, you’ll see that he’s an entrepreneur who founded a company called BioBeats that was sold to Huma, a British healthcare company that provides remote tracking of various biomarkers as a means of health promotion.
When work like this is used to suggest that tracking bedtime leads to better health outcomes, it’s not a huge leap to see how it makes it easier for Huma to sell services to patients, health systems, and doctors.
Recognizing and measuring quality of care in medicine is really, really hard. Measuring unimportant metrics backed up by “science” doesn’t make it better.
Work like this, which promotes mediocre data collection and suggests dubious “links” to clinical outcomes, ends up leaving people more confused while likely spending more money on sleep trackers.
Publishing “research” like this is a pretty brilliant strategy if you’re trying to make the case that we need to track metrics like sleep timing to improve health outcomes.
The final thing to consider: the common sense aspect
Do you really believe that going to bed outside the 10-11pm window is going to make you die young?
Are you really persuaded that by going to bed between 10 and 11 instead of before 10 you’ll reduce your chance of a heart attack?
Did you read that study and start changing your bedtime?
I honestly don’t understand how this low quality research gets pumped up into important findings by publications that have science editors. It certainly raises a number of important questions.
Doesn’t anyone who reports on this work actually read the trials? Do the editors?
Do the reporters just rephrase the press releases and then take quotes from the lead author?
If they do read the studies, why are they incapable of interpreting them critically?
So when you’re reading an article about new scientific research, remember what to look for:
“Associated with,” “may be,” and “linked to” all mean that they found a correlation, but no causation was proven.
Remember to look at the absolute effect size rather than just the relative effect size.
If the way that the study was conducted doesn’t provide high quality data, then you probably shouldn’t believe the results.
Mice (and other animals) aren’t the same as people, and you can’t extrapolate animal findings to assume that the treatment in question will work in humans.
Think about who has a financial interest in reporting this finding.
Practice-changing research doesn’t come along all that often, so your default position on any supposedly life-changing finding should be skepticism.
I’m going to conclude with something I said at the top of this newsletter. I’m repeating it because it’s important:
Not only should you be skeptical of the way that research is presented in the press, you should fundamentally question whether a news outlet that reports on these low quality studies as worthwhile information can be trusted on anything.
Thank you for reading! Please share with friends and family and encourage them to subscribe!