How Should We Address Our Medical Illiteracy?
Exploring different "tiers" of research quality, to get at the crux of our human concerns
In the throes of the COVID-19 pandemic, when a vaccine first seemed to be within reach, a startling discovery about our general levels of medical literacy also made itself known: the average citizen didn’t seem to know as much about how healthcare and scientific research work as we might once have vaguely assumed we all did.
It wasn’t just that many people were articulating fears about vaccines that hadn’t gone through years of testing, for a condition that had only entered the scene a year prior.
It was that they did so while grasping at whatever quick-fix the internet offered up as a cure-all for a disease many also didn’t believe existed at all.
That cognitive dissonance—that distrust of “mainstream” science for not having undergone years of testing for a novel infection, joined with knee-jerk fealty to whatever cure alt-forums recommended even though it certainly hadn’t gone through rigorous testing either—highlighted something important about the human condition:
We are not rational critters.
We are not empirical critters, either.
What we are is easily spooked—and when spooked, easily driven toward whatever manifestation of authority seems to affirm our fears best.
During the height of the pandemic, this didn’t just mean pulling away from confidence in state leaders and public institutions, either. Our panic also manifested in people pulling away from their churches and other local communal ties. Even traditionally tight-knit religious communities, like the Church of Jesus Christ of Latter-day Saints, saw a significant drop in trust in leadership amid the panic over COVID-19. Isolation, and social media, had done strange work on our brains, and demolished a lot of the common ground from which consensus around “good” medical science and civic practice needs to be built.
And I was put in mind of that rift this past week, as another extremely contentious news item has again illustrated how profoundly medically illiterate we are.
The news item is so contentious, in fact, that I suspect even sharing it directly will colour our ability to have a meaningful conversation about the scientific questions we need to know how to ask, to improve individual agency in a messy medical context.
So, we’re not going to hang today’s piece on a distraction of a news article.
I just want to offer a refresher on some important considerations to keep in mind, when trying to make sense of scientific authority claims when deciding what’s right for your personal health, and the health of those around you.
A wide range of study designs
In experimental science, the phrase “gold standard study design” is used to describe randomized controlled trials (RCTs). In medical science, the “controlled” part means that for every new treatment protocol being tested there is a “control” group that doesn’t receive the medication in question. The “randomized” part means that patients are assigned to the treatment or control group purely by chance, so the two groups are comparable from the outset. Usually the trial is also “blinded”: the control group doesn’t know it’s not being treated, because it receives a placebo, which lets scientists rule out any effects that show up in the control group too.
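If it helps to see that logic laid out concretely, here’s a toy sketch in Python (every number in it is invented purely for illustration, not drawn from any real trial). We randomly assign simulated patients to a treatment arm or a placebo arm, give both arms the same placebo-effect bump, and then check whether the treatment arm improves beyond that shared baseline.

```python
import random
import statistics

random.seed(42)  # reproducible toy example

# Hypothetical effect sizes -- invented numbers, purely for illustration.
PLACEBO_EFFECT = 1.0   # improvement both arms get just from "being treated"
DRUG_EFFECT = 2.0      # extra improvement only the real drug provides
NOISE = 1.5            # individual variation between patients

def simulate_patient(gets_drug: bool) -> float:
    """Return a simulated improvement score for one patient."""
    improvement = PLACEBO_EFFECT + random.gauss(0, NOISE)
    if gets_drug:
        improvement += DRUG_EFFECT
    return improvement

# "Randomized": each simulated patient lands in an arm purely by chance.
assignments = [random.random() < 0.5 for _ in range(200)]
treatment_arm = [simulate_patient(True) for got_drug in assignments if got_drug]
control_arm = [simulate_patient(False) for got_drug in assignments if not got_drug]

# Because both arms share the placebo effect, the *difference* between
# the arm means estimates the drug's real effect.
difference = statistics.mean(treatment_arm) - statistics.mean(control_arm)
print(f"Treatment arm mean: {statistics.mean(treatment_arm):.2f}")
print(f"Control arm mean:   {statistics.mean(control_arm):.2f}")
print(f"Estimated drug effect: {difference:.2f} (true value: {DRUG_EFFECT})")
```

The subtraction at the end is the whole point of the design: since the placebo effect lands in both arms, comparing the arms cancels it out, leaving only what the drug itself did.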
In theory, RCTs are ideal tests for us to run to assess the quality of a given treatment protocol. And if you can run a meta-study on a whole range of RCTs? Even better, because then you can also winnow out the sorts of statistical anomalies that are guaranteed to show up in any given trial on its own.
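Here’s that pooling intuition in the same toy form, again with invented trial results: a minimal fixed-effect meta-analysis sketch, in which each trial’s effect estimate is weighted by its precision (the inverse of its variance), so that one small, noisy outlier barely budges the pooled result.

```python
import math

# Hypothetical per-trial results: (effect estimate, standard error).
# All numbers invented -- note that trial 3 is a small, noisy outlier.
trials = [
    (1.8, 0.4),
    (2.1, 0.5),
    (4.9, 1.6),   # small trial, large standard error: a statistical "anomaly"
    (2.0, 0.3),
]

# Fixed-effect inverse-variance pooling: precise trials count for more.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * effect for (effect, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
# The noisy 4.9 barely moves the pooled estimate, because its weight is tiny.
```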
But wait! We’re talking about humans here, which means that our ethical standards for study design are quite stringent. Can we ethically run RCTs for everything?
If there is sufficient reason to believe that withholding a drug would deny patients vital care available to the group receiving actual treatment, an RCT is not an ethical testing protocol. You can’t simply expect one group of brain cancer patients, for example, to wait around hoping that their condition will improve without any treatment at all, just to offer a perfect “control” population to offset the patients given actual care for their condition. The medical world’s “duty of care” extends even (and especially) to people who have offered themselves up to scientific research.
Accommodation also has to be made for the fact that sometimes it isn’t possible to offer patients a scientifically meaningful placebo for the target treatment protocol. It would be entirely unethical, for instance, to make a cancer patient undergo a fake surgery, just so that they can have a scar and a hospital experience that leads them to believe they had their tumour removed, and thus offer a more perfect control group for patients who underwent similar experiences for an actual medical procedure.
Then there’s the raw fact that sometimes there aren’t enough people with a given condition to be able to run rigorous RCTs on every possible treatment. We have finite resources, including finite patient populations, to factor into all our grand, abstracted desires for perfect science to be on hand for every medical variation.
So, medical researchers don’t always use the gold standard; it’s a Platonic ideal that crashes too hard and too often into the material realities of our world. Instead of a randomized trial, we might have to settle for a basic controlled study—where one variable has been isolated for scrutiny, without excluding placebo effects—or for a mixed methods study (e.g., adding a new drug or medical procedure into a cocktail of existing cancer treatments—which won’t allow researchers to fully differentiate between all possible interactions between treatment components, but which can still offer meaningful data with respect to overall protocols).
The next “tier” of quality research involves cohort studies—and just like RCTs, they’re even better if we can then consider their results through a systematic review that involves meta-analysis. These broader analyses assess the strength of the relationship found between variables under scrutiny in individual trials and studies, and they do so by establishing clear, easily replicable parameters for which findings count as relevant data sets for comparison, and which do not.
A cohort study is a longitudinal process that observes a set population over time. It can involve qualitative or quantitative data-gathering (or both!), and it can be used to look for risk factors that cause a medical outcome over time, or to study the impact of having a medical diagnosis from the start. (A close but lower-tier cousin of the cohort study, the “case-control study”, targets specific subgroups for comparison: one with X concern, and one without it; or two subgroups with X from different causes.)
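To make that difference concrete, here’s the basic arithmetic of the two designs, with counts invented purely for illustration: the cohort study follows exposed and unexposed groups forward and compares how often the outcome occurs (a relative risk), while the case-control study starts from people who already have the condition and looks backward at exposure (an odds ratio).

```python
# Hypothetical cohort study: follow exposed vs. unexposed people forward
# in time and count who develops the condition. (All counts invented.)
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

risk_exposed = exposed_cases / exposed_total        # 3% developed it
risk_unexposed = unexposed_cases / unexposed_total  # 1% developed it
print(f"Relative risk (cohort): {risk_exposed / risk_unexposed:.1f}")  # 3.0

# Hypothetical case-control study: start from people who already have the
# condition ("cases") and people who don't ("controls"), then look
# backward at who was exposed.
cases_exposed, cases_unexposed = 30, 10
controls_exposed, controls_unexposed = 970, 990

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"Odds ratio (case-control): {odds_ratio:.1f}")  # ~3.1
```

When the condition is rare, as in these toy numbers, the odds ratio closely approximates the relative risk, which is part of why case-control designs remain useful despite sitting a tier lower.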
Different studies for different purposes
Now, cohort studies are actually at the top of the quality pyramid for certain domains of medical knowledge—and that’s part of the problem with our medical literacy: we in “gen pop” don’t fully grasp the epistemological difference between studying the efficacy of a single treatment, and looking at the holistic impact of environment on human outcomes. Nevertheless, the difference matters, and medical science is filled with researchers trying to juggle attention paid to individual therapeutic models and to the broader landscape in which treatment is said to be necessary and pursued at all.
Even then, though, there’s another serious “real-world” wrinkle impacting study design: institutional interest. Drug companies are highly incentivized to pursue very specific RCT models for their products, so that they can promote their creations and sell them—but what about treatment protocols that don’t line pharmaceutical pockets?
Mental health research that explores non-pharmaceutical therapeutic interventions, for instance, can struggle to find institutional backing—which in turn leads to a skewing of scientific outcomes toward whatever has initial funding.
This is why another “low” tier of research—qualitative—can actually be quite useful. Qualitative research allows professionals to build a useful sidebar narrative around what contemporary quantitative research is overlooking. The best form of this kind of study, too, is meta-analysis: in which self-reported statements are analyzed in aggregate, under clear and replicable parameters, to lend further weight to the possibility that there are alternative protocols not receiving enough formal attention.
The lowest tier of research, when it comes to formal evidentiary “quality”, is thus individual expert opinion, along with related narrative statements issued by groups of relevant experts. But just because these sit at the bottom of the formal scientific ladder doesn’t mean they don’t still offer an incredibly important offset to whatever hasn’t been addressed by other forms of medical research.
Rather, these are statements drawn from people who have worked in their sub-fields for a while, and they’re issued to try to articulate a broader picture of a given medical subdomain than what the quantitative evidence necessarily imparts. Often, it’s precisely from this body of data—the “lowest quality”—that we get the best sense of how information from finite quantitative research is being applied every day.
How well researched are our treatments?
Which brings us to the real kicker: in practice, huge swaths of applied medicine are not based on gold standard study design. There are, as any general practitioner will tell you, plenty of off-label treatment plans that did not go through direct, formal testing for the specific cases in which they are prescribed.
Now, if a drug company were to boast on labels and in advertising that its products could be used for conditions that haven’t gone through full RCT testing, that drug company would be on the line for some serious lawsuits.
But individual doctors are often using common sense, their own body of experience, and a general milieu of “this worked for others in my field with their patients; maybe it will work for you” that engulfs the whole of medical practice. And we lay people know this! It’s background knowledge given to us through any number of books and TV shows presenting physicians in the field: doctors are constant front-line experimenters, trying to meet unusual new presentations among patients as they arise.
This issue comes up especially with children, who are often prescribed off-label treatment protocols that for very good ethical reasons were tested on adults, not kids, before being put into general use. (To say nothing of how many feminized persons have also had to use drugs tested primarily or solely on masculinized persons!)
The ethical justification for fudging a recommended age for certain anti-depressants or pain medications (among other forms of childhood treatment) is that the child is being observed by a medical professional while undergoing this protocol—so if anything goes wrong, they’ll be able to identify the issue and course correct quickly.
But the normality of such scientific hand-waving in medical practice also comes down to the fact that a lot of people are hurting here and now from a vast number of special ailments that might be too rare to earn their own RCTs, or too new to have given scientists a chance to provide anything resembling longitudinal study results.
Here, then, is the real problem for medical illiteracy:
We are scared little irrational critters who want something now to appease our fears and our wounds—while also expecting capital-S Science to have All the Answers For Everything before anything is prescribed to us or our loved ones.
We so detest the idea of being at the whims of trial and error, in other words, that if faced with any medical uncertainty, we are very susceptible to choosing a different kind of trial and error—even a much worse or more radical one—just to feel like we were able to assert agency over the process after all.
And that’s dangerous, obviously.
But it’s also very human.
We’re okay with medical uncertainty being an everyday part of medical practice just so long as doctors successfully make us feel like it’s normal. Even when a doctor cites a range of risks to us, those risks can feel less “real” somehow—more like the terms of service we also skim over whenever signing up for a new program or online app. But the moment someone makes a greater fuss about those “terms of service” (that background ocean of risk factors that comes with almost everything we do when experimenting with our health on a case-by-case basis), all hell breaks loose, and we rapidly come to question the entire social contract of medical science.
All of our lives are medical trials with a sample size of n=1.
And when that fact really sinks in—as it did during the throes of COVID-19, and as it has done around other recent medical questions, too—existential dread ensues.
But the only cure for it, really, is to accept its presence among us.
We will never be as rigorous and rational a species as so many of us ache to be.
And all good patient care must extend from this fact.
All good medical science, in practice, needs to address our fear of lost agency first.
Be well, be kind, and seek justice where you can.
ML